Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available The deterministic methods are those quantitative methods whose goal is to assess, through numerical quantification, the mechanisms by which factorial and causal relations of influence and propagation of effects are created and expressed, where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a given value of a characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
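A two-factor example of such a direct cause-effect functional relation is the chain-substitution decomposition, a standard device of deterministic factor analysis; the indicator R = q * p and all numbers below are invented for illustration:

```python
def chain_substitution(q0, p0, q1, p1):
    """Chain-substitution decomposition of a two-factor indicator R = q * p:
    splits the total change in R into the parts attributable to each factor.
    The substitution order (q first, then p) is a convention."""
    effect_q = (q1 - q0) * p0    # change due to the quantity factor
    effect_p = q1 * (p1 - p0)    # change due to the price factor
    total = q1 * p1 - q0 * p0    # total change in the resulting indicator
    return effect_q, effect_p, total

# Hypothetical data: quantity rises from 100 to 120, price from 2.0 to 2.5.
dq, dp, dr = chain_substitution(q0=100, p0=2.0, q1=120, p1=2.5)
```

The two factor contributions reconstruct the total change exactly, which is the defining property of a deterministic (functional) factor model.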
Method to deterministically study photonic nanostructures in different experimental instruments
Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure.
Non deterministic methods for charged particle transport
International Nuclear Information System (INIS)
Besnard, D.C.; Buresi, E.; Hermeline, F.; Wagon, F.
1985-04-01
The coupling of Monte Carlo methods for solving the Fokker-Planck equation (FPE) with inertial confinement fusion (ICF) codes requires them to be economical and to preserve gross conservation properties. Moreover, the presence in the FPE of diffusion terms, due to collisions between test particles and the background plasma, challenges standard Monte Carlo (MC) techniques when this phenomenon is dominant. We address these problems through the use of a fixed mesh in phase space, which allows us to handle highly variable sources while avoiding any Russian roulette for reducing the sample size. On this mesh we also solve diffusion equations obtained from a splitting of the FPE; any nonlinear diffusion terms of the FPE can be handled in this manner. Another method, also presented here, is to use a direct particle method for solving the full FPE.
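The splitting idea (advance the collisional diffusion term separately from the transport step) can be illustrated with a minimal particle sketch. Here Euler-Maruyama random kicks stand in for the mesh-based diffusion solve described above, and D, dt, and the sample size are arbitrary illustrative values:

```python
import random
import statistics

def diffuse_velocities(velocities, diff_coeff, dt, rng):
    """One splitting step: the collisional diffusion term of a Fokker-Planck-like
    operator applied as a Gaussian random kick per particle (Euler-Maruyama)."""
    sigma = (2.0 * diff_coeff * dt) ** 0.5
    return [v + rng.gauss(0.0, sigma) for v in velocities]

rng = random.Random(42)
D, dt, steps = 0.5, 0.01, 100
vels = [0.0] * 5000                    # fixed sample of test particles
for _ in range(steps):
    vels = diffuse_velocities(vels, D, dt, rng)

# For pure velocity-space diffusion the variance should grow as 2*D*t.
expected_var = 2.0 * D * dt * steps    # = 1.0 here
sample_var = statistics.pvariance(vels)
```

Because the particle count is fixed and the kicks conserve particle number exactly, the gross conservation property mentioned in the abstract holds by construction in this sketch.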
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
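The core of the fission matrix idea can be sketched in a few lines: estimate a matrix of region-to-region fission production, then extract its dominant eigenpair directly instead of waiting out power iteration, whose convergence rate is set by the dominance ratio (second-to-first eigenvalue ratio). The 3x3 matrix below is a hypothetical toy, not from the thesis:

```python
import numpy as np

# Toy "fission matrix" for a 3-region slab: F[i, j] is the expected number of
# next-generation fission neutrons born in region i per fission neutron in j.
F = np.array([[0.60, 0.25, 0.05],
              [0.25, 0.50, 0.25],
              [0.05, 0.25, 0.60]])

def power_iteration(F, iters):
    """Unaccelerated source iteration, as in a standard criticality calculation."""
    s = np.ones(F.shape[0])
    s /= s.sum()
    for _ in range(iters):
        s = F @ s
        k = s.sum()       # k-effective estimate for this generation
        s /= k            # renormalize the fission source
    return k, s

# "Acceleration": once F has been estimated, its dominant eigenpair can be
# extracted directly rather than converged generation by generation.
k_slow, s_slow = power_iteration(F, 50)
eigvals, eigvecs = np.linalg.eig(F)
k_fast = eigvals.real[np.argmax(eigvals.real)]
```

For this toy matrix the dominance ratio is about 0.59, so power iteration converges comfortably; the acceleration pays off precisely when that ratio approaches one, as the abstract notes.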
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions, and which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capabilities of both Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
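A minimal sketch of the deterministic point-kernel approach discussed above: exponential attenuation plus a buildup correction. The linear buildup factor B = 1 + mu*t and all numerical values are illustrative assumptions, not MicroShield's tabulated buildup data:

```python
import math

def flux(source_strength, mu, thickness, distance, buildup=True):
    """Flux from an isotropic point source behind a slab shield:
    geometric spreading times exponential attenuation, optionally corrected
    with an illustrative linear buildup factor B = 1 + mu*t."""
    uncollided = (source_strength * math.exp(-mu * thickness)
                  / (4.0 * math.pi * distance ** 2))
    if buildup:
        return uncollided * (1.0 + mu * thickness)
    return uncollided

S = 1.0e9     # photons/s, hypothetical source strength
mu = 0.06     # 1/cm, assumed attenuation coefficient
t = 30.0      # cm of shield
r = 100.0     # cm, source-to-detector distance

phi_nobuildup = flux(S, mu, t, r, buildup=False)
phi_buildup = flux(S, mu, t, r)
```

With these numbers the buildup correction multiplies the uncollided flux by 2.8, which illustrates why an extrapolated buildup factor can dominate the error budget of a deterministic shielding estimate.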
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, with low reliability for reserve margins below the largest unit and high reliability above it, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods.
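The loss-of-load expectation used here as the stochastic criterion can be estimated with a small Monte Carlo sketch. The generating park, the 5% forced-outage rate, and the two reserve rules below are hypothetical, chosen only to contrast the N-1 rule with a percentage rule:

```python
import random

def lole(unit_caps, outage_rate, demand, trials, seed=0):
    """Monte Carlo loss-of-load expectation: the probability that available
    capacity falls below demand, assuming independent forced outages."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        available = sum(c for c in unit_caps if rng.random() > outage_rate)
        if available < demand:
            shortfalls += 1
    return shortfalls / trials

units = [400, 400, 300, 200, 100, 100]   # MW, hypothetical generating park
total = sum(units)                       # 1500 MW

# N-1 rule: keep a reserve equal to the largest unit (400 MW).
lole_n1 = lole(units, 0.05, total - 400, trials=50000)
# Percentage rule: keep a 10 % reserve (150 MW), i.e. admit a higher demand.
lole_pct = lole(units, 0.05, total - 0.10 * total, trials=50000)
```

Because the 10% reserve is smaller than the largest unit here, the percentage rule admits a higher demand and yields a worse (larger) loss-of-load probability; running the two rules against a stochastic criterion like this is exactly the kind of check the paper performs.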
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) contaminated with observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than, the “multiple shooting” method previously proposed, and has smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This method seems to be the only one whose consistency for deterministically chaotic time series has so far been proved theoretically (not merely numerically).
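A crude stand-in for the estimation problem: simulate the logistic map with small observational noise and recover the structural parameter by one-step least squares over a grid. This is a sketch only; the paper's segmentation-fitting ML estimator additionally treats the initial value as unknown and copes with larger noise levels, where naive one-step least squares becomes badly biased:

```python
import random

def logistic_series(a, x1, n):
    """Iterate x[k+1] = a * x[k] * (1 - x[k]) for n points."""
    xs = [x1]
    for _ in range(n - 1):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs

def fit_parameter(observed, a_grid):
    """Pick the map parameter minimizing one-step prediction residuals."""
    def sse(a):
        return sum((observed[i + 1] - a * observed[i] * (1.0 - observed[i])) ** 2
                   for i in range(len(observed) - 1))
    return min(a_grid, key=sse)

rng = random.Random(7)
true_a = 3.8                                        # chaotic regime
clean = logistic_series(true_a, 0.3, 500)
noisy = [x + rng.gauss(0.0, 0.005) for x in clean]  # observational noise

grid = [3.5 + 0.001 * k for k in range(501)]        # 3.500 ... 4.000
a_hat = fit_parameter(noisy, grid)
```

With noise this small the errors-in-variables bias is below the grid resolution; the paper's point is precisely that this stops being true as the noise grows.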
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
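As a taste of the deterministic side of the book's subject, here is the symplectic velocity-Verlet scheme, the workhorse integrator of molecular dynamics, applied to a single harmonic oscillator; its hallmark is bounded energy error over very long runs. The force law, mass, and step size are arbitrary illustrative choices:

```python
def velocity_verlet(q, p, force, dt, steps, mass=1.0):
    """Symplectic velocity-Verlet integration of dq/dt = p/m, dp/dt = F(q)."""
    traj = [(q, p)]
    f = force(q)
    for _ in range(steps):
        p += 0.5 * dt * f          # half kick
        q += dt * p / mass         # drift
        f = force(q)
        p += 0.5 * dt * f          # half kick
        traj.append((q, p))
    return traj

# Harmonic oscillator: F(q) = -k q, energy E = p^2/2m + k q^2/2.
k = 1.0
traj = velocity_verlet(1.0, 0.0, lambda q: -k * q, dt=0.05, steps=10000)
energies = [0.5 * p * p + 0.5 * k * q * q for q, p in traj]
drift = max(abs(e - energies[0]) for e in energies)
```

The energy error oscillates at order dt^2 but does not drift secularly, which is the symplectic property the book's analysis of Hamiltonian integrators explains; a non-symplectic scheme such as forward Euler would gain energy steadily on the same problem.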
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information related to the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations all over the city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain 0.05-1 Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique (applied to model the seismic wave propagation between the seismic source and the studied sites) with the mode coupling approach (used to model the seismic wave propagation through the local sedimentary structure of the target site), allow the modelling to be extended to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Use of deterministic methods in survey calculations for criticality problems
International Nuclear Information System (INIS)
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann transport equation is the WIMS suite. This has been very successful for a range of situations; in particular, it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes has a range of methods and is very flexible in the way they can be combined. A wide variety of situations can be modelled, ranging through all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments with relevance to criticality situations. The paper summarises this evidence and shows how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and, in particular, shows how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Convergence studies of deterministic methods for LWR explicit reflector methodology
International Nuclear Information System (INIS)
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-01-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
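A minimal deterministic example in the spirit of the book's epidemiology chapters: forward-Euler integration of the SIR model. The rates, population, and step size are illustrative assumptions:

```python
def sir_euler(beta, gamma, s0, i0, r0, dt, steps):
    """Forward-Euler integration of the deterministic SIR model:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # infections this step
        new_rec = gamma * i * dt          # recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# Hypothetical outbreak: R0 = beta/gamma = 3 in a population of 1000.
hist = sir_euler(beta=0.3, gamma=0.1, s0=990, i0=10, r0=0, dt=0.1, steps=2000)
s_end, i_end, r_end = hist[-1]
```

The deterministic model conserves the total population exactly by construction; the book's stochastic counterpart (e.g. a Gillespie simulation of the same rates) fluctuates around this trajectory and can, unlike the ODE, go extinct early.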
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
International Nuclear Information System (INIS)
S. Goluoglu; C. Bentley; R. DeMeglio; M. Dunn; K. Norton; R. Pevey; I. Suslov; H.L. Dodds
1998-01-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step-followed-by-step, and step-followed-by-ramp perturbations, as well as columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, each linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
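The fuzzy-based evaluation can be sketched with triangular membership functions and centroid defuzzification. The linguistic terms, the 0-100 scale, and the aggregation below are assumptions for illustration, not the PEOPLES framework's actual rule base:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for an indicator's performance on a 0-100 scale.
TERMS = {"low": (0, 0, 50), "medium": (25, 50, 75), "high": (50, 100, 100)}

def fuzzy_resilience_index(linguistic_scores):
    """Centroid defuzzification over the pooled memberships of all descriptive
    assessments: indicators are rated with words, not crisp measurements."""
    num = den = 0.0
    for term in linguistic_scores:
        a, b, c = TERMS[term]
        for x in range(101):
            m = triangular(float(x), a, b, c)
            num += m * x
            den += m
    return num / den

# Three indicators rated descriptively yield one crisp community-level index.
index = fuzzy_resilience_index(["high", "high", "medium"])
```

The point of the fuzzy route is visible here: vague ratings still produce a single quantitative index, with the spread of the membership functions carrying the uncertainty.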
Deterministic methods for multi-control fuel loading optimization
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The graphs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
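The cross-correlation recovery step can be sketched as template matching with a normalized zero-mean correlation score. The random "scene" below stands in for the wide-field optical micrograph and the extracted patch for the small SEM field of view; sizes and the seed are arbitrary:

```python
import numpy as np

def find_patch(big, patch):
    """Locate `patch` inside `big` by maximizing the normalized zero-mean
    cross-correlation score over all candidate offsets."""
    ph, pw = patch.shape
    p = patch - patch.mean()
    p_norm = np.sqrt(np.sum(p * p))
    best, best_pos = -np.inf, None
    for i in range(big.shape[0] - ph + 1):
        for j in range(big.shape[1] - pw + 1):
            w = big[i:i + ph, j:j + pw]
            w = w - w.mean()
            score = np.sum(w * p) / (np.sqrt(np.sum(w * w)) * p_norm)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

rng = np.random.default_rng(1)
scene = rng.random((60, 60))              # stand-in for the wide-field image
true_offset = (17, 23)
template = scene[17:29, 23:35].copy()     # 12x12 "high-magnification" view
found = find_patch(scene, template)
```

The normalization makes the score insensitive to brightness and contrast differences between instruments, which is what allows an SEM image to be matched against an optical one despite their very different imaging physics.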
Deterministic methods to solve the integral transport equation in neutronics
International Nuclear Information System (INIS)
Warin, X.
1993-11-01
We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is above all used to compute 2D solutions in heterogeneous assemblies. Three kinds of methods are described: the collision probability method, the interface current method, and the current coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs
Transmission power control in WSNs : from deterministic to cognitive methods
Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.
2018-01-01
Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic methods and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method in multivariable designs, while the deterministic method is the gradient method, which exploits the sensitivity of the objective function. These two techniques have benefits and drawbacks. In this paper, the characteristics of these techniques are described, and a technique in which the two methods are used together is evaluated. The results of the comparison are then presented by applying each method to electromagnetic devices. (Author)
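A hybrid of the two approaches, using a GA for coarse global search and a finite-difference gradient step (the "sensitivity" side) for local refinement, can be sketched on a one-dimensional multimodal function. The objective and all GA settings are illustrative, not the paper's electromagnetic device problem:

```python
import math
import random

def objective(x):
    """Multimodal test function with global minimum 0 at x = 0 (illustrative)."""
    return x * x + 3.0 * math.sin(3.0 * x) ** 2

def ga_search(f, lo, hi, pop_size=100, gens=50, seed=11):
    """Coarse global search with a simple elitist genetic algorithm."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 5]             # truncation selection
        sigma = 0.5 * (1.0 - g / gens) + 0.05     # decaying mutation width
        pop = parents + [rng.choice(parents) + rng.gauss(0.0, sigma)
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=f)

def gradient_refine(f, x, lr=0.01, iters=300, h=1e-6):
    """Local polish by gradient descent with a central finite difference,
    standing in for the sensitivity-based deterministic method."""
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

x_ga = ga_search(objective, -5.0, 5.0)      # stochastic, global
x_star = gradient_refine(objective, x_ga)   # deterministic, local
```

The division of labor mirrors the paper's premise: the GA is robust to local minima but slow to converge precisely, while the gradient method converges fast but only from a good starting point.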
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Directory of Open Access Journals (Sweden)
Valentina V. Tarasova
2016-12-01
Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factorial economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
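One concrete numerical realization of "integro-differentiation of non-integral order" is the Grünwald-Letnikov scheme, sketched below and checked against the known Riemann-Liouville derivative of t^2. This is a generic numerical illustration, not the authors' formulas:

```python
import math

def gl_fractional_derivative(f, alpha, t, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t:
    D^alpha f(t) ~ h^(-alpha) * sum_k w_k * f(t - k*h), where the weights
    w_k = (-1)^k * C(alpha, k) follow the recurrence w_k = w_{k-1}*(k-1-alpha)/k."""
    n = int(t / h)
    w = 1.0
    total = w * f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)
    return total / h ** alpha

alpha = 0.5
# For f(t) = t^2, the exact Riemann-Liouville derivative of order alpha is
# Gamma(3) / Gamma(3 - alpha) * t^(2 - alpha); at t = 1 this is 2 / Gamma(2.5).
approx = gl_fractional_derivative(lambda t: t * t, alpha, t=1.0, h=1e-4)
exact = math.gamma(3.0) / math.gamma(3.0 - alpha)
```

At alpha = 1 the same weights collapse to the ordinary first difference, which is the sense in which the fractional operator generalizes the first-order differentiation used in classical factor analysis.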
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noise. The surrogate method uses algorithmic complexity as a discriminating statistic to decide whether noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small-amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis).
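The surrogate data logic can be sketched end to end with shuffled surrogates and a simple discriminating statistic; here a one-step prediction error stands in for algorithmic complexity, and the noise-free logistic series stands in for the GW signal, so this is only an illustration of the workflow, not the paper's method:

```python
import numpy as np

def prediction_error(series):
    """RMS residual of the least-squares one-step model
    x[n+1] ~ a + b*x[n] + c*x[n]^2: tiny for deterministic quadratic
    dynamics, large once the temporal ordering is destroyed."""
    x, y = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(x), x, x * x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

rng = np.random.default_rng(0)
x = np.empty(1000)
x[0] = 0.3
for n in range(999):
    x[n + 1] = 3.9 * x[n] * (1.0 - x[n])    # deterministic "signal"

stat_data = prediction_error(x)
# Shuffled surrogates keep the amplitude distribution but destroy the dynamics.
stats_surr = [prediction_error(rng.permutation(x)) for _ in range(19)]
```

If the statistic of the data falls outside the distribution of the surrogate statistics, the null hypothesis of "no deterministic dynamics" is rejected; with 19 surrogates this corresponds to a nominal 5% one-sided test.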
Bearing-only SLAM: comparison between probabilistic and deterministic methods
Joly , Cyril; Rives , Patrick
2008-01-01
This work deals with the problem of simultaneous localization and mapping (SLAM). Classical methods for solving the SLAM problem are based on the Extended Kalman Filter (EKF-SLAM) or the particle filter (FastSLAM). These algorithms allow on-line solving but can be inconsistent. This report studies not these on-line algorithms but global ones. Global approaches need all measurements from the initial step to the final step in order to compute the trajectory of the robot and...
Non-Deterministic, Non-Traditional Methods (NDNTM)
Cruse, Thomas A.; Chamis, Christos C. (Technical Monitor)
2001-01-01
The review effort identified research opportunities related to the use of nondeterministic, nontraditional methods to support aerospace design. The scope of the study was restricted to structural design rather than other areas such as control system design. Thus, the observations and conclusions are limited by that scope. The review identified a number of key results. The results include the potential for NASA/AF collaboration in the area of a design environment for advanced space access vehicles. The following key points set the context and delineate the key results. The Principal Investigator's (PI's) context for this study derived from participation as a Panel Member in the Air Force Scientific Advisory Board (AF/SAB) Summer Study Panel on 'Whither Hypersonics?' A key message from the Summer Study effort was a perceived need for a national program for a space access vehicle whose operating characteristics of cost, availability, deployability, and reliability most closely match the NASA 3rd Generation Reusable Launch Vehicle (RLV). The Panel urged the AF to make a significant joint commitment to such a program just as soon as the AF defined specific requirements for space access consistent with the AF Aerospace Vision 2020. The review brought home a concurrent need for a national vehicle design environment. Engineering design system technology is at a time point from which a revolution as significant as that brought about by the finite element method is possible, this one focusing on information integration on a scale that far surpasses current design environments. The study therefore fully supported the concept, if not some of the details of the Intelligent Synthesis Environment (ISE). It became abundantly clear during this study that the government (AF, NASA) and industry are not moving in the same direction in this regard, in fact each is moving in its own direction. NASA/ISE is not yet in an effective leadership position in this regard. However, NASA does
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when submitted to dynamic loads. The deterministic optimization problem considers the plate submitted to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The correlation between the two problems is made by a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one with a power spectral density function compatible with the time-varying load shows very good results.
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
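The derivative-based propagation idea (DUA) can be sketched on a toy model. Assumptions: the model below is a hypothetical stand-in for the borehole-flow problem, with derivatives obtained by finite differences rather than by the GRESS/ADGEN computer-calculus compilers; first-order propagation of parameter variances is compared against plain sampling.

```python
import numpy as np

# Toy stand-in for the borehole flow model (NOT the actual GRESS/ADGEN code):
# flow rate q = k * dp / log(r2 / r1), with uncertain k and dp.
def model(k, dp, r1=0.1, r2=100.0):
    return k * dp / np.log(r2 / r1)

# --- Deterministic uncertainty analysis: a handful of model runs suffice ---
k0, dp0 = 5.0, 2.0
sk, sdp = 0.5, 0.3                 # parameter standard deviations
eps = 1e-6
q0 = model(k0, dp0)
dq_dk = (model(k0 + eps, dp0) - q0) / eps      # derivative information
dq_ddp = (model(k0, dp0 + eps) - q0) / eps
var_dua = (dq_dk * sk) ** 2 + (dq_ddp * sdp) ** 2   # first-order variance

# --- Statistical (sampling) approach for comparison: many model runs ---
rng = np.random.default_rng(0)
samples = model(rng.normal(k0, sk, 100_000), rng.normal(dp0, sdp, 100_000))
var_mc = samples.var()
```

For this nearly linear model the two variances agree to within about one percent, with the deterministic route requiring orders of magnitude fewer model executions.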
2D deterministic radiation transport with the discontinuous finite element method
International Nuclear Information System (INIS)
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
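The random-walk (stochastic) model can be sketched as follows. Assumptions: a 2-D plane-channel analogue of the cylindrical geometry, illustrative parameters not fitted to any FIA data, and reflecting walls; the sketch only shows that shear plus transverse diffusion produces axial dispersion far exceeding the molecular diffusivity.

```python
import numpy as np

# Random-walk simulation of dispersion in plane Poiseuille flow
# (simplified 2-D analogue of the cylindrical-channel model).
rng = np.random.default_rng(0)
n, dt, steps = 2000, 0.01, 5000
D, U, h = 0.01, 1.0, 1.0           # molecular diffusivity, mean speed, half-width

x = np.zeros(n)                    # axial particle positions
y = rng.uniform(-h, h, n)          # transverse particle positions
for _ in range(steps):
    u = 1.5 * U * (1.0 - (y / h) ** 2)          # parabolic velocity profile
    x += u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
    y += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
    y = np.where(y > h, 2.0 * h - y, y)         # reflect at the walls
    y = np.where(y < -h, -2.0 * h - y, y)

t = steps * dt
K_eff = x.var() / (2.0 * t)        # effective axial dispersion coefficient
```

Fitting a model like this to a measured FIA response (here via least squares or a genetic algorithm, as in the abstract) is what yields the diffusion coefficient.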
Frequency domain fatigue damage estimation methods suitable for deterministic load spectra
Energy Technology Data Exchange (ETDEWEB)
Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)
2000-07-01
The evaluation of fatigue damage due to load spectra, directly in the frequency domain, is complex but offers significant computation time savings. Various formulae have been suggested, but they usually relate to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
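The Dirlik method mentioned above estimates the rainflow amplitude density directly from the spectral moments of the stress PSD. A sketch, assuming a flat band-limited PSD as a hypothetical load spectrum (Dirlik's empirical coefficients are standard; the PSD and units are illustrative):

```python
import numpy as np

def spectral_moments(f, G, orders=(0, 1, 2, 4)):
    """Moments m_n = integral of f^n * G(f) df of a one-sided stress PSD."""
    return [np.trapz(f ** n * G, f) for n in orders]

def dirlik_pdf(Z, m0, m1, m2, m4):
    """Dirlik's empirical rainflow-amplitude density in the normalized
    amplitude Z = S / (2*sqrt(m0))."""
    Xm = (m1 / m0) * np.sqrt(m2 / m4)
    gam = m2 / np.sqrt(m0 * m4)                  # irregularity factor
    D1 = 2.0 * (Xm - gam ** 2) / (1.0 + gam ** 2)
    R = (gam - Xm - D1 ** 2) / (1.0 - gam - D1 + D1 ** 2)
    D2 = (1.0 - gam - D1 + D1 ** 2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (gam - D3 - D2 * R) / D1
    return (D1 / Q * np.exp(-Z / Q)
            + D2 * Z / R ** 2 * np.exp(-Z ** 2 / (2.0 * R ** 2))
            + D3 * Z * np.exp(-Z ** 2 / 2.0))

# Hypothetical load spectrum: flat band-limited PSD between 5 and 15 Hz.
f = np.linspace(0.01, 30.0, 3000)
G = np.where((f > 5.0) & (f < 15.0), 1.0, 0.0)
m0, m1, m2, m4 = spectral_moments(f, G)

Z = np.linspace(0.0, 10.0, 10001)
p = dirlik_pdf(Z, m0, m1, m2, m4)                # integrates to one over Z
```

Combining this density with the peak rate sqrt(m4/m2) and an S-N curve gives the expected damage rate without any time-domain cycle counting.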
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes developed for power reactors, due to the complex geometry, high heterogeneity and large leakage of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both k-eff and flux distributions if the appropriate treatments (such as the ECCO option) are applied
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines major features of the Monte Carlo method by citing various applications of the method and techniques used for Monte Carlo codes. Major areas of its application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by VIM code, and calculation for shielding. Major techniques used for Monte Carlo codes include the random walk method, geometric expression method (combinatorial geometry, 1, 2, 4-th degree surface and lattice geometry), nuclear data expression, evaluation method (track length, collision, analog (absorption), surface crossing, point), and dispersion reduction (Russian roulette, splitting, exponential transform, importance sampling, corrected sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distribution and systems of complex geometry can be simulated accurately, 2) physical quantities such as neutron flux in a place, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
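The random walk method at the core of the Monte Carlo codes described above can be sketched in a few lines. Assumptions: a mono-energetic, purely absorbing 1-D slab (real codes track scattering, energy and 3-D geometry), with the uncollided transmission exp(-Σ_t x) as the analytic check:

```python
import numpy as np

# Analog random-walk Monte Carlo for transmission through a purely
# absorbing 1-D slab (illustrative benchmark, not a production code).
def transmission_mc(sigma_t, thickness, n_particles, seed=0):
    rng = np.random.default_rng(seed)
    # Free flight lengths are exponentially distributed with mean 1/sigma_t;
    # a particle escapes if its first flight exceeds the slab thickness.
    flights = rng.exponential(1.0 / sigma_t, n_particles)
    return np.mean(flights > thickness)

sigma_t, thickness = 1.0, 2.0
estimate = transmission_mc(sigma_t, thickness, 200_000)
analytic = np.exp(-sigma_t * thickness)    # uncollided transmission
```

Variance-reduction devices such as Russian roulette, splitting or the exponential transform (listed in the abstract) modify how these flights and weights are sampled without changing the expected answer.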
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
International Nuclear Information System (INIS)
Norris, Edward T.; Liu, Xin; Hsieh, Jiang
2015-01-01
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer.
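A minimal 1-D, one-group discrete ordinates (S_N) solver illustrates the method Denovo applies in 3-D. Assumptions: diamond differencing, Gauss-Legendre quadrature, isotropic scattering and a uniform source in a homogeneous slab; the center flux is checked against the infinite-medium limit q/Σ_a.

```python
import numpy as np

def sn_slab(sig_t=1.0, sig_s=0.5, q=1.0, width=20.0, n_cells=200, n_ang=8):
    """Scalar flux in a homogeneous slab with a uniform isotropic source
    and vacuum boundaries, via diamond differencing + source iteration."""
    dx = width / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # S_N quadrature set
    phi = np.zeros(n_cells)
    for _ in range(200):                             # source iteration
        src = 0.5 * (sig_s * phi + q)                # isotropic emission
        phi_new = np.zeros(n_cells)
        for m in range(n_ang):                       # transport sweeps
            am = abs(mu[m])
            order = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            psi = 0.0                                # vacuum inflow
            for i in order:
                psi_out = (src[i] * dx + (am - 0.5 * sig_t * dx) * psi) \
                          / (am + 0.5 * sig_t * dx)  # diamond difference
                phi_new[i] += w[m] * 0.5 * (psi + psi_out)
                psi = psi_out
        if np.abs(phi_new - phi).max() < 1e-10:
            return phi_new
        phi = phi_new
    return phi

phi = sn_slab()
center_flux = phi[len(phi) // 2]   # deep inside the slab: approaches q/sig_a
```

For Σ_t = 1, Σ_s = 0.5 and q = 1, the deep-interior flux approaches q/Σ_a = 2, which a Monte Carlo benchmark would reproduce only statistically.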
Theory and application of deterministic multidimensional pointwise energy lattice physics methods
International Nuclear Information System (INIS)
Zerkle, M.L.
1999-01-01
The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem
Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives
DEFF Research Database (Denmark)
Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle
2002-01-01
In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented, based on parameter identification and measurements in the focus loops of 12 actual CD drives that differ by having worst-case behaviors with respect to various properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria.
Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.
2017-11-01
We discuss world-class Russian achievements in the theory of radiation transfer, taking its polarization in natural media into account, and the current scientific potential developing in Russia, which provides the methodological basis for theoretical and computational research of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, where deterministic and stochastic methods can be combined.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Energy Technology Data Exchange (ETDEWEB)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study about its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in depth. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
International Nuclear Information System (INIS)
Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.
2008-01-01
In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point kernel method implemented in Visiplan or Microshield. These calculations are very fast, but they are not very precise for complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side are Monte Carlo methods. This type of calculation is quite precise in comparison with reality, but the calculation time is usually very large. Point-kernel programs have one disadvantage: there is usually an option to choose the buildup factor (BUF) for only one material, even in multilayer stratified slab shielding problems where the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen because it shows lower deviations than Taylor fitting. (authors)
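The point kernel scheme with a buildup factor can be sketched as follows. Assumptions: all numbers and the linear buildup fit are illustrative, not evaluated nuclear data; the multilayer approximation shown (outermost material's buildup at the total optical thickness) is just one of the simple recipes discussed in the literature.

```python
import math

# Point-kernel dose estimate for a point source behind stratified slab
# shielding: D = S * B * exp(-sum(mu*t)) / (4*pi*r^2).
def point_kernel_dose(source, mfp_layers, r, buildup):
    """source: photon emission rate; mfp_layers: optical thickness
    (mu*t, in mean free paths) of each slab layer; r: source-detector
    distance; buildup: buildup factor assigned to the whole stack."""
    return source * buildup * math.exp(-sum(mfp_layers)) / (4.0 * math.pi * r ** 2)

# Simple multilayer approximation: evaluate the buildup factor of the
# outermost material at the total number of mean free paths.
def b_outer(mfp):                      # hypothetical fit B(mfp) = 1 + 0.4*mfp
    return 1.0 + 0.4 * mfp

mfp_layers = [2.0, 1.0]                # e.g. water layer, then lead layer
d_uncollided = point_kernel_dose(1.0e9, mfp_layers, 100.0, buildup=1.0)
d_total = point_kernel_dose(1.0e9, mfp_layers, 100.0,
                            buildup=b_outer(sum(mfp_layers)))
```

The paper's comparison amounts to testing such buildup recipes against MCNP, which simulates the scattered contribution explicitly instead of folding it into B.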
Application of deterministic and probabilistic methods in replacement of nuclear systems
International Nuclear Information System (INIS)
Vianna Filho, Alfredo Marques
2007-01-01
The economic equipment replacement problem is one of the oldest questions in Production Engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc. New equipment, however, requires a higher initial investment and thus a higher opportunity cost, and imposes special training of the labor force. On the other hand, old equipment represents the opposite: lower performance, lower reliability and especially higher maintenance costs, but in contrast lower financial, insurance, and opportunity costs. The weighting of all these costs can be made with the various methods presented. The aim of this paper is to discuss deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems will be examined: replacement imposed by wear and replacement imposed by failures. In order to solve the problem of nuclear system replacement imposed by wear, deterministic methods are discussed. In order to solve the problem of nuclear system replacement imposed by failures, probabilistic methods are discussed. (author)
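A classic deterministic treatment of replacement under wear picks the service life that minimizes the average annual cost (capital depreciation plus rising maintenance). A sketch with purely illustrative numbers (the paper's actual cost data are not reproduced):

```python
# Deterministic replacement under wear: find the keep period that
# minimizes the average annual cost. All figures are illustrative.
def average_annual_cost(purchase, salvage, maintenance, years):
    """salvage[n-1]: resale value after n years;
    maintenance[n]: maintenance cost in year n+1 (wear drives it up)."""
    capital = purchase - salvage[years - 1]
    upkeep = sum(maintenance[:years])
    return (capital + upkeep) / years

purchase = 100.0
salvage = [60.0, 40.0, 28.0, 20.0, 15.0, 12.0]
maintenance = [5.0, 8.0, 13.0, 21.0, 34.0, 55.0]

costs = {n: average_annual_cost(purchase, salvage, maintenance, n)
         for n in range(1, 7)}
best_life = min(costs, key=costs.get)   # the economic service life
```

The failure-driven case replaces the deterministic maintenance schedule with a failure-time distribution, which is where the probabilistic methods of the paper come in.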
International Nuclear Information System (INIS)
Maerker, R.E.; Worley, B.A.
1989-01-01
Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present procedure. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Optimal power flow: a bibliographic survey I. Formulations and deterministic methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Developments based on stochastic and determinist methods for studying complex nuclear systems
International Nuclear Information System (INIS)
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
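The biasing idea can be illustrated on a deep-penetration toy problem: stretch the sampling distribution so that more particles reach the far side, and carry statistical weights so the estimator stays unbiased. Assumptions: a single exponential flight through a purely absorbing shield; TRIPOLI-4/ERANOS importance maps are far more elaborate, being space-, angle- and energy-dependent.

```python
import numpy as np

# Analog vs. importance-biased Monte Carlo for a deep-penetration problem:
# probability that an exponential flight exceeds a 10-mean-free-path shield.
sigma, thickness = 1.0, 10.0       # exact answer: exp(-10) ~ 4.5e-5
n = 100_000
rng = np.random.default_rng(0)

# Analog sampling: almost no particle reaches the far side.
x = rng.exponential(1.0 / sigma, n)
t_analog = np.mean(x > thickness)

# Biased sampling: draw flights from a stretched distribution (rate
# sigma_b < sigma) and weight each sample by the likelihood ratio.
sigma_b = 0.1
xb = rng.exponential(1.0 / sigma_b, n)
w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * xb)
t_biased = np.mean(w * (xb > thickness))
```

With the same number of histories, the biased estimator resolves exp(-10) to a few percent, while the analog estimator scores only a handful of hits; an importance map generalizes the choice of sigma_b over space and energy.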
A plateau–valley separation method for textured surfaces with a deterministic pattern
DEFF Research Database (Denmark)
Godi, Alessandro; Kühle, Anders; De Chiffre, Leonardo
2014-01-01
The effective characterization of textured surfaces presenting a deterministic pattern of lubricant reservoirs is an issue with which many researchers are nowadays struggling. Existing standards are not suitable for the characterization of such surfaces, at times providing values without physical meaning. A new method based on the separation between the plateau and valley regions is hereby presented, allowing independent functional analyses of the detected features. The determination of a proper threshold between plateaus and valleys is the first step of a procedure resulting in an efficient...
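The separation step can be sketched on a synthetic 1-D profile. Assumptions: the threshold criterion below (a fixed offset under the median plateau level) is a hypothetical stand-in for the paper's actual procedure, and the profile is simulated, not measured.

```python
import numpy as np

# Synthetic textured profile: a rough plateau interrupted by a
# deterministic pattern of deep dimples (lubricant reservoirs).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 2000)
profile = 0.05 * rng.standard_normal(x.size)      # plateau roughness
dimples = np.sin(2.0 * np.pi * x) > 0.95          # deterministic pattern
profile[dimples] -= 2.0                           # carve the reservoirs

# Plateau-valley separation: hypothetical threshold criterion.
threshold = np.median(profile) - 0.5
plateau = profile[profile >= threshold]
valleys = profile[profile < threshold]

# Independent functional analyses of the two regions:
plateau_rms = plateau.std()                       # plateau roughness only
valley_area_fraction = valleys.size / profile.size
```

Separating the regions first is what keeps the plateau roughness parameter from being dominated by the deep valleys, the failure mode of the standard parameters mentioned above.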
Biomedical applications of two- and three-dimensional deterministic radiation transport methods
International Nuclear Information System (INIS)
Nigg, D.W.
1992-01-01
Multidimensional deterministic radiation transport methods are routinely used in support of the Boron Neutron Capture Therapy (BNCT) Program at the Idaho National Engineering Laboratory (INEL). Typical applications of two-dimensional discrete-ordinates methods include neutron filter design, as well as phantom dosimetry. The epithermal-neutron filter for BNCT that is currently available at the Brookhaven Medical Research Reactor (BMRR) was designed using such methods. Good agreement between calculated and measured neutron fluxes was observed for this filter. Three-dimensional discrete-ordinates calculations are used routinely for dose-distribution calculations in three-dimensional phantoms placed in the BMRR beam, as well as for treatment planning verification for live canine subjects. Again, good agreement between calculated and measured neutron fluxes and dose levels is obtained
International Nuclear Information System (INIS)
Deco, Gustavo; Marti, Daniel
2007-01-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
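Under the classical unimodal Gaussian closure that the work above extends, the moments method turns a stochastic differential equation into deterministic ODEs for the mean m and variance v. A sketch for the toy bistable system dx = (x - x^3)dt + sigma dW (an illustrative stand-in, not the authors' neural population model): for a Gaussian density, E[x^3] = m^3 + 3mv and E[x^4] = m^4 + 6m^2 v + 3v^2, which gives dm/dt = m - m^3 - 3mv and dv/dt = 2v(1 - 3m^2 - 3v) + sigma^2.

```python
def moment_flow(m0, v0, sigma, dt=1e-3, t_end=5.0):
    """Euler-integrate the Gaussian-closure moment equations for the
    bistable SDE dx = (x - x^3) dt + sigma dW.  Returns the mean and
    variance at t_end; a single deterministic integration replaces
    many stochastic trials (within the closure's validity)."""
    m, v = m0, v0
    for _ in range(int(t_end / dt)):
        dm = m - (m ** 3 + 3.0 * m * v)                      # E[f(x)]
        dv = 2.0 * (v - 3.0 * m ** 2 * v - 3.0 * v ** 2) + sigma ** 2
        m += dt * dm
        v += dt * dv
    return m, v
```

Started at m = 0.5 with weak noise, the flow settles near the attractor at m = 1 with a small stationary variance; the unimodal closure by construction cannot track a trajectory split between the two wells, which is the regime the bimodal extension addresses.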
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S; Henry, Roland G
2013-01-01
sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis but given the small sample and number of stimulation points a generalization about our results should be given with caution.
Indeed our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and
Stephenson, C L; Harris, C A
2016-09-01
Glyphosate is a herbicide used to control broad-leaved weeds. Some uses of glyphosate in crop production can lead to residues of the active substance and related metabolites in food. This paper uses data on residue levels, processing information and consumption patterns, to assess theoretical lifetime dietary exposure to glyphosate. Initial estimates were made assuming exposure to the highest permitted residue levels in foods. These intakes were then refined using median residue levels from trials, processing information, and monitoring data to achieve a more realistic estimate of exposure. Estimates were made using deterministic and probabilistic methods. Exposures were compared to the acceptable daily intake (ADI), the amount of a substance that can be consumed daily without an appreciable health risk. Refined deterministic intakes for all consumers were at or below 2.1% of the ADI. Variations were due to cultural differences in consumption patterns and the level of aggregation of the dietary information in calculation models, which allows refinements for processing. Probabilistic exposure estimates ranged from 0.03% to 0.90% of the ADI, depending on whether optimistic or pessimistic assumptions were made in the calculations. Additional refinements would be possible if further data on processing and from residues monitoring programmes were available. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
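The deterministic/probabilistic contrast described above can be sketched in a few lines: the deterministic estimate multiplies worst-case residue levels by consumption, while the probabilistic estimate samples from the residue distribution and reports a high percentile. All residue and intake numbers below are invented for illustration, not glyphosate data from the study.

```python
import random

def deterministic_exposure(residues_mg_per_kg, intakes_kg_per_day, bodyweight_kg):
    """Worst-case chronic exposure (mg per kg bodyweight per day):
    one fixed residue level per commodity times daily intake."""
    return sum(r * c for r, c in zip(residues_mg_per_kg, intakes_kg_per_day)) / bodyweight_kg

def probabilistic_exposure(residue_samples, intakes_kg_per_day, bodyweight_kg,
                           n=10000, seed=1):
    """Monte Carlo refinement: draw one residue level per commodity per
    simulated day from the observed residue samples, then report the
    97.5th percentile of the simulated exposures."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n):
        total = sum(rng.choice(res) * c
                    for res, c in zip(residue_samples, intakes_kg_per_day))
        sims.append(total / bodyweight_kg)
    sims.sort()
    return sims[int(0.975 * n)]
```

Dividing either estimate by the ADI gives the percent-of-ADI figures quoted in the abstract; the probabilistic estimate is never larger than the deterministic worst case built from the same data.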
International Nuclear Information System (INIS)
Cacuci, D.G.
1984-07-01
This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
International Nuclear Information System (INIS)
Maheri, Alireza
2014-01-01
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
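The paper's central check, sizing a system deterministically with a safety factor and then auditing its actual reliability by Monte Carlo, can be reduced to a toy calculation. The uniform resource model and all numbers below are illustrative, not the wind/PV/battery models of the study.

```python
import random

def unmet_load_fraction(capacity_kw, demand_kw, n_days=5000, seed=7):
    """Monte Carlo estimate of the fraction of days on which uncertain
    renewable output fails to cover demand.  Daily output is modeled
    as capacity times a uniform resource factor in [0, 1], a toy
    stand-in for wind/solar variability."""
    rng = random.Random(seed)
    shortfalls = sum(1 for _ in range(n_days)
                     if capacity_kw * rng.random() < demand_kw)
    return shortfalls / n_days
```

A deterministic design with safety factor s installs s times the demand as capacity; running the Monte Carlo audit over a range of s reveals how loosely the safety factor actually controls reliability, which is the paper's point about wind-PV-battery sizing.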
A deterministic alternative to the full configuration interaction quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta [University of California, Berkeley, Berkeley, California 94720 (United States)
2016-07-28
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr{sub 2} molecule. We demonstrate for systems like Cr{sub 2} that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method allows efficient calculation of excited state energies, which we illustrate with benchmark results for the excited states of C{sub 2}.
International Nuclear Information System (INIS)
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall mounted level instrumentation in case of significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into the old and new sections, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling time (60 days, 1 year and 5 years). The new section represents the FAs with the cooling time of 10 years. The time dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on imported isotopic activities. The time dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and another uses the Monte Carlo code MCNP6.1.1b and the ADVANTG 3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and WWINP file) for the MCNP fixed-source calculation using continuous energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Directory of Open Access Journals (Sweden)
Emmanouil Styvaktakis
2007-01-01
This paper presents the two main types of classification methods for power quality disturbances based on underlying causes: deterministic classification, with an expert system as an example, and statistical classification, with support vector machines (a novel method) as an example. An expert system is suitable when one has a limited amount of data and sufficient power system expert knowledge; however, its application requires a set of threshold values. Statistical methods are suitable when a large amount of data is available for training. Two issues important to the effectiveness of a classifier, data segmentation and feature extraction, are discussed. Segmentation of a recorded data sequence is a preprocessing step that partitions the data into segments, each representing a duration containing either an event or a transition between two events. Extraction of features is applied to each segment individually. Some useful features and their effectiveness are then discussed. Experimental results are included to demonstrate the effectiveness of both systems. Finally, conclusions are given together with a discussion of some future research directions.
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
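The deterministic search these selected-CI methods perform can be caricatured by its core selection rule: a candidate determinant enters the variational space when its coupling to the current wave function is large. A toy sketch of that rule on a Hamiltonian stored as a sparse dictionary (the data layout and threshold test are illustrative simplifications of the ASCI/heat-bath criteria, not the production algorithms):

```python
def select_determinants(coeffs, hamiltonian, eps):
    """Heat-bath-style selection sketch.

    coeffs:      {determinant index: current CI coefficient}
    hamiltonian: {(i, j): H_ij} for the nonzero off-diagonal couplings
    eps:         selection threshold

    A connected determinant j is added to the space when
    |H_ji * c_i| > eps for some determinant i already in coeffs.
    """
    selected = set(coeffs)
    for (i, j), h_ij in hamiltonian.items():
        for a, b in ((i, j), (j, i)):        # couplings act both ways
            if a in coeffs and b not in selected and abs(h_ij * coeffs[a]) > eps:
                selected.add(b)
    return selected
```

In the real methods this selection alternates with diagonalization in the selected space (and optionally a perturbative correction), growing the space until the energy is converged.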
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
International Nuclear Information System (INIS)
Inoue, Jun-ichi
2010-01-01
In terms of the stochastic process of quantum-mechanical version of Markov chain Monte Carlo method (the MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d(= ∞)-dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by making use of computer simulations for finite size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (the DRT), we derive the zero-temperature flow equation of image restoration measure showing some 'non-monotonic' behaviour in its time evolution.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
International Nuclear Information System (INIS)
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-01-01
Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great faculty in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (S{sub n}) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
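The sweep-plus-source-iteration structure that Sweep3D accelerates on the GPU can be shown in its simplest serial form: a one-group, one-dimensional S2 slab solver with diamond differencing and vacuum boundaries. This is a pedagogical reduction of the 3D kernel, not the Sweep3D code itself; all problem parameters below are illustrative.

```python
def sweep_1d(nx, dx, sigma_t, sigma_s, q, n_iter=200):
    """Source iteration with S2 discrete ordinates (mu = +/- 1/sqrt(3),
    weight 1 each) on a uniform 1D slab with vacuum boundaries.
    Diamond differencing supplies the cell-edge closure.  Returns the
    scalar flux per cell."""
    mu = 0.5773502691896258          # |mu| for the two-point Gauss set
    phi = [0.0] * nx
    for _ in range(n_iter):
        # Isotropic source per unit direction cosine (factor 1/2 in 1D).
        src = [0.5 * (sigma_s * p + q) for p in phi]
        phi_new = [0.0] * nx
        for direction in (+1, -1):
            psi_in = 0.0             # vacuum boundary: no incoming flux
            cells = range(nx) if direction > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # Diamond-difference cell balance for angular flux.
                psi_cell = (src[i] + 2.0 * mu / dx * psi_in) / (sigma_t + 2.0 * mu / dx)
                psi_in = 2.0 * psi_cell - psi_in   # outgoing edge flux
                phi_new[i] += psi_cell             # quadrature weight 1
        phi = phi_new
    return phi
```

Deep inside a thick slab the flux approaches the infinite-medium value q/(sigma_t - sigma_s); the GPU version parallelizes exactly these sweeps across angles and spatial diagonals, which is why the data dependence along each sweep direction dominates the achievable speedup.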
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity of the Constantine region through a statistical treatment of the seismicity catalog covering 1357 to 2014, which contains 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site and finally (v) hazard mapping for a region. In this study, the procedure of the earthquake hazard evaluation recently developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
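The Cornell-style hazard integral at the heart of PSHA can be sketched for a single source at fixed distance: combine the source's activity rate with a truncated Gutenberg-Richter magnitude distribution and an attenuation law, then wrap the annual rate in a Poisson exceedance probability. The attenuation coefficients below are invented for illustration; they are not a published ground-motion prediction equation, and this is not the Kijko-Sellevoll estimator used in the paper.

```python
import math

def annual_exceedance_rate(activity_rate, b, m_min, m_max, dist_km, pga_threshold_g):
    """nu(PGA > a) = activity_rate * P(magnitude large enough to exceed a).

    Magnitudes follow a truncated exponential (Gutenberg-Richter with
    b-value `b`); the toy attenuation law is
    ln PGA = -3.5 + 0.9*M - 1.2*ln(R + 10)   (illustrative only)."""
    beta = b * math.log(10.0)

    def magnitude_cdf(m):
        return (1.0 - math.exp(-beta * (m - m_min))) / \
               (1.0 - math.exp(-beta * (m_max - m_min)))

    # Smallest magnitude that exceeds the threshold at this distance.
    m_star = (math.log(pga_threshold_g) + 3.5 + 1.2 * math.log(dist_km + 10.0)) / 0.9
    if m_star >= m_max:
        return 0.0
    m_star = max(m_star, m_min)
    return activity_rate * (1.0 - magnitude_cdf(m_star))

def prob_exceedance(rate_per_year, years=50):
    """Poisson probability of at least one exceedance in `years`."""
    return 1.0 - math.exp(-rate_per_year * years)
```

A full PSHA sums such rates over all seismotectonic zones (here, the five proposed zones) and over distance, and repeats the calculation on a grid of sites to produce the hazard map.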
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Energy Technology Data Exchange (ETDEWEB)
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Liu, Yonghe; Feng, Jinming; Liu, Xiu; Zhao, Yadi
2017-12-01
Statistical downscaling (SD) is a method that acquires the local information required for hydrological impact assessment from large-scale atmospheric variables. Very few statistical and deterministic downscaling models for daily precipitation have been developed for local sites influenced by the East Asian monsoon. In this study, SD models were constructed by selecting the best predictors and using generalized linear models (GLMs) for Feixian, a site in the Yishu River Basin and Shandong Province. By calculating and mapping Spearman rank correlation coefficients between the gridded standardized values of five large-scale variables and daily observed precipitation, different cyclonic circulation patterns were found for monsoonal precipitation in summer (June-September) and winter (November-December and January-March); the values of the gridded boxes with the highest absolute correlations with observed precipitation were selected as predictors. Data for predictors and predictands covered the period 1979-2015, and different calibration and validation periods were used when fitting and validating the models. Meanwhile, the bootstrap method was also used to fit the GLM. All these thorough validations indicated that the models were robust and not sensitive to different samples or different periods. Pearson's correlations between downscaled and observed precipitation (logarithmically transformed) on a daily scale reached 0.54-0.57 in summer and 0.56-0.61 in winter, and the Nash-Sutcliffe efficiency between downscaled and observed precipitation reached 0.1 in summer and 0.41 in winter. The downscaled precipitation partially reflected exact variations in winter and main trends in summer for total interannual precipitation. For the number of wet days, both winter and summer models were able to reflect interannual variations. Other comparisons were also made in this study. These results demonstrated that when downscaling, it is appropriate to combine a correlation
International Nuclear Information System (INIS)
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of the complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information with which to study and understand geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.
Energy Technology Data Exchange (ETDEWEB)
Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)
2006-12-18
A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.
International Nuclear Information System (INIS)
Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung
2017-01-01
In this study, a hybrid Monte Carlo/deterministic method is presented for radiation transport analysis of global systems. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of mesh tallies. To compensate for this space dependency, we modified a module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with results from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. The hybrid Monte Carlo/deterministic method should prove useful for radiation transport analysis of global transport problems.
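The intent of FW-CADIS-style weight windows can be caricatured in one function: weight targets are scaled with the deterministic forward flux, so low-flux regions get low targets (more splitting, more particles) and the relative statistical error flattens across the mesh. This is only a cartoon of the idea; ADVANTG actually derives the windows from a Denovo adjoint solve with the forward-flux-weighted adjoint source.

```python
def fw_cadis_window_centers(forward_flux, w_mid=1.0):
    """Toy FW-CADIS weight-window centers for a mesh.

    forward_flux: deterministic forward flux estimate per mesh cell
    w_mid:        target weight in the highest-flux cell

    Scaling targets with the forward flux pushes the Monte Carlo toward
    a roughly uniform relative error over the whole mesh tally, which
    is the goal of a global (FW-CADIS-like) variance reduction."""
    norm = max(forward_flux)
    return [w_mid * phi / norm for phi in forward_flux]
```

The modification described in the abstract addresses the residual non-uniformity of this scheme at the mesh periphery, where even flux-scaled windows leave too few scoring particles.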
International Nuclear Information System (INIS)
Terry, W.K.; Gougar, H.D.; Ougouag, A.M.
2002-01-01
A new deterministic method has been developed for the neutronics analysis of a pebble-bed reactor (PBR). The method accounts for the flow of pebbles explicitly and couples the flow to the neutronics. The method allows modeling of once-through cycles as well as cycles in which pebbles are recirculated through the core an arbitrary number of times. This new work is distinguished from older methods by the systematically semi-analytical approach it takes. In particular, whereas older methods use the finite-difference approach (or an equivalent one) for the discretization and the solution of the burnup equation, the present work integrates the relevant differential equation analytically in discrete and complementary sub-domains of the reactor. Like some of the finite-difference codes, the new method obtains the asymptotic fuel-loading pattern directly, without modeling any intermediate loading pattern. This is a significant advantage for the design and optimization of the asymptotic fuel-loading pattern. The new method is capable of modeling directly both the once-through-then-out fuel cycle and the pebble recirculating fuel cycle. Although it currently includes a finite-difference neutronics solver, the new method has been implemented into a modular code that incorporates the framework for the future coupling to an efficient solver such as a nodal method and to modern cross section preparation capabilities. In its current state, the deterministic method presented here is capable of quick and efficient design and optimization calculations for the in-core PBR fuel cycle. The method can also be used as a practical 'scoping' tool. It could, for example, be applied to determine the potential of the PBR for resisting nuclear-weapons proliferation and to optimize proliferation-resistant features. However, the purpose of this paper is to show that the method itself is viable. Refinements to the code are under way, with the objective of producing a powerful reactor physics
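The semi-analytical treatment of the burnup equation described above can be caricatured for a single nuclide: within a sub-domain of (approximately) constant flux, dN/dt = -sigma_a * phi * N integrates exactly to an exponential, so each core transit multiplies the inventory by a fixed analytic factor, and recirculation chains these factors. The function and all numbers are a one-nuclide illustration, not the coupled pebble-flow/neutronics model of the paper.

```python
import math

def inventory_after_passes(phi, sigma_a, transit_time_s, n_passes):
    """Fraction of the initial fissile inventory remaining in a pebble
    after n_passes transits through a constant-flux core region.

    phi:            scalar flux (n/cm^2/s)
    sigma_a:        microscopic absorption cross section (cm^2)
    transit_time_s: time for one transit of the core (s)

    The analytic per-pass factor exp(-sigma_a * phi * T) replaces the
    finite-difference time-stepping of older burnup treatments."""
    per_pass = math.exp(-sigma_a * phi * transit_time_s)
    return per_pass ** n_passes
```

Chaining sub-domains with different fluxes multiplies the corresponding exponential factors, which is why the asymptotic loading pattern can be obtained directly instead of by stepping through intermediate patterns.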
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between two different approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as JRR-3M poses a challenge for the traditional neutronics calculational tools and schemes for power reactors, due to the characteristics of complex geometry, high heterogeneity, large leakage and the particular neutron spectrum of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenizations of few-group cross sections by DRAGON and the full core diffusion calculations by DONJON have been verified by comparing with detailed Monte Carlo simulations. In the second stage, burnup-dependent calculations at both the assembly level and the full core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both the assembly and core levels for the deterministic codes
International Nuclear Information System (INIS)
Jinaphanh, A.
2012-01-01
Monte Carlo criticality calculations allow one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high-burnup profiles, complete reactor cores, ...) may induce biased estimates of keff or of the reaction rates. In order to improve the robustness of iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo simulation: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has also been developed. It locates and suppresses the initialization transient in an output series, applied here to keff and to the Shannon entropy. It relies on modeling the stationary series as a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. The method can easily be extended to any output of an iterative Monte Carlo calculation. Methods developed in this thesis are tested on different test cases. (author)
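The convergence-detection idea can be sketched crudely: treat the stationary tail as an AR(1) process and scan for the first index after which the series looks stationary. The half-mean z-comparison below is a simplified stand-in for the thesis's Student-bridge test, and both function names are illustrative:

```python
import statistics

def ar1_coefficient(xs):
    """Lag-1 autocorrelation, used as a crude AR(1) coefficient estimate."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den if den else 0.0

def find_transient_end(xs, z_threshold=2.0):
    """Return the first index k such that xs[k:] looks stationary:
    the means of its two halves agree within z_threshold standard
    errors.  This is a simplified stand-in for the Student-bridge
    test described in the abstract, not the actual statistic."""
    for k in range(0, len(xs) - 20):
        tail = xs[k:]
        half = len(tail) // 2
        a, b = tail[:half], tail[half:]
        se_a = statistics.stdev(a) / len(a) ** 0.5
        se_b = statistics.stdev(b) / len(b) ** 0.5
        se = (se_a ** 2 + se_b ** 2) ** 0.5 or 1e-12
        if abs(statistics.mean(a) - statistics.mean(b)) / se < z_threshold:
            return k   # discard xs[:k] as the initialization transient
    return len(xs)
```

Applied to a keff or Shannon-entropy series, the cycles before the returned index would be discarded before tallying.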
Proteus-MOC: A 3D deterministic solver incorporating 2D method of characteristics
International Nuclear Information System (INIS)
Marin-Lafleche, A.; Smith, M. A.; Lee, C.
2013-01-01
A new transport solution methodology was developed by combining the two-dimensional method of characteristics with the discontinuous Galerkin method for the treatment of the axial variable. The method, which can be applied to arbitrary extruded geometries, was implemented in PROTEUS-MOC and includes parallelization in group, angle, plane, and space using a top level GMRES linear algebra solver. Verification tests were performed to show accuracy and stability of the method with the increased number of angular directions and mesh elements. Good scalability with parallelism in angle and axial planes is displayed. (authors)
International Nuclear Information System (INIS)
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and to an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general-geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray tracing tool for the MOC, verification of the ray-tracing indexing scheme developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)
Deterministic methods for the relativistic Vlasov-Maxwell equations and the Van Allen belts dynamics
International Nuclear Information System (INIS)
Le Bourdiec, S.
2007-03-01
Artificial satellites operate in a hostile radiation environment, the Van Allen radiation belts, which partly conditions their reliability and lifespan. In order to protect them, it is necessary to characterize the dynamics of the energetic electrons trapped in these belts. This dynamics is essentially determined by the interactions between the energetic electrons and the existing electromagnetic waves. This work consisted of designing a numerical scheme to solve the equations modelling these interactions: the relativistic Vlasov-Maxwell system. Our choice was directed towards methods of direct integration. We propose three new spectral methods for the momentum discretization: a Galerkin method and two collocation methods, all based on scaled Hermite functions. The scaling factor is chosen to obtain the proper velocity resolution. We present in this thesis the discretization of the one-dimensional Vlasov-Poisson system and the numerical results obtained, and then study possible extensions of the methods to the complete relativistic problem. To reduce the computing time, parallelization and optimization of the algorithms were carried out. Finally, we present 1Dx-3Dv (one-dimensional in space, three-dimensional in velocity) computations of Weibel and whistler instabilities with one or two electron species. (author)
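Scaled Hermite functions of the kind underlying all three spectral methods can be evaluated with a stable three-term recurrence. The sketch below assumes one common scaling convention (a factor sqrt(alpha) keeping the basis orthonormal after the velocity axis is stretched by alpha); the thesis's exact normalization may differ:

```python
import math

def hermite_functions(n_max, x, alpha=1.0):
    """Evaluate the scaled Hermite functions h_0..h_{n_max} at x.

    Uses the stable three-term recurrence for Hermite *functions*
    (Gaussian weight already included), not Hermite polynomials:
        h_0(u)     = pi^{-1/4} exp(-u^2/2)
        h_{n+1}(u) = sqrt(2/(n+1)) u h_n(u) - sqrt(n/(n+1)) h_{n-1}(u)
    with u = alpha*x; the sqrt(alpha) prefactor keeps the scaled
    basis orthonormal in x.
    """
    u = alpha * x
    h = [math.pi ** -0.25 * math.exp(-0.5 * u * u)]
    if n_max >= 1:
        h.append(math.sqrt(2.0) * u * h[0])
    for n in range(1, n_max):
        h.append(math.sqrt(2.0 / (n + 1)) * u * h[n]
                 - math.sqrt(n / (n + 1)) * h[n - 1])
    return [math.sqrt(alpha) * v for v in h]
```

Choosing alpha trades resolution in velocity against the extent of the grid, which is the role the abstract assigns to the scaling factor.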
Impulse response identification with deterministic inputs using non-parametric methods
International Nuclear Information System (INIS)
Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.
1985-01-01
This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and circulant models, we focus on the truncated model, which is useful in certain applications. Two methods of impulse response identification are presented. The first is based on minimization of the C_L statistic, an estimate of the mean-square prediction error; the second is a Bayesian approach. For both methods, we consider the effects of using either the identity matrix or the Laplacian matrix as a weight on the energy of the impulse response. In addition, we present a method for estimating the effective length of the impulse response; estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a non-parametric technique.
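A Mallows-Cp-style criterion conveys the flavor of using a C_L-type statistic to choose the effective length of a truncated impulse response. This sketch assumes white output noise of known variance and an identity energy weight; it is not the paper's exact statistic, and the function names are illustrative:

```python
import numpy as np

def estimate_impulse_response(u, y, m, ridge=0.0):
    """Least-squares estimate of a length-m truncated impulse response
    from input u and output y, with an optional identity-weighted
    ridge penalty on the energy of the response."""
    n = len(y)
    X = np.zeros((n, m))
    for j in range(m):
        X[j:, j] = u[:n - j]              # Toeplitz convolution matrix
    A = X.T @ X + ridge * np.eye(m)
    h = np.linalg.solve(A, X.T @ y)
    rss = float(np.sum((y - X @ h) ** 2))
    return h, rss

def select_length(u, y, sigma2, m_max):
    """Pick the length m minimising a Mallows-Cp-style criterion
    C(m) = RSS(m)/sigma^2 - n + 2m, a stand-in for the paper's
    C_L statistic; sigma2 is the (assumed known) noise variance."""
    n = len(y)
    scores = []
    for m in range(1, m_max + 1):
        _, rss = estimate_impulse_response(u, y, m)
        scores.append((rss / sigma2 - n + 2 * m, m))
    return min(scores)[1]
```

Underfitting inflates the residual term, overfitting pays the 2m penalty, so the minimiser estimates the effective length.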
Analysis of natural circulation BWR dynamics with stochastic and deterministic methods
International Nuclear Information System (INIS)
VanderHagen, T.H.; Van Dam, H.; Hoogenboom, J.E.; Kleiss, E.B.J.; Nissen, W.H.M.; Oosterkamp, W.J.
1986-01-01
Reactor kinetic, thermal-hydraulic and total plant stability of a natural-convection-cooled BWR was studied using noise analysis and by evaluating process responses to control rod steps and to steam flow control valve steps. An estimate of the fuel thermal time constant and an impression of the recirculation flow response to power variations were obtained. A sophisticated noise analysis method yielded more insight into the fluctuations of the coolant velocity.
Stephenson, C L; Harris, C A; Clarke, R
2018-02-01
Use of glyphosate in crop production can lead to residues of the active substance and related metabolites in food. Glyphosate has never been considered acutely toxic; however, in 2015 the European Food Safety Authority (EFSA) proposed an acute reference dose (ARfD). This differs from the Joint FAO/WHO Meeting on Pesticide Residues (JMPR), which in 2016, in line with its existing position, concluded that an ARfD was not necessary for glyphosate. This paper makes a comprehensive assessment of short-term dietary exposure to glyphosate from potentially treated crops grown in the EU and imported third-country food sources. European Union and global deterministic models were used to make estimates of short-term dietary exposure (generally defined as up to 24 h). Estimates were refined using food-processing information, residue monitoring data, national dietary exposure models, and basic probabilistic approaches to estimating dietary exposure. Calculated exposure levels were compared to the ARfD, considered to be the amount of a substance that can be consumed in a single meal, or 24-h period, without appreciable health risk. Acute dietary intakes were … Probabilistic exposure estimates showed that acute intake exceeded 10% of the ARfD on no person-days, even in the pessimistic scenario.
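Deterministic short-term estimates of this kind follow the general IESTI pattern: a large portion of one commodity carrying the highest observed residue, normalized by consumer body weight and compared with the ARfD. The sketch below uses illustrative function names and numbers, not values from this assessment; the 0.5 mg/kg bw figure in the test is EFSA's proposed glyphosate ARfD:

```python
def acute_exposure_mg_per_kg_bw(large_portion_kg, highest_residue_mg_per_kg,
                                body_weight_kg, processing_factor=1.0):
    """Deterministic short-term (IESTI-style) intake estimate:
    consumption of a large portion of one commodity carrying the
    highest observed residue, optionally corrected by a
    food-processing factor, divided by consumer body weight."""
    return (large_portion_kg * highest_residue_mg_per_kg * processing_factor
            / body_weight_kg)

def percent_of_arfd(exposure_mg_per_kg_bw, arfd_mg_per_kg_bw):
    """Express an acute exposure estimate as a percentage of the ARfD."""
    return 100.0 * exposure_mg_per_kg_bw / arfd_mg_per_kg_bw
```

For example, a 0.5 kg portion at 2 mg/kg residue eaten by a 60 kg adult gives about 0.017 mg/kg bw, a few percent of a 0.5 mg/kg bw ARfD.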
International Nuclear Information System (INIS)
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
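The defining property of CADIS is that the biased source and the weight-window targets are derived consistently from one adjoint solution, so that a particle's birth weight times its importance is constant. A minimal cell-wise sketch, assuming a precomputed adjoint-flux vector (the real codes work on space-energy meshes from Denovo):

```python
def cadis_parameters(source, adjoint_flux):
    """Compute CADIS biased-source probabilities and weight-window
    target weights from a cell-wise adjoint-flux (importance) estimate.

    source       : list of cell-wise source strengths q_i
    adjoint_flux : list of cell-wise adjoint fluxes phi*_i
    Returns (biased_source, target_weights, response_estimate R),
    where R = sum_i q_i * phi*_i approximates the detector response.
    """
    R = sum(q * a for q, a in zip(source, adjoint_flux))
    biased = [q * a / R for q, a in zip(source, adjoint_flux)]
    weights = [R / a if a > 0 else float('inf') for a in adjoint_flux]
    return biased, weights, R
```

A particle born from the biased source in cell i starts with weight R/phi*_i, so weight times importance equals R everywhere, which is exactly the consistency that gives CADIS its variance reduction.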
Energy Technology Data Exchange (ETDEWEB)
Li, M
1998-08-01
In this thesis, two methods for solving the multigroup Boltzmann equation have been studied: the interface-current method and the Monte Carlo method. A new version of the interface-current (IC) method has been developed in the TDT code at SERMA, in which the interface currents are represented by piecewise-constant functions in solid angle. The convergence of this method to the collision probability (CP) method has been tested. Since the tracking technique is used for both the IC and CP methods, it is necessary to normalize the collision probabilities obtained by this technique. Several methods for this purpose have been studied and implemented in our code; we have compared their performances and chosen the best one as the standard choice. The transfer-matrix treatment has been a long-standing difficulty for the multigroup Monte Carlo method: when the cross sections are converted into multigroup form, important negative parts appear in the angular transfer laws represented by low-order Legendre polynomials. Several methods based on the preservation of the first moments, such as the discrete-angles method and the equally-probable step function method, have been studied and implemented in the TRIMARAN-II code. Since none of these methods was satisfactory, a new method, the non-equally-probable step function method, has been proposed and implemented in our code. These methods have been compared in several respects: the preservation of the required moments, the calculation of a criticality problem and the calculation of a neutron-transfer-in-water problem. The results show that the new method is the best in all these comparisons, and we propose that it become the standard choice for the multigroup transfer matrix. (author) 76 refs.
International Nuclear Information System (INIS)
Ghassoun, Jillali; Jehoauni, Abdellatif
2000-01-01
In practice, the estimation of the flux obtained from the Fredholm integral equation requires truncation of the Neumann series. The truncation order N must be large in order to obtain a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the quality of the estimate. In previous works, in order to achieve rapid convergence, only weakly diffusing media were considered, which permitted truncating the Neumann series after about 20 terms. But in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux, so it becomes useful to use higher orders. We suggest two simple techniques based on conditional Monte Carlo. We propose a simple density for sampling the steps of the random walk, as well as a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtain a simple empirical formula giving the neutron flux for a medium characterized only by its scattering probability. The results are compared to the exact analytic solution; we obtain good agreement together with a useful acceleration of the convergence. (author)
Olariu, Victor; Manesso, Erica; Peterson, Carsten
2017-06-01
Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
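Once a discretised free-energy landscape is in hand, optimal routes can be found with a standard shortest-path search. The sketch below uses Dijkstra's algorithm on a 2-D grid with the uphill energy difference as step cost (downhill moves are free); this is one plausible cost choice, not necessarily the one used by the authors:

```python
import heapq

def optimal_path(free_energy, start, goal):
    """Dijkstra search over a discretised 2-D free-energy landscape.

    free_energy : nested list, free_energy[r][c] = F at grid node (r, c)
    start, goal : (row, col) tuples
    Step cost = max(0, F[next] - F[current]), so the optimal route
    minimises the total energy barrier climbed.
    Returns (path as list of nodes, total cost)."""
    rows, cols = len(free_energy), len(free_energy[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue                      # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = max(0.0, free_energy[nr][nc] - free_energy[r][c])
                nd = d + step
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

On a landscape with a high barrier between two basins, the returned route detours through low-energy saddle regions, which is the qualitative behavior reprogramming-path analyses look for.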
Wang, Chunxiang; Watanabe, Naoki; Marui, Hideaki
2013-04-01
The hilly slopes of Mt. Medvednica stretch across the northwestern part of Zagreb City, Croatia, and cover approximately 180 km2. In this area, landslides, e.g. the Kostanjek and Črešnjevec landslides, have damaged many houses, roads, farmland and grassland. It is therefore necessary to predict potential landslides and to enhance the landslide inventory for hazard mitigation and the security management of the local society in this area. We combined a deterministic method and a probabilistic method to assess potential landslides, including their locations, sizes and sliding surfaces. Firstly, the study area is divided into slope units with similar topographic and geological characteristics using the hydrology analysis tool in ArcGIS. Secondly, a GIS-based modified three-dimensional Hovland's method for slope stability analysis is developed to identify the sliding surface and the corresponding three-dimensional safety factor for each slope unit. Each sliding surface is assumed to be the lower part of an ellipsoid whose direction of inclination is taken to be the main dip direction of the slope unit; the center of the ellipsoid is randomly set to the center of a grid cell in the slope unit. The minimum three-dimensional safety factor and the corresponding critical sliding surface are obtained for each slope unit. Thirdly, since a single safety factor value is insufficient to evaluate the stability of a slope unit, the ratio of the number of trial calculations in which the three-dimensional safety factor is less than 1.0 to the total number of trials is defined as the failure probability of the slope unit. If the failure probability exceeds 80%, the slope unit is classified as 'unstable', and the landslide hazard can be mapped for the whole study area.
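The probabilistic classification in the last step reduces to counting trial sliding surfaces whose three-dimensional safety factor falls below 1.0. A minimal sketch (the safety factors themselves would come from the 3-D Hovland analysis; the function names are illustrative):

```python
def failure_probability(safety_factors):
    """Fraction of trial sliding surfaces whose three-dimensional
    factor of safety (FS) falls below 1.0."""
    failures = sum(1 for fs in safety_factors if fs < 1.0)
    return failures / len(safety_factors)

def classify_slope_unit(safety_factors, threshold=0.8):
    """Flag a slope unit 'unstable' when more than 80 % of the trial
    calculations give FS < 1.0, as in the abstract's criterion."""
    p = failure_probability(safety_factors)
    return 'unstable' if p > threshold else 'stable'
```

Mapping this label over all slope units yields the hazard map described for the study area.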
International Nuclear Information System (INIS)
Adams, Marvin L.
2001-01-01
We discuss deterministic transport methods used today in neutronic analysis of nuclear reactors. This discussion is not exhaustive; our goal is to provide an overview of the methods that are most widely used for analyzing light water reactors (LWRs) and that (in our opinion) hold the most promise for the future. The current practice of LWR analysis involves the following steps: 1. Evaluate cross sections from measurements and models. 2. Obtain weighted-average cross sections over dozens to hundreds of energy intervals; the result is a 'fine-group' cross-section set. 3. [Optional] Modify the fine-group set: Further collapse it using information specific to your class of reactors and/or alter parameters so that computations better agree with experiments. The result is a 'many-group library'. 4. Perform pin cell transport calculations (usually one-dimensional cylindrical); use the results to collapse the many-group library to a medium-group set, and/or spatially average the cross sections over the pin cells. 5. Perform assembly-level transport calculations with the medium-group set. It is becoming common practice to use essentially exact geometry (no pin cell homogenization). It may soon become common to skip step 4 and use the many-group library. The output is a library of few-group cross sections, spatially averaged over the assembly, parameterized to cover the full range of operating conditions. 6. Perform full-core calculations with few-group diffusion theory that contains significant homogenizations and limited transport corrections. We discuss steps 4, 5, and 6 and focus mainly on step 5. One cannot review a large topic in a short summary without simplifying reality, omitting important details, and neglecting some methods that deserve attention; for this we apologize in advance. (author)
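Steps 2, 4 and 5 above all rely on the same basic operation: condensing cross sections over energy with a weighting spectrum. A minimal flux-weighted collapse might look like the following sketch, where the group boundaries and numerical values are illustrative, not from any evaluated library:

```python
def collapse_cross_sections(sigma, flux, coarse_groups):
    """Flux-weighted condensation of fine-group cross sections into
    few groups, preserving reaction rates for the given spectrum.

    sigma         : fine-group cross sections, one per fine group
    flux          : weighting spectrum (e.g. a pin-cell flux solution)
    coarse_groups : list of (start, stop) fine-group index ranges
    Returns one collapsed cross section per coarse group:
        sigma_G = sum_g sigma_g * phi_g / sum_g phi_g
    """
    collapsed = []
    for start, stop in coarse_groups:
        phi_total = sum(flux[start:stop])
        rate = sum(s * f for s, f in zip(sigma[start:stop], flux[start:stop]))
        collapsed.append(rate / phi_total)
    return collapsed
```

The same kernel, with spatial as well as spectral weighting, underlies pin-cell and assembly homogenization; the art is in choosing the weighting flux, which is what the multi-stage scheme in the abstract provides.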
Directory of Open Access Journals (Sweden)
Ling Fiona W.M.
2017-01-01
Full Text Available Rapid prototyping of microchannels has gained much attention from researchers along with the rapid development of microfluidic technology. Conventional methods carry several disadvantages, such as high cost, long fabrication time, high operating pressures and temperatures, and the expertise required to operate the equipment. In this work, a new method adapting xurography is introduced to replace the conventional fabrication of microchannels. The novelty in this study is replacing the adhesive film with clear plastic film, which was used to cut the design of the microchannel, as this material is more suitable for fabricating complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded to a clean glass slide to produce a closed microchannel. The microchannel produced had clean edges, indicating that a good master mold was produced with the cutting plotter, and the bonding between the PDMS and the glass was good, with no leakage observed. The materials used in this method are cheap and the total fabrication time is less than 5 hours, making the method suitable for rapid prototyping of microchannels.
Mavris, Dimitri N.; Schutte, Jeff S.
2016-01-01
This report documents work done by the Aerospace Systems Design Lab (ASDL) at the Georgia Institute of Technology, Daniel Guggenheim School of Aerospace Engineering for the National Aeronautics and Space Administration, Aeronautics Research Mission Directorate, Integrated System Research Program, Environmentally Responsible Aviation (ERA) Project. This report was prepared under contract NNL12AA12C, "Application of Deterministic and Probabilistic System Design Methods and Enhancement of Conceptual Design Tools for ERA Project". The research within this report addressed the Environmentally Responsible Aviation (ERA) project goal stated in the NRA solicitation "to advance vehicle concepts and technologies that can simultaneously reduce fuel burn, noise, and emissions." Identifying technology and vehicle solutions that simultaneously meet these three metrics requires system-level analysis with the appropriate level of fidelity to quantify feasibility, benefits and degradations, and associated risk. To perform the system-level analysis, the Environmental Design Space (EDS) [Kirby 2008, Schutte 2012a] environment developed by ASDL was used to model both conventional and unconventional configurations as well as to assess technologies from the ERA and N+2 timeframe portfolios. A well-established system design approach was used to perform aircraft conceptual design studies, including technology trade studies, to identify technology portfolios capable of accomplishing the ERA project goal and to obtain accurate tradeoffs between performance, noise, and emissions. The ERA goal, shown in Figure 1, is to simultaneously achieve the N+2 benefits of a cumulative noise margin of 42 EPNdB relative to stage 4, a 75 percent reduction in LTO NOx emissions relative to CAEP 6, and a 50 percent reduction in fuel burn relative to the 2005 best-in-class aircraft. There were five research tasks associated with this work: 1) identify technology collectors, 2) model
Directory of Open Access Journals (Sweden)
L. M. Kimball
2002-01-01
Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.
International Nuclear Information System (INIS)
2004-09-01
The efficient feedback of operating experience (OE) is a valuable source of information for improving the safety and reliability of nuclear power plants (NPPs). It is therefore essential to collect information on abnormal events from both internal and external sources. Internal operating experience is analysed to obtain a complete understanding of an event and of its safety implications. Corrective or improvement measures may then be developed, prioritized and implemented in the plant if considered appropriate. Information from external events may also be analysed in order to learn lessons from others' experience and prevent similar occurrences at our own plant. The traditional ways of investigating operational events have been predominantly qualitative. In recent years, a PSA-based method called probabilistic precursor event analysis has been developed, used and applied on a significant scale in many places for a number of plants. The method enables a quantitative estimation of the safety significance of operational events to be incorporated. The purpose of this report is to outline a synergistic process that makes more effective use of operating experience event information by combining the insights and knowledge gained from both approaches, traditional deterministic event investigation and PSA-based event analysis. The PSA-based view on operational events and PSA-based event analysis can support the process of operational event analysis at the following stages of the operational event investigation: (1) Initial screening stage. (It introduces an element of quantitative analysis into the selection process. Quantitative analysis of the safety significance of nuclear plant events can be a very useful measure when it comes to selecting internal and external operating experience information for its relevance.) (2) In-depth analysis. (PSA based event evaluation provides a quantitative measure for judging the significance of operational events, contributors to
Czech Academy of Sciences Publication Activity Database
Růžička, V.; Malíková, Lucie; Seitl, Stanislav
2017-01-01
Roč. 11, č. 42 (2017), s. 128-135 ISSN 1971-8993 R&D Projects: GA ČR GA17-01589S Institutional support: RVO:68081723 Keywords : Over-deterministic * Fracture mechanics * Rounding numbers * Stress field * Williams’ expansion Subject RIV: JL - Materials Fatigue, Friction Mechanics OBOR OECD: Audio engineering, reliability analysis
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the self-potential method in the fields of hydrogeology and environmental sciences have developed significantly during the last two decades, with strong use in identifying groundwater flows. Although few authors deal with the forward problem's solution (especially in the geophysics literature), different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole, the field generated by a single vector, into a distribution of electrical monopoles. Two simple aquifer models were then generated with specific boundary conditions, and the head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of the electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (MCMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) shape and distribution of the vector field, and 2) histogram of magnitudes. Finally, it was concluded that inversion procedures improve when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than for confined ones. MCMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
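The deterministic branch of the inversion above rests on Tikhonov regularization. A minimal sketch on a made-up linear inverse problem follows; the operator G, the true model and the noise level are invented for illustration, and the actual stabilized operator of the paper is adapted to the FEM mesh rather than the plain identity used here.

```python
import numpy as np

# Toy linear inverse problem G m = d: recover a source vector m from noisy
# "surface potentials" d. In the real problem G would come from the FEM
# forward model; here it is a random matrix.
rng = np.random.default_rng(0)
n_data, n_model = 40, 80
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)
m_true = np.zeros(n_model)
m_true[30:50] = 1.0                      # a compact "flow" anomaly
d = G @ m_true + 0.01 * rng.normal(size=n_data)

def tikhonov(G, d, lam):
    """Minimize ||G m - d||^2 + lam^2 ||m||^2 (zeroth-order Tikhonov)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam ** 2 * np.eye(n), G.T @ d)

m_reg = tikhonov(G, d, lam=0.1)
m_ls = np.linalg.lstsq(G, d, rcond=None)[0]   # unregularized least squares
```

In practice the regularization parameter `lam` is tuned, e.g. by an L-curve or discrepancy criterion; the damping term is what keeps the underdetermined problem stable against noise.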
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skill, training period, and number of ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, they were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skill, it was strongly sensitive to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skill in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
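A plausible sketch of skill-weighted ensemble averaging in the spirit of WEA_RAC follows; the exact weight formula of the paper may differ, and here bias-corrected members are simply weighted by correlation divided by RMSE over a synthetic training period.

```python
import numpy as np

def wea_rac(members, obs):
    """Skill-weighted ensemble mean: bias-correct each member, then weight
    it by corr / rmse against observations (an assumed stand-in for the
    paper's WEA_RAC weighting)."""
    members = np.asarray(members, dtype=float)      # shape (n_members, n_time)
    corrected = members - (members.mean(axis=1, keepdims=True) - obs.mean())
    rmse = np.sqrt(((corrected - obs) ** 2).mean(axis=1))
    corr = np.array([np.corrcoef(m, obs)[0, 1] for m in corrected])
    w = np.clip(corr, 0.0, None) / np.maximum(rmse, 1e-12)
    w = w / w.sum()
    return w @ corrected, w

# Synthetic "truth" plus members with different biases and noise levels.
rng = np.random.default_rng(1)
t = np.arange(100.0)
truth = 15 + 10 * np.sin(2 * np.pi * t / 100)
members = [truth + b + s * rng.normal(size=t.size)
           for b, s in [(2.0, 0.5), (-1.0, 1.0), (0.5, 3.0)]]
ens, w = wea_rac(members, truth)
```

The skilful low-noise member receives the largest weight, so the weighted mean tracks the truth better than the noisiest member, mirroring the paper's finding that skill-based weighting helps most when members have systematic biases.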
International Nuclear Information System (INIS)
Cai, Li
2014-01-01
In the framework of Generation IV reactor neutronics research, new core calculation tools are being implemented in the APOLLO3 code system for the deterministic part. These calculation methods are based on the discretization of nuclear energy data (so-called multi-group constants, generally produced by deterministic codes) and should be validated and qualified against Monte-Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte-Carlo code (TRIPOLI-4). First, after testing the existing homogenization and condensation functionalities with the better precision obtainable nowadays, some inconsistencies are revealed. Several new multi-group parameter estimators are developed and validated for the TRIPOLI-4 code using the code itself, since it can use multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, is studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. This is named the IGSC technique and is based on an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, the projection on the abscissa X; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4 code for generating multi-group cross sections with a fundamental-mode critical spectrum. This leakage model is analyzed and validated rigorously by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The whole development work introduced into the TRIPOLI-4 code allows producing multi-group constants which can then be used in the core
International Nuclear Information System (INIS)
Artioli, Carlo; Sarotto, Massimo; Grasso, Giacomo; Krepel, Jiri
2009-01-01
A neutronic analysis, adopting both deterministic and stochastic approaches, has been carried out. It becomes crucial indeed to estimate accurately the self-shielding phenomenon of the innovative FARs in order to achieve the aimed-for performance (a reactivity worth of about 3000 pcm for scram). (author)
Directory of Open Access Journals (Sweden)
S. Mariani
2005-01-01
Full Text Available Within the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison is performed for intense events that caused extensive damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the 'Montserrat-2000' event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using contingency table elements. Moreover, the standard 'eyeball' analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method allows quantification of the spatial shift forecast error and identification of the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification with a wider observational data set, is needed to support this statement.
International Nuclear Information System (INIS)
Hussein, H.M.; Sakr, A.M.; Amin, E.H.
2011-01-01
The objective of this paper is to assess the suitability and accuracy of the deterministic diffusion method for the neutronic calculations of TRIGA-type research reactors in proposed condensed energy spectra of five and seven groups (with one and three thermal groups, respectively), using the calculational line: WIMSD-IAEA-69 nuclear data library / WIMSD-5B lattice and cell calculations code / CITVAP v3.1 core calculations code. First, the assessment analyzes the integral parameters (k_eff, ρ238, σ235, σ238, and C*) of the TRX and BAPL benchmark lattices and compares them with experimental and previous reference results using other ENDLs at the full energy spectra; these show good agreement with the references at both spectra. Secondly, the 3D nuclear characteristics of three different cores of the TRR-1/M1 TRIGA Mark-III Thai research reactor are evaluated, using the CITVAP v3.1 code and macroscopic cross-section libraries generated with the WIMSD-5B code at each proposed energy spectrum separately. The results include the excess reactivities and the worth of the control rods, which were compared with previous Monte Carlo results and experimental values; they show good agreement with the references at both energy spectra, albeit with better accuracy for the five-group spectrum. The results also include neutron flux distributions, which are recorded for future comparison with other calculational techniques and are comparable to reactors and fuels of the same type. The study reflects the adequacy of the stated calculational line at the condensed energy spectra for evaluating the neutronic parameters of TRIGA-type reactors, and future comparisons of the un-benchmarked results could confirm this for a wider range of neutronics or safety-related parameters.
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions insensitive to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision-theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Because of the probabilities and expectations involved, approximate solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, first-order reliability methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, and differentiation of probability and mean value functions. Convergence results for the resulting iterative solution procedures are given.
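The conversion of a stochastic program into a deterministic substitute can be illustrated with a sample average approximation of an expected cost. The quadratic cost and the Gaussian parameter below are invented for illustration only.

```python
import random

def expected_cost_saa(x, samples):
    """Deterministic substitute objective: the sample average of the random
    cost c(x, xi) = (x - xi)^2, approximating E[c(x, xi)]."""
    return sum((x - xi) ** 2 for xi in samples) / len(samples)

random.seed(42)
samples = [random.gauss(5.0, 2.0) for _ in range(2_000)]  # scenarios of xi

# Minimizing the substitute over a grid approximates the true stochastic
# problem min_x E[(x - xi)^2], whose exact minimizer is x = E[xi] = 5.
grid = [i / 10 for i in range(0, 101)]            # x in [0, 10], step 0.1
x_best = min(grid, key=lambda x: expected_cost_saa(x, samples))
```

Freezing the random parameter into a fixed set of scenarios is what turns the stochastic problem into an ordinary deterministic one, at the price of a sampling error that shrinks as the scenario count grows.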
International Nuclear Information System (INIS)
Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli
2016-01-01
Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis, the exploitation and utilization of new clean energy is gaining more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and subseries are reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries respectively. On this basis, the combined model is developed based on an optimal virtual prediction scheme whose weight matrix is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
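The sample entropy measure used above to group the decomposed modes can be sketched as follows. This is a straightforward, unoptimized implementation; m = 2 and a tolerance of 0.2 standard deviations are conventional defaults, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r, N) = -log(A/B), where B counts template
    pairs of length m within tolerance r (Chebyshev distance) and A counts
    the same for length m+1. r is r_frac times the series' std."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count(mm):
        tmpl = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=2)
        n = len(tmpl)
        return ((d <= r).sum() - n) / 2          # exclude self-matches

    b, a = count(m), count(m + 1)
    return np.inf if a == 0 else -np.log(a / b)

# A regular oscillation is far more predictable than white noise, so its
# sample entropy is much lower.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = rng.normal(size=500)
se_reg, se_noisy = sample_entropy(regular), sample_entropy(noisy)
```

Low-entropy (regular) modes can then be grouped into one subseries and high-entropy (noisy) modes into another, which is the role sample entropy plays in the reconstruction step.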
Computational methods working group
International Nuclear Information System (INIS)
Gabriel, T.A.
1997-09-01
During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.
Probabilistic methods for physics
International Nuclear Information System (INIS)
Cirier, G
2013-01-01
We present an asymptotic method giving a probability of presence of the iterated spots of R^d under a polynomial function f. We use the well-known Perron-Frobenius (PF) operator, which leaves certain sets and measures invariant under f. Probabilistic solutions can exist for the deterministic iteration. While the theoretical result is already known, here we quantify these probabilities. This approach seems interesting for computing situations where deterministic methods fail. Among the examined applications are asymptotic solutions of the Lorenz, Navier-Stokes and Hamilton equations. In this approach, linearity induces many difficult problems, not all of which we have yet resolved.
STOCHASTIC METHODS IN RISK ANALYSIS
Directory of Open Access Journals (Sweden)
Vladimíra OSADSKÁ
2017-06-01
Full Text Available In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend heavily on the practical experience and knowledge of the evaluator, and that stochastic methods should therefore be introduced. New risk analysis methods should consider the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical example of FMECA with uncertainties modelled using Monte Carlo sampling.
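A minimal sketch of the idea on the FMECA risk priority number follows. The triangular distributions and rating ranges below are invented placeholders for the paper's actual expert inputs, chosen only to show how Monte Carlo sampling turns a single deterministic RPN into a distribution.

```python
import random

random.seed(7)

def rpn_samples(n=100_000):
    """Risk Priority Number = Severity * Occurrence * Detection, with each
    1-10 rating treated as uncertain (triangular distributions are an
    assumed stand-in for expert uncertainty about the ratings)."""
    out = []
    for _ in range(n):
        s = random.triangular(6, 9, 8)    # severity: likely 8, range 6-9
        o = random.triangular(2, 5, 3)    # occurrence: likely 3, range 2-5
        d = random.triangular(3, 7, 4)    # detection: likely 4, range 3-7
        out.append(s * o * d)
    return out

samples = sorted(rpn_samples())
point_estimate = 8 * 3 * 4                # a deterministic FMECA reports 96
lo = samples[int(0.05 * len(samples))]    # 5th percentile
hi = samples[int(0.95 * len(samples))]    # 95th percentile
```

The 90% interval [lo, hi] around the single point estimate is exactly the spread that the deterministic analysis hides, and it is what allows failure modes to be ranked with their uncertainty taken into account.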
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Nayfeh, Ali H
2008-01-01
1. Introduction. 2. Straightforward Expansions and Sources of Nonuniformity. 3. The Method of Strained Coordinates. 4. The Methods of Matched and Composite Asymptotic Expansions. 5. Variation of Parameters and Methods of Averaging. 6. The Method of Multiple Scales. 7. Asymptotic Solutions of Linear Equations. References and Author Index. Subject Index.
International Nuclear Information System (INIS)
Konecny, C.
1975-01-01
Two main separation methods based on distillation are described and evaluated, namely evaporation and distillation in a carrier gas flow. Two basic types of apparatus are described to illustrate the methods used. The use of the distillation method in radiochemistry is documented by a number of examples of the separation of elements in the elemental state and as volatile halides and oxides. Tables give a survey of the distillation methods used for separating the individual elements and the conditions under which separation takes place. The suitability of distillation methods in radiochemistry is discussed with regard to other separation methods. (L.K.)
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
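The insensitivity to the "curse of dimensionality" can be seen in a toy sketch: plain Monte Carlo integration over a 10-dimensional hypercube, where the sampling error still shrinks like 1/sqrt(n) regardless of the dimension. The integrand is an arbitrary illustrative choice.

```python
import random

def mc_integrate(f, dim, n=200_000, seed=0):
    """Monte Carlo estimate of the integral of f over the unit hypercube
    [0,1]^dim: the sample mean of f at n uniform random points. The
    statistical error decays like 1/sqrt(n) independently of dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Integral over [0,1]^10 of sum_i x_i^2 dx, whose exact value is 10 * 1/3.
est = mc_integrate(lambda x: sum(xi * xi for xi in x), dim=10)
```

A deterministic tensor-product quadrature with only 10 nodes per axis would already need 10^10 evaluations in this dimension; the Monte Carlo estimate gets within a fraction of a percent with 2 * 10^5 samples.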
African Journals Online (AJOL)
user
The assumed deflection shapes used in approximate methods such as Galerkin's method were normally ... to direct compressive forces Nx, was derived by Navier [3]. ... tend to give higher frequency and stiffness, as well as.
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Shik; Lee, Kyung Woon; Kim, Oak Hwan; Kim, Dae Kyung [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)
1996-12-01
The shrinking coal market has been forcing the coal industry to make exceptional rationalization and restructuring efforts since the end of the eighties. To the competition from crude oil and natural gas has been added the growing pressure of rising wages and rising production costs as the workings get deeper. To improve the competitive position of the coal mines against oil and gas through cost reduction, studies to improve the mining system have been carried out. To find the fields requiring improvement most, the technologies used in Tae Bak Colliery, selected as one of the long-running mines, were investigated and analyzed. The mining method appeared to be the field needing improvement most in order to reduce the production cost. The present method, the so-called inseam roadway caving method, is used to extract the steep and thick seam; however, it has several drawbacks. To solve these problems, two mining methods are suggested, one for the long term and one for the short term. The inseam roadway caving method with long-hole blasting is a variant of the present inseam roadway caving method, modified by replacing the timber sets with steel arch sets and the shovel loaders with chain conveyors; long-hole blasting is introduced to promote caving. The pillar caving method with chock supports uses chock supports set in the cross-cut from the hanging wall to the footwall. Two single chain conveyors are needed: one installed in front of the chock supports to clear coal from the cutting face, the other installed behind the supports to transport the caved coal. This method is superior to the previous one in terms of safety from water inrushes, production rate and productivity; its only drawback is that it needs more investment. (author). 14 tabs., 34 figs.
DEFF Research Database (Denmark)
Wagner, Falko Jens; Poulsen, Mikael Zebbelin
1999-01-01
When trying to solve a DAE problem of high index with more traditional methods, it often causes instability in some of the variables, and finally leads to breakdown of convergence and integration of the solution. This is nicely shown in [ESF98, p. 152 ff.]. This chapter will introduce projection methods as a way of handling these special problems. It is assumed that we have methods for solving normal ODE systems and index-1 systems.
Maria Kikila; Ioannis Koutelekos
2012-01-01
Child discipline is one of the most important elements of successful parenting. Discipline is defined as the process that helps children learn appropriate behaviors and make good choices. Aim: The aim of the present study was to review the literature about discipline methods. Method: The method of this study included a bibliography search of both the review and the research literature, mainly in the PubMed database, referring to discipline methods. Results: In the literature it is ci...
International Nuclear Information System (INIS)
Sanchis, H.; Aucher, P.
1990-01-01
The maintenance method applied at La Hague is summarized. The method was developed in order to solve problems relating to the different specialist fields, the need for homogeneity in the maintenance work, the diversity of equipment, and the increase in the materials used at La Hague's new facilities. The aim of the method is to formalize know-how, to facilitate maintenance, to ensure the running of the operations and to improve the estimation of maintenance costs. One of the method's difficulties is demonstrating the profitability of the maintenance operations [fr]
International Nuclear Information System (INIS)
Ivanovich, M.; Murray, A.
1992-01-01
The principles involved in the interaction of nuclear radiation with matter are described, as are the principles behind methods of radiation detection. Different types of radiation detectors are described, and detection methods such as alpha, beta and gamma spectroscopy and neutron activation analysis are presented. Details are given of measurements of uranium-series disequilibria. (UK)
DEFF Research Database (Denmark)
Ernst, Erik
2002-01-01
The procedure call mechanism has conquered the world of programming, with object-oriented method invocation being a procedure call in the context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, where traditional invocation is optimized for as-is reuse of existing behavior. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves a caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e.g., to a callee's callee.
DEFF Research Database (Denmark)
Ernst, Erik
2002-01-01
The procedure call mechanism has conquered the world of programming, with object-oriented method invocation being a procedure call in the context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, where traditional invocation is optimized for as-is reuse of existing behavior. Tight coupling reduces flexibility, and traditional invocation tightly couples transfer of information and transfer of control. Method mixins decouple these two kinds of transfer, thereby opening the doors for new kinds of abstraction and reuse. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves a caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e.g., to a callee's callee.
DEFF Research Database (Denmark)
McLaughlin, W.L.; Miller, A.; Kovacs, A.
2003-01-01
Chemical and physical radiation dosimetry methods, used for the measurement of absorbed dose mainly during the practical use of ionizing radiation, are discussed with respect to their characteristics and fields of application....
DEFF Research Database (Denmark)
Ernst, Erik
2005-01-01
The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation, which is a procedure call in the context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior. Method mixins allow direct transfer of information further down the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language.
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining when risk-based methods should supplement deterministic ones; however, it is recommended that more detailed guidance and criteria be developed for this purpose.
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministically chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to these questions. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations of a mountain river in southern Poland, the Raba River.
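One standard diagnostic behind such claims, a positive largest Lyapunov exponent, can be sketched on the logistic map. This is a textbook stand-in only; the paper's analysis of course works on embedded river discharge data, not this toy model.

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.2, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log|f'(x)| = log|r (1 - 2x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

chaotic = lyapunov_logistic(4.0)    # fully chaotic regime (exact value ln 2)
periodic = lyapunov_logistic(3.2)   # stable 2-cycle: exponent is negative
```

A positive exponent means nearby trajectories diverge exponentially, the signature of deterministic chaos; a negative exponent marks regular, predictable dynamics.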
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet jury theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of the data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, in which the decisions of different learning machines are combined and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173]. Several theories have been
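The majority vote ensemble described above can be sketched in a Condorcet-style toy experiment; the 0.7 competence level and the committee size of 11 are arbitrary illustrative numbers.

```python
from collections import Counter
import random

def majority_vote(classifiers, x):
    """Ensemble decision: the class predicted by the plurality of members."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

random.seed(3)
TRUTH = 1

def make_expert(p=0.7):
    """A weak 'expert' that is right with probability p, independently."""
    return lambda x: TRUTH if random.random() < p else 1 - TRUTH

experts = [make_expert() for _ in range(11)]
trials = 2_000
ens_acc = sum(majority_vote(experts, None) == TRUTH
              for _ in range(trials)) / trials
# With 11 independent experts at p = 0.7, the committee accuracy is ~0.92,
# well above any individual, in line with the Condorcet jury theorem.
```

The independence assumption is doing the work here: in practice, base learners are correlated, and much of ensemble research is about reducing that correlation while keeping individual competence above chance.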
DEFF Research Database (Denmark)
Ernst, Erik
2005-01-01
The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation, which is a procedure call in the context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, opening the doors for new kinds of abstraction and reuse. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves the caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language.
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Halberstam, Heine
2011-01-01
Derived from the techniques of analytic number theory, sieve theory employs methods from mathematical analysis to solve number-theoretical problems. This text by a noted pair of experts is regarded as the definitive work on the subject. It formulates the general sieve problem, explores the theoretical background, and illustrates significant applications. "For years to come, Sieve Methods will be vital to those seeking to work in the subject, and also to those seeking to make applications," noted prominent mathematician Hugh Montgomery in his review of this volume for the Bulletin of the Ameri
Efficient Asymptotic Preserving Deterministic methods for the Boltzmann Equation
2011-04-01
Department of Mathematics, University of Ferrara, Ferrara, Italy; Department of Mathematics and Computer Science, University of Catania, Catania, Italy. (RTO-EN-AVT-194)
Quadratic Finite Element Method for 1D Deterministic Transport
International Nuclear Information System (INIS)
Tolar, D R Jr.; Ferguson, J M
2004-01-01
In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms
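The conventional S_N baseline that this record improves on can be sketched in a few lines: a one-group slab solver with Gauss-Legendre angles, diamond differencing in space, and source iteration. All parameters and the slab setting below are illustrative, not taken from the paper (which treats 1D spherical geometry).

```python
import numpy as np

def sn_slab(n_angles=8, n_cells=50, width=4.0, sig_t=1.0, sig_s=0.5, q=1.0):
    """One-group S_N source iteration on a slab with vacuum boundaries."""
    mu, w = np.polynomial.legendre.leggauss(n_angles)  # discrete directions
    dx = width / n_cells
    phi = np.zeros(n_cells)                            # scalar flux guess
    for _ in range(500):                               # source iteration
        src = 0.5 * (sig_s * phi + q)                  # isotropic emission density
        phi_new = np.zeros(n_cells)
        for m in range(n_angles):
            psi_in = 0.0                               # vacuum boundary
            order = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            for i in order:                            # diamond-difference sweep
                a = 2.0 * abs(mu[m]) / dx
                psi_c = (src[i] + a * psi_in) / (sig_t + a)
                psi_in = 2.0 * psi_c - psi_in          # outgoing edge flux
                phi_new[i] += w[m] * psi_c             # quadrature sum
        if np.max(np.abs(phi_new - phi)) < 1e-9:
            phi = phi_new
            break
        phi = phi_new
    return phi
```

Increasing `n_angles` refines the angular resolution; the quadratic finite element treatment of μ described above aims to converge faster in exactly this parameter.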
Energy Technology Data Exchange (ETDEWEB)
Glass, J.T. [North Carolina State Univ., Raleigh (United States)
1993-01-01
Methods discussed in this compilation of notes and diagrams are Raman spectroscopy, scanning electron microscopy, transmission electron microscopy, and other surface analysis techniques (auger electron spectroscopy, x-ray photoelectron spectroscopy, electron energy loss spectroscopy, and scanning tunnelling microscopy). A comparative evaluation of different techniques is performed. In-vacuo and in-situ analyses are described.
Rogers, R.
2013-01-01
In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals
International Nuclear Information System (INIS)
Marhol, M.; Stary, J.
1975-01-01
The characteristics of chromatographic separation are given and the methods are listed. Methods and data on materials used in partition, adsorption, precipitation and ion exchange chromatography are listed, and the conditions under which ion partition takes place are described. Special attention is devoted to ion exchange chromatography, where tables show how the partition coefficients of different ions vary with the concentration of reagents, and how equilibrium sorption on different materials varies with solution pH. A theoretical analysis is given and the properties of the most widely used ion exchangers are listed. The experimental conditions and apparatus used for each type of chromatography are described. (L.K.)
Dahlquist, Germund
1974-01-01
"Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served." - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society. Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo's classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn's deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm…
International Nuclear Information System (INIS)
Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.
2002-01-01
Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories
International Nuclear Information System (INIS)
Furukawa, Toshiharu; Shibuya, Kiichiro.
1985-01-01
Purpose: To provide a method of eliminating radioactive contamination in which the decontamination liquid wastes and grinding materials are easy to treat. Method: Organic grinding materials, such as fine walnut shell pieces, cause no secondary contamination, since they are softer than inorganic grinding materials, are less readily pulverized upon collision with the surface being treated, can be reused, and produce no fine scattering powder. In addition, they can be disposed of by burning. The organic grinding material and water are sprayed from a nozzle onto the surface to be treated, and the decontamination liquid wastes are separated by filtering into solid components, mainly the organic grinding material, and liquid components, mainly water. The separated solid components are recovered in a storage tank for reuse as grinding material and, after repeated use, are disposed of by burning. The water is likewise recovered into a storage tank and, after repeated use, is purified by passing through an ion exchange resin-packed column and decontaminated before discharge. (Horiuchi, T.)
Cornell, A.A.; Dunbar, J.V.; Ruffner, J.H.
1959-09-29
A semi-automatic method is described for the weld joining of pipes and fittings which utilizes the inert gas-shielded consumable electrode electric arc welding technique. It comprises laying down the root pass at a first peripheral velocity and thereafter laying down the filler passes over the root pass necessary to complete the weld by revolving the pipes and fittings at a second peripheral velocity different from the first, maintaining the welding head in a fixed position as to the specific direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.
Marsden, Kenneth C.; Meyer, Mitchell K.; Grover, Blair K.; Fielding, Randall S.; Wolfensberger, Billy W.
2012-12-18
A casting device includes a covered crucible having a top opening and a bottom orifice, a lid covering the top opening, a stopper rod sealing the bottom orifice, and a reusable mold having at least one chamber, a top end of the chamber being open to and positioned below the bottom orifice and a vacuum tap into the chamber being below the top end of the chamber. A casting method includes charging a crucible with a solid material and covering the crucible, heating the crucible, melting the material, evacuating a chamber of a mold to less than 1 atm absolute through a vacuum tap into the chamber, draining the melted material into the evacuated chamber, solidifying the material in the chamber, and removing the solidified material from the chamber without damaging the chamber.
International Nuclear Information System (INIS)
Geary, W.J.
1986-01-01
This little volume is one of an extended series of basic textbooks on analytical chemistry produced by the Analytical Chemistry by Open Learning project in the UK. Prefatory sections explain its mission and how to use the Open Learning format. Seventeen specific sections organized into five chapters begin with a general discussion of nuclear properties, types, and laws of nuclear decay and proceed to specific discussions of three published papers (reproduced in their entirety) giving examples of the radiochemical methods discussed in the previous chapter. Each section begins with an overview, contains one or more practical problems (called self-assessment questions or SAQs), and concludes with a summary and a list of objectives for the student. Following the main body are answers to the SAQs and several tables of physical constants, SI prefixes, etc. A periodic table graces the inside back cover
Moment methods and Lanczos methods
International Nuclear Information System (INIS)
Whitehead, R.R.
1980-01-01
In contrast to many of the speakers at this conference I am less interested in average properties of nuclei than in detailed spectroscopy. I will try to show, however, that the two are very closely connected and that shell-model calculations may be used to give a great deal of information not normally associated with the shell-model. It has been demonstrated clearly to us that the level spacing fluctuations in nuclear spectra convey very little physical information. This is true when the fluctuations are averaged over the entire spectrum but not if one's interest is in the lowest few states, whose spacings are relatively large. If one wishes to calculate a ground state (say) accurately, that is with an error much smaller than the excitation energy of the first excited state, very high moments, μ_n, n ≈ 200, are needed. As I shall show, we use such moments as a matter of course, albeit without actually calculating them; in fact I will try to show that, if at all possible, the actual calculation of moments is to be avoided like the plague. At the heart of the new shell-model methods embodied in the Glasgow shell-model program and one or two similar ones is the so-called Lanczos method and this, it turns out, has many deep and subtle connections with the mathematical theory of moments. It is these connections that I will explore here
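The connection the author alludes to can be made concrete: the Lanczos iteration builds a tridiagonal matrix whose entries encode the same information as the moments μ_n = ⟨v|H^n|v⟩, without ever forming those moments explicitly, and its lowest Ritz value converges rapidly to the ground state. A minimal sketch on a random symmetric stand-in for a shell-model Hamiltonian (not a real nuclear matrix):

```python
import numpy as np

def lanczos_tridiag(H, v0, k):
    """k-step Lanczos: returns the k-by-k tridiagonal projection of H."""
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros_like(q)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    b = 0.0
    for j in range(k):
        w = H @ q - b * q_prev          # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
A = rng.normal(size=(400, 400))
H = (A + A.T) / 2                        # random symmetric "Hamiltonian"
T = lanczos_tridiag(H, rng.normal(size=400), 80)
# the lowest eigenvalue of the small matrix T approximates the ground state of H
```

Production codes add reorthogonalization to suppress ghost eigenvalues, but the extremal eigenvalue estimate is robust even without it.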
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
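The core idea, propagating input uncertainty through derivatives instead of through many random samples, can be sketched for a simple stand-in response. The function and input distributions below are hypothetical illustrations, not the paper's borehole-flow model.

```python
import numpy as np

def response(x):
    """Hypothetical smooth response standing in for the borehole flow model."""
    return x[0] * x[1] / (1.0 + 0.1 * x[0])

mean = np.array([2.0, 5.0])
std = np.array([0.1, 0.2])

# DUA-style first-order propagation: var_f ~ sum_i (df/dx_i * sigma_i)^2,
# needing only one gradient (a handful of model runs), not thousands.
eps = 1e-6
grad = np.array([(response(mean + eps * e) - response(mean - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
var_dua = float(np.sum((grad * std) ** 2))

# statistical reference: brute-force sampling of the same response
rng = np.random.default_rng(0)
samples = rng.normal(mean, std, size=(100_000, 2))
var_mc = float(np.var([response(s) for s in samples]))
# for this mildly nonlinear response the two variance estimates agree closely
```

In an adjoint-capable code the gradient comes from a single extra solve, which is where the run-count savings claimed above originate.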
Directory of Open Access Journals (Sweden)
Frederik Kortlandt
2018-01-01
The basis of linguistic reconstruction is the comparative method, which starts from the assumption that there is "a stronger affinity, both in the roots of verbs and in the forms of grammar, than could possibly have been produced by accident", implying the existence of a common source (thus Sir William Jones in 1786). It follows that there must be a possible sequence of developments from the reconstructed system to the attested data. These developments must have been either phonetically regular or analogical. The latter type of change requires a model and a motivation. A theory which does not account for the data in terms of sound laws and well-motivated analogical changes is not a linguistic reconstruction but philosophical speculation. The pre-laryngealist idea that any Proto-Indo-European long vowel became acute in Balto-Slavic is a typical example of philosophical speculation contradicted by the comparative evidence. Other examples are spontaneous glottalization (Jasanoff's "acute assignment", unattested anywhere in the world), Jasanoff's trimoraic long vowels, Eichner's law, Osthoff's law, and Szemerényi's law, which is an instance of circular reasoning. The Balto-Slavic acute continues the Proto-Indo-European laryngeals and the glottalic feature of the traditional Proto-Indo-European "unaspirated voiced" obstruents (Winter's law). My reconstruction of Proto-Indo-European glottalic obstruents is based on direct evidence from Indo-Iranian, Armenian, Baltic and Germanic and indirect evidence from Indo-Iranian, Greek, Latin and Slavic.
Methods for design flood estimation in South Africa
African Journals Online (AJOL)
2012-07-04
Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and .... transposition of past experience, or a deterministic approach,.
A simple method for estimating the convection- dispersion equation ...
African Journals Online (AJOL)
Jane
2011-08-31
Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ..... diffusion-type model for longitudinal mixing of fluids in flow.
A contribution Monte Carlo method
International Nuclear Information System (INIS)
Aboughantous, C.H.
1994-01-01
A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time
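As a toy illustration of the statistical agreement the abstract describes, a purely absorbing slab has the deterministic (exact) transmission e^(-Σ_t d), which an analog random walk reproduces to within its standard deviation. Parameters are illustrative and this is ordinary analog Monte Carlo, not the Contribution method itself.

```python
import numpy as np

rng = np.random.default_rng(1)
sig_t, thickness, n = 1.0, 3.0, 200_000   # total cross section, slab depth, histories

# analog Monte Carlo: sample free paths, count particles crossing the slab
depths = rng.exponential(1.0 / sig_t, size=n)
mc_transmission = float(np.mean(depths > thickness))

exact = float(np.exp(-sig_t * thickness))  # deterministic result
std_err = float(np.sqrt(exact * (1 - exact) / n))
```

Biasing schemes such as the Contribution method aim to shrink `std_err` for a fixed number of histories in deep-penetration problems like this.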
The stochastic energy-Casimir method
Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.
2018-04-01
In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.
Assessment of seismic margin calculation methods
International Nuclear Information System (INIS)
Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.
1989-03-01
Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs
Probabilistic Analysis Methods for Hybrid Ventilation
DEFF Research Database (Denmark)
Brohus, Henrik; Frier, Christian; Heiselberg, Per
This paper discusses a general approach for the application of probabilistic analysis methods in the design of ventilation systems. The aims and scope of probabilistic versus deterministic methods are addressed with special emphasis on hybrid ventilation systems. A preliminary application of stochastic differential equations is presented, comprising a general heat balance for an arbitrary number of loads and zones in a building to determine the thermal behaviour under random conditions.
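A minimal sketch of the stochastic-differential-equation idea for a single zone, assuming a hypothetical first-order heat balance dT = (T_out - T)/τ dt + σ dW. The multi-zone formulation and all parameter values of the paper are not reproduced; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, t_out, sigma = 2.0, 10.0, 0.5       # time constant [h], outdoor temp [C], noise
dt, steps, n_paths = 0.01, 1000, 2000    # 10 h of simulated time, 2000 realizations

T = np.full(n_paths, 20.0)               # every realization starts at 20 C
for _ in range(steps):
    # Euler-Maruyama step for dT = (t_out - T)/tau dt + sigma dW
    T += dt * (t_out - T) / tau + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# the ensemble relaxes to mean t_out with stationary variance sigma^2 * tau / 2
```

The payoff of the probabilistic view is exactly this ensemble: a distribution of zone temperatures under random loads rather than a single deterministic trajectory.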
Evaluation of the streaming-matrix method for discrete-ordinates duct-streaming calculations
International Nuclear Information System (INIS)
Clark, B.A.; Urban, W.T.; Dudziak, D.J.
1983-01-01
A new deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) is applied to two realistic duct-shielding problems. The results are compared to standard discrete-ordinates and Monte Carlo calculations. The SMHM shows promise as an alternative deterministic streaming method to standard discrete-ordinates
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
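Euler's method turns a compartmental ODE system into exactly the kind of difference-equation series described above. The three-compartment susceptible/infected/AIDS model and all parameter values below are hypothetical illustrations, not taken from the paper.

```python
def euler_hiv(s0=990.0, i0=10.0, a0=0.0, beta=5e-4, sigma=0.1, mu=0.02,
              dt=0.1, steps=1000):
    """Euler steps for a hypothetical S/I/A compartmental model."""
    s, i, a = s0, i0, a0
    history = [(s, i, a)]
    for _ in range(steps):
        ds = -beta * s * i                      # new infections leave S
        di = beta * s * i - (sigma + mu) * i    # progression and removal from I
        da = sigma * i - mu * a                 # inflow to and deaths in A
        s, i, a = s + dt * ds, i + dt * di, a + dt * da
        history.append((s, i, a))
    return history

traj = euler_hiv()
# susceptibles decline monotonically as the difference equations are iterated
```

The same loop translates line-for-line into the VISUAL BASIC setting the abstract mentions, since only arithmetic and a loop are required.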
Radiation transport calculation methods in BNCT
International Nuclear Information System (INIS)
Koivunoro, H.; Seppaelae, T.; Savolainen, S.
2000-01-01
Boron neutron capture therapy (BNCT) is used as a radiotherapy for malignant brain tumours. Radiation dose distribution is necessary to determine individually for each patient. Radiation transport and dose distribution calculations in BNCT are more complicated than in conventional radiotherapy. Total dose in BNCT consists of several different dose components. The most important dose component for tumour control is therapeutic boron dose D B . The other dose components are gamma dose D g , incident fast neutron dose D f ast n and nitrogen dose D N . Total dose is a weighted sum of the dose components. Calculation of neutron and photon flux is a complex problem and requires numerical methods, i.e. deterministic or stochastic simulation methods. Deterministic methods are based on the numerical solution of Boltzmann transport equation. Such are discrete ordinates (SN) and spherical harmonics (PN) methods. The stochastic simulation method for calculation of radiation transport is known as Monte Carlo method. In the deterministic methods the spatial geometry is partitioned into mesh elements. In SN method angular integrals of the transport equation are replaced with weighted sums over a set of discrete angular directions. Flux is calculated iteratively for all these mesh elements and for each discrete direction. Discrete ordinates transport codes used in the dosimetric calculations are ANISN, DORT and TORT. In PN method a Legendre expansion for angular flux is used instead of discrete direction fluxes, land the angular dependency comes a property of vector function space itself. Thus, only spatial iterations are required for resulting equations. A novel radiation transport code based on PN method and tree-multigrid technique (TMG) has been developed at VTT (Technical Research Centre of Finland). Monte Carlo method solves the radiation transport by randomly selecting neutrons and photons from a prespecified boundary source and following the histories of selected particles
Weighted particle method for solving the Boltzmann equation
International Nuclear Information System (INIS)
Tohyama, M.; Suraud, E.
1990-01-01
We propose a new, deterministic method of solution of the nuclear Boltzmann equation. In this Weighted Particle Method, two-body collisions are treated by a Master equation for an occupation probability of each numerical particle. We apply the method to the quadrupole motion of 12C. A comparison with usual stochastic methods is made. Advantages and disadvantages of the Weighted Particle Method are discussed
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and it has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one being the changes to specific design provisions of the general design criteria (GDC) and the other the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectical and unified, that they are gradually being merged into each other, and that they are being used in coordination. (authors)
STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
Nataša Krejić
2014-12-01
This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
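The data-fitting setting mentioned at the end can be sketched with the simplest stochastic gradient scheme: a Robbins-Monro-style diminishing step applied to single-sample least-squares gradients. The data and step sizes below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # synthetic design matrix
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=500)  # noisy observations

w = np.zeros(3)
for k in range(5000):
    i = rng.integers(500)                     # sample one data point
    g = (X[i] @ w - y[i]) * X[i]              # stochastic gradient of one-sample loss
    w -= 0.01 / (1.0 + 0.001 * k) * g         # diminishing step size
# w is now close to true_w
```

Each iteration touches one data point, which is why such methods scale to the machine-learning problems the abstract mentions, at the price of a noisy search path.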
Mathematical methods in elasticity imaging
Ammari, Habib; Garnier, Josselin; Kang, Hyeonbae; Lee, Hyundae; Wahab, Abdul
2015-01-01
This book is the first to comprehensively explore elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena. It derives the best possible functional images for small inclusions and cracks within the context of stability and resolution, and introduces a topological derivative-based imaging framework for detecting elastic inclusions in the time-harmonic regime. For imaging extended elastic inclusions, accurate optimal control methodologies are designed and the effects of uncertainties of the geometric or physical parameters on stability and resolution properties are evaluated. In particular, the book shows how localized damage to a mechanical structure affects its dynamic characteristics, and how measured eigenparameters are linked to elastic inclusion or crack location, orientation, and size. Demonstrating a novel method for identifying, locating, and estimating inclusions and cracks in elastic...
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Cheap arbitrary high order methods for single integrand SDEs
DEFF Research Database (Denmark)
Debrabant, Kristian; Kværnø, Anne
2017-01-01
For a particular class of Stratonovich SDE problems, here denoted as single integrand SDEs, we prove that by applying a deterministic Runge-Kutta method of order $p_d$ we obtain methods converging in the mean-square and weak sense with order $\lfloor p_d/2\rfloor$. The reason is that the B-series…
Probabilistic structural analysis methods for space transportation propulsion systems
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.
Daciuk, J; Champarnaud, JM; Maurel, D
2003-01-01
This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
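The non-incremental baseline in such comparisons can be sketched as: build a trie from the words, then merge state-equivalent suffixes bottom-up via a registry of signatures. This is a simplified sketch; the incremental algorithms evaluated in the paper instead add words one at a time while keeping the automaton minimal.

```python
def build_trie(words):
    root = {"final": False, "edges": {}}
    for w in words:
        node = root
        for ch in w:
            node = node["edges"].setdefault(ch, {"final": False, "edges": {}})
        node["final"] = True
    return root

def minimize(node, registry):
    """Bottom-up merge of equivalent states (same finality, same transitions)."""
    for ch in list(node["edges"]):
        node["edges"][ch] = minimize(node["edges"][ch], registry)
    sig = (node["final"],
           tuple(sorted((ch, id(t)) for ch, t in node["edges"].items())))
    return registry.setdefault(sig, node)

def accepts(root, w):
    node = root
    for ch in w:
        if ch not in node["edges"]:
            return False
        node = node["edges"][ch]
    return node["final"]

def count_states(root):
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if id(n) not in seen:
            seen.add(id(n))
            stack.extend(n["edges"].values())
    return len(seen)

dfa = minimize(build_trie(["tap", "taps", "top", "tops"]), {})
# shared prefixes and suffixes collapse: fewer states than the trie
```

The registry plays the role of the "register" of states used in the incremental constructions, which avoid ever materializing the full trie.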
National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...
DEFF Research Database (Denmark)
Hostrup, Astrid Kuijers
1999-01-01
An introduction to BDF-methods is given. The use of these methods on differential algebraic equations (DAEs) with different indices is presented with respect to the order, stability and convergence of the BDF-methods.
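The two-step BDF formula can be illustrated on a stiff linear test problem, where the implicit stage is solvable in closed form. The test problem and step size are illustrative only; the DAE setting of the text is more general.

```python
import math

# stiff test problem: y' = -lam*y + g(t) with g(t) = lam*cos(t),
# whose slow solution closely tracks cos(t)
lam, dt, steps = 1000.0, 0.01, 200

def g(t):
    return lam * math.cos(t)

y0 = 1.0
# one backward Euler step to start the two-step method:
# y1 = y0 + dt*(-lam*y1 + g(t1))
y1 = (y0 + dt * g(dt)) / (1.0 + dt * lam)
ys = [y0, y1]
for n in range(2, steps + 1):
    tn = n * dt
    # BDF2: (3*y_n - 4*y_{n-1} + y_{n-2}) / (2*dt) = -lam*y_n + g(t_n)
    yn = (4 * ys[-1] - ys[-2] + 2 * dt * g(tn)) / (3.0 + 2.0 * dt * lam)
    ys.append(yn)
# despite lam*dt = 10, the A-stable BDF2 tracks cos(t) without blow-up
```

An explicit method at this step size would diverge immediately; this stability at large `lam*dt` is why BDF methods are the standard choice for stiff ODEs and DAEs.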
Uranium price forecasting methods
International Nuclear Information System (INIS)
Fuller, D.M.
1994-01-01
This article reviews a number of forecasting methods that have been applied to uranium prices and compares their relative strengths and weaknesses. The methods reviewed are: (1) judgemental methods, (2) technical analysis, (3) time-series methods, (4) fundamental analysis, and (5) econometric methods. Historically, none of these methods has performed very well, but a well-thought-out model is still useful as a basis from which to adjust to new circumstances and try again
Methods in aquatic bacteriology
National Research Council Canada - National Science Library
Austin, B
1988-01-01
.... Within these sections detailed chapters consider sampling methods, determination of biomass, isolation methods, identification, the bacterial microflora of fish, invertebrates, plants and the deep...
Transport equation solving methods
International Nuclear Information System (INIS)
Granjean, P.M.
1984-06-01
This work is mainly devoted to the C_N and F_N methods. C_N method: starting from a lemma stated by Placzek, an equivalence is established between two problems: the first one is defined in a finite medium bounded by a surface S, the second one is defined in the whole space. In the first problem the angular flux on the surface S is shown to be the solution of an integral equation. This equation is solved by Galerkin's method. The C_N method is applied here to one-velocity problems: in plane geometry, slab albedo and transmission with Rayleigh scattering, calculation of the extrapolation length; in cylindrical geometry, albedo and extrapolation length calculation with linear scattering. F_N method: the basic integral transport equation of the C_N method is integrated on Case's elementary distributions; another integral transport equation is obtained: this equation is solved by a collocation method. The plane problems solved by the C_N method are also solved by the F_N method. The F_N method is extended to any polynomial scattering law. Some simple spherical problems are also studied. Chandrasekhar's method, the collision probability method and Case's method are presented for comparison with the C_N and F_N methods. This comparison shows the respective advantages of the two methods: a) fast convergence and possible extension to various geometries for the C_N method; b) easy calculations and easy extension to polynomial scattering for the F_N method [fr]
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of existing and most used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling Applicability Uncertainty (CSAU) method by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena that a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications, replacing the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and its follow-up, the Code with the Capability of Internal Assessment of Uncertainty (CIAU), developed by the University of Pisa. Unlike the statistical approaches, the CIAU does compare experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy obtained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions
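The sample-size consequence of Wilks' formula is easy to compute: for a one-sided tolerance limit with probability content β at confidence γ, one needs the smallest n with 1 - β^n ≥ γ, and n does not depend on how many uncertain inputs are propagated.

```python
import math

def wilks_first_order(beta=0.95, gamma=0.95):
    """Smallest number n of code runs such that the sample maximum bounds the
    beta-quantile of the output with confidence gamma (one-sided, first order):
    solve 1 - beta**n >= gamma for integer n."""
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

n_9595 = wilks_first_order(0.95, 0.95)   # the classic 95%/95% criterion: 59 runs
n_9599 = wilks_first_order(0.95, 0.99)   # higher confidence costs more runs: 90
```

This independence from the number of uncertain parameters is precisely the advantage over CSAU-style PIRT reduction noted above.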
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad; Nobile, Fabio; Tempone, Raul
2012-01-01
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed, subject to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods fairly well can be applied in order to interpret deterministic safety criteria in quantitative terms. For further improved extension of applied reliability analysis it has turned out that the influence of operational and control systems and of component protection devices should be considered with the aid of reliability analysis methods in detail. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing
DEFF Research Database (Denmark)
Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina
2016-01-01
Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not yet determinized by the subset construction) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
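The AND-operator discussed above can be illustrated with the textbook product construction, a sketch rather than the paper's NFA-overriding algorithm: the product DFA simulates both machines at once and accepts exactly when both do.

```python
# Product construction for the intersection (AND) of two regular
# languages. A DFA here is (states, alphabet, transition dict, start,
# accepting set); the product state is a pair of component states.
def intersect_dfa(d1, d2):
    s1, sigma, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = {(a, b) for a in s1 for b in s2}
    delta = {((a, b), c): (t1[a, c], t2[b, c])
             for a in s1 for b in s2 for c in sigma}
    accept = {(a, b) for a in f1 for b in f2}
    return states, sigma, delta, (q1, q2), accept

def accepts(dfa, word):
    _, _, delta, q, accept = dfa
    for c in word:
        q = delta[q, c]
    return q in accept

# "Even number of a's" intersected with "ends in b":
sigma = {'a', 'b'}
even_a = ({0, 1}, sigma,
          {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({0, 1}, sigma,
          {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1})
both = intersect_dfa(even_a, ends_b)
```

The same pair-of-states idea extends to subtraction (accept when the first component accepts and the second does not), which is how the MINUS-operator and complement are usually obtained.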
Advanced differential quadrature methods
Zong, Zhi
2009-01-01
Modern Tools to Perform Numerical DifferentiationThe original direct differential quadrature (DQ) method has been known to fail for problems with strong nonlinearity and material discontinuity as well as for problems involving singularity, irregularity, and multiple scales. But now researchers in applied mathematics, computational mechanics, and engineering have developed a range of innovative DQ-based methods to overcome these shortcomings. Advanced Differential Quadrature Methods explores new DQ methods and uses these methods to solve problems beyond the capabilities of the direct DQ method.After a basic introduction to the direct DQ method, the book presents a number of DQ methods, including complex DQ, triangular DQ, multi-scale DQ, variable order DQ, multi-domain DQ, and localized DQ. It also provides a mathematical compendium that summarizes Gauss elimination, the Runge-Kutta method, complex analysis, and more. The final chapter contains three codes written in the FORTRAN language, enabling readers to q...
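The direct DQ method that the book takes as its starting point approximates a derivative at one grid point as a weighted sum of function values at all grid points; the weights follow from differentiating the Lagrange interpolation basis. A minimal sketch using Shu's explicit formulas (function names ours):

```python
# Direct differential quadrature (DQ): f'(x_i) ~= sum_j a[i][j] * f(x_j),
# with weights a[i][j] from derivatives of the Lagrange basis polynomials.
def dq_weights(x):
    n = len(x)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            num = 1.0                       # prod_{k != i} (x_i - x_k)
            for k in range(n):
                if k != i:
                    num *= x[i] - x[k]
            den = 1.0                       # prod_{k != j} (x_j - x_k)
            for k in range(n):
                if k != j:
                    den *= x[j] - x[k]
            a[i][j] = num / ((x[i] - x[j]) * den)
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# On a 3-point grid, f(x) = x**2 is differentiated exactly:
x = [0.0, 0.5, 1.0]
a = dq_weights(x)
dfdx = [sum(a[i][j] * x[j] ** 2 for j in range(3)) for i in range(3)]
```

Since the weights reproduce polynomial interpolation, an n-point grid differentiates polynomials of degree below n exactly; the failures for strong nonlinearity and discontinuity that motivate the book arise precisely when this global polynomial assumption breaks down.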
Inflow Turbulence Generation Methods
Wu, Xiaohua
2017-01-01
Research activities on inflow turbulence generation methods have been vigorous over the past quarter century, accompanying advances in eddy-resolving computations of spatially developing turbulent flows with direct numerical simulation, large-eddy simulation (LES), and hybrid Reynolds-averaged Navier-Stokes-LES. The weak recycling method, rooted in scaling arguments on the canonical incompressible boundary layer, has been applied to supersonic boundary layer, rough surface boundary layer, and microscale urban canopy LES coupled with mesoscale numerical weather forecasting. Synthetic methods, originating from analytical approximation to homogeneous isotropic turbulence, have branched out into several robust methods, including the synthetic random Fourier method, synthetic digital filtering method, synthetic coherent eddy method, and synthetic volume forcing method. This article reviews major progress in inflow turbulence generation methods with an emphasis on fundamental ideas, key milestones, representative applications, and critical issues. Directions for future research in the field are also highlighted.
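The synthetic random Fourier method mentioned above can be sketched in one dimension: a fluctuating inflow signal built as a sum of Fourier modes with prescribed amplitudes and independent random phases. The k^(-5/3)-type model spectrum and all names here are illustrative assumptions, not any particular published variant:

```python
import math
import random

# Synthetic random Fourier sketch: sum of cosine modes with amplitudes
# sqrt(E(k)) for a model spectrum E(k) ~ k**(-5/3) and random phases.
def synthetic_inflow(n_points, n_modes, seed=0):
    rng = random.Random(seed)
    ks = [2 * math.pi * m for m in range(1, n_modes + 1)]
    amps = [k ** (-5.0 / 6.0) for k in ks]      # sqrt of E(k) ~ k^-5/3
    phis = [rng.uniform(0, 2 * math.pi) for _ in ks]
    xs = [i / n_points for i in range(n_points)]
    return [sum(a * math.cos(k * x + p) for a, k, p in zip(amps, ks, phis))
            for x in xs]
```

By construction the signal has zero mean over one period, and a new random seed yields a new statistically equivalent realization, which is the property inflow generators rely on.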
Bellman, Richard Ernest
1970-01-01
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat
Consumer Behavior Research Methods
DEFF Research Database (Denmark)
Chrysochou, Polymeros
2017-01-01
This chapter starts by distinguishing consumer behavior research methods based on the type of data used, being either secondary or primary. Most consumer behavior research studies phenomena that require researchers to enter the field and collect data on their own, and therefore the chapter...... emphasizes the discussion of primary research methods. Based on the nature of the data primary research methods are further distinguished into qualitative and quantitative. The chapter describes the most important and popular qualitative and quantitative methods. It concludes with an overall evaluation...... of the methods and how to improve quality in consumer behavior research methods....
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. The method is based on a rigorous definition of the functional integral in a complete separable metric space and on approximation formulas which we constructed for this kind of integral. It is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required, and no simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by the computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods commonly used for this problem. The results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives a significant (order of magnitude) economy of computer time and memory versus other known methods, while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered, and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
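The quadrature step at the core of the method has a simple one-dimensional analogue: Gauss-Hermite nodes and weights integrate polynomials against a Gaussian weight exactly up to degree 2n − 1, mirroring formulas that are "exact on polynomial functionals of a given degree". A minimal sketch:

```python
import math

import numpy as np

# 1D analogue of the deterministic quadrature step: n = 5 Gauss-Hermite
# nodes/weights integrate p(x) * exp(-x**2) exactly for any polynomial
# p of degree <= 2n - 1 = 9.
nodes, weights = np.polynomial.hermite.hermgauss(5)
approx = float(np.sum(weights * nodes ** 2))   # integral of x^2 e^{-x^2}
exact = math.sqrt(math.pi) / 2                 # analytic value
```

Five deterministic evaluations here give the machine-precision answer that a Monte Carlo estimate would need millions of samples to match, which is the economy the abstract claims at the functional-integral level.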
Nuclear methods in medical physics
International Nuclear Information System (INIS)
Jeraj, R.
2003-01-01
A common ground for both, reactor and medical physics is a demand for high accuracy of particle transport calculations. In reactor physics, safe operation of nuclear power plants has been asking for high accuracy of calculation methods. Similarly, dose calculation in radiation therapy for cancer has been requesting high accuracy of transport methods to ensure adequate dosimetry. Common to both problems has always been a compromise between achievable accuracy and available computer power leading into a variety of calculation methods developed over the decades. On the other hand, differences of subjects (nuclear reactor vs. humans) and radiation types (neutron/photon vs. photon/electron or ions) are calling for very field-specific approach. Nevertheless, it is not uncommon to see drift of researches from one field to another. Several examples from both fields will be given with the aim to compare the problems, indicating their similarities and discussing their differences. As examples of reactor physics applications, both deterministic and Monte Carlo calculations will be presented for flux distributions of the VENUS and TRIGA Mark II benchmark. These problems will be paralleled to medical physics applications in linear accelerator radiation field determination and dose distribution calculations. Applicability of the adjoint/forward transport will be discussed in the light of both transport problems. Boron neutron capture therapy (BNCT) as an example of the close collaboration between the fields will be presented. At last, several other examples from medical physics, which can and cannot find corresponding problems in reactor physics, will be discussed (e.g., beam optimisation in inverse treatment planning, imaging applications). (author)
International Nuclear Information System (INIS)
Ohtori, Yasuki
2004-01-01
In the JEAG4601-1987 (Japan Electric Association Guide for earthquake resistance design), either the conventional deterministic method or a probabilistic method is used for evaluating the stability of ground foundations and surrounding slopes in nuclear power plants. The deterministic method, in which soil properties of 'mean ± coefficient × standard deviation' are adopted for the calculations, has generally been used in the design stage to date. On the other hand, the probabilistic method, in which the soil properties are assumed to have probabilistic distributions, is stated as a future method. The deterministic method facilitates the evaluation; however, it is necessary to clarify its relation to the probabilistic method. In this paper, the relationship between the deterministic and probabilistic methods is investigated. To do this, a simple model that can take into account the dynamic effect of structures, and a simplified method for accounting for spatial randomness, are proposed and used in the studies. The studies show that the strength of the soil is the most important factor for the stability of ground structures, and that the probability of falling below the safety factor evaluated by the deterministic method with soil properties of 'mean − 1.0 × standard deviation' is much lower. (author)
U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...
International Nuclear Information System (INIS)
Garncarek, Z.
1989-01-01
The three circle method in its general form is presented. The method is especially useful for investigation of shapes of agglomerations of objects. An example of its applications to investigation of galaxies distribution is given. 17 refs. (author)
DEFF Research Database (Denmark)
Jensen, Torben Elgaard; Andreasen, Mogens Myrup
2010-01-01
The paper challenges the dominant and widespread view that a good design method will guarantee a systematic approach as well as certain results. First, it explores the substantial differences between on the one hand the conception of methods implied in Pahl & Beitz’s widely recognized text book...... on engineering design, and on the other hand the understanding of method use, which has emerged from micro-sociological studies of practice (ethnomethodology). Second, it reviews a number of case studies conducted by engineering students, who were instructed to investigate the actual use of design methods...... in Danish companies. The paper concludes that design methods in practice deviate substantially from Pahl & Beitz’s description of method use: The object and problems, which are the starting points for method use, are more contested and less given than generally assumed; The steps of methods are often...
Mastorakis, Nikos E
2009-01-01
Features contributions that are focused on significant aspects of current numerical methods and computational mathematics. This book carries chapters that advanced methods and various variations on known techniques that can solve difficult scientific problems efficiently.
International Nuclear Information System (INIS)
Lee, Byeong Hae
1992-02-01
This book describes the basic finite element method, covering basic concepts and data, the black-box view, preparation of data, the definition of vectors and matrices, matrix multiplication and addition, the unit matrix, the concept of the stiffness matrix in terms of spring force and displacement, the governing equation of an elastic body, the finite element method itself, and Fortran programming, including the organization of a computer, the order of programming, data cards and Fortran cards, a finite element program, and an application to a non-elastic problem.
Conformable variational iteration method
Directory of Open Access Journals (Sweden)
Omer Acan
2017-02-01
Full Text Available In this study, we introduce the conformable variational iteration method based on the newly defined fractional derivative called the conformable fractional derivative. The new method is applied to two fractional-order ordinary differential equations. To illustrate the solutions obtained by this method, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The results obtained are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
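The conformable fractional derivative underlying the method is defined by T_α f(t) = lim_{ε→0} [f(t + ε t^(1−α)) − f(t)]/ε, which equals t^(1−α) f′(t) for differentiable f. A minimal numeric sketch (function name ours):

```python
# Conformable fractional derivative, evaluated directly from its
# limit definition with a small finite eps:
#   T_alpha f(t) = (f(t + eps * t**(1 - alpha)) - f(t)) / eps
# For differentiable f this tends to t**(1 - alpha) * f'(t).
def conformable_derivative(f, t, alpha, eps=1e-6):
    return (f(t + eps * t ** (1 - alpha)) - f(t)) / eps

# For f(t) = t**2 at t = 2 with alpha = 0.5, the exact value is
# t**(1 - 0.5) * 2 * t = 2 * 2**1.5.
```

Unlike the Riemann-Liouville or Caputo definitions, this derivative is local and obeys the usual product and chain rules, which is what makes the variational iteration scheme straightforward to set up.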
VALUATION METHODS- LITERATURE REVIEW
Dorisz Talas
2015-01-01
This paper is a theoretical overview of the often used valuation methods with the help of which the value of a firm or its equity is calculated. Many experts (including Aswath Damodaran, Guochang Zhang and CA Hozefa Natalwala) classify the methods. The basic models are based on discounted cash flows. The main method uses the free cash flow for valuation, but there are some newer methods that reveal and correct the weaknesses of the traditional models. The valuation of flexibility of managemen...
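The free-cash-flow approach named above as the main method can be sketched as a discounted cash flow with a Gordon-growth terminal value; the function name and numbers below are purely illustrative, not from the paper:

```python
# Discounted free cash flow (DCF) valuation sketch.
# `fcfs` are forecast free cash flows for years 1..n, `r` the discount
# rate, `g` the perpetual growth rate used in the Gordon terminal value.
def dcf_value(fcfs, r, g):
    # present value of the explicit forecast horizon
    pv_explicit = sum(f / (1 + r) ** t for t, f in enumerate(fcfs, start=1))
    # Gordon-growth terminal value at the end of year n, then discounted
    terminal = fcfs[-1] * (1 + g) / (r - g)
    return pv_explicit + terminal / (1 + r) ** len(fcfs)
```

As a sanity check, a constant cash flow with zero growth collapses to the perpetuity value FCF/r regardless of the horizon length, which is one way the newer methods test the internal consistency of the traditional model.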
Halcomb, Elizabeth; Hickman, Louise
2015-04-08
Mixed methods research involves the use of qualitative and quantitative data in a single research project. It represents an alternative methodological approach, combining qualitative and quantitative research approaches, which enables nurse researchers to explore complex phenomena in detail. This article provides a practical overview of mixed methods research and its application in nursing, to guide the novice researcher considering a mixed methods research project.
Possibilities of roentgenological method
International Nuclear Information System (INIS)
Sivash, Eh.S.; Sal'man, M.M.
1980-01-01
Literature and experimental data on the capabilities of roentgenologic investigation using an electron-optical amplifier, X-ray television and roentgen cinematography are summarized. Different methods of studying the gastro-intestinal tract are compared. The advantage of the roentgenologic method over the endoscopic method after stomach resection is shown [ru
The Generalized Sturmian Method
DEFF Research Database (Denmark)
Avery, James Emil
2011-01-01
these ideas clearly so that they become more accessible. By bringing together these non-standard methods, the book intends to inspire graduate students, postdoctoral researchers and academics to think of novel approaches. Is there a method out there that we have not thought of yet? Can we design a new method...... generations of researchers were left to work out how to achieve this ambitious goal for molecular systems of ever-increasing size. This book focuses on non-mainstream methods to solve the molecular electronic Schrödinger equation. Each method is based on a set of core ideas and this volume aims to explain...
Mimetic discretization methods
Castillo, Jose E
2013-01-01
To help solve physical and engineering problems, mimetic or compatible algebraic discretization methods employ discrete constructs to mimic the continuous identities and theorems found in vector calculus. Mimetic Discretization Methods focuses on the recent mimetic discretization method co-developed by the first author. Based on the Castillo-Grone operators, this simple mimetic discretization method is invariably valid for spatial dimensions no greater than three. The book also presents a numerical method for obtaining corresponding discrete operators that mimic the continuum differential and
International Nuclear Information System (INIS)
Leasure, C.S.
1992-01-01
The Department of Energy (DOE) has established an analytical methods compendium development program to integrate its environmental analytical methods. This program is administered through DOE's Laboratory Management Division (EM-563). Its primary objective is to assemble a compendium of analytical chemistry methods of known performance for use by all DOE Environmental Restoration and Waste Management programs. The compendium will include methods for sampling, field screening, and fixed and mobile analytical laboratory analyses. It will also include specific guidance on the proper selection of sampling and analytical methods to meet specific analytical requirements
Methods for assessing geodiversity
Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco
2017-04-01
The accepted systematics of geodiversity assessment methods will be presented in three categories: qualitative, quantitative and qualitative-quantitative. Qualitative methods are usually descriptive methods suited to nominal and ordinal data. Quantitative methods use a different set of parameters and indicators to determine the characteristics of geodiversity in the area being researched. Qualitative-quantitative methods are a good combination of the collection of quantitative (i.e. digital) data and cause-effect (i.e. relational and explanatory) data. It seems that at the current stage of the development of geodiversity research methods, qualitative-quantitative methods are the most advanced and best assess the geodiversity of the study area. Their particular advantage is the integration of data from different sources and with different substantive content. Among the distinguishing features of the quantitative and qualitative-quantitative methods for assessing geodiversity is their wide use within geographic information systems, both at the stage of data collection and integration and in numerical processing and presentation. The unresolved problem for these methods, however, is the possibility of their validation. It seems that currently the best method of validation is direct field confrontation. Looking to the next few years, the development of qualitative-quantitative methods connected with cognitive issues should be expected, oriented towards ontology and the Semantic Web.
The Random Ray Method for neutral particle transport
Energy Technology Data Exchange (ETDEWEB)
Tramm, John R., E-mail: jtramm@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States); Smith, Kord S., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States)
2017-08-01
A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
The Random Ray Method for neutral particle transport
International Nuclear Information System (INIS)
Tramm, John R.; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2017-01-01
A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
Probabilistic safety analysis : a new nuclear power plants licensing method
International Nuclear Information System (INIS)
Oliveira, L.F.S. de.
1982-04-01
After a brief retrospect of the application of Probabilistic Safety Analysis in the nuclear field, the basic differences between the deterministic licensing method, currently in use, and the probabilistic method are explained. Next, the two main proposals (by the AIF and the ACRS) concerning the establishment of the so-called quantitative safety goals (or simply 'safety goals') are separately presented and afterwards compared in their most fundamental aspects. Finally, some recent applications and future possibilities are discussed. (Author) [pt
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical...... games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem....
Methods of Software Verification
Directory of Open Access Journals (Sweden)
R. E. Gurin
2015-01-01
Full Text Available This article is devoted to the problem of software (SW) verification. Methods of software verification are designed to check the software for compliance with stated requirements such as correctness, system security, adaptability to small changes in the environment, portability, compatibility, etc. These methods vary both in how they operate and in how they achieve their results. The article describes the static and dynamic methods of software verification and pays particular attention to the method of symbolic execution. In the review of static analysis, the deductive method and model-checking methods are discussed and described. The pros and cons of each method are emphasized, and a classification of test techniques for each method is considered. We present and analyze the characteristics and mechanisms of static dependency analysis, as well as its variants, which can reduce the number of false positives in situations where the current state of the program combines two or more states obtained either on different execution paths or when working with multiple object values. Dependencies connect various types of software objects: single variables, the elements of composite variables (structure fields, array elements), the sizes of heap areas, the lengths of strings, and the number of initialized array elements in code verified using static methods. The article pays attention to the identification of dependencies within the framework of abstract interpretation, and gives an overview and analysis of the inference tools. Methods of dynamic analysis such as testing, monitoring and profiling are presented and analyzed, and some kinds of tools that can be applied to software when using dynamic analysis methods are considered. Based on this work a conclusion is drawn, which describes the most relevant problems of analysis techniques, methods of their solutions and
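The contrast between sampling-based dynamic testing and exhaustive checking can be made concrete at toy scale: over a small bounded input domain a postcondition can be verified on every input, a miniature stand-in for bounded model checking (all names below are ours, not from the article):

```python
# Exhaustive bounded checking (toy sketch): prove a postcondition of
# `int_abs` for every input in a bounded domain, rather than for a
# sampled subset as ordinary testing would.
def int_abs(x):
    return -x if x < 0 else x

def verify_bounded(prop, domain):
    # True iff the property holds on EVERY element of the domain
    return all(prop(x) for x in domain)

# Postcondition: the result is non-negative and equals x or -x.
ok = verify_bounded(lambda x: int_abs(x) >= 0 and int_abs(x) in (x, -x),
                    range(-1000, 1001))
```

Real verifiers replace the explicit enumeration with symbolic reasoning over program states precisely because domains like 64-bit integers cannot be enumerated, which is where the symbolic execution and abstract interpretation discussed above come in.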
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
International Nuclear Information System (INIS)
Bourdon, B.
2003-01-01
The general principle of isotope dating methods is based on the presence of radioactive isotopes in the geologic or archaeological object to be dated. The decay with time of these isotopes is used to determine the 'zero' time corresponding to the event to be dated. This paper recalls the general principle of isotope dating methods (bases, analytical methods, validation of results and uncertainties) and presents the methods based on natural radioactivity (Rb-Sr, Sm-Nd, U-Pb, Re-Os, K-Ar (Ar-Ar), U-Th-Ra- 210 Pb, U-Pa, 14 C, 36 Cl, 10 Be) and the methods based on artificial radioactivity with their applications. Finally, the methods based on irradiation damages (thermoluminescence, fission tracks, electron spin resonance) are briefly evoked. (J.S.)
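The 'zero time' principle described above follows directly from the decay law: if a sample starts with no daughter isotope, the measured daughter-to-parent ratio D/P gives the age t = ln(1 + D/P)/λ. A minimal sketch (function name ours; the Rb-87 half-life of about 48.8 Gyr used in the check is an illustrative commonly cited value):

```python
import math

# Age from the decay law. With parent P remaining and radiogenic
# daughter D accumulated from zero:
#   P = N0 * exp(-lam * t),  D = N0 - P   =>   t = ln(1 + D/P) / lam,
# where lam = ln(2) / half_life is the decay constant.
def isotope_age(daughter_parent_ratio, half_life):
    lam = math.log(2) / half_life
    return math.log(1 + daughter_parent_ratio) / lam
```

The round trip is exact: a sample aged t has D/P = exp(λt) − 1, and feeding that ratio back recovers t, which is the basic consistency check behind all the parent-daughter pairs listed above.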
DEFF Research Database (Denmark)
Svabo, Connie
2016-01-01
is presented and an example is provided of a first exploratory engagement with it. The method is used in a specific project Becoming Iris, making inquiry into arts-based knowledge creation during a three month visiting scholarship at a small, independent visual art academy. Using the performative schizoid......A performative schizoid method is developed as a method contribution to performance as research. The method is inspired by contemporary research in the human and social sciences urging experimentation and researcher engagement with creative and artistic practice. In the article, the method...... method in Becoming Iris results in four audio-visual and performance-based productions, centered on an emergent theme of the scholartist as a bird in borrowed feathers. Interestingly, the moral lesson of the fable about the vain jackdaw, who dresses in borrowed peacock feathers and becomes a castout...
International Nuclear Information System (INIS)
Ferguson, A.J.
1974-01-01
An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
The maximum entropy method for analytic continuation is extended by introducing a quantum relative entropy. The new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Hansen, G.E.
1985-01-01
The Rossi Alpha Method has proved to be valuable for the determination of prompt neutron lifetimes in fissile assemblies having known reproduction numbers at or near delayed critical. This workshop report emphasizes the pioneering applications of the method by Dr. John D. Orndoff to fast-neutron critical assemblies at Los Alamos. The value of the method appears to disappear for subcritical systems where the Rossi-α is no longer an α-eigenvalue
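The measurement sketched above can be illustrated numerically: the time-correlation of detections from fission chains behaves as C(τ) = A·e^(ατ) + B with α < 0 near delayed critical, so with the accidental background B known, α follows from a log-linear fit. The data below are synthetic and purely illustrative (all names ours):

```python
import math

# Fit alpha from Rossi-alpha correlation data C(tau) = A*exp(alpha*tau) + B
# by a least-squares straight line through (tau, log(C - B)).
def fit_alpha(taus, counts, background):
    ys = [math.log(c - background) for c in counts]
    n = len(taus)
    mt, my = sum(taus) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(taus, ys))
    den = sum((t - mt) ** 2 for t in taus)
    return num / den          # slope of log(C - B) vs tau = alpha

# Synthetic correlation data with a known prompt decay constant:
taus = [i * 1e-6 for i in range(1, 11)]       # seconds
alpha_true = -2.5e5                           # 1/s, illustrative
counts = [3.0 * math.exp(alpha_true * t) + 1.0 for t in taus]
alpha_est = fit_alpha(taus, counts, background=1.0)
```

With noiseless synthetic data the fit recovers α exactly; with real coincidence histograms the same fit gives the prompt-neutron decay constant, from which the prompt neutron lifetime follows at known reactivity.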
Barndt, William
2003-01-01
Over the past few years, the number of political science departments offering qualitative methods courses has grown substantially. The number of qualitative methods textbooks has kept pace, providing instructors with an overwhelming array of choices. But how should one decide which text to choose from this exhortatory smorgasbord? The scholarship desperately needs to be evaluated. Yet the task is not entirely straightforward: qualitative methods textbooks reflect the diversity inherent in qualitative metho...
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm
The velocity level in a room ventilated by jet ventilation is strongly influenced by the supply conditions. The momentum flow in the supply jets controls the air movement in the room and, therefore, it is very important that the inlet conditions and the numerical method can generate a satisfactory...... description of this momentum flow. The Box Method is a practical method for the description of an Air Terminal Device which will save grid points and ensure the right level of the momentum flow.
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action...... in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update......, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse...
Applied Bayesian hierarchical methods
National Research Council Canada - National Science Library
Congdon, P
2010-01-01
... 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...
[Methods of quantitative proteomics].
Kopylov, A T; Zgoda, V G
2007-01-01
In modern science, proteomic analysis is inseparable from other fields of systems biology. With its huge resources, quantitative proteomics handles colossal amounts of information on the molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structure and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for the quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, all of them use various isotope labels (tags). In this review we consider the most popular and effective methods, employing both chemical modifications of proteins and metabolic and enzymatic methods of isotope labeling.
Melendez, Jordan
2014-01-01
This note presents two model-independent methods for use in the alignment of the ALFA forward detectors. Using a Monte Carlo simulated LHC run at β = 90 m and √s = 7 TeV, the Kinematic Peak alignment method is utilized to reconstruct the Mandelstam momentum-transfer variable t for single-diffractive protons. The Hot Spot method uses fluctuations in the hitmap density to pinpoint particular regions in the detector that could signal a misalignment. Another method uses an error-function fit to find the detector edge. With this information, the vertical alignment can be determined.
Method of chronokinemetrical invariants
International Nuclear Information System (INIS)
Vladimirov, Yu.S.; Shelkovenko, A.Eh.
1976-01-01
A particular case of a general dyadic method, the method of chronokinemetric invariants, is formulated. The time-like dyad vector is calibrated in a chronometric way, and the space-like vector in a kinemetric way. Expressions are written for the main physical-geometrical quantities of the dyadic method and for differential operators. The method developed may be useful for predetermining the reference system of a single observer, and also for studying problems connected with the emission and absorption of gravitational and electromagnetic waves [ru
Understanding advanced statistical methods
Westfall, Peter
2013-01-01
Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...
International Nuclear Information System (INIS)
Porter, J.F.
1996-01-01
Nondestructive testing (NDT) is the use of physical and chemical methods for evaluating material integrity without impairing its intended usefulness or continuing service. Nondestructive tests are used by manufacturers for the following reasons: 1) to ensure product reliability; 2) to prevent accidents and save human lives; 3) to aid in better product design; 4) to control manufacturing processes; and 5) to maintain a uniform quality level. Nondestructive testing is used extensively on power plants, oil and chemical refineries, offshore oil rigs and pipelines (NDT can even be conducted underwater), and on welds on tanks, boilers, pressure vessels and heat exchangers. NDT is now being used for testing concrete and composite materials. Because of the criticality of its application, NDT should be performed and the results evaluated by qualified personnel. There are five basic nondestructive examination methods: 1) liquid penetrant testing, used for detecting surface flaws in materials; it can be used for metallic and nonmetallic materials, and is portable and relatively inexpensive; 2) magnetic particle testing, used to detect surface and subsurface flaws in ferromagnetic materials; 3) radiographic testing, used to detect internal flaws and significant variations in material composition and thickness; 4) ultrasonic testing, used to detect internal and external flaws in materials; this method uses ultrasonics to measure the thickness of a material or to examine the internal structure for discontinuities; 5) eddy current testing, used to detect surface and subsurface flaws in conductive materials. No single nondestructive examination method can find all discontinuities in all of the materials capable of being tested. The most important consideration is for the specifier of the test to be familiar with the test method and its applicability to the type and geometry of the material and the flaws to be detected.
International Nuclear Information System (INIS)
Hayward, Robert M.; Rahnema, Farzad; Zhang, Dingkang
2013-01-01
Highlights: ► A new hybrid stochastic–deterministic transport theory method to couple with diffusion theory. ► The method is implemented in 2D hexagonal geometry. ► The new method produces excellent results when compared with Monte Carlo reference solutions. ► The method is fast, solving all test cases in less than 12 s. - Abstract: A new hybrid stochastic–deterministic transport theory method, which is designed to couple with diffusion theory, is presented. The new method is an extension of the incident flux response expansion method, and it combines the speed of diffusion theory with the accuracy of transport theory. With ease of use in mind, the new method is derived in such a way that it can be implemented with only minimal modifications to an existing diffusion theory method. A new angular expansion, which is necessary for the diffusion theory coupling, is developed in 2D and 3D. The method is implemented in 2D hexagonal geometry, and an HTTR benchmark problem is used to test its accuracy in a standalone configuration. It is found that the new method produces excellent results (with average relative error in partial current less than 0.033%) when compared with Monte Carlo reference solutions. Furthermore, the method is fast, solving all test cases in less than 12 s
Methods for data classification
Garrity, George [Okemos, MI]; Lilburn, Timothy G [Front Royal, VA]
2011-10-11
The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.
2014-01-01
The present invention relates to a method for exchanging data between at least two servers with use of a gateway. Preferably the method is applied to healthcare systems. Each server holds a unique federated identifier, which identifier identifies a single patient (P). Thus, it is possible for the
Blystone, Robert V.; Blodgett, Kevin
2006-01-01
The scientific method is the principal methodology by which biological knowledge is gained and disseminated. As fundamental as the scientific method may be, its historical development is poorly understood, its definition is variable, and its deployment is uneven. Scientific progress may occur without the strictures imposed by the formal…
Methods of numerical relativity
International Nuclear Information System (INIS)
Piran, T.
1983-01-01
Numerical Relativity is an alternative to analytical methods for obtaining solutions for Einstein equations. Numerical methods are particularly useful for studying generation of gravitational radiation by potential strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques and some of the difficulties involved in numerical relativity. (Auth.)
International Nuclear Information System (INIS)
Kotikov, A.V.
1993-01-01
A new method of massive Feynman diagrams calculation is presented. It provides a fairly simple procedure to obtain the result without the D-space integral calculation (for the dimensional regularization). Some diagrams are calculated as an illustration of this method capacities. (author). 7 refs
BOUCHER, JOHN G.
The author states that before present foreign language teaching methods can be discussed intelligently, the research in psychology and linguistics which has influenced the development of these methods must be considered. Many foreign language teachers were beginning to feel comfortable with the audiolingual approach when Noam Chomsky, in his 1966…
Check, Joseph; Schutt, Russell K.
2011-01-01
"Research Methods in Education" introduces research methods as an integrated set of techniques for investigating questions about the educational world. This lively, innovative text helps students connect technique and substance, appreciate the value of both qualitative and quantitative methodologies, and make ethical research decisions.…
Thomas P. Holmes; Wiktor L. Adamowicz
2003-01-01
Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...
Fact Sheet: Proven Weight Loss Methods. What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish .
Radiation borehole logging method
International Nuclear Information System (INIS)
Wylie, A.; Mathew, P.J.
1977-01-01
A method of obtaining an indication of the diameter of a borehole is described. The method comprises subjecting the walls of the borehole to monoenergetic gamma radiation and making measurements of the intensity of gamma radiation backscattered from the walls. The energy of the radiation is sufficiently high for the shape to be substantially independent of the density and composition of the borehole walls
International Nuclear Information System (INIS)
Moser, H.; Rauert, W.
1980-01-01
Of the investigation methods used in hydrology, tracer methods hold a special place as they are the only ones which give direct insight into the movement and distribution processes taking place in surface and ground waters. Besides the labelling of water with salts and dyes, as in the past, in recent years the use of isotopes in hydrology, in water research and use, in ground-water protection and in hydraulic engineering has increased. This by no means replaces proven methods of hydrological investigation but tends rather to complement and expand them through inter-disciplinary cooperation. The book offers a general introduction to the application of various isotope methods to specific hydrogeological and hydrological problems. The idea is to place the hydrogeologist and the hydrologist in the position to recognize which isotope method will help him solve his particular problem or indeed, make a solution possible at all. He should also be able to recognize what the prerequisites are and what work and expenditure the use of such methods involves. May the book contribute to promoting cooperation between hydrogeologists, hydrologists, hydraulic engineers and isotope specialists, and thus supplement proven methods of investigation in hydrological research and water utilization and protection wherever the use of isotope methods proves to be of advantage. (orig./HP) [de
Essential numerical computer methods
Johnson, Michael L
2010-01-01
The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders of magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence data bases. While these are important applications they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...
Saucez, Ph
2001-01-01
The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite elem...
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
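The core idea above, stopping a simulation once the desired accuracy level is reached rather than running a fixed number of realizations, can be illustrated with a plain Monte Carlo loop. This is a simplified sketch of accuracy-driven stopping only, not the BMC algorithm itself, which additionally folds in prior information; the failure model below is a made-up toy.

```python
import random
import statistics

def mc_until_accurate(simulate, tol, batch=100, max_n=100_000):
    """Run Monte Carlo in batches, stopping once the standard error
    of the estimated mean drops below `tol` (illustrative only)."""
    samples = []
    while len(samples) < max_n:
        samples.extend(simulate() for _ in range(batch))
        se = statistics.stdev(samples) / len(samples) ** 0.5
        if se < tol:
            break
    return statistics.fmean(samples), len(samples)

# Toy "failure" indicator: a uniform load exceeds 0.9 (true probability 0.1)
random.seed(1)
estimate, n_used = mc_until_accurate(
    lambda: 1.0 if random.random() > 0.9 else 0.0, tol=0.01)
```

A tighter tolerance forces more realizations; a looser one stops the loop early, which is the cost-accuracy trade-off the abstract describes.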
Nölting, Bengt
2006-01-01
Incorporating recent dramatic advances, this textbook presents a fresh and timely introduction to modern biophysical methods. An array of new, faster and higher-power biophysical methods now enables scientists to examine the mysteries of life at a molecular level. This innovative text surveys and explains the ten key biophysical methods, including those related to biophysical nanotechnology, scanning probe microscopy, X-ray crystallography, ion mobility spectrometry, mass spectrometry, proteomics, and protein folding and structure. Incorporating much information previously unavailable in tutorial form, Nölting employs worked examples and 267 illustrations to fully detail the techniques and their underlying mechanisms. Methods in Modern Biophysics is written for advanced undergraduate and graduate students, postdocs, researchers, lecturers and professors in biophysics, biochemistry and related fields. Special features in the 2nd edition: • Illustrates the high-resolution methods for ultrashort-living protei...
International Nuclear Information System (INIS)
Deville, J.P.
1998-01-01
Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is intended for non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples subjected to treatments in aggressive media. (O.M.)
Cooperative method development
DEFF Research Database (Denmark)
Dittrich, Yvonne; Rönkkö, Kari; Eriksson, Jeanette
2008-01-01
The development of methods, tools and process improvements is best based on an understanding of the development practice to be supported. Qualitative research has been proposed as a method for understanding the social and cooperative aspects of software development. However, qualitative...... research is not easily combined with the improvement orientation of an engineering discipline. During the last 6 years, we have applied an approach we call 'cooperative method development', which combines qualitative social science fieldwork with problem-oriented method, technique and process improvement....... The action research based approach, focusing on shop-floor software development practices, allows an understanding of how contextual contingencies influence the deployment and applicability of methods, processes and techniques. This article summarizes the experiences and discusses the further development...
DEFF Research Database (Denmark)
Jensen, Martin Trandberg
2014-01-01
This chapter showcases how mobile methods are more than calibrated techniques awaiting application by tourism researchers, but productive in the enactment of the mobile (Law and Urry, 2004). Drawing upon recent findings deriving from a PhD course on mobility and mobile methods it reveals...... the conceptual ambiguousness of the term ‘mobile methods’. In order to explore this ambiguousness the chapter provides a number of examples deriving from tourism research, to explore how mobile methods are always entangled in ideologies, predispositions, conventions and practice-realities. Accordingly......, the engagements with methods are acknowledged to be always political and contextual, reminding us to avoid essentialist discussions regarding research methods. Finally, the chapter draws on recent fieldwork to extend developments in mobilities-oriented tourism research, by employing auto-ethnography to call...
Numerical method for two-phase flow with an unstable interface
International Nuclear Information System (INIS)
Glimm, J.; Marchesin, D.; McBryan, O.
1981-01-01
The random choice method is used to compute the oil-water interface for two-dimensional porous media equations. The equations used are a pair of coupled equations: the (elliptic) pressure equation and the (hyperbolic) saturation equation. The equations do not include the dispersive capillary pressure term, and the computation does not introduce numerical diffusion. The method resolves saturation discontinuities sharply. The main conclusion of this paper is that the random choice method is a correct numerical procedure for this problem even in the highly fingered case. Two methods of inducing fingers are considered: deterministically, through the choice of Cauchy data, and through heterogeneity, by maximizing the randomness of the random choice method.
Determination method of radiostrontium
International Nuclear Information System (INIS)
1984-01-01
This manual provides determination methods for strontium-90 and strontium-89 in the environment released from nuclear facilities; it is a revised edition of the previous manual published in 1974. For the preparation of radiation counting samples, the ion exchange method, oxalate separation method and solvent extraction method were adopted in addition to the fuming nitric acid separation method adopted in the previous edition. Strontium-90 is determined by the separation and radioactivity determination of yttrium-90 in radioequilibrium with strontium-90. Strontium-89 is determined by subtracting the radioactivity of strontium-90 plus yttrium-90 from the gross radioactivity of isolated strontium carbonate. Radioactivity determination should be carried out with a low-background 2π gas-flow counting system for the sample mounted on a filter in the chemical form of ferric hydroxide, yttrium oxalate or strontium carbonate. This manual describes sample preparation procedures as well as radioactivity counting procedures for environmental samples of precipitation such as rain or snow, airborne dust, fresh water, sea water and soil, and also for ash samples made from biological or food samples such as grains, vegetables, tea leaves, pine needles, milk, marine organisms, and total diet, employing fuming nitric acid separation, ion exchange separation, oxalate precipitate separation or solvent extraction separation (the latter only for ash samples). Procedures for the preparation of reagent chemicals are also attached to this manual. (Takagi, S.)
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
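The prior-times-likelihood update described above can be made concrete with the classic conjugate Beta-Binomial example. This is an illustrative sketch only; the abstract names no specific model, and the prior parameters below are arbitrary choices.

```python
from math import isclose

def beta_binomial_posterior(a_prior, b_prior, successes, trials):
    """Conjugate Bayesian update: a Beta(a, b) prior combined with
    Binomial data yields a Beta posterior. Returns the posterior
    parameters and the posterior mean."""
    a_post = a_prior + successes
    b_post = b_prior + trials - successes
    return a_post, b_post, a_post / (a_post + b_post)

# Prior belief Beta(2, 2), weakly centred on 0.5; then observe 7 successes in 10 trials
a, b, mean = beta_binomial_posterior(2, 2, 7, 10)
# posterior is Beta(9, 5); its mean 9/14 sits between the prior mean 0.5
# and the data frequency 0.7, weighted by their relative information
```

Conjugate families like this are the cases that can be done by hand; the Monte Carlo integration methods mentioned in the abstract take over when no closed form exists.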
DEFF Research Database (Denmark)
Lynnerup, Niels
2009-01-01
Mummies are human remains with preservation of non-bony tissue. Many mummy studies focus on the development and application of non-destructive methods for examining mummies, including radiography, CT-scanning with advanced 3-dimensional visualisations, and endoscopic techniques, as well as minimally-destructive chemical, physical and biological methods for, e.g., stable isotopes, trace metals and DNA....
Directory of Open Access Journals (Sweden)
Athanasios Drigas
2016-03-01
Full Text Available This article bridges the gap between the Montessori Method and Information and Communication Technologies (ICTs) in contemporary education. It reviews recent research which applies the Montessori philosophy, principles and didactic tools to today's computers and supporting technologies in children's learning. It also reviews the importance of stimulating the human senses in the learning process and the development of Montessori materials engaging the body and the hand in particular, in light of recent research on ICTs. Within the information society age, the Montessori Method acquires new perspectives, new functionality and new efficacy.
International Nuclear Information System (INIS)
Dubansky, A.
1980-01-01
The rubidium-strontium geological dating method is based on the determination of the Rb and Sr isotope ratio in rocks, mainly using mass spectrometry. The method is only practical for silicate minerals and rocks, potassium feldspars and slates. Also described is the rubidium-strontium isochrone method. This, however, requires a significant amount of experimental data and an analysis of large quantities of samples, often of the order of tons. The results are tabulated of rubidium-strontium dating of geological formations in the Czech Socialist Republic. (M.S.)
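The isochron determination mentioned above rests on a standard relation: the slope m of the Rb-Sr isochron line gives the age as t = ln(1 + m)/λ. A minimal sketch, assuming the commonly cited ⁸⁷Rb decay constant of about 1.42e-11 per year (values in the literature vary slightly); the slope used below is a made-up example:

```python
import math

RB87_DECAY = 1.42e-11  # per year; commonly cited 87Rb decay constant (assumption)

def isochron_age(slope):
    """Age in years from the slope m of an Rb-Sr isochron:
    t = ln(1 + m) / lambda."""
    return math.log(1.0 + slope) / RB87_DECAY

# A slope of 0.0143 corresponds to roughly one billion years
age = isochron_age(0.0143)
```

In practice the slope itself comes from a least-squares fit of 87Sr/86Sr against 87Rb/86Sr measured on several cogenetic samples, which is why the method needs the large sample sets the abstract mentions.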
Structural Reliability Methods
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Madsen, H. O.
The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...... of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...... of the uncertainties and their interplay is then developed, step-by-step. The concepts presented are illustrated by numerous examples throughout the text....
Tadd, Andrew R; Schwank, Johannes
2013-05-14
A catalytic reforming method is disclosed herein. The method includes sequentially supplying a plurality of feedstocks of variable compositions to a reformer. The method further includes adding a respective predetermined co-reactant to each of the plurality of feedstocks to obtain a substantially constant output from the reformer for the plurality of feedstocks. The respective predetermined co-reactant is based on a C/H/O atomic composition for a respective one of the plurality of feedstocks and a predetermined C/H/O atomic composition for the substantially constant output.
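The C/H/O atomic composition that drives the co-reactant choice above can be computed with simple bookkeeping over the feedstock's species. This is an illustrative sketch, not the patent's procedure; the species, formulas and mole numbers below are hypothetical examples.

```python
def cho_fractions(moles_by_species, atoms_by_species):
    """Atomic C/H/O fractions of a feedstock blend.
    `atoms_by_species` maps species -> (C, H, O) atoms per molecule."""
    totals = [0.0, 0.0, 0.0]
    for species, n in moles_by_species.items():
        c, h, o = atoms_by_species[species]
        totals[0] += n * c
        totals[1] += n * h
        totals[2] += n * o
    grand = sum(totals)
    return tuple(t / grand for t in totals)

# Hypothetical equimolar methanol/steam feed
atoms = {"methanol": (1, 4, 1), "water": (0, 2, 1)}
frac_c, frac_h, frac_o = cho_fractions({"methanol": 1.0, "water": 1.0}, atoms)
# 1 C, 6 H and 2 O atoms in total -> fractions 1/9, 6/9, 2/9
```

Comparing such fractions against a target output composition indicates how much of which co-reactant (e.g., steam or air) to add for each feedstock.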
Nuclear physics mathematical methods
International Nuclear Information System (INIS)
Balian, R.; Gervois, A.; Giannoni, M.J.; Levesque, D.; Maille, M.
1984-01-01
The mathematical methods of nuclear physics, applied to collective motion theory, to the reduction of degrees of freedom, and to order and disorder phenomena, are investigated. Within the scope of the study, the following aspects are discussed: the entropy of an ensemble of collective variables; the interpretation of dissipation, applying information theory; chaos and universality; the Monte Carlo method applied to classical statistical mechanics and quantum mechanics; the finite element method; and classical ergodicity [fr
DEFF Research Database (Denmark)
Olivarius, Signe
of the transcriptome, 5’ end capture of RNA is combined with next-generation sequencing for high-throughput quantitative assessment of transcription start sites by two different methods. The methods presented here allow for functional investigation of coding as well as noncoding RNA and contribute to future...... RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. First, RNA...
Electromigration method in radiochemistry
International Nuclear Information System (INIS)
Makarova, T.P.; Stepanov, A.V.
1977-01-01
Investigations from the period 1969-1975 are reviewed, accomplished by such methods as zonal electrophoresis in countercurrent, focusing electrophoresis, isotachophoresis, electrophoresis with elution, and continuous two-dimensional electrophoresis. Since the methods considered are based on the use of porous fillers for stabilizing the medium, some attention is given to the effect of the solid-solution interface on the shape and rate of motion of the zones of the rare-earth elements investigated, Sr and others. The trend of developing electrophoresis as a method for obtaining high-purity elements is emphasized.
Numerical methods using Matlab
Lindfield, George
2012-01-01
Numerical Methods using MATLAB, 3e, is an extensive reference offering hundreds of useful and important numerical algorithms that can be implemented into MATLAB for a graphical interpretation to help researchers analyze a particular outcome. Many worked examples are given together with exercises and solutions to illustrate how numerical methods can be used to study problems that have applications in the biosciences, chaos, optimization, engineering and science across the board.
Model Correction Factor Method
DEFF Research Database (Denmark)
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...
Imaging methods in otorhinolaryngology
International Nuclear Information System (INIS)
Frey, K.W.; Mees, K.; Vogl, T.
1989-01-01
This book is the work of an otorhinolaryngologist and two radiologists, who combined their experience and efforts in order to solve a great variety and number of problems encountered in practical work, taking into account the latest technical potentials and the practical feasibility, which is determined by the equipment available. Every chapter presents the full range of diagnostic methods applicable, starting with the suitable plain radiography methods and proceeding to the various tomographic scanning methods, including conventional tomography. Every technique is assessed in terms of diagnostic value and drawbacks. (orig./MG) With 778 figs [de
Generalized subspace correction methods
Energy Technology Data Exchange (ETDEWEB)
Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)
1996-12-31
A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide area of applications. Due to the rapid development and increasing demand for the large computing powers of parallel computers, it has become important to design iterative methods specialized for these new architectures.
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.
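A minimal sketch of the deterministic annealing idea referenced in this abstract, not the AIRS implementation: soft cluster assignments at a temperature T are gradually hardened as T is lowered, so that T → 0 recovers k-means. The data, cooling schedule, and jitter magnitude below are illustrative choices.

```python
import numpy as np

def deterministic_annealing(X, k=2, T0=5.0, Tmin=0.01, cooling=0.9, iters=20, seed=0):
    """Deterministic-annealing clustering sketch: Gibbs association
    probabilities at temperature T, centroids as probability-weighted
    means, then geometric cooling until assignments harden."""
    rng = np.random.default_rng(seed)
    # classic DA start: all centroids at the data mean; they split as T drops
    centers = np.repeat(X.mean(axis=0, keepdims=True), k, axis=0)
    T = T0
    while T > Tmin:
        # tiny jitter so coincident centroids can separate below the
        # critical temperature
        centers = centers + rng.normal(scale=1e-3, size=centers.shape)
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            p = np.exp(-d2 / T)
            p /= p.sum(axis=1, keepdims=True)      # soft assignments
            centers = (p.T @ X) / p.sum(axis=0)[:, None]
        T *= cooling
    return centers, p.argmax(axis=1)

# two well-separated 1-D blobs
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, labels = deterministic_annealing(X, k=2)
```

Unlike k-means with a random start, the annealing schedule makes the result insensitive to initialization, which is the property that motivates its use for summarizing empirical distributions.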
Concrete compositions and methods
Chen, Irvin; Lee, Patricia Tung; Patterson, Joshua
2015-06-23
Provided herein are compositions, methods, and systems for cementitious compositions containing calcium carbonate compositions and aggregate. The compositions find use in a variety of applications, including use in a variety of building materials and building applications.
Oza, Nikunj C.
2004-01-01
Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
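The committee argument above can be made quantitative under the idealized assumption that members err independently: the majority-vote error is then a binomial tail probability. The member accuracy and committee size below are illustrative.

```python
from math import comb

def majority_error(n, p):
    """Probability that a majority of n independent classifiers, each
    correct with probability p, votes for the wrong answer (n odd):
    the binomial tail P(#wrong > n/2)."""
    return sum(comb(n, k) * (1 - p) ** k * p ** (n - k)
               for k in range((n // 2) + 1, n + 1))

# five complementary 70%-accurate members beat any single member
single = 0.30
committee = majority_error(5, 0.70)   # about 0.163
```

Real ensemble members are never fully independent, so this is an optimistic bound, but it shows why complementarity (low error correlation) is the design goal the abstract emphasizes.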
International Nuclear Information System (INIS)
Edgington, T.S.; Plow, E.F.
1979-01-01
The discovery of an isomeric species of carcinoembryonic antigen and methods of isolation, identification and utilization as a radiolabelled species of the same as an aid in the diagnosis of adenocarcinomas of the gastrointestinal tract are disclosed. 13 claims
Energy Technology Data Exchange (ETDEWEB)
Gatty, B
1986-04-01
Scientific methods of dating, born less than thirty years ago, have recently improved tremendously. First the dating principles will be given; then it will be explained how, through natural radioactivity, we can gain access to the age of an event or an object; the case of radiocarbon will be especially emphasized. The principles of relative methods such as thermoluminescence or paleomagnetism will also be briefly given. What is the use of dating? The fields of its application are numerous; through these methods, relatively precise ages can be given to the major events which have been keys in the history of the universe, life and man; thus, dating is a useful scientific tool in astrophysics, geology, biology, anthropology and archeology. Even if certain ages are still subject to controversy, we can say that these methods have confirmed evolution's continuity, be it on a cosmic, biological or human scale, where ages are measured in billions, millions or thousands of years respectively.
Energy consumption assessment methods
Energy Technology Data Exchange (ETDEWEB)
Sutherland, K S
1975-01-01
The why, what, and how-to aspects of energy audits for industrial plants, and the application of energy accounting methods to a chemical plant in order to assess energy conservation possibilities are discussed. (LCL)
Indian Academy of Sciences (India)
Chemistry for their pioneering contributions to the development of computational methods in quantum chemistry and density functional theory .... program of Pople for ab-initio electronic structure calculation of molecules. This ab-initio MO ...
Methods for cellobiosan utilization
Energy Technology Data Exchange (ETDEWEB)
Linger, Jeffrey; Beckham, Gregg T.
2017-07-11
Disclosed herein are enzymes useful for the degradation of cellobiosan in materials such a pyrolysis oils. Methods of degrading cellobiosan using enzymes or organisms expressing the same are also disclosed.
Methods of neutron spectrometry
International Nuclear Information System (INIS)
Doerschel, B.
1981-01-01
The different methods of neutron spectrometry are based either on the direct measurement of neutron velocity or on the use of suitable energy-dependent interaction processes. In the latter case the measured response of a detector is connected with the sought neutron spectrum by an integral equation, whose solution requires suitable unfolding procedures. The most important methods of neutron spectrometry are the time-of-flight method, crystal spectrometry, neutron spectrometry by use of elastic collisions with hydrogen nuclei, and neutron spectrometry with the aid of nuclear reactions, especially neutron-induced activation. The advantages and disadvantages of these methods are compared with respect to resolution, measurable energy range, sensitivity, and the experimental and computational effort required. (author)
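The time-of-flight method mentioned above rests on the non-relativistic relation E = ½mv² with v = L/t. A small numeric sketch, with an illustrative 10 m flight path rather than any instrument from the record:

```python
# kinetic energy of a neutron inferred from its flight time over a known path
NEUTRON_MASS = 1.674927e-27      # kg
EV = 1.602177e-19                # joules per electronvolt

def tof_energy_ev(path_m, time_s):
    """Non-relativistic neutron energy (eV) from time of flight:
    E = 0.5 * m * (L / t)**2, converted from joules to eV."""
    v = path_m / time_s
    return 0.5 * NEUTRON_MASS * v ** 2 / EV

# a ~25 meV thermal neutron moves at ~2187 m/s, so a 10 m path
# takes ~4.6 ms
e = tof_energy_ev(10.0, 10.0 / 2187.0)
```

The quadratic dependence on velocity is why timing resolution dominates the achievable energy resolution at short flight paths.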
Nölting, Bengt
2010-01-01
Incorporating recent dramatic advances, this textbook presents a fresh and timely introduction to modern biophysical methods. An array of new, faster and higher-power biophysical methods now enables scientists to examine the mysteries of life at a molecular level. This innovative text surveys and explains the ten key biophysical methods, including those related to biophysical nanotechnology, scanning probe microscopy, X-ray crystallography, ion mobility spectrometry, mass spectrometry, proteomics, and protein folding and structure. Incorporating much information previously unavailable in tutorial form, Nölting employs worked examples and about 270 illustrations to fully detail the techniques and their underlying mechanisms. Methods in Modern Biophysics is written for advanced undergraduate and graduate students, postdocs, researchers, lecturers, and professors in biophysics, biochemistry and related fields. Special features in the 3rd edition: Introduces rapid partial protein ladder sequencing - an important...
This Guide focuses primarily on Lean production, which is an organizational improvement philosophy and set of methods that originated in manufacturing but has been expanded to government and service sectors.
International Nuclear Information System (INIS)
Kaneko, K.
1987-01-01
A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We can find a one-to-one correspondence between the number projected and the shell model states
Etching method employing radiation
International Nuclear Information System (INIS)
Chapman, B.N.; Winters, H.F.
1982-01-01
This invention provides a method for etching a silicon oxide, carbide, nitride, or oxynitride surface using an electron or ion beam in the presence of a xenon or krypton fluoride. No additional steps are required after exposure to radiation
GEM simulation methods development
International Nuclear Information System (INIS)
Tikhonov, V.; Veenhof, R.
2002-01-01
A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics such as signal response in a real detector readout structure, and spatial and time resolution of detectors have been developed and used for detector optimization. A detailed development of signal induction on readout electrodes and electronics characteristics are included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment
Improved radioanalytical methods
International Nuclear Information System (INIS)
Erickson, M.D.; Aldstadt, J.H.; Alvarado, J.S.; Crain, J.S.; Orlandini, K.A.; Smith, L.L.
1995-01-01
Methods for the chemical characterization of the environment are being developed under a multitask project for the Analytical Services Division (EM-263) within the US Department of Energy (DOE) Office of Environmental Management. This project focuses on improvement of radioanalytical methods with an emphasis on faster and cheaper routine methods. We have developed improved methods, for separation of environmental levels of technetium-99 and strontium-89/90, radium, and actinides from soil and water; and for separation of actinides from soil and water matrix interferences. Among the novel separation techniques being used are element- and class-specific resins and membranes. (The 3M Corporation is commercializing Empore trademark membranes under a cooperative research and development agreement [CRADA] initiated under this project). We have also developed methods for simultaneous detection of multiple isotopes using inductively coupled plasma-mass spectrometry (ICP-MS). The ICP-MS method requires less rigorous chemical separations than traditional radiochemical analyses because of its mass-selective mode of detection. Actinides and their progeny have been isolated and concentrated from a variety of natural water matrices by using automated batch separation incorporating selective resins prior to ICP-MS analyses. In addition, improvements in detection limits, sample volume, and time of analysis were obtained by using other sample introduction techniques, such as ultrasonic nebulization and electrothermal vaporization. Integration and automation of the separation methods with the ICP-MS methodology by using flow injection analysis is underway, with an objective of automating methods to achieve more reproducible results, reduce labor costs, cut analysis time, and minimize secondary waste generation through miniaturization of the process
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Sysala, Stanislav
2015-01-01
Roč. 70, č. 11 (2015), s. 2621-2637 ISSN 0898-1221 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords : system of nonlinear equations * Newton method * load increment method * elastoplasticity Subject RIV: IN - Informatics, Computer Science Impact factor: 1.398, year: 2015 http://www.sciencedirect.com/science/article/pii/S0898122115003818
Nuclear methods monitor nutrition
International Nuclear Information System (INIS)
Allen, B.J.
1988-01-01
Neutron activation of nitrogen and hydrogen in the body, the isotope dilution technique and the measurement of naturally radioactive potassium in the body are among the new nuclear methods, now under collaborative development by the Australian Nuclear Scientific and Technology Organization and medical specialists from several Sydney hospitals. These methods allow medical specialists to monitor the patient's response to various diets and dietary treatments in cases of cystic fibrosis, anorexia nervosa, long-term surgical trauma, renal diseases and AIDS. ills
International Nuclear Information System (INIS)
Hansen, K.
1990-01-01
During the last decade fission track (FT) analysis has evolved as an important tool in exploration for hydrocarbon resources. Most important is this method's ability to yield information about temperatures at different times (history), and thus relate oil generation and time independently of other maturity parameters. The purpose of this paper is to introduce the basics of the method and give an example from the author's studies. (AB) (14 refs.)
International Nuclear Information System (INIS)
Jeong, Yang Su; Oh, Byeong Seong
2010-05-01
This book introduces measurement and error, statistics of experimental data, populations, sample variables, distribution functions, propagation of error, means and the measurement of error, fitting a straight-line equation, common sense about error, experimental method, and recording and reporting. It also explains the importance of the error of estimation, systematic error, random error, treatment of a single variable, significant figures, deviation, mean value, median, mode, sample mean, sample standard deviation, the binomial distribution, the Gaussian distribution, and the method of least squares.
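The method of least squares listed above has a simple closed form for a straight-line fit, obtained from the normal equations. A minimal sketch with illustrative data:

```python
def least_squares(xs, ys):
    """Closed-form slope and intercept of y = a + b*x minimizing the
    sum of squared residuals (the normal equations for a line)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                            # intercept
    return a, b

# data lying exactly on y = 1 + 2x is recovered exactly
a, b = least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```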
Methods for measuring shrinkage
Chapman, Paul; Templar, Simon
2006-01-01
This paper presents findings from research amongst European grocery retailers into their methods for measuring shrinkage. The findings indicate that: there is no dominant method for valuing or stating shrinkage; shrinkage in the supply chain is frequently overlooked; data is essential in pinpointing where and when loss occurs and that many retailers collect data at the stock-keeping unit (SKU) level and do so every 6 months. These findings reveal that it is difficult to benc...
Method of saccharifying cellulose
Johnson, E.A.; Demain, A.L.; Madia, A.
1983-05-13
A method is disclosed of saccharifying cellulose by incubation with the cellulase of Clostridium thermocellum in a broth containing an efficacious amount of thiol reducing agent. Other incubation parameters which may be advantageously controlled to stimulate saccharification include the concentration of alkaline earth salts, pH, temperature, and duration. By the method of the invention, even native crystalline cellulose such as that found in cotton may be completely saccharified.
Henn, Fritz [East Patchogue, NY
2012-01-24
Methods for treatment of depression-related mood disorders in mammals, particularly humans are disclosed. The methods of the invention include administration of compounds capable of enhancing glutamate transporter activity in the brain of mammals suffering from depression. ATP-sensitive K.sup.+ channel openers and .beta.-lactam antibiotics are used to enhance glutamate transport and to treat depression-related mood disorders and depressive symptoms.
Methods of experimental physics
Williams, Dudley
1962-01-01
Methods of Experimental Physics, Volume 3: Molecular Physics focuses on molecular theory, spectroscopy, resonance, molecular beams, and electric and thermodynamic properties. The manuscript first considers the origins of molecular theory, molecular physics, and molecular spectroscopy, as well as microwave spectroscopy, electronic spectra, and Raman effect. The text then ponders on diffraction methods of molecular structure determination and resonance studies. Topics include techniques of electron, neutron, and x-ray diffraction and nuclear magnetic, nuclear quadropole, and electron spin reson
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h, with a normalised RMSE (root mean square error) of less than 0.02, and reasonably accurate up to 3 h, with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
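A toy forecaster in the spirit described, not the paper's actual model: delay-coordinate embedding with nearest-neighbour analogue prediction, demonstrated on the chaotic logistic map rather than wind data. The embedding dimension m = 3 and the series length are illustrative choices.

```python
def nn_forecast(series, m=3):
    """Predict the next value of a scalar series deterministically: find
    the past length-m delay vector closest to the most recent one and
    return its successor (the 'method of analogues')."""
    s = list(series)
    target = s[-m:]
    best, best_d = None, float("inf")
    for i in range(len(s) - m):                 # past windows s[i:i+m]
        d = sum((s[i + j] - target[j]) ** 2 for j in range(m))
        if d < best_d:
            best, best_d = s[i + m], d          # successor of closest analogue
    return best

# chaotic logistic map x_{n+1} = 4 x (1 - x): fully deterministic, so a
# close past analogue yields an accurate one-step forecast
xs = [0.3]
for _ in range(4999):
    xs.append(4 * xs[-1] * (1 - xs[-1]))
pred = nn_forecast(xs[:-1], m=3)
```

Because the dynamics are chaotic, the same scheme degrades exponentially with forecast horizon, matching the abstract's pattern of good 1 h but weaker 3 h accuracy.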
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
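The violation described, adding tolerance limits algebraically instead of propagating variances, can be illustrated with a minimal sketch: worst-case stacking grows linearly with the number of error sources, while root-sum-square propagation of independent errors grows only as the square root. The sigma values below are illustrative.

```python
from math import sqrt

def stacked_limit(sigmas, k=3):
    """Serially cumulative stack-up: each k-sigma tolerance added
    algebraically (the conservative practice described)."""
    return k * sum(sigmas)

def rss_limit(sigmas, k=3):
    """Error-propagation-consistent limit for independent sources:
    k times the root-sum-square of the standard deviations."""
    return k * sqrt(sum(s * s for s in sigmas))

sigmas = [1.0] * 9                      # nine equal, independent error sources
conservative = stacked_limit(sigmas)    # 27.0: grows like n
proper = rss_limit(sigmas)              # 9.0: grows like sqrt(n)
```

With nine sources the algebraic stack is three times the statistically correct limit, which is the kind of excess conservatism the reliability-based approach avoids.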
Henke, Luke
2010-01-01
The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g. system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills the attitude of accountability, safety, technical rigor and engagement in the problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of the Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel's approach, in an attempt to succinctly describe the actions of an effective systems engineer. Additionally it evolved from an effort to make a broadly-defined checklist for a PSE&I worker to perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states that "engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects." ICARE is a method that can be applied within the boundaries and requirements of NASA's systems engineering set of processes to provide an elevated sense of duty and responsibility to crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage the usage of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents; all things required to produce system-level results, qualities, properties, characteristics
VALUATION METHODS- LITERATURE REVIEW
Directory of Open Access Journals (Sweden)
Dorisz Talas
2015-07-01
Full Text Available This paper is a theoretical overview of the commonly used valuation methods with the help of which the value of a firm or its equity is calculated. Many experts (including Aswath Damodaran, Guochang Zhang and CA Hozefa Natalwala) classify the methods. The basic models are based on discounted cash flows. The main method uses the free cash flow for valuation, but there are some newer methods that reveal and correct the weaknesses of the traditional models. The valuation of management flexibility can be conducted mainly with real options. This paper briefly describes the essence of the Dividend Discount Model, the Free Cash Flow Model, the benefit of using real options and the Residual Income Model. A few words are also said about the Adjusted Present Value approach. Different models use different premises, and an overall truth is that if the required premises are realistic and correct, the value will be appropriately accurate. Another important condition is that experts and analysts should choose between the models on the basis of the purpose of valuation. Thus there are no good or bad methods, only methods that fit different goals and aims. The main task is to define exactly the purpose, then to find the most appropriate valuation technique. All the methods originate from the premise that the value of an asset is the present value of its future cash flows. According to the different points of view of different techniques, the resulting values can also differ from each other. Valuation models and techniques should be adapted to the rapidly changing world, but the basic statements remain the same. On the other hand, there is a need for more accurate models in order to help investors get as much information as they can. Today information is one of the most important resources, and financial models should keep up with this trend.
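Under a constant-growth assumption, the Dividend Discount Model mentioned above reduces to the Gordon growth formula V = D1 / (r - g). A minimal sketch with illustrative figures, not data from the paper:

```python
def gordon_value(next_dividend, r, g):
    """Gordon-growth Dividend Discount Model: present value of a
    dividend stream growing forever at rate g, discounted at the
    required return r (valid only for r > g)."""
    if r <= g:
        raise ValueError("required return must exceed growth rate")
    return next_dividend / (r - g)

# illustrative: $2 expected dividend, 8% required return, 3% growth
value = gordon_value(2.0, 0.08, 0.03)   # about 40.0
```

The r > g guard reflects the model's premise stated in the abstract: value is the present value of future cash flows, which diverges if growth outpaces the discount rate.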
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative, index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
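The mapping described, from semi-quantitative likelihood scores to point-value failure frequencies multiplied by consequence, can be sketched as follows. The score-to-frequency table and the cost figure are entirely made-up illustrative values, not the authors' calibration:

```python
# illustrative calibration: semi-quantitative likelihood score (1..5)
# mapped to a point failure frequency (failures per km-year)
FREQ = {1: 1e-6, 2: 1e-5, 3: 1e-4, 4: 1e-3, 5: 1e-2}

def quantitative_risk(likelihood_score, consequence_cost):
    """Deterministic QRA point estimate: expected loss per km-year =
    point failure frequency times quantified consequence."""
    return FREQ[likelihood_score] * consequence_cost

# a segment scored 4, with a $2M quantified consequence
risk = quantitative_risk(4, 2_000_000.0)
```

The point of the calibration step in the paper is to justify such a table empirically, from peer-system failure rates, rather than assume it.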
Pseudo-deterministic Algorithms
Goldwasser, Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
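The lod score is the base-10 logarithm of the likelihood ratio comparing linkage at recombination fraction theta against free recombination (theta = 0.5), with 3 as the classic evidence threshold. A sketch for the textbook case of phase-known, fully informative meioses, where the likelihood is theta^r (1 - theta)^(n - r):

```python
from math import log10

def lod(n, r, theta):
    """Lod score for r recombinants among n phase-known informative
    meioses: log10 of L(theta) / L(0.5)."""
    like_theta = theta ** r * (1 - theta) ** (n - r)
    like_null = 0.5 ** n
    return log10(like_theta / like_null)

# ten meioses with no recombinants: evidence just clears the
# classic lod-3 linkage threshold
z = lod(10, 0, 0.001)
```

The sequential character the abstract highlights follows directly from the log form: lod scores from independent pedigrees simply add.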
International Nuclear Information System (INIS)
Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.
1981-01-01
Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods does not offset their increased asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation
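A minimal instance of the regular splittings discussed above is Jacobi iteration, the splitting A = D - (D - A), which converges for strictly diagonally dominant matrices. The 2x2 system below is an illustrative example, not one from the report:

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k), the simplest
    regular splitting; converges when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # build the new iterate entirely from the old one (Jacobi, not Gauss-Seidel)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]     # strictly diagonally dominant
b = [6.0, 12.0]                  # exact solution x = (1, 2)
x = jacobi(A, b)
```

The asymptotic convergence rate analysis mentioned in the abstract amounts to bounding the spectral radius of the iteration matrix D^{-1}(D - A), which here is about 0.32.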
Independent random sampling methods
Martino, Luca; Míguez, Joaquín
2018-01-01
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
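One of the classic general-purpose techniques in this area is rejection sampling: draw from an easy proposal and accept with probability proportional to the target density. The target (a triangular density) and sample count below are illustrative.

```python
import random

def rejection_sample(pdf, bound):
    """Draw one sample from an (unnormalized) density on [0, 1] via
    uniform proposals: accept x when u * bound <= pdf(x), where bound
    dominates pdf everywhere."""
    while True:
        x = random.random()
        if random.random() * bound <= pdf(x):
            return x

random.seed(1)
# target: triangular density p(x) = 2x on [0, 1], dominated by bound 2
samples = [rejection_sample(lambda x: 2 * x, 2.0) for _ in range(20000)]
mean = sum(samples) / len(samples)   # true mean is 2/3
```

The method's weakness, also treated in such texts, is efficiency: the acceptance rate is 1/bound after normalization, which degrades badly in high dimensions.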
Liseikin, Vladimir D
2017-01-01
This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). Also the other chapters have been updated including new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.
Bayesian methods in reliability
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
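The basic update underlying such Bayesian reliability analyses can be sketched with the standard conjugate Beta-Bernoulli pair: a Beta prior on a per-demand failure probability combined with pass/fail test data. The prior and data below are illustrative.

```python
def beta_update(a, b, successes, failures):
    """Conjugate Bayesian update: a Beta(a, b) prior on a per-demand
    failure probability, updated with Bernoulli test outcomes, stays
    Beta with counts added to the pseudo-counts."""
    return a + failures, b + successes

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# uniform Beta(1, 1) prior, then 1 failure observed in 20 demands
a, b = beta_update(1.0, 1.0, successes=19, failures=1)
posterior_mean = beta_mean(a, b)   # 2/22, slightly above the raw 1/20
```

The pull of the prior toward its own mean is exactly the mechanism by which expert judgment, a major theme of these proceedings, enters the analysis.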
Le, Khanh Chau
2012-01-01
The above examples should make clear the necessity of understanding the mechanism of vibrations and waves in order to control them in an optimal way. However vibrations and waves are governed by differential equations which require, as a rule, rather complicated mathematical methods for their analysis. The aim of this textbook is to help students acquire both a good grasp of the first principles from which the governing equations can be derived, and the adequate mathematical methods for their solving. Its distinctive features, as seen from the title, lie in the systematic and intensive use of Hamilton's variational principle and its generalizations for deriving the governing equations of conservative and dissipative mechanical systems, and also in providing the direct variational-asymptotic analysis, whenever available, of the energy and dissipation for the solution of these equations. It will be demonstrated that many well-known methods in dynamics like those of Lindstedt-Poincare, Bogoliubov-Mitropolsky, Ko...
International Nuclear Information System (INIS)
Racolta, P.M.
1994-01-01
The tribological field of activity is mainly concerned with the relative movement of different machine components, friction and wear phenomena and their dependence upon lubrication. Tribological studies on friction and wear processes are important because they lead to significant parameter improvements of engineering tools and machinery components. A review of fundamental aspects of both friction and wear phenomena is presented. A number of radioindicator-based methods have been known for almost four decades, differing mainly with respect to the mode of introducing the radio-indicators into the machine part to be studied. All these methods briefly presented in this paper are based on the measurement of the activity of wear products and therefore require high activity levels of the part. For this reason, such determinations can be carried out only in special laboratories and under conditions which do not usually agree with the conditions of actual use. What is required is a sensitive, fast method allowing the determination of wear under any operating conditions, without the necessity of stopping and disassembling the machine. The above-mentioned requirements are the features that have made the Thin Layer Activation technique (TLA) the most widely used method applied in wear and corrosion studies in the last two decades. The TLA principle, taking into account that wear and corrosion processes are characterised by a loss of material, consists in an ion beam irradiation of a well defined volume of a machine part subjected to wear. The radioactivity level changes can usually be measured by gamma-ray spectroscopy methods. A review of both main TLA fields of application in major laboratories abroad and of those performed at the U-120 cyclotron of I.P.N.E.-Bucharest together with the existing trends to extend other nuclear analytical methods to tribological studies is presented as well. (author). 25 refs., 6 figs., 2 tabs
Solution Methods for Structures with Random Properties Subject to Random Excitation
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
This paper deals with the lower order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods, a second order… the SDE with random coefficients with deterministic initial conditions to an equivalent nonlinear SDE with deterministic coefficients and random initial conditions. In both methods, the statistical moment equations are used. The hierarchy of statistical moments in the Markovian approach is closed… by the cumulant neglect closure method applied at the fourth-order level…
Methods for pretreating biomass
Balan, Venkatesh; Dale, Bruce E; Chundawat, Shishir; Sousa, Leonardo
2017-05-09
A method for pretreating biomass is provided, which includes, in a reactor, allowing gaseous ammonia to condense on the biomass and react with water present in the biomass to produce pretreated biomass, wherein reactivity of polysaccharides in the biomass is increased during subsequent biological conversion as compared to the reactivity of polysaccharides in biomass which has not been pretreated. A method for pretreating biomass with a liquid ammonia and recovering the liquid ammonia is also provided. Related systems which include a biochemical or biofuel production facility are also disclosed.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"…
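The "Buffon's needle problem" mentioned in this abstract is the classic introductory Monte Carlo example: a needle of length l dropped on paper ruled with lines a distance d apart (l <= d) crosses a line with probability 2l/(pi*d), so counting crossings yields an estimate of pi. A minimal sketch; the throw count and seed are arbitrary choices for the illustration:

```python
import math
import random

def buffon_pi(n_throws, needle_len=1.0, line_gap=1.0, seed=0):
    """Estimate pi via Buffon's needle: P(cross) = 2*l/(pi*d) for l <= d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        # distance from the needle centre to the nearest line, and needle angle
        x = rng.uniform(0.0, line_gap / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    # invert P(cross) ~ hits/n_throws to recover pi
    return 2.0 * needle_len * n_throws / (line_gap * hits)

print(buffon_pi(100_000))  # roughly 3.14
```

The estimate converges at the usual Monte Carlo rate of O(1/sqrt(n)), which is why the example needs many throws for even two correct digits.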
Oermann, M H
1990-01-01
Research on teaching methods in nursing education was categorized into studies on media, CAI, and other nontraditional instructional strategies. While the research differed, some generalizations may be made from the findings. Multimedia, whether it is used for individual or group instruction, is at least as effective as traditional instruction (lecture and lecture-discussion) in promoting cognitive learning, retention of knowledge, and performance. Further study is needed to identify variables that may influence learning and retention. While learner attitudes toward mediated instruction tended to be positive, investigators failed to control for the effect of novelty. Control over intervening variables was lacking in the majority of studies as well. Research indicated that CAI is as effective as other teaching methods in terms of knowledge gain and retention. Attitudes toward CAI tended to be favorable, with measurement problems similar to those evidenced in studies of media. Chang (1986) also recommends that future research examine the impact of computer-video interactive instruction on students, faculty, and settings. Research is needed on experimental teaching methods, strategies for teaching problem solving and clinical judgment, and ways of improving the traditional lecture and discussion. Limited research in these areas makes generalizations impossible. There is a particular need for research on how to teach students the diagnostic reasoning process and encourage critical thinking, both in terms of appropriate teaching methods and the way in which those strategies should be used. It is interesting that few researchers studied lecture and lecture-discussion except as comparable teaching methods for research on other strategies. Additional research questions may be generated on lecture and discussion in relation to promoting concept learning, an understanding of nursing and other theories, transfer of knowledge, and development of cognitive skills. Few…
International Nuclear Information System (INIS)
Fortin, Ph.
2000-01-01
This document gives a first introduction to 14C dating as it is put into practice at the radiocarbon dating centre of Claude-Bernard university (Lyon-1 univ., Villeurbanne, France): general considerations and reminders of nuclear physics; the 14C dating method; the initial standard activity; isotopic fractionation; the measurement of sample activity; liquid-scintillation counters; the calibration and correction of 14C dates; the preparation of samples; benzene synthesis; current applications of the method. (J.S.)
Methods of Multivariate Analysis
Rencher, Alvin C
2012-01-01
Praise for the Second Edition: "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere." (IIE Transactions). Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life situations…
Tautomerism methods and theories
Antonov, Liudmil
2013-01-01
Covering the gap between basic textbooks and over-specialized scientific publications, this is the first reference available to describe this interdisciplinary topic for PhD students and scientists starting in the field. The result is an introductory description providing suitable practical examples of the basic methods used to study tautomeric processes, as well as the theories describing tautomerism and proton transfer phenomena. It also includes different spectroscopic methods for examining tautomerism, such as UV-Vis, time-resolved fluorescence spectroscopy, and NMR spectroscopy…
Speeding Fermat's factoring method
McKee, James
A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
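For context, the baseline that McKee's heuristics accelerate is Fermat's classical method: search for a representation n = a^2 - b^2, which factors as (a - b)(a + b). The sketch below is that classical O(n^{1/2})-style baseline, not McKee's O(n^{1/4+epsilon}) rational-square variant:

```python
import math

def fermat_factor(n):
    """Classic Fermat factorisation of an odd composite n.

    Walk a upward from ceil(sqrt(n)) until a^2 - n is a perfect square b^2;
    then n = (a - b)(a + b).
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(5959))  # → (59, 101)
```

The method is fast exactly when the two factors are close to sqrt(n) (few iterations of the loop), which is the situation McKee's approximation ideas generalise.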
High frequency asymptotic methods
International Nuclear Information System (INIS)
Bouche, D.; Dessarce, R.; Gay, J.; Vermersch, S.
1991-01-01
Asymptotic methods allow us to compute the interaction of high-frequency electromagnetic waves with structures. After an outline of their foundations, with emphasis on the geometrical theory of diffraction, it is shown how to use these methods to evaluate the radar cross section (RCS) of complex three-dimensional objects that are large compared to the wavelength. The different stages in simulating the phenomena which contribute to the RCS are reviewed: the physical theory of diffraction, multiple interactions computed by ray shooting, and the search for creeping rays. (author). 7 refs., 6 figs., 3 insets
Practical methods of optimization
Fletcher, R
2013-01-01
Fully describes optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also provides theoretical background which gives insights into how methods are derived. This edition offers revised…
Electrorheological fluids and methods
Green, Peter F.; McIntyre, Ernest C.
2015-06-02
Electrorheological fluids and methods include changes in liquid-like materials that can flow like milk and subsequently form solid-like structures under applied electric fields; e.g., about 1 kV/mm. Such fluids can be used in various ways as smart suspensions, including uses in automotive, defense, and civil engineering applications. Electrorheological fluids and methods include one or more polar molecule substituted polyhedral silsesquioxanes (e.g., sulfonated polyhedral silsesquioxanes) and one or more oils (e.g., silicone oil), where the fluid can be subjected to an electric field.
International Nuclear Information System (INIS)
Peel, J.L.; Waites, W.M.
1981-01-01
A method of sterilisation of food packaging is described which comprises treating microorganisms with an ultraviolet irradiated solution of hydrogen peroxide to render the microorganisms non-viable. The wavelength of ultraviolet radiation used is wholly or predominantly below 325 nm and the concentration of the hydrogen peroxide is no greater than 10% by weight. The method is applicable to a wide variety of microorganisms including moulds, yeasts, bacteria, viruses and protozoa and finds particular application in the destruction of spore-forming bacteria, especially those which are dairy contaminants. (U.K.)
Unorthodox theoretical methods
Energy Technology Data Exchange (ETDEWEB)
Nedd, Sean [Iowa State Univ., Ames, IA (United States)
2012-01-01
The use of the ReaxFF force field to correlate with NMR mobilities of amine catalytic substituents on a mesoporous silica nanosphere surface is considered. The interfacing of the ReaxFF force field within the Surface Integrated Molecular Orbital/Molecular Mechanics (SIMOMM) method, in order to replicate earlier published SIMOMM data and to compare with the ReaxFF data, is discussed. The development of a new correlation consistent Composite Approach (ccCA) is presented, which incorporates the completely renormalized coupled cluster method with singles, doubles and non-iterative triples corrections towards the determination of heats of formation and reaction pathways which contain biradical species.
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo methods in experimental physics, and point to landmarks in the literature for the curious reader.
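Of the algorithms this review covers, rejection sampling is the simplest to illustrate: draw from an easy proposal distribution and accept each draw with probability proportional to the target density. A minimal sketch with a Beta(2,2) target on [0,1] and a uniform proposal; the target and the bound m are invented for the example, not taken from the paper:

```python
import random

def rejection_sample(target_pdf, m, n_samples, seed=0):
    """Rejection sampling on [0,1] with a Uniform(0,1) proposal.

    Requires target_pdf(x) <= m for all x in [0,1]; each proposal x is
    accepted with probability target_pdf(x) / m.
    """
    rng = random.Random(seed)
    out = []
    while len(out) < n_samples:
        x = rng.random()   # proposal draw
        u = rng.random()   # acceptance test
        if u * m <= target_pdf(x):
            out.append(x)
    return out

# Beta(2,2) density 6x(1-x), bounded above by m = 1.5 on [0,1]
beta22 = lambda x: 6.0 * x * (1.0 - x)
samples = rejection_sample(beta22, 1.5, 10_000)
print(sum(samples) / len(samples))  # mean of Beta(2,2) is 0.5
```

The acceptance rate is 1/m, which is why a tight envelope bound matters; importance sampling and MCMC address the cases where no practical envelope exists.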
The SPH homogeneization method
International Nuclear Information System (INIS)
Kavenoky, Alain
1978-01-01
The homogenization of a uniform lattice is a rather well-understood topic, while difficult problems arise if the lattice becomes irregular. The SPH homogenization method is an attempt to generate homogenized cross sections for an irregular lattice. Section 1 summarizes the treatment of an isolated cylindrical cell with an entering surface current (in one-velocity theory); Section 2 is devoted to the extension of the SPH method to assembly problems. Finally, Section 3 presents the generalisation to general multigroup problems. Numerical results are obtained for a PXR rod bundle assembly in Section 4.
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions…
Smith, C.S.
1959-08-01
A method is described for rolling uranium metal at relatively low temperatures and under non-oxidizing conditions. The method involves the steps of heating the uranium to 200 deg C in an oil bath, withdrawing the uranium and permitting the oil to drain so that only a thin protective coating remains and rolling the oil coated uranium at a temperature of 200 deg C to give about a 15% reduction in thickness at each pass. The operation may be repeated to accomplish about a 90% reduction without edge cracking, checking or any appreciable increase in brittleness.
Supercritical fluid analytical methods
International Nuclear Information System (INIS)
Smith, R.D.; Kalinoski, H.T.; Wright, B.W.; Udseth, H.R.
1988-01-01
Supercritical fluids are providing the basis for new and improved methods across a range of analytical technologies. New methods are being developed to allow the detection and measurement of compounds that are incompatible with conventional analytical methodologies. Characterization of process and effluent streams for synfuel plants requires instruments capable of detecting and measuring high-molecular-weight compounds, polar compounds, or other materials that are generally difficult to analyze. The purpose of this program is to develop and apply new supercritical fluid techniques for extraction, separation, and analysis. These new technologies will be applied to previously intractable synfuel process materials and to complex mixtures resulting from their interaction with environmental and biological systems
Bao, Lei; Stonebraker, Stephen R.; Sadaghiani, Homeyra
2008-09-01
The traditional methods of assigning and grading homework in large enrollment physics courses have raised concerns among many instructors and students. In this paper we discuss a cost-effective approach to managing homework that involves making half of the problem solutions available to students before the homework is due. In addition, students are allowed some control in choosing which problems to solve. This paper-based approach to homework provides more detailed and timely support to students and increases the amount of self-direction in the homework process. We describe the method and present preliminary results on how students have responded.
Jump probabilities in the non-Markovian quantum jump method
International Nuclear Information System (INIS)
Haerkoenen, Kari
2010-01-01
The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).
Molecular methods for biofilms
Ferrera, Isabel; Balagué, Vanessa; Voolstra, Christian R.; Aranda, Manuel; Bayer, Till; Abed, Raeid M.M.; Dobretsov, Sergey; Owens, Sarah M.; Wilkening, Jared; Fessler, Jennifer L.; Gilbert, Jack A.
2014-01-01
This chapter deals with both classical and modern molecular methods that can be useful for the identification of microorganisms, the elucidation and comparison of microbial communities, and the investigation of their diversity and functions. The most important and critical step necessary for all molecular methods is DNA isolation from microbial communities and environmental samples; this is discussed in the first part. The second part provides an overview of DNA polymerase chain reaction (PCR) amplification and DNA sequencing methods. Protocols and analysis software as well as potential pitfalls associated with the application of these methods are discussed. Community fingerprinting analyses that can be used to compare multiple microbial communities are discussed in the third part. This part focuses on Denaturing Gradient Gel Electrophoresis (DGGE), Terminal Restriction Fragment Length Polymorphism (T-RFLP) and Automated rRNA Intergenic Spacer Analysis (ARISA) methods. In addition, classical and next-generation metagenomics methods are presented. These are limited to bacterial artificial chromosome and fosmid libraries and to Sanger and next-generation 454 sequencing, as these methods are currently the most frequently used in research. Isolation of nucleic acids: nucleic acid isolation methods generally include three steps: cell lysis, removal of unwanted substances, and a final step of DNA purification and recovery. The first critical step is cell lysis, which can be achieved by enzymatic or mechanical procedures. Removal of proteins, polysaccharides and other unwanted substances is likewise important to avoid their interference in subsequent analyses. Phenol-chloroform-isoamyl alcohol is commonly used to recover DNA, since it separates nucleic acids into an aqueous phase and precipitates proteins and…
Software specification methods
Habrias, Henri
2010-01-01
This title provides a clear overview of the main methods, and has a practical focus that allows the reader to apply their knowledge to real-life situations. The following are just some of the techniques covered: UML, Z, TLA+, SAZ, B, OMT, VHDL, Estelle, SDL and LOTOS.
International Nuclear Information System (INIS)
1978-01-01
This invention provides a method for removing nuclear fuel elements from a fabrication building while at the same time testing the fuel elements for leaks without releasing contaminants from the fabrication building or from the fuel elements. The vacuum source used, leak detecting mechanism and fuel element fabrication building are specified to withstand environmental hazards. (UK)
Photovoltaic device and method
Cleereman, Robert J; Lesniak, Michael J; Keenihan, James R; Langmaid, Joe A; Gaston, Ryan; Eurich, Gerald K; Boven, Michelle L
2015-01-27
The present invention is premised upon an improved photovoltaic device ("PVD") and method of use, more particularly an improved photovoltaic device with an integral locator and electrical terminal mechanism for transferring current to or from the device, and its use as a system.
International Nuclear Information System (INIS)
Alverbro, Karin
2010-01-01
Many decision-making situations today affect humans and the environment. In practice, many such decisions are made without an overall view, prioritising one or the other of the two areas. Now and then these two areas of regulation come into conflict; e.g. the best alternative as regards environmental considerations is not always the best from a human safety perspective, and vice versa. This report was prepared within a major project with the aim of developing a framework in which both the environmental aspects and the human safety aspects are integrated, and decisions can be made taking both fields into consideration. The safety risks have to be analysed in order to be successfully avoided, and one way of doing this is to use different kinds of risk analysis methods. There is an abundance of existing methods to choose from and new methods are constantly being developed. This report describes some of the risk analysis methods currently available for analysing safety and examines the relationships between them. The focus here is mainly on human safety aspects.
Indian Academy of Sciences (India)
HEV and cirrhosis: methods. Study group. Patients with cirrhosis and recent jaundice for <30 d. Controls. Patients with liver cirrhosis but no recent worsening. Exclusions. Significant alcohol consumption. Recent hepatotoxic drugs. Recent antiviral therapy. Recent …
Method of killing microorganisms
International Nuclear Information System (INIS)
Tensmeyer, L.G.
1980-01-01
A method of sterilizing the contents of containers involves exposure to a plasma induced therein by focusing a high-power laser beam in an electromagnetic field, preferably for a period of from 1.0 millisecond to 1.0 second. (U.K.)
International Nuclear Information System (INIS)
Berthomier, Charles
1975-01-01
A method capable of handling the amplitude and frequency time laws of a certain kind of geophysical signal is described here. This method is based upon the analytic signal idea of Gabor and Ville, which is constructed either in the time domain, by adding an imaginary part to the real signal (the in-quadrature signal), or in the frequency domain, by suppressing negative frequency components. The instantaneous frequency of the initial signal is then defined as the time derivative of the phase of the analytic signal, and its amplitude, or envelope, as the modulus of this complex signal. The method is applied to three types of magnetospheric signals: chorus, whistlers and pearls. The results obtained by analog and numerical calculations are compared to results obtained by classical filter-based systems, i.e. systems based upon a different definition of the concept of frequency. The precision with which the frequency-time laws are determined then leads to an examination of the principle of the method, to a definition of the instantaneous power density spectrum attached to the signal, and to the first consequences of this definition. In this way, a two-dimensional representation of the signal is introduced which is less deformed by the properties of the analysis system than the usual representation, and which moreover has the advantage of being obtainable practically in real time [fr]
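The frequency-domain construction described in this abstract (suppressing negative-frequency components, then reading the envelope off the modulus and the instantaneous frequency off the phase derivative) can be sketched with an FFT. The 80 Hz test tone and 1 kHz sampling rate are invented for the illustration, not taken from the paper:

```python
import numpy as np

def analytic_signal(x):
    """Gabor/Ville analytic signal: zero out negative-frequency FFT bins."""
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1.0                  # keep DC
    if n % 2 == 0:
        weights[n // 2] = 1.0         # keep the Nyquist bin
        weights[1:n // 2] = 2.0       # double positive frequencies
    else:
        weights[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * weights)

# Invented test tone: an 80 Hz cosine sampled at 1 kHz for 1 s
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * 80.0 * t)

z = analytic_signal(x)
envelope = np.abs(z)                              # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency in Hz
print(envelope.mean(), inst_freq.mean())          # ≈ 1.0 and ≈ 80.0
```

For a pure tone on an FFT bin the recovery is essentially exact; for real chorus or whistler records the envelope and frequency laws vary in time, which is precisely what the method is designed to track.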
International Nuclear Information System (INIS)
Mahaffy, J.H.; Liles, D.R.; Bott, T.F.
1981-01-01
The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents
The Prescribed Velocity Method
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm
The velocity level in a room ventilated by jet ventilation is strongly influenced by the supply conditions. The momentum flow in the supply jets controls the air movement in the room and, therefore, it is very important that the inlet conditions and the numerical method can generate a satisfactory…
Immunocytochemical methods and protocols
National Research Council Canada - National Science Library
Javois, Lorette C
1999-01-01
… monoclonal antibodies to study cell differentiation during embryonic development. For a select few disciplines, volumes have been published focusing on the specific application of immunocytochemical techniques to that discipline. What distinguished Immunocytochemical Methods and Protocols from earlier books when it was first published four years ago was…
Adhesive compositions and methods
Allen, Scott D.; Sendijarevic, Vahid; O'Connor, James
2017-12-05
The present invention encompasses polyurethane adhesive compositions comprising aliphatic polycarbonate chains. In one aspect, the present invention encompasses polyurethane adhesives derived from aliphatic polycarbonate polyols and polyisocyanates wherein the polyol chains contain a primary repeating unit having a structure: … In another aspect, the invention provides articles comprising the inventive polyurethane compositions as well as methods of making such compositions.
Ferrari's Method and Technology
Althoen, Steve
2005-01-01
Some tips combining knowledge of mathematics history and technology for adapting Ferrari's method to factor quintics with a TI-83 graphing calculator are presented. A demonstration of the use of the root finder and regression capabilities of the graphing calculator is presented, so that the tips can be easily adapted for any graphing calculator…
Dasenbrock, Reed Way
1995-01-01
Examines literary theory's displacing of "method" in the New Historicist criticism. Argues that Stephen Greenblatt and Lee Paterson imply that no objective historical truth is possible and as a result do not give methodology its due weight in their criticism. Questions the theory of "truth" advanced in this vein of literary…
Sparse Classification - Methods & Applications
DEFF Research Database (Denmark)
Einarsson, Gudmundur
… for analysing such data carry the potential to revolutionize tasks such as medical diagnostics, where decisions often need to be based on only a few high-dimensional observations. This explosion in data dimensionality has sparked the development of novel statistical methods. In contrast, classical statistics…
International Nuclear Information System (INIS)
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
Alternative methods in criticality
International Nuclear Information System (INIS)
Pedicini, J.M.
1982-01-01
In this thesis two new methods of calculating the criticality of a nuclear system are introduced and verified. Most methods of determining the criticality of a nuclear system depend implicitly upon knowledge of the angular flux, net currents, or moments of the angular flux on the system surface in order to know the leakage. For small systems, leakage is the predominant element in criticality calculations. Unfortunately, in these methods the least accurate fluxes, currents, or moments are those occurring near system surfaces or interfaces. This is due to a mathematical inability to rigorously satisfy the physical boundary conditions on these surfaces with a finite-order angular polynomial expansion or angular difference technique. Consequently, one must accept either large computational effort or less precise criticality calculations. The methods introduced in this thesis, including a direct leakage operator and an indirect multiple-scattering leakage operator, obviate the need to know angular fluxes accurately at system boundaries. Instead, the system-wide scalar flux, an integral quantity which is substantially easier to obtain with good precision, is sufficient to obtain production, absorption, scattering, and leakage rates.
African Journals Online (AJOL)
David Norris
… genetic variance and its distribution in the population structure can lead to the design of optimum… Recent developments in statistical methods and computing algorithms… This may be an indication of the general effect of the population structure… Presentation at the 40th anniversary, Institute of Genetics and Animal…
Friend, Julie; Elander, Richard T.; Tucker III, Melvin P.; Lyons, Robert C.
2010-10-26
A method for treating biomass was developed that uses an apparatus which moves a biomass and dilute aqueous ammonia mixture through reaction chambers without compaction. The apparatus moves the biomass using a non-compressing piston. The resulting treated biomass is saccharified to produce fermentable sugars.
Embodied Design Ideation methods
DEFF Research Database (Denmark)
Wilde, Danielle; Vallgårda, Anna; Tomico, Oscar
2017-01-01
Embodied design ideation practices work with relationships between body, material and context to enliven design and research potential. Methods are often idiosyncratic and – due to their physical nature – not easily transferred. This presents challenges for designers wishing to develop and share ...
Indian Academy of Sciences (India)
TiO2 nanotubes have been synthesized by sol–gel template method using alumina membrane. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy, UV absorption spectrum and X-ray diffraction techniques have been used to investigate the structure, morphology and optical ...
Audience Methods and Gratifications.
Lull, James
A model of need gratification inspired by the work of K.E. Rosengren suggests a theoretical framework making it possible to identify, measure, and assess the components of the need gratification process with respect to the mass media. Methods having cognitive and behavioral components are designed by individuals to achieve need gratification. Deep…
Kong, Peter C.; Pink, Robert J.; Zuck, Larry D.
2008-08-19
A method for forming ammonia is disclosed and which includes the steps of forming a plasma; providing a source of metal particles, and supplying the metal particles to the plasma to form metal nitride particles; and providing a substance, and reacting the metal nitride particles with the substance to produce ammonia, and an oxide byproduct.
Fashion, Mediations & Method Assemblages
DEFF Research Database (Denmark)
Sommerlund, Julie; Jespersen, Astrid Pernille
… of handling multiple, fluid realities with multiple, fluid methods. Empirically, the paper works with mediation in fashion, that is, efforts at actively shaping relations between producer and consumer through communication, marketing and PR. Fashion mediation is by no means simple, but organises complex…
Universal Image Steganalytic Method
Directory of Open Access Journals (Sweden)
V. Banoci
2014-12-01
In this paper we introduce a new universal steganalytic method for the JPEG file format that detects both well-known and newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature Based Steganalysis (FBS) was employed in order to identify statistical changes caused by embedding secret data into the original image. The steganalyzer uses Support Vector Machine (SVM) classification to train a model that is later used to discriminate between a clean (cover) image and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting the test steganographic algorithms F5, Outguess, Model Based Steganography without deblocking, and JP Hide and Seek, which represent the generally used steganographic tools. The comparison of four feature vectors with different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
Uspenskiy, S. I.; Yermakova, S. V.; Chaynova, L. D.; Mitkin, A. A.; Gushcheva, T. M.; Strelkov, Y. K.; Tsvetkova, N. F.
1973-01-01
Various factors used in ergonomic research are given. They are: (1) anthropometric measurement, (2) the polyeffector method of assessing the functional state of man, (3) galvanic skin reaction, (4) pneumography, (5) electromyography, (6) electrooculography, and (7) tachistoscopy. A brief summary is given of each factor and includes instrumentation and results.
Research Methods in Sociolinguistics
Hernández-Campoy, Juan Manuel
2014-01-01
The development of Sociolinguistics has been qualitatively and quantitatively outstanding within Linguistic Science since its beginning in the 1950s, with a steady growth in both theoretical and methodological developments as well as in its interdisciplinary directions within the spectrum of language and society. Field methods in sociolinguistic…
Kriging : Methods and Applications
Kleijnen, J.P.C.
2017-01-01
In this chapter we present Kriging, also known as a Gaussian process (GP) model, which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides
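The interpolation property of Kriging with LHS-chosen inputs can be shown in a minimal sketch. The Gaussian correlation function, length scale, and test function below are assumptions for demonstration, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, rng):
    """One point per equal-probability stratum of [0, 1), randomly placed."""
    return (np.arange(n) + rng.random(n)) / n

def kriging_fit(x, y, length=0.2):
    """Ordinary-Kriging-style interpolator with a Gaussian correlation."""
    mu = y.mean()                          # constant trend
    K = np.exp(-((x[:, None] - x[None, :]) / length) ** 2)
    K += 1e-10 * np.eye(len(x))            # jitter for numerical stability
    w = np.linalg.solve(K, y - mu)
    def predict(xs):
        k = np.exp(-((xs[:, None] - x[None, :]) / length) ** 2)
        return mu + k @ w
    return predict

# Treat a cheap analytic function as the "expensive simulation model",
# evaluated at LHS-chosen input points.
x_train = np.sort(latin_hypercube(12, rng))
y_train = np.sin(2 * np.pi * x_train)

predict = kriging_fit(x_train, y_train)
# Kriging is an exact interpolator: it reproduces the training outputs.
```

In simulation practice the training outputs would come from runs of the simulator, and the correlation parameters would be estimated rather than fixed.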
Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee
2013-04-16
The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.
Does, R.J.M.M.; de Mast, J.; Balakrishnan, N.; Brandimarte, P.; Everitt, B.; Molenberghs, G.; Piegorsch, W.; Ruggeri, F.
2015-01-01
Six Sigma is built on principles and methods that have proven themselves over the twentieth century. It has incorporated the most effective approaches and integrated them into a full program. It offers a management structure for organizing continuous improvement of routine tasks, such as
Andersson, Pher G
2008-01-01
With its comprehensive overview of modern reduction methods, this book features high-quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing off with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.
Methods of information processing
Energy Technology Data Exchange (ETDEWEB)
Kosarev, Yu G; Gusev, V D
1978-01-01
Works are presented on automation systems for editing and publishing operations by methods of processing symbol information and information contained in a training selection (ranking of objectives by promise, a classification algorithm for tones and noise). The book will be of interest to specialists in the automation of processing textual information, programming, and pattern recognition.
Energy Technology Data Exchange (ETDEWEB)
Mamedov, N Ya; Kadymova, K S; Dzhafarov, Sh T
1963-10-28
One type of dual completion method utilizes a single tubing string. Through the use of the proper tubing equipment, the fluid from the low-productive upper formation is lifted by utilizing the surplus energy of a submerged pump, which handles the production from the lower stratum.
Methods Evolved by Observation
Montessori, Maria
2016-01-01
Montessori's idea of the child's nature and the teacher's perceptiveness begins with amazing simplicity, and when she speaks of "methods evolved," she is unveiling a methodological system for observation. She begins with the early childhood explosion into writing, which is a familiar child phenomenon that Montessori has written about…
Alternative methods in criticality
International Nuclear Information System (INIS)
Pedicini, J.M.
1982-01-01
Two new methods of calculating the criticality of a nuclear system are introduced and verified. Most methods of determining the criticality of a nuclear system depend implicitly upon knowledge of the angular flux, net currents, or moments of the angular flux on the system surface in order to know the leakage. For small systems, leakage is the predominant element in criticality calculations. Unfortunately, in these methods the least accurate fluxes, currents, or moments are those occurring near system surfaces or interfaces. This is due to a mathematical inability to satisfy rigorously, with a finite-order angular polynomial expansion or angular difference technique, the physical boundary conditions which occur on these surfaces. Consequently, one must accept large computational effort or less precise criticality calculations. The methods introduced in this thesis, including a direct leakage operator and an indirect multiple scattering leakage operator, obviate the need to know angular fluxes accurately at system boundaries. Instead, the system-wide scalar flux, an integral quantity which is substantially easier to obtain with good precision, is sufficient to obtain production, absorption, scattering, and leakage rates
WATER CHEMISTRY ASSESSMENT METHODS
This section summarizes and evaluates the surface water column chemistry assessment methods for USEPA/EMAP-SW, USGS-NAWQA, USEPA-RBP, Ohio EPA, and MDNR-MBSS. The basic objective of surface water column chemistry assessment is to characterize surface water quality by measuring a sui...
Isaacson, Eugene
1994-01-01
This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Sequential optimization and reliability assessment method for metal forming processes
International Nuclear Information System (INIS)
Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.
2004-01-01
Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and the reliability assessment are decoupled in each cycle. This leads to quick improvement of the design from one cycle to the next and increased computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive Finite Element simulations
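The decoupling idea behind SORA can be shown on a deliberately tiny toy problem (one design variable, one normal random load; every number below is hypothetical and unrelated to the flanging application):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: choose capacity d (which is also the cost) so that the
# probabilistic constraint P(d >= load) >= 0.99 holds, load Z ~ N(10, 2).
target, mu, sigma = 0.99, 10.0, 2.0
samples = rng.normal(mu, sigma, 200_000)     # reused for reliability checks

shift, d = 0.0, 0.0
for cycle in range(5):
    # 1) Deterministic optimization: minimize cost d subject to the
    #    shifted deterministic constraint d >= mu + shift.
    d = mu + shift
    # 2) Reliability assessment of the current design by Monte Carlo
    #    (decoupled from the optimization above).
    pf = np.mean(samples > d)                # failure probability P(Z > d)
    # 3) Update the shift from the empirical target percentile so the next
    #    deterministic cycle lands at a reliable design.
    shift = np.quantile(samples, target) - mu

# d converges near the analytic 99th percentile, mu + 2.326 * sigma.
```

In the paper's setting, step 1 would be a full deterministic optimization over a surrogate of the Finite Element model and step 2 a proper reliability analysis; the toy keeps only the alternating structure.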
Stochastic seismic floor response analysis method for various damping systems
International Nuclear Information System (INIS)
Kitada, Y.; Hattori, K.; Ogata, M.; Kanda, J.
1991-01-01
A study using the stochastic seismic response analysis method which is applicable for the estimation of floor response spectra is carried out. It is pointed out as a shortcoming in this stochastic seismic response analysis method, that the method tends to overestimate floor response spectra for low damping systems, e.g. 1% of the critical damping ratio. An investigation on the cause of the shortcoming is carried out and a number of improvements in this method were also made to the original method by taking correlation of successive peaks in a response time history into account. The application of the improved method to a typical BWR reactor building is carried out. The resultant floor response spectra are compared with those obtained by deterministic time history analysis. Floor response spectra estimated by the improved method consistently cover the response spectra obtained by the time history analysis for various damping ratios. (orig.)
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, while an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the factors most significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
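A probabilistic HQ of the kind argued for above can be sketched by Monte Carlo. The lognormal EPC and EBV parameters below are purely illustrative assumptions, not site data:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lognormal distributions for the exposure point concentration
# (EPC) and the ecological benchmark value (EBV).
mu_epc, s_epc = math.log(5.0), 0.5     # EPC geometric mean 5 (arbitrary units)
mu_ebv, s_ebv = math.log(8.0), 0.4     # EBV geometric mean 8 (same units)

n = 100_000
epc = rng.lognormal(mu_epc, s_epc, n)
ebv = rng.lognormal(mu_ebv, s_ebv, n)
hq = epc / ebv                          # a distribution of HQs, not one number

p_exceed = (hq > 1.0).mean()            # probability that HQ exceeds 1

# Analytic check: log(HQ) is normal, so P(HQ > 1) has a closed form.
s = math.hypot(s_epc, s_ebv)
p_analytic = 0.5 * math.erfc((mu_ebv - mu_epc) / (s * math.sqrt(2)))
```

A deterministic screen would report the single ratio of point estimates; the probabilistic version reports how often the quotient exceeds 1, which is the quantity a risk manager can act on.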
2006-01-30
detail next. 3.2 Fast Sweeping Method for Equation (1) The fast sweeping method originated in Boue and Dupuis [5]; its first PDE formulation was in...Geophysics, 50:903–923, 1985. [5] M. Boue and P. Dupuis. Markov chain approximations for deterministic control problems with affine dynamics and
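A one-dimensional instance of the fast sweeping method makes the idea concrete. This sketch solves |u'(x)| = 1 for the distance to a point source; the grid size and source location are arbitrary choices:

```python
import numpy as np

# 1-D eikonal equation |u'(x)| = 1 with u = 0 at the source: the solution is
# the distance to the source. Gauss-Seidel sweeps in alternating directions
# propagate correct values along each family of characteristics.
n, h = 101, 0.01                 # grid on [0, 1]
src = 30                         # source node index (x = 0.3)
u = np.full(n, np.inf)
u[src] = 0.0

for _ in range(2):               # in 1-D, one pair of sweeps already converges
    for i in range(1, n):                    # sweep left to right
        u[i] = min(u[i], u[i - 1] + h)
    for i in range(n - 2, -1, -1):           # sweep right to left
        u[i] = min(u[i], u[i + 1] + h)

x = np.arange(n) * h
# u now equals |x - x[src]| up to rounding
```

In higher dimensions the pointwise update solves a local quadratic instead of taking a simple minimum, and 2^d sweep orderings are alternated, but the structure is the same.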
Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications
Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne
2014-01-01
The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
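A minimal quasi-Monte Carlo example, using the base-2 van der Corput sequence as the deterministic point set (the integrand is an arbitrary illustration):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:                 # radical-inverse of the integer i + 1
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts[i] = x
    return pts

# Deterministic quasi-Monte Carlo estimate of the integral of x^2 on [0, 1],
# whose exact value is 1/3.
n = 1024
qmc_estimate = np.mean(van_der_corput(n) ** 2)

# Plain Monte Carlo with the same budget, for comparison.
mc_estimate = np.mean(np.random.default_rng(4).random(n) ** 2)
```

The quasi-Monte Carlo error decays roughly like log(n)/n for this point set, versus the 1/sqrt(n) rate of plain Monte Carlo, which is the advantage the survey articles build on.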
Joint Parametric Fault Diagnosis and State Estimation Using KF-ML Method
DEFF Research Database (Denmark)
Sun, Zhen; Yang, Zhenyu
2014-01-01
The paper proposes a new method for a kind of parametric fault online diagnosis with state estimation jointly. The considered fault affects not only the deterministic part of the system but also the random circumstance. The proposed method first applies Kalman Filter (KF) and Maximum Likelihood (...
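The Kalman Filter half of the proposed KF-ML scheme can be illustrated with a scalar textbook example; the ML-based parametric fault estimation is not shown, and the model and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar example: estimate a constant state x = 5 from noisy measurements
# z_k = x + v_k with v_k ~ N(0, r). Model: x_{k+1} = x_k (no process noise).
x_true, r = 5.0, 1.0
z = x_true + rng.normal(0.0, np.sqrt(r), 300)

x_hat, p = 0.0, 100.0          # initial estimate and (large) initial variance
for zk in z:
    # The predict step is trivial for a static model, so go straight to update.
    k_gain = p / (p + r)                   # Kalman gain
    x_hat = x_hat + k_gain * (zk - x_hat)  # innovation-weighted correction
    p = (1.0 - k_gain) * p                 # posterior variance shrinks
```

In the joint KF-ML setting, the same innovation sequence that drives the update above would also feed a likelihood function maximized over the fault parameter.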
Swiler, Thomas P.; Garcia, Ernest J.; Francis, Kathryn M.
2013-06-11
A method is disclosed for singulating die from a semiconductor substrate (e.g. a semiconductor-on-insulator substrate or a bulk silicon substrate) containing an oxide layer (e.g. silicon dioxide or a silicate glass) and one or more semiconductor layers (e.g. monocrystalline or polycrystalline silicon) located above the oxide layer. The method etches trenches through the substrate and through each semiconductor layer about the die being singulated, with the trenches being offset from each other around at least a part of the die so that the oxide layer between the trenches holds the substrate and die together. The trenches can be anisotropically etched using a Deep Reactive Ion Etching (DRIE) process. After the trenches are etched, the oxide layer between the trenches can be etched away with an HF etchant to singulate the die. A release fixture can be located near one side of the substrate to receive the singulated die.
DEFF Research Database (Denmark)
Karosiene, Edita
…Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever… …peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered for selecting potential T cell epitopes. In summary… …immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human…
Fox, Robert V.; Zhang, Fengyan; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin
2016-06-21
Single source precursors or pre-copolymers of single source precursors are subjected to microwave radiation to form particles of a I-III-VI.sub.2 material. Such particles may be formed in a wurtzite phase and may be converted to a chalcopyrite phase by, for example, exposure to heat. The particles in the wurtzite phase may have a substantially hexagonal shape that enables stacking into ordered layers. The particles in the wurtzite phase may be mixed with particles in the chalcopyrite phase (i.e., chalcopyrite nanoparticles) that may fill voids within the ordered layers of the particles in the wurtzite phase, thereby producing films with good coverage. In some embodiments, the methods are used to form layers of semiconductor materials comprising a I-III-VI.sub.2 material. Devices such as, for example, thin-film solar cells may be fabricated using such methods.
Motor degradation prediction methods
Energy Technology Data Exchange (ETDEWEB)
Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.
1996-12-01
Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.
Vempala, Santosh S
2005-01-01
Random projection is a simple geometric technique for reducing the dimensionality of a set of points in Euclidean space while preserving pairwise distances approximately. The technique plays a key role in several breakthrough developments in the field of algorithms. In other cases, it provides elegant alternative proofs. The book begins with an elementary description of the technique and its basic properties. Then it develops the method in the context of applications, which are divided into three groups. The first group consists of combinatorial optimization problems such as maxcut, graph coloring, minimum multicut, graph bandwidth and VLSI layout. Presented in this context is the theory of Euclidean embeddings of graphs. The next group is machine learning problems, specifically, learning intersections of halfspaces and learning large margin hypotheses. The projection method is further refined for the latter application. The last set consists of problems inspired by information retrieval, namely, nearest neig...
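The distance-preservation property at the heart of random projection is easy to check numerically. The dimensions and sample sizes below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# 50 points in 1000 dimensions, projected down to k = 100 dimensions with a
# random Gaussian matrix scaled by 1/sqrt(k), which preserves squared
# distances in expectation (the Johnson-Lindenstrauss setting).
n, d, k = 50, 1000, 100
points = rng.normal(size=(n, d))
proj = rng.normal(size=(d, k)) / np.sqrt(k)
low = points @ proj

# Compare one pairwise distance before and after projection.
orig = np.linalg.norm(points[0] - points[1])
new = np.linalg.norm(low[0] - low[1])
ratio = new / orig          # concentrates near 1 as k grows
```

The same projected coordinates can then be fed to any downstream algorithm, which is how the technique enters the optimization and learning applications the book surveys.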
Liseikin, Vladimir D
2010-01-01
This book is an introduction to structured and unstructured grid methods in scientific computing, addressing graduate students, scientists as well as practitioners. Basic local and integral grid quality measures are formulated and new approaches to mesh generation are reviewed. In addition to the content of the successful first edition, a more detailed and practice oriented description of monitor metrics in Beltrami and diffusion equations is given for generating adaptive numerical grids. Also, new techniques developed by the author are presented, in particular a technique based on the inverted form of Beltrami’s partial differential equations with respect to control metrics. This technique allows the generation of adaptive grids for a wide variety of computational physics problems, including grid clustering to given function values and gradients, grid alignment with given vector fields, and combinations thereof. Applications of geometric methods to the analysis of numerical grid behavior as well as grid ge...
Directory of Open Access Journals (Sweden)
Suzić Nenad
2014-01-01
Full Text Available The paper displays the application of the Cross-Impact method in pedagogy, namely a methodological approach which crosses variables in a novel, but statistically justified manner. The method is an innovation in pedagogy as well as in the research methodology of social and psychological phenomena. Specifically, events and processes are crossed, that is, experts' predictions about the future interaction of events and processes. This methodology is therefore futuristic; it concerns predicting the future, which is of key importance for pedagogic objectives. The paper presents two instances of the cross-impact approach: the longer, displayed in fourteen steps, and the shorter, in four steps. They are both accompanied by mathematical and statistical formulae allowing for quantification, that is, a numerical expression of the probability of a certain event happening in the future. The advantage of this approach is that it facilitates planning in education, which so far has been based solely on lay estimates and assumptions.
DEFF Research Database (Denmark)
Steijn, Arthur
2016-01-01
Contemporary scenography often consists of video-projected motion graphics. The field is lacking in academic methods and rigour: descriptions and models relevant for the creation as well as the analysis of existing works. In order to understand the phenomenon of motion graphics in a scenographic context, I have been conducting a practice-led research project. Central to the project is construction of a design model describing sets of procedures, concepts and terminology relevant for design and studies of motion graphics in spatial contexts. The focus of this paper is the role of model construction as a support to working systematically in a practice-led research project. The design model is being developed through design laboratories and workshops with students and professionals who provide feedback that leads to incremental improvements. Working with this model construction-as-method reveals…
International Nuclear Information System (INIS)
Brown, Stephen.
1992-01-01
In a method of avoiding use of nuclear radiation, eg gamma rays, X-rays, electron beams, for testing semiconductor components for resistance to hard radiation, which hard radiation causes data corruption in some memory devices and 'latch-up' in others, similar fault effects can be achieved using a xenon or other 'light' flash gun even though the penetration of light is significantly less than that of gamma rays. The method involves treating a device with gamma radiation, measuring a particular fault current at the onset of a fault event, repeating the test with light to confirm the occurrence of the fault event at the same measured fault current, and using the fault current value as a reference for future tests using light on similar devices. (author)
International Nuclear Information System (INIS)
Mahaney, W.C.
1984-01-01
The papers in this book cover absolute, relative and multiple dating methods, and have been written by specialists from a number of different earth sciences disciplines - their common interest being the dating of geological materials within the Quaternary. Papers on absolute dating methods discuss radiocarbon, uranium-series, potassium-argon, 40Ar/39Ar, paleomagnetic, obsidian hydration, thermoluminescence, amino acid racemization, tree ring, and lichenometric techniques. Those on relative dating include discussions on various geomorphic relative age indicators such as drainage density changes, hypsometric integrals, bifurcation ratios, stream junction angles, spur morphology, hillslope geometry, and till sheet characteristics. The papers on multiple dating cite examples from the Rocky Mountains, Australia, the Lake Agassiz Basin, and the Southern Andes. Also included is the panel discussion which reviews and assesses the information presented, and a field trip guide which discusses the sequences of Wisconsinan tills and interlayered lacustrine and fluvial sediments. (orig.)
Xα method with pseudopotentials
International Nuclear Information System (INIS)
Szasz, L.
1980-01-01
The Xα method for an atom or molecule is transformed into an all-electron pseudopotential formalism. The equations of the Xα method are exactly transformed into pseudo-orbital equations and the resulting pseudopotentials are replaced by simple density-dependent potentials derived from Thomas-Fermi model. It is shown that the new formalism satisfies the virial theorem. As the first application it is shown that the model explains the shell-structure of atoms by the property that the pseudo-orbitals for the (ns), (np), (nd) etc. electrons are, in a very good approximation, the solutions of the same equation and have their maxima at the same point thereby creating the peaks in the radial density characterizing the shell structure. (orig.)
Developments in Surrogating Methods
Directory of Open Access Journals (Sweden)
Hans van Dormolen
2005-11-01
Full Text Available In this paper, I would like to talk about the developments in surrogating methods for preservation. My main focus will be on the technical aspects of preservation surrogates. This means that I will tell you something about my job as Quality Manager Microfilming for the Netherlands' national preservation program, Metamorfoze, which is coordinated by the National Library. I am responsible for the quality of the preservation microfilms which are produced for Metamorfoze. Firstly, I will elaborate on developments in preservation methods in relation to the following subjects: preservation microfilms, scanning of preservation microfilms, preservation scanning, and Computer Output Microfilm. In the closing paragraphs of this paper, I would like to tell you something about the methylene blue test. This is an important test for long-term storage of preservation microfilms. Also, I will give you a brief report on the Cellulose Acetate Microfilm Conference that was held at the British Library in London, May 2005.
Motor degradation prediction methods
International Nuclear Information System (INIS)
Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.
1996-01-01
Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures
Thermoluminescence dating method
International Nuclear Information System (INIS)
Zink, A.
2004-01-01
A crystal that is submitted to radiation stores energy and releases this energy under the form of light whenever it is heated. These 2 properties: the ability to store energy and the ability to reset the energy stored are the pillars on which time dating methods like thermoluminescence are based. A typical accuracy of the thermoluminescence method is between 5 to 7% but an accuracy of 3% can be reached with a sufficient number of measurement. This article describes the application of thermoluminescence to the dating of a series of old terra-cotta statues. This time measurement is absolute and does not require any calibration, it represents the time elapsed since the last heating of the artifact. (A.C.)
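The age equation underlying thermoluminescence dating is simple arithmetic: the age is the accumulated paleodose divided by the annual dose rate. The values below are illustrative, not from the article:

```python
# Thermoluminescence age = accumulated (paleo)dose / annual dose rate.
# Hypothetical values for a terra-cotta fragment:
paleodose_gray = 4.80              # equivalent dose stored since last heating (Gy)
dose_rate_gray_per_year = 3.2e-3   # annual dose from ambient radioactivity (Gy/a)

age_years = paleodose_gray / dose_rate_gray_per_year   # 1500 years

# If each input carries a 5% relative uncertainty, quadrature combination
# gives roughly 7% on the age, consistent with the 5-7% accuracy quoted above.
rel_err = (0.05 ** 2 + 0.05 ** 2) ** 0.5
age_uncertainty = age_years * rel_err
```

The "last heating" is what resets the stored energy, which is why the measured time is absolute and needs no external calibration.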
International Nuclear Information System (INIS)
Megy, J.A.
1980-01-01
A method was developed for making nuclear-grade zirconium from a zirconium compound, which is more economical than previous methods since it uses aluminum as the reductant metal rather than the more expensive magnesium. A fused salt phase containing the zirconium compound to be reduced is first prepared. The fused salt phase is then contacted with a molten metal phase which contains aluminum and zinc. The reduction is effected by mutual displacement. Aluminum is transported from the molten metal phase to the fused salt phase, replacing zirconium in the salt. Zirconium is transported from the fused salt phase to the molten metal phase. The fused salt phase and the molten metal phase are then separated, and the solvent metal and zirconium are separated by distillation or other means. (DN)
Directory of Open Access Journals (Sweden)
Roxana L. IONESCU
2014-06-01
Full Text Available Companies operate in a global economy that is constantly changing and developing, especially during times of financial crisis and political instability. It is necessary to adapt and develop sales methods in such an environment. For large companies that base their activity on sales, it has become a necessity to learn different types of sales approaches, because this knowledge enables them to grow the number of customers and therefore the sales and the turnover. This paper aims to examine the most effective sales methods used in a highly sensitive economic and social environment: the insurance market. In the field of insurance, the sales process is even more important because sellers need to sell an intangible product that may materialize in the future, but there is no certainty.
Directory of Open Access Journals (Sweden)
Fong-Zhi Chen
2001-01-01
Full Text Available In this study, the Plücker coordinates representation is used to formulate the ruled surface and the molecular path for pumping speed performance evaluation of a molecular vacuum pump. The ruled surface represented by the Plücker coordinates is used to develop a criterion for when gas molecules hit the pump surface wall. The criterion is applied to analyze the flow rate of a newly developed vacuum pump in transition regimes by using the DSMC (Direct Simulation Monte Carlo) method. When a molecule flies in a neutral electrical field its path is a straight line. If the molecular path and the generators of a ruled surface are both represented by the Plücker coordinates, the position of the molecular hit on the wall can be verified by the reciprocal condition of the lines. The Plücker coordinates representation is quite convenient in the DSMC method for this three-dimensional molecular flow simulation.
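The reciprocal condition on Plücker coordinates described above can be sketched directly: the reciprocal product of two lines vanishes exactly when they are coplanar, i.e. when a straight molecular path meets a surface generator. The specific lines below are made-up examples, not pump geometry:

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (d, m) of the line through points p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p                 # direction
    m = np.cross(p, d)        # moment about the origin
    return d, m

def reciprocal_product(line1, line2):
    """Zero iff the two lines are coplanar (they meet or are parallel)."""
    d1, m1 = line1
    d2, m2 = line2
    return d1 @ m2 + d2 @ m1

# A molecular path and a surface generator that intersect at (1, 1, 0):
path = plucker([0, 0, 0], [2, 2, 0])
gen = plucker([0, 2, 0], [2, 0, 0])
hit = abs(reciprocal_product(path, gen)) < 1e-12     # the lines meet

# A skew line misses the surface generator entirely:
skew = plucker([0, 0, 1], [1, 0, 2])
miss = abs(reciprocal_product(skew, gen)) < 1e-12    # nonzero product
```

In a DSMC loop, each sampled free-flight path would be tested this way against the generators of the rotor's ruled surface to locate wall collisions.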
DEFF Research Database (Denmark)
Wölfel, Christiane; Merritt, T.
2013-01-01
There are many examples of cards used to assist or provide structure to the design process, yet there has not been a thorough articulation of the strengths and weaknesses of the various examples. We review eighteen card-based design tools in order to understand how they might benefit designers. The card-based tools are explained in terms of five design dimensions including the intended purpose and scope of use, duration of use, methodology, customization, and formal/material qualities. Our analysis suggests three design patterns or archetypes for existing card-based design method tools and highlights unexplored areas in the design space. The paper concludes with recommendations for the future development of card-based methods for the field of interaction design.
Smith, Richard Harding; Martin, Glenn Brian
2004-05-18
The present invention allows the determination of trace levels of ionic substances in a sample solution (ions, metal ions, and other electrically charged molecules) by coupling a separation method, such as liquid chromatography, with ion selective electrodes (ISE) prepared so as to allow detection at activities below 10.sup.-6 M. The separation method distributes constituent molecules into fractions due to unique chemical and physical properties, such as charge, hydrophobicity, specific binding interactions, or movement in an electrical field. The separated fractions are detected by means of the ISE(s). These ISEs can be used singly or in an array. Accordingly, modifications in the ISEs are used to permit detection of low activities, specifically, below 10.sup.-6 M, by using low activities of the primary analyte (the molecular species which is specifically detected) in the inner filling solution of the ISE. Arrays constructed in various ways allow flow-through sensing for multiple ions.
International Nuclear Information System (INIS)
Myers, J.D.
1986-01-01
A method is described of treatment of opacity of the lens of an eye resulting from foreign matter at the back surface of the eye lens within the vitreous fluid body of the eye with a passively Q-switched laser device. The method consists of: (a) generating a single lasing pulse emitted from the laser device focused within the eye vitreous fluid body, spaced from the lens back surface, creating a microplasma dot in the vitreous fluid body; and (b) then increasing the frequency of the lasing pulses emitted from the lasing device to a frequency greater than the life of the microplasma, to generate an elongated lasing plasma within the eye vitreous fluid moving toward the lens back surface, until the elongated lasing plasma contacts and destroys the foreign matter
International Nuclear Information System (INIS)
Van Brutzel, L.
2015-01-01
Dislocation-Dynamics (DD) technique is identified as the method able to model the evolution of material plastic properties as a function of the microstructural transformation predicted at the atomic scale. Indeed, it is the only simulation method capable of taking into account the collective behaviour of a large number of dislocations inside a realistic microstructure. DD simulations are based on the elastic dislocation theory following rules inherent to the dislocation core structure, often called 'local rules'. All the data necessary to establish the local rules for DD have to come directly from experiment or alternatively from simulations carried out at the atomic scale, such as molecular dynamics or ab initio calculations. However, no precise information on the interaction between two dislocations or between dislocations and defects induced by irradiation is available for nuclear fuels. Therefore, in this article the DD technique will be presented and some examples are given of what can be achieved with it. (author)
Directory of Open Access Journals (Sweden)
Cathleen HASKINS
2010-12-01
Full Text Available Dr. Maria Montessori provided the world with a powerful philosophy and practice for the advancement of humanity: change how we educate children and we change the world. She understood two things very clearly: One, that we can build a better world, a more just and peaceful place, when we educate for the realization of the individual and collective human potential; and two, that the only way to create an educational system that will serve this end is to scrap the current system entirely and replace it with a completely new system. She gave us a system through which to accomplish that goal: The Montessori Method. The following is a personal and professional account of the Montessori Method of educating children.
International Nuclear Information System (INIS)
Thomas, L.E.
1975-07-01
Stereoscopic methods used in TEM are reviewed. The use of stereoscopy to characterize three-dimensional structures observed by TEM has become widespread since the introduction of instruments operating at 1 MV. In its emphasis on whole structures and thick specimens this approach differs significantly from conventional methods of microstructural analysis based on three-dimensional image reconstruction from a number of thin-section views. The great advantage of stereo derives from the ability to directly perceive and measure structures in three dimensions by capitalizing on the unsurpassed human ability for stereoscopic matching of corresponding details on picture pairs showing the same features from different viewpoints. At this time, stereo methods are aimed mainly at structural understanding at the level of dislocations, precipitates, and irradiation-induced point-defect clusters in crystals, and on the cellular level of biological specimens. 3-D reconstruction methods have concentrated on the molecular level, where image resolution requirements dictate the use of very thin specimens. One recent application of three-dimensional coordinate measurements is a system developed for analyzing depth variations in the numbers, sizes and total volumes of voids produced near the surfaces of metal specimens during energetic ion bombardment. This system was used to correlate the void volumes at each depth along the ion range with the number of atomic displacements produced at that depth, thereby unfolding the entire swelling versus dose relationship from a single stereo view. A later version of this system incorporating computer-controlled stereo display capabilities is now being built.
Polymer compositions and methods
Energy Technology Data Exchange (ETDEWEB)
Allen, Scott D.; Willkomm, Wayne R.
2018-02-06
The present invention encompasses polyurethane compositions comprising aliphatic polycarbonate chains. In one aspect, the present invention encompasses polyurethane foams, thermoplastics and elastomers derived from aliphatic polycarbonate polyols and polyisocyanates wherein the polyol chains contain a primary repeating unit having a structure: ##STR00001## In another aspect, the invention provides articles comprising the inventive foam and elastomer compositions as well as methods of making such compositions.
Methods of celestial mechanics
Brouwer, Dirk
2013-01-01
Methods of Celestial Mechanics provides a comprehensive background of celestial mechanics for practical applications. Celestial mechanics is the branch of astronomy that is devoted to the motions of celestial bodies. This book is composed of 17 chapters, and begins with the concept of elliptic motion and its expansion. The subsequent chapters are devoted to other aspects of celestial mechanics, including gravity, numerical integration of orbit, stellar aberration, lunar theory, and celestial coordinates. Considerable chapters explore the principles and application of various mathematical metho
Assessment methods for rehabilitation.
Biefang, S; Potthoff, P
1995-09-01
Diagnostics and evaluation in medical rehabilitation should be based on methods that are as objective as possible. In this context quantitative methods are an important precondition. We conducted for the German Pensions Insurance Institutions (which are in charge of the medical and vocational rehabilitation of workers and employees) a survey on assessment methods for rehabilitation which included an evaluation of American literature, with the aim to indicate procedures that can be considered for adaptation in Germany and to define further research requirements. The survey identified: (1) standardized procedures and instrumented tests for the assessment of musculoskeletal, cardiopulmonary and neurophysiological function; (2) personality, intelligence, achievement, neuropsychological and alcoholism screening tests for the assessment of mental or cognitive function; (3) rating scales and self-administered questionnaires for the assessment of Activities of Daily Living and Instrumental Activities of Daily Living (ADL/IADL Scales); (4) generic profiles and indexes as well as disease-specific measures for the assessment of health-related quality of life and health status; and (5) rating scales for vocational assessment. German equivalents or German versions exist only for a part of the procedures identified. Translation and testing of Anglo-Saxon procedures should have priority over the development of new German methods. The following procedures will be taken into account: (a) instrumented tests for physical function, (b) IADL Scales, (c) generic indexes of health-related quality of life, (d) specific quality of life and health status measures for disorders of the circulatory system, metabolic system, digestive organs, respiratory tract and for cancer, and (e) vocational rating scales.
Energy Technology Data Exchange (ETDEWEB)
Cronauer, D.C.; Kehl, W.L.
1977-06-08
In a method to liquefy coal in the presence of hydrogen and hydrogen-transfer solvents, a hydrogenation catalyst is used in which an amorphous aluminium phosphate is taken as catalyst carrier. The particular advantage of aluminium phosphate catalyst carriers is their property of not losing their mechanical strength even after manifold oxidizing regeneration (burning off the deposited carbon). The quantity of carbon deposited on the catalyst when using an aluminium phosphate carrier is considerably less than with usual catalyst carriers.
International Nuclear Information System (INIS)
Jones, D.R.
1981-01-01
The subject is discussed under the headings: introduction (identification, quantification of risk); some approaches to risk evaluation (use of the 'no risk' principle; the 'acceptable risk' method; risk balancing; comparison of risks, benefits and other costs); cost benefit analysis; an alternative approach (tabulation and display; description and reduction of the data table); identification of potential decision sets consistent with the constraints. Some references are made to nuclear power. (U.K.)
Directory of Open Access Journals (Sweden)
V. Kukharenko
2013-03-01
Full Text Available Content curation is a new activity (started in 2008) in which qualified network users process large amounts of information and present it to social network users. To prepare content curators, a 7-week distance course was developed, which examines the functions, methods and tools of the curator. The course showed a significant relationship between learning success and the availability of an advanced personal learning environment and the ability to process and analyze information.
International Nuclear Information System (INIS)
Cornecsu, M.
1995-01-01
The work describes a method of documenting the audit plan on the basis of two quantitative elements resulting from the quality assurance program (QAP) appraisal system: the degree of implementation of system functions, as established from the latest audit performed, and the weight of the system functions in the QAP, respectively, appraised by taking into account their significance for the activities that are to be performed in the period for which the audits are planned. (Author) 3 Figs., 2 Refs
Method for separating isotopes
International Nuclear Information System (INIS)
Jepson, B.E.
1976-01-01
The invention comprises a method for separating different isotopes of elements from each other by contacting a feed solution containing the different isotopes with a macrocyclic polyether to preferentially form a macrocyclic polyether complex with the lighter of the different isotopes. The macrocyclic polyether complex is then separated from the lighter isotope depleted feed solution. A chemical separation of isotopes is carried out in which a constant refluxing system permits a continuous countercurrent liquid-liquid extraction. (LL)
Directory of Open Access Journals (Sweden)
A. S. Borsch
2010-03-01
Full Text Available In this paper we consider two different approaches to steganalysis methods applicable to common multimedia formats. The first approach uses the verification and analysis of changes in the fields of media files that must remain constant throughout the bit stream of a potential container file. The second approach is more complicated to implement and involves collecting information by means of many experiments.
Situational Method Engineering
Henderson-Sellers, Brian; Ralyte, Jolita; Par, Agerfalk; Rossi, Matti
2014-01-01
While previously available methodologies for software – like those published in the early days of object technology – claimed to be appropriate for every conceivable project, situational method engineering (SME) acknowledges that most projects typically have individual characteristics and situations. Thus, finding the most effective methodology for a particular project needs specific tailoring to that situation. Such a tailored software development methodology needs to take into account all t...
Situational method engineering
Henderson-Sellers, Brian; Ågerfalk, Pär J; Rossi, Matti
2014-01-01
While previously available methodologies for software – like those published in the early days of object technology – claimed to be appropriate for every conceivable project, situational method engineering (SME) acknowledges that most projects typically have individual characteristics and situations. Thus, finding the most effective methodology for a particular project needs specific tailoring to that situation. Such a tailored software development methodology needs to take into account all the bits and pieces needed for an organization to develop software, including the software process, the
Chang, Shih-ger [El Cerrito, CA; Liu, Shou-heng [Kaohsiung, TW; Liu, Zhao-rong [Beijing, CN; Yan, Naiqiang [Berkeley, CA
2009-01-20
Disclosed herein is a method for removing mercury from a gas stream comprising contacting the gas stream with a getter composition comprising bromine, bromochloride, sulphur bromide, sulphur dichloride or sulphur monochloride and mixtures thereof. In one preferred embodiment the getter composition is adsorbed onto a sorbent. The sorbent may be selected from the group consisting of flyash, limestone, lime, calcium sulphate, calcium sulfite, activated carbon, charcoal, silicate, alumina and mixtures thereof. Preferred is flyash, activated carbon and silica.
Method for making nanomaterials
Fan, Hongyou; Wu, Huimeng
2013-06-04
A method of making a nanostructure by preparing a face-centered-cubic-ordered metal nanoparticle film from metal nanoparticles, such as gold and silver nanoparticles, exerting a hydrostatic pressure upon the film at pressures of several gigapascals, followed by applying a non-hydrostatic stress perpendicularly at a pressure greater than approximately 10 GPa to form an array of nanowires with individual nanowires having a relatively uniform length, average diameter and density.
Chen, J.-Y.
1992-01-01
Viewgraphs are presented on the following topics: the grand challenge of combustion engineering; research of probability density function (PDF) methods at Sandia; experiments of turbulent jet flames (Masri and Dibble, 1988); departures from chemical equilibrium; modeling turbulent reacting flows; superequilibrium OH radical; pdf modeling of turbulent jet flames; scatter plot for CH4 (methane) and O2 (oxygen); methanol turbulent jet flames; comparisons between predictions and experimental data; and turbulent C2H4 jet flames.
METHOD OF ADAPTIVE MAGNETOTHERAPY
Rudyk, Valentine Yu.; Tereshchenko, Mykola F.; Rudyk, Tatiana A.
2016-01-01
Practical realization of adaptive control in magnetotherapy apparatus acquires an actual importance at the modern stage of development of magnetotherapy. The structural scheme of the method of adaptive impulsive magnetotherapy and the algorithm of adaptive feedback control during the magnetotherapy procedure are presented. Feedback in the magnetotherapy complex is realized through control of the magnetic induction and analysis of the patient's physiological indexes (temperature, pulse, blood pressure, ...
International Nuclear Information System (INIS)
Pelc, N.J.; Spritzer, C.E.; Lee, J.N.
1988-01-01
A rapid, phase-contrast, MR imaging method of imaging flow has been implemented. The method, called VIGRE (velocity imaging with gradient recalled echoes), consists of two interleaved, narrow flip angle, gradient-recalled acquisitions. One is flow compensated while the second has a specified flow encoding (both peak velocity and direction) that causes signals to contain additional phase in proportion to velocity in the specified direction. Complex image data from the first acquisition are used as a phase reference for the second, yielding immunity from phase accumulation due to causes other than motion. Images with pixel values equal to MΔΘ where M is the magnitude of the flow compensated image and ΔΘ is the phase difference at the pixel, are produced. The magnitude weighting provides additional vessel contrast, suppresses background noise, maintains the flow direction information, and still allows quantitative data to be retrieved. The method has been validated with phantoms and is undergoing initial clinical evaluation. Early results are extremely encouraging
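The MΔΘ pixel computation described above can be sketched numerically; this is an illustrative reconstruction (the function name and synthetic sample are ours, not from the paper), assuming two complex-valued acquisitions where the extra phase of the flow-encoded image is proportional to velocity:

```python
import numpy as np

def velocity_image(s1, s2):
    """Phase-contrast pixel map M*dTheta.

    s1 : complex array, flow-compensated acquisition (phase reference)
    s2 : complex array, flow-encoded acquisition
    Returns |s1| times the phase difference between s2 and s1, so background
    noise is suppressed while the sign (flow direction) is preserved.
    """
    m = np.abs(s1)                       # magnitude weighting
    dtheta = np.angle(s2 * np.conj(s1))  # phase accrued by flow encoding only
    return m * dtheta

# Synthetic example: one vessel pixel with a velocity-induced phase of 0.3 rad
s1 = np.array([[2.0 + 0.0j]])
s2 = s1 * np.exp(1j * 0.3)
print(velocity_image(s1, s2))  # approximately [[0.6]]
```

Using the complex-conjugate product rather than subtracting phases directly avoids 2π wrapping problems and mirrors the paper's point that the reference acquisition cancels phase from causes other than motion.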
Trottenberg, U; Third European Conference on Multigrid Methods
1991-01-01
These proceedings contain a selection of papers presented at the Third European Conference on Multigrid Methods which was held in Bonn on October 1-4, 1990. Following conferences in 1981 and 1985, a platform for the presentation of new Multigrid results was provided for a third time. Multigrid methods no longer have problems being accepted by numerical analysts and users of numerical methods; on the contrary, they have been further developed in such a successful way that they have penetrated a variety of new fields of application. The high number of 154 participants from 18 countries and 76 presented papers show the need to continue the series of the European Multigrid Conferences. The papers of this volume give a survey on the current Multigrid situation; in particular, they correspond to those fields where new developments can be observed. For example, several papers study the appropriate treatment of time dependent problems. Improvements can also be noticed in the Multigrid approach for semiconductor eq...
Kallianpur, Gopinath; Hida, Takeyuki
1987-01-01
The use of probabilistic methods in the biological sciences has been so well established by now that mathematical biology is regarded by many as a distinct discipline with its own repertoire of techniques. The purpose of the Workshop on stochastic methods in biology held at Nagoya University during the week of July 8-12, 1985, was to enable biologists and probabilists from Japan and the U. S. to discuss the latest developments in their respective fields and to exchange ideas on the applicability of the more recent developments in stochastic process theory to problems in biology. Eighteen papers were presented at the Workshop and have been grouped under the following headings: I. Population genetics (five papers) II. Measure valued diffusion processes related to population genetics (three papers) III. Neurophysiology (two papers) IV. Fluctuation in living cells (two papers) V. Mathematical methods related to other problems in biology, epidemiology, population dynamics, etc. (six papers) An important f...
International Nuclear Information System (INIS)
Kase, M.B.
1985-01-01
The objective of this study was to provide, in cooperation with ORNL and LANL, specimens required for studies to develop organic insulators having the cryogenic neutron irradiation resistance required for MFE systems utilizing superconducting magnetic confinement. To develop test methods and analytical procedures for assessing radiation damage. To stimulate and participate in international cooperation directed toward accomplishing these objectives. The system for producing uniaxially reinforced, 3-4 mm (0.125 in) diameter rod specimens has been refined and validated by production of excellent quality specimens using liquid-mix epoxy resin systems. The methodology is undergoing further modification to permit use of hot-melt epoxy and polyimide resin systems as will be required for the experimental program to be conducted in the NLTNIF reactor at ORNL. Preliminary studies indicate that short beam and torsional shear test methods will be useful in evaluating radiation degradation. Development of these and other applicable test methods are continuing. A cooperative program established with laboratories in Japan and in England has resulted in the production and testing of specimens having an identical configuration
International Nuclear Information System (INIS)
Leclerc, J.P.
2001-01-01
The first international congress on 'Tracers and tracing methods' took place in Nancy in May 2001. The objective of this second congress was to present the current status and trends on tracing methods and their applications. It has given the opportunity to people from different fields to exchange scientific information and knowledge about tracer methodologies and applications. The target participants were the researchers, engineers and technologists of various industrial and research sectors: chemical engineering, environment, food engineering, bio-engineering, geology, hydrology, civil engineering, iron and steel production... Two sessions have been planned to cover both fundamental and industrial aspects: 1)fundamental development (tomography, tracer camera visualization and particles tracking; validation of computational fluid dynamics simulations by tracer experiments and numerical residence time distribution; new tracers and detectors or improvement and development of existing tracing methods; data treatments and modeling; reactive tracer experiments and interpretation) 2)industrial applications (geology, hydrogeology and oil field applications; civil engineering, mineral engineering and metallurgy applications; chemical engineering; environment; food engineering and bio-engineering). The program included 5 plenary lectures, 23 oral communications and around 50 posters. Only 9 presentations are of interest for the INIS database
Development of partitioning method
International Nuclear Information System (INIS)
Kubota, Kazuo; Dojiri, Shigeru; Kubota, Masumitsu
1988-10-01
The literature survey was carried out on the amount of natural resources, the behavior in the reprocessing process, and the separation and recovery methods of the platinum group elements and technetium which are contained in spent fuel. The essential results are described below. (1) The platinum group elements contained in spent fuel are quantitatively limited, compared with the total demand for them in Japan, and the estimated separation and recovery cost is rather high. In spite of that, development of these techniques is considered to be very important because the supply of these elements in Japan comes almost entirely from foreign resources. (2) For recovery of these elements, studies of recovery from undissolved residue and from high level liquid waste (HLLW) also seem to be required. (3) As separation and recovery methods, the following techniques are considered to be effective: lead extraction, liquid metal extraction, solvent extraction, ion-exchange, adsorption, precipitation, distillation, electrolysis or their combination. (4) But each of these methods has both advantages and disadvantages, so development of such processes largely depends on future work. (author) 94 refs
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Directory of Open Access Journals (Sweden)
Králik Juraj
2014-12-01
Full Text Available This paper presents the experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analyses of the structure resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented
Apparatus and method for deterministic control of surface figure during full aperture polishing
Suratwala, Tayyab Ishaq; Feit, Michael Dennis; Steele, William Augustus
2013-11-19
A polishing system configured to polish a lap includes a lap configured to contact a workpiece for polishing the workpiece; and a septum configured to contact the lap. The septum has an aperture formed therein. The radius of the aperture and the radius of the workpiece are substantially the same. The aperture and the workpiece have centers disposed at substantially the same radial distance from a center of the lap. The aperture is disposed along a first radial direction from the center of the lap, and the workpiece is disposed along a second radial direction from the center of the lap. The first and second radial directions may be opposite directions.
Apparatus and method for deterministic control of surface figure during full aperture pad polishing
Suratwala, Tayyab Ishaq; Feit, Michael Douglas; Steele, William Augustus
2017-10-10
A polishing system configured to polish a lap includes a lap configured to contact a workpiece for polishing the workpiece; and a septum configured to contact the lap. The septum has an aperture formed therein. The radius of the aperture and the radius of the workpiece are substantially the same. The aperture and the workpiece have centers disposed at substantially the same radial distance from a center of the lap. The aperture is disposed along a first radial direction from the center of the lap, and the workpiece is disposed along a second radial direction from the center of the lap. The first and second radial directions may be opposite directions.
Directory of Open Access Journals (Sweden)
Shahram RostamPour
2012-01-01
Full Text Available The purpose of this paper is to measure the relative efficiencies of various cow husbandries. The proposed model of this paper uses distribution free analysis to measure the performance of different units responsible for taking care of cows. We gather the necessary information of all units, including number of cows, amount of internet usage, number of subunits for taking care of cows, amount of forage produced in each province for grazing livestock and average hours per person of training courses as independent variables, and consider the amount of produced milk as the dependent variable. The necessary information is collected from all available units located in different provinces of Iran and the production function is estimated using a linear programming model. The results indicate that the capital city of Iran, Tehran, holds the highest technical efficiency, the lowest efficiency belongs to the province of Ilam, and the other provinces mostly perform poorly.
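The linear-programming efficiency measurement described above can be sketched with a minimal input-oriented DEA model (a standard CCR envelopment form; the toy data, function name and single input/output pair are illustrative assumptions, not the study's actual formulation):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR DEA efficiency of one unit (envelopment form).

    Decision variables: [theta, lambda_1, ..., lambda_n].
    Minimise theta subject to
        sum_j lambda_j * x_j <= theta * x_unit   (inputs)
        sum_j lambda_j * y_j >= y_unit           (outputs)
        lambda_j >= 0.
    """
    X = np.atleast_2d(inputs)    # shape (num_inputs, num_units)
    Y = np.atleast_2d(outputs)   # shape (num_outputs, num_units)
    n = X.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    A_in = np.hstack([-X[:, [unit]], X])          # X @ lam - theta*x_unit <= 0
    b_in = np.zeros(X.shape[0])
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y @ lam <= -y_unit
    b_out = -Y[:, unit]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two toy units: same milk output, but unit 1 uses twice the input
cows = [2.0, 4.0]
milk = [2.0, 2.0]
print(ccr_efficiency(cows, milk, 0))  # ~1.0 (efficient)
print(ccr_efficiency(cows, milk, 1))  # ~0.5 (could produce the same with half the input)
```

An efficiency of 1.0 means no convex combination of the other units dominates this one; values below 1.0 measure how far inputs could be scaled down while maintaining output.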
Method Engineering: Engineering of Information Systems Development Methods and Tools
Brinkkemper, J.N.; Brinkkemper, Sjaak
1996-01-01
This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e.
A novel reliability evaluation method for large engineering systems
Directory of Open Access Journals (Sweden)
Reda Farag
2016-06-01
Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) will find it challenging to estimate the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.
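As a hedged aside on the FORM estimate the abstract mentions: for a linear limit state with normal random variables (a textbook special case, unlike the paper's nonlinear dynamic systems), FORM is exact, which makes a compact cross-check against crude Monte Carlo simulation. All numbers below are invented for illustration:

```python
import math
import random

# Limit state g = R - S (capacity minus demand); failure when g < 0.
mu_r, sd_r = 10.0, 1.0   # resistance: mean and standard deviation (assumed)
mu_s, sd_s = 7.0, 1.5    # load effect: mean and standard deviation (assumed)

# FORM: reliability index beta, then Pf = Phi(-beta).
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
pf_form = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal CDF at -beta

# Crude Monte Carlo for comparison (the inefficient route the abstract notes).
random.seed(1)
n = 200_000
fails = sum(random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0
            for _ in range(n))
pf_mcs = fails / n

print(beta, pf_form, pf_mcs)
```

For nonlinear, implicit performance functions this closed form is unavailable, which is exactly why the paper resorts to response surfaces built from a few intelligently selected deterministic evaluations.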
Three-dimensional protein structure prediction: Methods and computational strategies.
Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C
2014-10-12
A long standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided in four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Sandford, M.T. II; Bradley, J.N.; Handel, T.G.
1996-06-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy', compression algorithms, as for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is generated by the data analysis algorithm.
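For contrast with the noise-analysis approach described above, here is a minimal sketch of the conventional LSB steganography that the paper distinguishes data embedding from; it is not the BMPEMBED algorithm, and the sample values are invented:

```python
def embed(host, bits):
    """Replace the least-significant bit of each host sample with a message bit."""
    assert len(bits) <= len(host)
    out = list(host)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the message bit
    return out

def extract(combined, nbits):
    """Read the message back out of the low-order bits."""
    return [s & 1 for s in combined[:nbits]]

host = [203, 148, 77, 250, 16, 99]   # e.g. pixel values of a host image
msg = [1, 0, 1, 1]
stego = embed(host, msg)
print(extract(stego, 4))  # -> [1, 0, 1, 1]
```

Note that this scheme overwrites host bits unconditionally, altering the host's noise statistics; the data embedding method in the paper instead analyzes the noise and derives a key, leaving the host's statistical properties largely intact.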
Methods of mathematical optimization
Vanderplaats, G. N.
The fundamental principles of numerical optimization methods are reviewed, with an emphasis on potential engineering applications. The basic optimization process is described; unconstrained and constrained minimization problems are defined; a general approach to the design of optimization software programs is outlined; and drawings and diagrams are shown for examples involving (1) the conceptual design of an aircraft, (2) the aerodynamic optimization of an airfoil, (3) the design of an automotive-engine connecting rod, and (4) the optimization of a 'ski-jump' to assist aircraft in taking off from a very short ship deck.
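The constrained-minimization process reviewed above can be illustrated with a simple exterior penalty scheme (a generic textbook method; the example problem, step-size rule and function names are our assumptions, unrelated to the aircraft or connecting-rod applications): the constraint g(x) <= 0 is folded into the objective and the penalty weight is increased between unconstrained solves.

```python
def minimize_1d(f, x0, step, iters=2000):
    """Crude unconstrained minimizer: gradient descent with a numerical derivative."""
    x, h = x0, 1e-6
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= step * grad
    return x

def penalty_minimize(f, g, x0, weights=(1.0, 10.0, 100.0, 1000.0)):
    """Exterior penalty: minimize f(x) + w * max(0, g(x))^2 for growing w."""
    x = x0
    for w in weights:
        pen = lambda t, w=w: f(t) + w * max(0.0, g(t)) ** 2
        x = minimize_1d(pen, x, step=0.5 / (1.0 + w))  # step shrinks as curvature grows
    return x

# Example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
x_star = penalty_minimize(lambda x: x * x, lambda x: 1.0 - x, x0=3.0)
print(x_star)  # close to the constrained optimum x = 1
```

Each penalty stage approaches the constrained optimum from the infeasible side; production optimization software replaces the crude inner solver with line searches and quasi-Newton updates, but the outer structure is the same.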
CSIR Research Space (South Africa)
Monchusi, B
2012-10-01
Full Text Available [Presentation slides; only fragments are recoverable: CSIR mine safety platform with AR Drone and differential time-of-flight beacon sampling; reef Laser-Induced Breakdown Spectroscopy (LIBS) head with X-Y scanning laser/spectrometer/computer; rock breaking. Novel Mining Methods, CSIR, 2012.]
Recurrent fuzzy ranking methods
Hajjari, Tayebeh
2012-11-01
With the increasing development of fuzzy set theory in various scientific fields comes the need to compare fuzzy numbers in different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business and some other fuzzy application systems. Several strategies have been proposed for the ranking of fuzzy numbers. Each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.
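One common ranking strategy of the kind the paper reviews is the centroid method for triangular fuzzy numbers; this sketch is an illustrative example (with invented ratings), not one of the specific methods surveyed:

```python
def centroid(tfn):
    """Centroid (x-coordinate) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank(fuzzy_numbers):
    """Rank triangular fuzzy numbers by descending centroid (largest first)."""
    return sorted(fuzzy_numbers, key=centroid, reverse=True)

# Three fuzzy ratings, e.g. from the linguistic terms "high", "medium", "low"
A = (6.0, 8.0, 9.0)   # high
B = (4.0, 5.0, 7.0)   # medium
C = (1.0, 3.0, 4.0)   # low
print(rank([B, C, A]))  # -> [(6.0, 8.0, 9.0), (4.0, 5.0, 7.0), (1.0, 3.0, 4.0)]
```

The non-intuitive results the abstract mentions arise when two fuzzy numbers share a centroid but differ in spread or skew, which is why alternative indices (mode, area-based, distance-based) keep being proposed.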
ZIRCONIUM PHOSPHATE ADSORPTION METHOD
Russell, E.R.; Adamson, A.S.; Schubert, J.; Boyd, G.E.
1958-11-01
A method is presented for separating plutonium values from fission product values in aqueous acidic solution. This is accomplished by flowing the solution containing such values through a bed of zirconium orthophosphate. Any fission products adsorbed can subsequently be eluted by washing the column with a solution of 2N HNO3 and 0.1N H3PO4. Plutonium values may subsequently be desorbed by contacting the column with a solution of 7N HNO3.
A Pluralistic, Longitudinal Method
DEFF Research Database (Denmark)
Evers, Winie; Marroun, Sana; Young, Louise
2017-01-01
There is recognition in business markets of the need for connected relationships to enable the survival and growth of firms. Finding new ways to collaborate enables firms to better seek opportunities and challenges and enhance network capability. However the traditional methods used to research...... and analysis. Longitudinal research considers a Danish advertising and communication firm looking for new ideas by involving their network in order to help them to compete in their environment of rapid globalization and emergence of new technologies. A five stage research design considered how network...
METHOD OF ELECTROPOLISHING URANIUM
Walker, D.E.; Noland, R.A.
1959-07-14
A method of electropolishing the surface of uranium articles is presented. The process of this invention is carried out by immersing the uranium article into an electrolyte which contains from 35 to 65% by volume sulfuric acid, 1 to 20% by volume glycerine and 25 to 50% by volume of water. The article is made the anode in the cell and polished by electrolyzing at a voltage of from 10 to 15 volts. Discontinuing the electrolysis by intermittently withdrawing the anode from the electrolyte and removing any polarized film formed therein results in an especially bright surface.
Directory of Open Access Journals (Sweden)
Andrzej Piotrowski
2018-03-01
Full Text Available In industrial practice, hobs are manufactured and used. The problem boils down to the identification of a hob by defining its profile, which depends on many design and technological parameters (such as the grinding wheel size, profile, type and positioning during machining). This forms the basis for the correct execution and sharpening of the tool. The accuracy of the hob determines the quality of the gear wheel teeth being shaped. The article presents the hob identification methods that can be used in industrial and laboratory practice.
Radioanalytical methods manual
International Nuclear Information System (INIS)
Chiu, N.W.; Dean, J.R.
1986-01-01
This Radioanalytical Methods manual is comprised of 12 chapters. It includes a review of the pertinent literature up to the end of 1982 pertaining to the measurement of the radioactive species listed under the terms of the contract. Included is methodology recommended for the decompositions of soils, tailings, ores, biological samples and air filters. Detailed analytical methodology for the measurement of gross alpha, gross beta, gross gamma, uranium, radium-226, radium-228, lead-210, thorium-232, thorium-230, thorium-228, total thorium, radon-222, radon-220 and radon-219 is presented
International Nuclear Information System (INIS)
Witte, N.S.
1998-01-01
The classical formalism of the Moment Problem has been combined with a cumulant approach and applied to the extensive many-body problem. This has yielded many new exact results for many-body systems in the thermodynamic limit: for the ground state energy, for excited state gaps, and for arbitrary ground state averages. The method applies to any extensive Hamiltonian system, for any phase or symmetry arising in the model, whether on a lattice or in the continuum, and for any dimensionality. The theorems are of a nonperturbative nature with respect to any couplings occurring in the model. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
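The idea behind maximum entropy restoration can be sketched under simplifying assumptions (a 1D object, Gaussian blur, and a Shannon-type entropy; the specific entropy expressions compared in the paper are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 16
grid = np.arange(n)
truth = np.exp(-0.5 * ((grid - 8) / 1.5) ** 2)         # unknown 1D object
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
psf /= psf.sum()                                       # normalised blur kernel
R = np.zeros((n, n))                                   # blurring (response) matrix
for i in range(n):
    for j in range(n):
        if 0 <= j - i + 3 < 7:
            R[i, j] = psf[j - i + 3]
data = R @ truth + 0.01 * rng.standard_normal(n)       # blurred, noisy data

def objective(f, alpha=0.05):
    """Chi-squared misfit minus a Shannon-type entropy bonus: minimising this
    balances data fidelity against the smoothest (maximum entropy) image."""
    chi2 = np.sum((R @ f - data) ** 2) / 0.01 ** 2
    p = f / f.sum()
    entropy = -np.sum(p * np.log(p))
    return chi2 - alpha * entropy

res = minimize(objective, np.full(n, 0.5), bounds=[(1e-6, None)] * n)
restored = res.x                                       # positive by construction
```

Swapping in a different entropy functional here (e.g. a Burg-type sum of logarithms) changes the character of the restoration, which is the comparison the paper carries out on 1D simulations.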
Reiss, Howard
1997-01-01
Since there is no shortage of excellent general books on elementary thermodynamics, this book takes a different approach, focusing attention on the problem areas in the understanding of concepts and especially on the overwhelming but usually hidden role of "constraints" in thermodynamics, as well as on the lucid exposition of the significance, construction, and use (in the case of arbitrary systems) of the thermodynamic potential. It will be especially useful as an auxiliary text to be used along with any standard treatment. Unlike some texts, Methods of Thermodynamics does not use statistical m
International Nuclear Information System (INIS)
Brunnett, C.J.
1980-01-01
A novel method is described for processing the analogue signals from the photomultiplier tubes in a tomographic X-ray scanner. The system produces a series of pulses whose instantaneous frequency depends on the detected intensity of the X-radiation. A timer unit is used to determine the segment scan intervals and also to deduce the average radiation intensity detected during each interval. The overall system is claimed to possess the advantageous properties of low time delay, wide bandwidth and relatively low cost. (U.K.)
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Introduction to functional methods
International Nuclear Information System (INIS)
Faddeev, L.D.
1976-01-01
The functional integral is considered in relation to Feynman diagrams and phase space. The holomorphic form of the functional integral is then discussed. The main problem of the lectures, viz. the construction of the S-matrix by means of the functional integral, is considered. The functional methods described explicitly take into account the Bose statistics of the fields involved. The different procedure used to treat fermions is discussed. An introduction to the problem of quantization of gauge fields is given. (B.R.H.)
Inspection Methods in Programming.
1981-06-01
Counting is a specialization of Iterative-generation in which the generating function is Oneplus and the initial input is 1. Waters' second category of plan building method... (Fragments from Chapter Nine, "Steady State Plans", pp. 180-181: a figure of Iterative-Generation plans; plan definition: TemporalPlan counting, specialization iterative-generation, roles .action (a function), .tail (counting), constraints .action.op = oneplus and .action.input = 1. The Iterative-application...)
Powell, James; Reich, Morris; Danby, Gordon
1997-07-22
A magnetic imager 10 includes a generator 18 for practicing a method of applying a background magnetic field over a concealed object, with the object being effective to locally perturb the background field. The imager 10 also includes a sensor 20 for measuring perturbations of the background field to detect the object. In one embodiment, the background field is applied quasi-statically. And, the magnitude or rate of change of the perturbations may be measured for determining location, size, and/or condition of the object.
Geometrical method of decoupling
Directory of Open Access Journals (Sweden)
C. Baumgarten
2012-12-01
Full Text Available The computation of tunes and matched beam distributions are essential steps in the analysis of circular accelerators. If certain symmetries (like midplane symmetry) are present, then it is possible to treat the betatron motion in the horizontal plane, the vertical plane, and (under certain circumstances) the longitudinal motion separately using the well-known Courant-Snyder theory, or to apply transformations that have been described previously as, for instance, the method of Teng and Edwards. In a preceding paper, it has been shown that this method requires a modification for the treatment of isochronous cyclotrons with non-negligible space charge forces. Unfortunately, the modification was numerically not as stable as desired, and it was still unclear whether the extension would work for all conceivable cases. Hence, a systematic derivation of a more general treatment seemed advisable. In a second paper, the author suggested the use of real Dirac matrices as basic tools for coupled linear optics and gave a straightforward recipe to decouple positive definite Hamiltonians with imaginary eigenvalues. In this article this method is generalized and simplified in order to formulate a straightforward method to decouple Hamiltonian matrices with eigenvalues on the real and the imaginary axis. The decoupling of symplectic matrices which are exponentials of such Hamiltonian matrices can be deduced from this in a few steps. It is shown that this algebraic decoupling is closely related to a geometric "decoupling" by the orthogonalization of the vectors E, B, and P, which were introduced with the so-called "electromechanical equivalence." A mathematical analysis of the problem can be traced down to the task of finding a structure-preserving block diagonalization of symplectic or Hamiltonian matrices. Structure preservation means in this context that the (sequence of) transformations must be symplectic and hence canonical.
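The eigenvalue structure of Hamiltonian matrices mentioned in the abstract can be checked numerically; the following is a small generic sketch (not the paper's decoupling algorithm) verifying that a matrix of the form H = J S, with S symmetric and J the symplectic unit matrix, has eigenvalues occurring in plus/minus pairs:

```python
import numpy as np

# symplectic unit matrix for one degree of freedom, extended block-diagonally
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.block([[J2, np.zeros((2, 2))], [np.zeros((2, 2)), J2]])

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
S = S + S.T                      # symmetric matrix
H = J @ S                        # Hamiltonian matrix: J @ H is symmetric (= -S)

ev = np.linalg.eigvals(H)
# eigenvalues of a Hamiltonian matrix come in pairs (lambda, -lambda),
# which is why they lie symmetrically on the real and imaginary axes
```

A structure-preserving block diagonalization, as the paper discusses, must keep this pairing intact, which is what restricting to symplectic (canonical) transformations guarantees.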
Wilson, David G [Tijeras, NM; Robinett, III, Rush D.
2012-02-21
A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
Moses, E.I.
1992-12-01
A laser pulse stacking method is disclosed. A problem with the prior art has been the generation of a series of laser beam pulses where the outer and inner regions of the beams are generated so as to form radially non-synchronous pulses. Such pulses thus have a non-uniform cross-sectional area with respect to the outer and inner edges of the pulses. The present invention provides a solution by combining the temporally non-uniform pulses in a stacking effect to thus provide a more uniform temporal synchronism over the beam diameter. 2 figs.
Follstaedt, David M.; Moran, Michael P.
2005-03-15
A method for thinning (such as in grinding and polishing) a material surface using an instrument means for moving an article with a discontinuous surface with an abrasive material dispersed between the material surface and the discontinuous surface where the discontinuous surface of the moving article provides an efficient means for maintaining contact of the abrasive with the material surface. When used to dimple specimens for microscopy analysis, a wheel with a surface that has been modified to produce a uniform or random discontinuous surface significantly improves the speed of the dimpling process without loss of quality of finish.
Methods of statistical physics
Akhiezer, Aleksandr I
1981-01-01
Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic be
International Nuclear Information System (INIS)
Izumidani, Masakiyo; Tanno, Kazuo.
1978-01-01
Purpose: To enable automatic filter operation and facilitate back-washing operation by back-washing filters used in a BWR nuclear power plant utilizing an exhaust gas from a ventilator or air conditioner. Method: Exhaust gas from an exhaust pipe of a ventilator or air conditioner is pressurized in a compressor and then introduced into a back-washing gas tank. Then, the exhaust gas pressurized to a predetermined pressure is blown from the inside to the outside of a filter to thereby separate impurities collected on the filter elements and introduce them to a waste tank. (Furukawa, Y.)
Alternative Methods of Regression
Birkes, David
2011-01-01
Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "…an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models…highly recommend[ed]…for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s
Dual ant colony operational modal analysis parameter estimation method
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Moreover, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
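The idea of an ant colony search confined to deterministically bounded parameter intervals can be illustrated with a minimal continuous-ACO sketch (a generic ACO_R-style scheme with hypothetical parameter values and a toy objective; not the authors' dedicated software):

```python
import numpy as np

def aco_minimize(f, lo, hi, n_ants=20, n_iter=40, xi=0.85, archive_size=10, seed=0):
    """Minimal continuous ant colony optimisation sketch: an archive of good
    solutions guides Gaussian sampling of new candidate 'ants' within [lo, hi]."""
    rng = np.random.default_rng(seed)
    archive = rng.uniform(lo, hi, size=archive_size)
    for _ in range(n_iter):
        archive = archive[np.argsort([f(x) for x in archive])]
        ants = []
        for _ in range(n_ants):
            mean = archive[rng.integers(0, 3)]          # bias toward the best solutions
            sigma = xi * np.mean(np.abs(archive - mean)) + 1e-12
            ants.append(np.clip(rng.normal(mean, sigma), lo, hi))
        merged = np.concatenate([archive, np.array(ants)])
        archive = merged[np.argsort([f(x) for x in merged])][:archive_size]
    return archive[0]

# toy objective: best value near 0.02 inside the deterministic interval [0, 0.1]
best = aco_minimize(lambda z: (z - 0.02) ** 2, 0.0, 0.1)
```

The deterministic pre-analysis corresponds here to supplying the interval [lo, hi]; the stochastic search then refines the estimate inside it, which is the division of labour the abstract describes.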
Deterministic Compressed Sensing
2011-11-01
(Front-matter fragments) Contents: 4.3 Digital Communications (p. 39); 4.4 Group Testing (p. 40); deterministic design matrices, all bounds ignoring the O() constants (p. 131). List of Algorithms: 1. Iterative Hard Thresholding Algorithm. Excerpt: ...sensing is information-theoretically possible using any (2k, ε)-RIP sensing matrix. The following celebrated results of Candès, Romberg and Tao [54
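The Iterative Hard Thresholding algorithm named in the fragment can be sketched in its generic textbook form (with a conservative step size chosen for stability and a trivially well-conditioned toy sensing matrix; this is not the thesis' exact variant):

```python
import numpy as np

def iht(A, y, k, iters=200):
    """Iterative Hard Thresholding: gradient step on ||y - Ax||^2,
    then keep only the k largest-magnitude entries of x."""
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        small = np.argsort(np.abs(x))[:-k]       # indices of all but the k largest
        x[small] = 0.0
    return x

# toy check: identity sensing matrix, 2-sparse signal
n, k = 8, 2
A = np.eye(n)
x_true = np.zeros(n)
x_true[[1, 5]] = [3.0, -2.0]
x_hat = iht(A, A @ x_true, k)
```

With a matrix satisfying a suitable RIP condition, as in the results of Candès, Romberg and Tao cited in the excerpt, this iteration provably recovers k-sparse signals; the identity matrix here only demonstrates the mechanics.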
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
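The flavour of such a deterministic approach can be sketched with first-order sensitivity propagation, where one finite-difference model run per parameter replaces many random samples (a generic illustration with a hypothetical response function, not the report's method):

```python
import numpy as np

def first_order_uncertainty(f, p, sigmas, dp=1e-6):
    """Propagate parameter uncertainties through a model deterministically:
    sigma_R^2 = sum_i (dR/dp_i)^2 * sigma_i^2 (first-order, independent inputs).
    Requires len(p) + 1 model evaluations instead of a statistical sample."""
    p = np.asarray(p, float)
    base = f(p)
    sens = np.array([(f(p + dp * e) - base) / dp for e in np.eye(len(p))])
    return base, float(np.sqrt(np.sum((sens * np.asarray(sigmas)) ** 2)))

# hypothetical response R = a * b with a = 2 +/- 0.1 and b = 5 +/- 0.2
value, sigma = first_order_uncertainty(lambda q: q[0] * q[1], [2.0, 5.0], [0.1, 0.2])
```

For a code that is expensive to run, the n+1 deterministic evaluations here contrast with the hundreds of runs a Monte Carlo uncertainty analysis would need, which is the saving the abstract points to.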
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2004-01-01
A new algorithm of Monte Carlo criticality calculations for implementing Wielandt's method, which is one of the acceleration techniques for deterministic source iteration methods, is developed, and the algorithm can be successfully implemented into the MCNP code. In this algorithm, part of the fission neutrons emitted during random walk processes are tracked within the current cycle, and thus the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely-coupled array where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. Computing time spent per cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually results in an increase of the total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution in a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
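The acceleration idea can be seen in a small deterministic analogue, where a Wielandt shift turns plain power iteration into shifted inverse iteration and widens the gap between the fundamental mode and higher harmonics (a toy two-group matrix model, not the MCNP implementation):

```python
import numpy as np

def power_iteration(M, iters):
    """Plain source iteration: convergence rate set by the dominance ratio."""
    x = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        y = M @ x
        lam = (x @ y) / (x @ x)          # Rayleigh-quotient eigenvalue estimate
        x = y / np.linalg.norm(y)
    return lam

def wielandt_iteration(M, shift, iters):
    """Wielandt-shifted iteration: eigenvalues map to 1/(lambda - shift), so a
    shift just above k-eff strongly separates the fundamental mode."""
    A = np.linalg.inv(M - shift * np.eye(M.shape[0]))
    x = np.ones(M.shape[0])
    mu = 0.0
    for _ in range(iters):
        y = A @ x
        mu = (x @ y) / (x @ x)
        x = y / np.linalg.norm(y)
    return shift + 1.0 / mu              # map back to the original eigenvalue

M = np.array([[0.9, 0.3], [0.2, 0.8]])  # toy fission operator, eigenvalues 1.1 and 0.6
```

With the shift at 1.2, the ratio of the two shifted eigenvalues drops from 0.6/1.1 to about 1/6, so far fewer iterations are needed for source convergence, at the price of more work per iteration, mirroring the trade-off described above.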
Directory of Open Access Journals (Sweden)
Seyed Jalal Younesi
2015-06-01
Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation center (Kahrizak) were considered as the statistical population. 110 individuals who were addicted to drugs (stimulants and methamphetamine) were selected from this population by purposeful sampling to answer questionnaires about deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed that there is a positive and significant relationship between deterministic thinking and the lack of mental health (r = 0.22, P < 0.05); among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. It was found that the two factors of deterministic thinking that function as the strongest predictors of the lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers suffer from deterministic thinking when they are confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.
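The analysis pattern reported here (a Pearson correlation followed by a simple regression) can be reproduced on hypothetical scale scores; the numbers below are purely illustrative and are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical deterministic-thinking scores vs. general-health distress scores
det = [10, 14, 9, 20, 17, 12, 22, 15]
ghq = [18, 25, 15, 33, 30, 20, 35, 24]
r = pearson_r(det, ghq)
slope, intercept = np.polyfit(det, ghq, 1)   # regression of distress on thinking score
```

A positive r and slope on such data would correspond to the direction of the reported relationship; significance testing (the P value) would additionally require the sampling distribution of r.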
International Nuclear Information System (INIS)
Delincee, H.
1991-01-01
The purpose of this session is to discuss the various possibilities for detecting modifications in DNA after irradiation and whether these changes can be utilized as an indicator for the irradiation treatment of foods. The requirement to be fulfilled is that the method be able to distinguish irradiated food without the presence of a control sample; thus the measured response after irradiation must be large enough to exceed background levels from other treatments. Much work has been performed on the effects of radiation on DNA, particularly due to its importance in radiation biology. The main lesions of DNA as a result of irradiation are base damage, damage of the sugar moiety, single strand and double strand breaks. Crosslinking between bases also occurs, e.g. production of thymine dimers, or between DNA and protein. A valuable review on how to utilize these DNA changes for detection purposes has already appeared. Tables 1, 2 and 3 list the proposed methods of detecting changes in irradiated DNA, some identified products as examples for a possible irradiation indicator, in the case of immunoassay the substance used as antigen, and some selected literature references. In this short review, it is not intended to provide a complete literature survey
International Nuclear Information System (INIS)
Takahashi, Akihito.
1994-01-01
A Pt wire electrode is supported from the periphery relative to a Pd electrode by way of a polyethylene or teflon plate in heavy water, and electrolysis is applied while varying the conditions successively in a sawtooth fashion at an initial stage; after the elapse of about one week, a pulse current is supplied to promote the nuclear reaction and to generate excess heat greater than the charged electric power. That is, a small amount of neutron emission increases and the electrolytic cell temperature rises as the electrolysis conditions are varied successively in the sawtooth fashion at the initial stage. In addition, when the pulse electric current is supplied after the elapse of about one week, the electrolytic cell temperature is abnormally elevated, so that the promotion of the nuclear reaction phenomenon and the generation of excess heat greater than the charged electric power are recognized. A way to control the power level and time fluctuation of cold fusion is thus attained, thereby contributing to the development of a further method for generating excess heat as desired. In addition, it contributes to the development of a method of obtaining such excess heat as a new energy source. (N.H.)
Research methods in information
Pickard, Alison Jane
2013-01-01
The long-awaited 2nd edition of this best-selling research methods handbook is fully updated and includes brand new coverage of online research methods and techniques, mixed methodology and qualitative analysis. There is an entire chapter contributed by Professor Julie McLeod, Sue Childs and Elizabeth Lomas focusing on research data management, applying evidence from the recent JISC funded 'DATUM' project. The first to focus entirely on the needs of the information and communications community, it guides the would-be researcher through the variety of possibilities open to them under the heading "research" and provides students with the confidence to embark on their dissertations. The focus here is on the 'doing' and although the philosophy and theory of research is explored to provide context, this is essentially a practical exploration of the whole research process with each chapter fully supported by examples and exercises tried and tested over a whole teaching career. The book will take readers through eac...
Spent fuel reprocessing method
International Nuclear Information System (INIS)
Shoji, Hirokazu; Mizuguchi, Koji; Kobayashi, Tsuguyuki.
1996-01-01
Spent oxide fuels containing oxides of uranium and transuranium elements are dismantled and sheared, then the oxide fuels are reduced into metals of uranium and transuranium elements in a molten salt, with or without mechanical removal of coatings. The reduced metals of uranium and transuranium elements and the molten salts are subjected to phase separation. From the metals of uranium and transuranium elements subjected to phase separation, uranium is separated to a solid cathode and transuranium elements are separated to a cadmium cathode by an electrolytic method. Molten salts deposited together with uranium on the solid cathode, and uranium and transuranium elements deposited on the cadmium cathode, are distilled to remove the deposited molten salts and cadmium. As a result, TRU oxides (solid) such as UO2 and PuO2 in spent fuels can be reduced to U and TRU by a high temperature metallurgical method not using an aqueous solution, to separate them in the form of metal from other ingredients, and further, metal fuels can be obtained through an injection molding step depending on the purpose. (N.H.)
Neutron source multiplication method
International Nuclear Information System (INIS)
Clayton, E.D.
1985-01-01
Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k-eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k-eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k-eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k-eff in the far subcritical system
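The inverse-multiplication curves described above rest on the relation M = 1/(1 - k-eff); in the idealised case where k-eff grows linearly with the fissile loading, extrapolating 1/M to zero predicts the critical point (a toy model with a hypothetical critical mass, ignoring the modal and spectrum effects the abstract warns about):

```python
import numpy as np

m_crit = 50.0                                  # hypothetical critical mass (kg)
masses = np.array([10.0, 20.0, 30.0, 40.0])    # subcritical loading steps
keff = masses / m_crit                         # idealised linear reactivity model
inv_M = 1.0 - keff                             # M = 1/(1 - k_eff)  =>  1/M = 1 - k_eff
a, b = np.polyfit(masses, inv_M, 1)            # straight-line fit of 1/M vs. mass
m_crit_est = -b / a                            # 1/M -> 0 at the estimated critical mass
```

In real far-subcritical measurements 1/M is rarely this linear, which is exactly why the abstract cautions against over-interpreting k-eff inferred from such extrapolations.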
Magnesium fluoride recovery method
International Nuclear Information System (INIS)
Gay, R.L.; McKenzie, D.E.
1989-01-01
A method of obtaining magnesium fluoride substantially free from radioactive uranium from a slag formed in the production of metallic uranium by the reduction of depleted uranium tetrafluoride with metallic magnesium in a retort, wherein the slag contains the free metals magnesium and uranium and also oxides and fluorides of these metals, the slag having a radioactivity level of at least about 7,000 pCi/gm. The method comprises the steps of: grinding the slag to a median particle size of about 200 microns; contacting the ground slag in a reaction zone with an acid having a strength of from about 0.5 to 1.5 N for a time of from about 4 to about 20 hours in the presence of a catalytic amount of iron; removing the liquid product; treating the particulate solid product; and repeating the last two steps at least one more time to produce a solid residue consisting essentially of magnesium fluoride substantially free of uranium and having a residual radioactivity level of less than about 1000 pCi/gm
Method of pancreas scintigraphy
International Nuclear Information System (INIS)
Michele, E.; Schmidt, H.A.E.
1976-01-01
Scintigraphy of the pancreas is important because of a lack of simple internal and x-ray pancreas diagnostic examination methods that are non-burdening to the patient yet provide sufficient evidence. We conceived a double-isotope subtraction method aimed at widespread application; financially, it should be within the range even of smaller nuclear medicine departments. A scanner is combined with double impulse processing and a subtraction unit (Picker Dualscanner) and an adapted x-ray unit with the x-ray tube aimed at the scan field. Commercial Se-75 selenomethionine is used for pancreas imaging. Tc-99m colloidal sulphur is used as a liver indicator. After oral application of a barium meal, an x-ray is taken of the gastro-intestinal tract, so as to be able to delineate the pancreas from other epigastric organs also able to accumulate methionine. The subtraction photoscan is then inscribed on this pre-exposed film without any shift of the patient. It is also possible to use two parallel films (x-ray/photoscan) and then to superimpose them
International Nuclear Information System (INIS)
Saller, H.A.; Hodge, E.S.; Paprocki, S.J.; Dayton, R.W.
1987-01-01
A method of making a fuel-containing structure for nuclear reactors is described comprising providing an assembly comprising fuel units consisting of a core plate containing thermal-neutron-fissionable material, sheets of cladding metal on its bottom and top surfaces, the cladding sheets being of greater width and length than the core plates whereby recesses are formed at the ends and sides of the core plate, and end pieces and first side pieces of cladding metal of the same thickness as the core plate positioned in the recesses. The assembly further comprises second side pieces of cladding metal engaging the cladding sheets so as to space the fuel units from one another, and filler plates of an acid-dissolvable nonresilient material whose melting point is above 2000°F, arranged between a pair of the second side pieces and the cladding plates of two adjacent fuel units. The filler plates have the same thickness as the second side pieces. The method further comprises enclosing the entire assembly in an envelope; evacuating the interior of the entire assembly through the envelope; applying inert gas under a pressure of about 10,000 psi to the outside of the envelope while at the same time heating the assembly to a temperature above the flow point of the cladding metal but below the melting point of any material of the assembly; slowly cooling the assembly to room temperature; removing the envelope; and dissolving the filler plates without attacking the cladding metal
Terahertz composite imaging method
Institute of Scientific and Technical Information of China (English)
QIAO Xiaoli; REN Jiaojiao; ZHANG Dandan; CAO Guohua; LI Lijuan; ZHANG Xinming
2017-01-01
In order to improve the imaging quality of terahertz(THz) spectroscopy, Terahertz Composite Imaging Method(TCIM) is proposed. The traditional methods of improving THz spectroscopy image quality are mainly from the aspects of de-noising and image enhancement. TCIM breaks through this limitation. A set of images, reconstructed in a single data collection, can be utilized to construct two kinds of composite images. One algorithm, called Function Superposition Imaging Algorithm(FSIA), is to construct a new gray image utilizing multiple gray images through a certain function. The features of the Region Of Interest (ROI) are more obvious after operating, and it has capability of merging ROIs in multiple images. The other, called Multi-characteristics Pseudo-color Imaging Algorithm(McPcIA), is to construct a pseudo-color image by combining multiple reconstructed gray images in a single data collection. The features of ROI are enhanced by color differences. Two algorithms can not only improve the contrast of ROIs, but also increase the amount of information resulting in analysis convenience. The experimental results show that TCIM is a simple and effective tool for THz spectroscopy image analysis.
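Under the naming in the abstract, and with NumPy arrays standing in for the reconstructed gray images, the two composite constructions might be sketched as follows (the pixelwise maximum is chosen here as one possible superposition function; the actual functions used by FSIA are not specified in the abstract):

```python
import numpy as np

def function_superposition(images, func=np.max):
    """FSIA-style fusion: combine co-registered gray images into one via a
    pixelwise function, merging regions of interest across the stack."""
    return func(np.stack(images), axis=0)

def pseudo_color(r_img, g_img, b_img):
    """McPcIA-style fusion: map three gray images to RGB channels so that
    differences between reconstructions appear as colour differences."""
    def norm(a):
        a = np.asarray(a, float)
        span = a.max() - a.min()
        return (a - a.min()) / (span if span else 1.0)
    return np.stack([norm(r_img), norm(g_img), norm(b_img)], axis=-1)

imgs = [np.array([[0.1, 0.9], [0.4, 0.2]]), np.array([[0.5, 0.3], [0.6, 0.1]])]
fused = function_superposition(imgs)           # elementwise maximum of the stack
rgb = pseudo_color(imgs[0], imgs[1], fused)    # shape (2, 2, 3), values in [0, 1]
```

Because both constructions reuse images from a single data collection, they add analysis convenience without extra measurement time, which is the practical point the abstract makes.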
Chemical decontamination method
International Nuclear Information System (INIS)
Nishiwaki, Hitoshi.
1996-01-01
Metal wastes contaminated by radioactive materials are contained in a rotational decontamination vessel, and the metal wastes are rotated therein while being in contact with a small amount of a decontamination liquid comprising a mineral acid. As the mineral acid, a mixed acid of nitric acid, hydrochloric acid and hydrofluoric acid is preferably used. Alternatively, chemical decontamination can also be conducted by charging an acid-resistant stirring medium into the rotational decontamination vessel. The surface of the metal wastes is uniformly covered by the small amount of decontamination liquid to dissolve the surface layer. In addition, the heat of dissolution generated in this case is accumulated in the inside of the rotational decontamination vessel and the temperature is elevated with no particular heating, thereby enabling an excellent decontamination effect to be obtained, substantially at the same level as in the case of heating the liquid to 70°C in a conventional immersion decontamination method. Further, although contact areas between the metal wastes and the immersion vessel are difficult to decontaminate in the immersion decontamination method, all areas can be dissolved uniformly in the present invention. (T.M.)
Directory of Open Access Journals (Sweden)
Wang Chia-Jean
2007-01-01
While 32 nm lithography technology is on the horizon for integrated circuit (IC) fabrication, matching the pace of miniaturization with optics has been hampered by the diffraction limit. However, development of nanoscale components and guiding methods is burgeoning through advances in fabrication techniques and materials processing. As waveguiding presents the fundamental issue and cornerstone for ultra-high-density photonic ICs, we examine the current state of methods in the field. Namely, plasmonic, metal slot and negative-dielectric-based waveguides, as well as a few sub-micrometer techniques such as nanoribbon, high-index-contrast and photonic crystal waveguides, are investigated in terms of construction, transmission, and limitations. Furthermore, we discuss in detail quantum dot (QD) arrays as a gain-enabled and flexible means to transmit energy through straight paths and sharp bends. Modeling, fabrication and test results are provided and show that the QD waveguide may be effective as an alternative means to transfer light on sub-diffraction dimensions.
Magnetohydrodynamic generation method
International Nuclear Information System (INIS)
Masai, Tadahisa; Ishibashi, Eiichi; Kojima, Akihiro.
1967-01-01
The present invention relates to a magnetohydrodynamic (MHD) generation method which increases the conductivity of the working gas and the generated energy. In the conventional open-cycle MHD generation method, the working fluid does not possess a favorable electric conductivity, since the collision cross section is large when combustion is carried out with excess oxygen. Combustion under oxygen-deficient conditions, on the other hand, is incapable of completely converting the generated energy, and the air preheater or boiler cannot adequately recover heat from the waste gas, resulting in damage and other economic disadvantages. In the present invention, the fuel-rich combustion gas from the combustor is supplied to the generator as the working gas, and air or fully oxidized air is added to it for reheating. Although the incompletely burned gas is not adequate for heat recovery as it is, unburned losses are eliminated by burning it again, which raises the gas temperature and the heat recovery rate. Furthermore, a diffuser is mounted at the rear of the generator to decrease the gas velocity. Thus, even when the preheated fully oxidized air or ordinary air is absorbed directly, the boiler is free from damage caused by combustion delay or impulsive forces. (M. Ishida)
International Nuclear Information System (INIS)
Suzuki, Toshio; Hida, Kazuki; Yoshioka, Ritsuo.
1990-01-01
In conventional reactor operation, either the enrichment of the fuels initially loaded in the core was made much lower than that of the fresh fuels to be loaded in the succeeding cycle, or the enrichment of all the initially loaded fuels was made identical with that of the fresh fuels. Since the initially loaded fuels are sometimes discharged at a low burnup after the completion of the first cycle, this method cannot be said to reduce the fuel cycle cost. As a means of solving this problem, at least two different kinds of initially loaded fuels are prepared. The enrichment of the highly enriched fuels is made identical with that of the fresh fuels, while the enrichment and number of the low-enriched fuels are left unchanged after the completion of the first cycle, and operation continues to the end of the second cycle. All of the low-enrichment fuels are then discharged after the completion of the second cycle and exchanged for fresh fuels. As a result, the burnup of the initially loaded fuels can be increased, improving the fuel economy. (I.S.)
Energy Technology Data Exchange (ETDEWEB)
Lyon, R K
1975-05-22
Isotopes of a gaseous compound can be separated by multiple infrared photon absorption followed by selective dissociation of the excited molecules through single-photon absorption of visible or UV radiation. The process involves three steps. First, the molecules to be separated are irradiated with a high-energy IR laser, whereby the molecules of the compound containing the lighter isotope are preferentially excited. They are then irradiated by a second laser with UV or visible light whose frequency brings the excited molecules into a form in which they can be separated from the non-excited molecules. The third step is the recovery of the products by known methods. For the IR irradiation, a power density of at least 10⁴ W/cm² per torr of gas pressure is necessary, with an irradiation time of 10⁻¹⁰ to 5×10⁻⁵ seconds, in the presence of a second gas at a partial pressure at least 5 times higher. The method may be used for UF₆, for which an example is given.
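The operating window stated in the abstract can be expressed as a quick numerical check. The thresholds are taken from the text; the function name and sample pressures are our own illustration:

```python
# Illustrative check of the operating window stated in the abstract
# (thresholds from the text; the function and sample values are ours).

MIN_POWER_DENSITY_PER_TORR = 1e4   # W/cm^2 per torr of process-gas pressure
PULSE_RANGE_S = (1e-10, 5e-5)      # allowed irradiation time, seconds
MIN_BUFFER_RATIO = 5.0             # buffer-gas / process-gas partial pressure

def ir_conditions_ok(power_density, pressure_torr, pulse_s, buffer_torr):
    """Return True if the IR-irradiation conditions satisfy the
    thresholds quoted in the abstract."""
    return (power_density >= MIN_POWER_DENSITY_PER_TORR * pressure_torr
            and PULSE_RANGE_S[0] <= pulse_s <= PULSE_RANGE_S[1]
            and buffer_torr >= MIN_BUFFER_RATIO * pressure_torr)

# A 10 torr process gas therefore needs at least 1e5 W/cm^2:
ok = ir_conditions_ok(2e5, 10.0, 1e-6, 60.0)   # True
bad = ir_conditions_ok(5e4, 10.0, 1e-6, 60.0)  # False: power too low
```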
International Nuclear Information System (INIS)
Ishida, Ryuichi; Hatanaka, Tatsuo.
1969-01-01
A friction welding method for forming a lattice-shaped base and tie plate supporter for fuel elements is disclosed in which a plate formed with a concavity along its edge is pressure welded to a rotating member such as a boss by longitudinally contacting the projecting surfaces remaining on either side of the concavity with the rotating member during the high speed rotation thereof in the presence of an inert gas. Since only the two projecting surfaces of the plate are fused by friction to the rotary member, heat expansion is absorbed by the concavity to prevent distortion; moreover, a two-point contact surface assures a stable fitting and promotes the construction of a rigid lattice in which a number of the abovementioned plates are friction welded between rotating members to form any desired complex arrangement. The inert gas serves to protect the material quality of the contacting surfaces from air during the welding step. The present invention thus provides a method in which even Zircaloy may be friction welded in place of casting stainless steel in the construction of supporting lattices, thereby enhancing neutron economy. (K. J. Owens)
Earthquake Hazard Analysis Methods: A Review
Sari, A. M.; Fakhrurrozi, A.
2018-02-01
Earthquakes are among the natural disasters with the most significant impact in terms of risk and damage. Countries such as China, Japan, and Indonesia are located on active plate boundaries and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analysis of seismic zones and earthquake hazard micro-zonation, the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and remote sensing. In application, the effectiveness of each technique should be reviewed in advance. Considering time efficiency and data accuracy, remote sensing is used as a reference for assessing earthquake hazard accurately and quickly, since it requires only limited time and supports prompt decision-making shortly after a disaster. Exposed areas and areas potentially vulnerable to earthquake hazards can be analyzed easily using remote sensing. Technological developments in remote sensing such as GeoEye-1 provide added value and excellence in its use as a method for assessing earthquake risk and damage. The use of this technique is expected to inform disaster-management policy and to reduce the risk of natural disasters such as earthquakes in Indonesia.
Method Engineering: Engineering of Information Systems Development Methods and Tools
Brinkkemper, J.N.; Brinkkemper, Sjaak
1996-01-01
This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e. the configuration of a project approach that is tuned to the project at hand. A language and support tool for the engineering of situational methods are discussed.
Conjugated polymer nanoparticles, methods of using, and methods of making
Habuchi, Satoshi
2017-03-16
Embodiments of the present disclosure provide for conjugated polymer nanoparticle, method of making conjugated polymer nanoparticles, method of using conjugated polymer nanoparticle, polymers, and the like.
Conjugated polymer nanoparticles, methods of using, and methods of making
Habuchi, Satoshi; Piwonski, Hubert Marek; Michinobu, Tsuyoshi
2017-01-01
Embodiments of the present disclosure provide for conjugated polymer nanoparticle, method of making conjugated polymer nanoparticles, method of using conjugated polymer nanoparticle, polymers, and the like.
Gambill, W.R.; Greene, N.D.
1960-08-30
A method is given for increasing burnout heat fluxes under nucleate-boiling conditions in heat-exchanger tubes without incurring an increase in pumping-power requirements. The increase is achieved by utilizing a spinning flow having a rotational velocity sufficient to produce a centrifugal acceleration of at least 10,000 g at the tube wall. At this acceleration the heat-transfer rate at burnout is nearly twice the rate which can be achieved in a similar tube utilizing axial flow at the same pumping power. At higher accelerations the improvement over axial flow is greater, and heat fluxes in excess of 50×10⁶ Btu/hr·ft² can be achieved.
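The rotational velocity needed for a given wall acceleration follows from the centripetal relation a = v²/r. A minimal sketch, where the 5 mm tube radius is our illustrative assumption (the patent does not specify one):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def tangential_speed(accel_g, radius_m):
    """Tangential speed (m/s) that produces a centripetal acceleration
    of `accel_g` times gravity at radius `radius_m`: a = v^2 / r."""
    return math.sqrt(accel_g * G0 * radius_m)

# For the 10,000 g quoted in the abstract at the wall of a tube with
# an assumed 5 mm inner radius (the radius is our illustrative choice):
v = tangential_speed(10_000, 0.005)  # ~22.1 m/s
```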
Dixon, R.D.; Smith, F.M.; O'Leary, R.F.
1997-04-01
A method is provided for joining beryllium pieces which comprises: depositing aluminum alloy on at least one beryllium surface; contacting that beryllium surface with at least one other beryllium surface; and welding the aluminum alloy coated beryllium surfaces together. The aluminum alloy may be deposited on the beryllium using gas metal arc welding. The aluminum alloy coated beryllium surfaces may be subjected to elevated temperatures and pressures to reduce porosity before welding the pieces together. The aluminum alloy coated beryllium surfaces may be machined into a desired welding joint configuration before welding. The beryllium may be an alloy of beryllium or a beryllium compound. The aluminum alloy may comprise aluminum and silicon. 9 figs.
Mathematical methods in engineering
Machado, José
2014-01-01
This book presents a careful selection of the contributions presented at the Mathematical Methods in Engineering (MME10) International Symposium, held at the Polytechnic Institute of Coimbra- Engineering Institute of Coimbra (IPC/ISEC), Portugal, October 21-24, 2010. The volume discusses recent developments about theoretical and applied mathematics toward the solution of engineering problems, thus covering a wide range of topics, such as: Automatic Control, Autonomous Systems, Computer Science, Dynamical Systems and Control, Electronics, Finance and Economics, Fluid Mechanics and Heat Transfer, Fractional Mathematics, Fractional Transforms and Their Applications, Fuzzy Sets and Systems, Image and Signal Analysis, Image Processing, Mechanics, Mechatronics, Motor Control and Human Movement Analysis, Nonlinear Dynamics, Partial Differential Equations, Robotics, Acoustics, Vibration and Control, and Wavelets.
Radiofrequency attenuator and method
Warner, Benjamin P [Los Alamos, NM; McCleskey, T Mark [Los Alamos, NM; Burrell, Anthony K [Los Alamos, NM; Agrawal, Anoop [Tucson, AZ; Hall, Simon B [Palmerston North, NZ
2009-01-20
Radiofrequency attenuator and method. The attenuator includes a pair of transparent windows. A chamber between the windows is filled with molten salt. Preferred molten salts include quaternary ammonium cations and fluorine-containing anions such as tetrafluoroborate (BF₄⁻), hexafluorophosphate (PF₆⁻), hexafluoroarsenate (AsF₆⁻), trifluoromethylsulfonate (CF₃SO₃⁻), bis(trifluoromethylsulfonyl)imide ((CF₃SO₂)₂N⁻), bis(perfluoroethylsulfonyl)imide ((CF₃CF₂SO₂)₂N⁻) and tris(trifluoromethylsulfonyl)methide ((CF₃SO₂)₃C⁻). Radicals or radical cations may be added to or electrochemically generated in the molten salt to enhance the RF attenuation.
Tolle, Charles R [Idaho Falls, ID; Clark, Denis E [Idaho Falls, ID; Smartt, Herschel B [Idaho Falls, ID; Miller, Karen S [Idaho Falls, ID
2009-10-06
A material-forming tool and a method for forming a material are described, including a shank portion; a shoulder portion that releasably engages the shank portion; a pin that releasably engages the shoulder portion, wherein the pin defines a passageway; and a source of a material coupled in material-flowing relation relative to the pin. The material-forming tool is utilized in a methodology that includes providing a first material; providing a second material and placing it into contact with the first material; and locally plastically deforming the first material with the material-forming tool so as to mix the first and second materials together, forming a resulting material having characteristics different from the respective first and second materials.
International Nuclear Information System (INIS)
Utamura, Motoaki; Urata, Megumu.
1976-01-01
Object: To detect failed fuel elements in a reactor with high precision by measuring the radioactivity concentrations of more than one fission-product nuclide (¹³¹I and ¹³²I, for example) contained in each sample of coolant from a fuel channel. Method: The radioactivity concentrations in the sampled coolant are obtained from gamma spectra measured by a pulse-height analyzer after cooling periods suited to the half-lives of the fission products to be measured. The first measurement, for ¹³²I, is made two hours after sampling, and the second, for ¹³¹I, is started one day after sampling. A fuel element showing high radioactivity concentrations for both ¹³¹I and ¹³²I can be identified with confidence as failed.
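Relating a delayed measurement back to the activity at sampling time is a simple exponential decay correction. The half-lives below are standard nuclide values; the function itself is our illustration, not part of the patent:

```python
import math

HALF_LIFE_H = {"I-131": 8.02 * 24, "I-132": 2.30}  # standard values, hours

def activity_at_sampling(measured, nuclide, delay_h):
    """Correct a measured activity back to sampling time:
    A0 = A_meas * exp(ln(2) * t / T_half)."""
    lam = math.log(2) / HALF_LIFE_H[nuclide]
    return measured * math.exp(lam * delay_h)

# After the 2 h cooling period prescribed for I-132, almost half of it
# has decayed, so the correction factor is close to 2:
a0 = activity_at_sampling(100.0, "I-132", 2.0)  # ~183
```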
Feldenkrais, Moshé
1981-01-01
Moshe Feldenkrais is known from the textbooks as a collaborator of Joliot-Curie, Langevin, and Kowarski, participating in the first nuclear fission experiments. During the war he went to Great Britain and worked on the development of submarine detection devices. From experimental physics, finally following a suggestion of Lew Kowarski, he turned his interest to neurophysiology and neuropsychology, studying the cybernetic organization between human body dynamics and the mind. He developed the method known as "Functional Integration" and "Awareness Through Movement". It has been applied with surprising results to post-traumatic rehabilitation, psychotherapy, re-education of the mentally or physically handicapped, and improvement of performance in sports, and can be used by anybody who wants to discover his natural grace of movement.
Interval methods: An introduction
DEFF Research Database (Denmark)
Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj
2006-01-01
This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems; however, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...
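The core idea of interval methods, carrying guaranteed lower and upper bounds through a computation instead of point estimates, can be sketched in a few lines. This is a toy illustration, not any specific library's API:

```python
class Interval:
    """A closed interval [lo, hi] with set-valued arithmetic:
    the result interval encloses every possible true result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds are the extremes of all endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# An input known only to lie in [1.9, 2.1] times one in [-1.0, 3.0]:
x = Interval(1.9, 2.1) * Interval(-1.0, 3.0)
# x spans roughly [-2.1, 6.3]; every true result is guaranteed inside.
```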
Baryons with functional methods
International Nuclear Information System (INIS)
Fischer, Christian S.
2017-01-01
We summarise recent results on the spectrum of ground-state and excited baryons and their form factors in the framework of functional methods. As an improvement upon similar approaches we explicitly take into account the underlying momentum-dependent dynamics of the quark-gluon interaction that leads to dynamical chiral symmetry breaking. For light octet and decuplet baryons we find a spectrum in very good agreement with experiment, including the level ordering between the positive- and negative-parity nucleon states. Comparing the three-body framework with the quark-diquark approximation, we do not find significant differences in the spectrum for those states that have been calculated in both frameworks. This situation is different in the electromagnetic form factor of the Δ, which may serve to distinguish both pictures by comparison with experiment and lattice QCD.
Modified risk evaluation method
International Nuclear Information System (INIS)
Udell, C.J.; Tilden, J.A.; Toyooka, R.T.
1993-08-01
The purpose of this paper is to provide a structured and cost-oriented process to determine risks associated with nuclear material and other security interests. Financial loss is a continuing concern for US Department of Energy contractors. In this paper risk is equated with uncertainty of cost impacts to material assets or human resources. The concept provides a method for assessing the effectiveness of an integrated protection system, which includes operations, safety, emergency preparedness, and safeguards and security, and is suitable for application to sabotage evaluations. The protection of assets is based on risk associated with cost impacts to assets and the potential for undesirable events. This allows managers to establish protection priorities in terms of the cost and the potential for the event, given the current level of protection.
International Nuclear Information System (INIS)
Steinberg, M.; Manowitz, B.; Waide, C.H.
1976-01-01
A method and apparatus are described for producing rockbolts in the roof of a subterranean cavity, in which two components of an ambient-temperature-curable resin system are premixed and then inserted into a bore hole. The mixture is permitted to polymerize in situ, and the hardened material is then cut off at the entrance to the hole, leaving a hardened portion for insertion into the next hole as a precursor. In a preferred embodiment, a flexible glass roving is employed to reinforce the material in the hole, and a metal tube is inserted to support the roving while it is fed into the hole and also to provide venting. The roving and tube are then cut off and left in the hole.
Method of controlling reactivity
International Nuclear Information System (INIS)
Tochihara, Hiroshi.
1982-01-01
Purpose: To improve the reactivity-control characteristics by artificially controlling the leakage of neutrons from a reactor and providing a controller for controlling the reactivity. Method: The reactor core is divided by several water gaps to increase neutron leakage and thereby reduce reactivity; a gas-filled control rod or a fuel assembly is inserted into a gap as required, coupling the entire core into one system, reducing the neutron leakage and increasing the reactivity. Reactor shutdown is conducted by the conventional control rods, and to maintain the critical state a boron-density-varying system is used in combination. Further, a control rod drive similar to the conventional one is used, enabling fast reactivity variation, and positive reactivity can be obtained by the insertion, thereby improving the reactivity-control characteristics. (Yoshihara, H.)
Method for radioactivity monitoring
Umbarger, C. John; Cowder, Leo R.
1976-10-26
The disclosure relates to a method for analyzing the uranium and/or thorium content of liquid effluents, preferably utilizing a sample-containing counting chamber. Basically, the 185.7-keV gamma rays following ²³⁵U alpha decay to ²³¹Th, which indicate ²³⁵U content, and a 63-keV gamma-ray doublet found in the nucleus of ²³⁴Pa, a granddaughter of ²³⁸U, are monitored, and their ratio is taken to derive the uranium content and isotopic enrichment ²³⁵U/(²³⁵U + ²³⁸U) in the liquid effluent. Thorium content is determined by monitoring the intensity of 238-keV gamma rays from the nucleus of ²¹²Bi in the decay chain of ²³²Th.
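With calibrated detection efficiencies for the two gamma signatures, the enrichment follows directly from the two count rates. The calibration constants below are hypothetical placeholders, not values from the disclosure:

```python
# Hypothetical calibration: counts per second per gram of each isotope
# for its gamma signature (185.7 keV for U-235, 63 keV doublet for U-238).
EFF_U235 = 12.0   # cps/g, placeholder value
EFF_U238 = 0.45   # cps/g, placeholder value

def enrichment(cps_185, cps_63):
    """Isotopic enrichment U-235 / (U-235 + U-238) by mass,
    from the two calibrated gamma count rates."""
    m235 = cps_185 / EFF_U235
    m238 = cps_63 / EFF_U238
    return m235 / (m235 + m238)

# Equal masses of both isotopes should give 50% enrichment:
e = enrichment(12.0, 0.45)  # 0.5
```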
International Nuclear Information System (INIS)
Saito, Toshiro.
1983-01-01
Purpose: To decrease cost and shorten working time by economizing on fueling neutron detectors and their components. Method: In-core drive tubes for the source range monitor (SRM) and intermediate range monitor (IRM) are disposed within the reactor core, and an SRM detector assembly is inserted into the IRM in-core drive tube nearest to the neutron source upon reactor fueling. The core reactivity is monitored by this SRM detector assembly. The SRM detector assembly inserted into the IRM drive tube is withdrawn once fuel has been charged up to the frame connecting the SRM; thereafter, the IRM detector assembly is inserted into the IRM drive tube and the SRM detector assembly into the SRM drive tube, respectively, for monitoring the reactor core. (Sekiya, K.)
Radioactive air sampling methods
Maiello, Mark L
2010-01-01
Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...
International Nuclear Information System (INIS)
Coppa, N.V.; Stewart, P.; Renzi, E.
1999-01-01
The present invention provides methods and apparatus for freeze drying in which a solution, which can be a radioactive salt dissolved in an acid, is frozen into a solid on vertical plates provided within a freeze-drying chamber. The solid is sublimated into vapor and condensed in a cold condenser positioned above the freeze-drying chamber and connected thereto by a conduit. The vertical positioning of the cold condenser relative to the freeze dryer helps prevent substances such as radioactive materials separated from the solution from contaminating the cold condenser. Additionally, the system can be charged with an inert gas to produce a down-rush of gas into the freeze-drying chamber, which further protects the cold condenser from such contamination.
International Nuclear Information System (INIS)
Nakaya, Iwao; Murakami, Tadashi; Miyake, Takafumi; Funakoshi, Toshio; Inagaki, Yuzo; Hashimoto, Yasuhide.
1985-01-01
Purpose: To convert radioactive wastes into their final storage form (artificial rock) in a short period of time. Method: Radioactive burnable wastes such as spent paper, cloth, oils and activated carbon are burnt to ash in a burning furnace, while radioactive liquid wastes such as boric acid wastes, exhausted cleaning water and decontamination liquid wastes are powdered in a drying or calcining furnace. These powders are combined with silicates such as white clay, silica and glass powder and a liquid alkali such as NaOH or Ca(OH)₂, and transferred to a solidifying vessel. The vessel is then set in a hydrothermal reactor, heated and pressurized, taken out after about 20 minutes, and tightly sealed. In this way, radioactive wastes are converted through hydrothermal reactions into a rock-like form that is stable for a long period of time, giving solidification products insoluble in water and with an extremely low leaching rate. (Ikeda, J.)
International Nuclear Information System (INIS)
Delano, M.A.
1991-01-01
This patent describes a method for combusting fuel and oxidant to achieve reduced formation of nitrogen oxides. It comprises: heating a combustion zone to a temperature at least equal to 1500°F; injecting into the heated combustion zone a stream of oxidant at a velocity within the range of 200 to 1070 feet per second; injecting into the combustion zone, spaced from the oxidant stream, a fuel stream at a velocity such that the ratio of oxidant-stream velocity to fuel-stream velocity does not exceed 20; aspirating combustion gases into the oxidant stream and thereafter intermixing the aspirated oxidant stream and fuel stream to form a combustible mixture; combusting the combustible mixture to produce combustion gases for the aspiration; and maintaining the fuel stream substantially free from contact with oxidant prior to the intermixture with aspirated oxidant.
Introduction to perturbation methods
Holmes, M
1995-01-01
This book is an introductory graduate text dealing with many of the perturbation methods currently used by applied mathematicians, scientists, and engineers. The author has based his book on a graduate course he has taught several times over the last ten years to students in applied mathematics, engineering sciences, and physics. The only prerequisite for the course is a background in differential equations. Each chapter begins with an introductory development involving ordinary differential equations. The book covers traditional topics, such as boundary layers and multiple scales. However, it also contains material arising from current research interest, including homogenization, slender body theory, symbolic computing, and discrete equations. One of the more important features of this book is contained in the exercises; many are derived from problems of up-to-date research and are from a wide range of application areas.
Chattamvelli, Rajan
2015-01-01
DATA MINING METHODS, Second Edition discusses both theoretical foundation and practical applications of datamining in a web field including banking, e-commerce, medicine, engineering and management. This book starts byintroducing data and information, basic data type, data category and applications of data mining. The second chapterbriefly reviews data visualization technology and importance in data mining. Fundamentals of probability and statisticsare discussed in chapter 3, and novel algorithm for sample covariants are derived. The next two chapters give an indepthand useful discussion of data warehousing and OLAP. Decision trees are clearly explained and a new tabularmethod for decision tree building is discussed. The chapter on association rules discusses popular algorithms andcompares various algorithms in summary table form. An interesting application of genetic algorithm is introduced inthe next chapter. Foundations of neural networks are built from scratch and the back propagation algorithm is derived...
Mintěl, Tomáš
2009-01-01
This master's thesis deals with the acceleration of interpolation methods using the GPU and the NVIDIA® CUDA™ architecture. The graphical output is represented by a demonstration application for transforming an image or video using a selected interpolation. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.
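Bilinear interpolation, a typical candidate for such GPU acceleration, reduces to a few multiply-adds per pixel. A minimal CPU reference version (our own sketch, unrelated to the thesis code) looks like this:

```python
def bilinear(img, y, x):
    """Sample a gray image (list of rows) at fractional coordinates
    (y, x) by bilinear interpolation of the four nearest neighbors."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    fy, fx = y - y0, x - x0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10],
       [20, 30]]
center = bilinear(img, 0.5, 0.5)  # 15.0: average of the four pixels
```

On a GPU, each output pixel of this loop-free kernel is independent, which is exactly why the thesis can execute the time-critical interpolation in parallel.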
Computational Methods in Medicine
Directory of Open Access Journals (Sweden)
Angel Garrido
2010-01-01
Artificial Intelligence requires Logic, but its classical version shows too many insufficiencies, so it is absolutely necessary to introduce more sophisticated tools, such as Fuzzy Logic, Modal Logic, Non-Monotonic Logic, and so on [2]. Among the things that AI needs to represent are Categories, Objects, Properties, Relations between objects, Situations, States, Time, Events, Causes and effects, Knowledge about knowledge, and so on. The problems in AI can be classified into two general types [3, 4]: Search Problems and Representation Problems. There exist different ways to reach this objective, so we have [3] Logics, Rules, Frames, Associative Nets, Scripts and so on, which are often interconnected. It will also be very useful, in dealing with problems of uncertainty and causality, to introduce Bayesian Networks and, in particular, a principal tool, the Essential Graph. We attempt here to show the scope of application of such versatile methods, currently fundamental in Medicine.
Method of electrostatic filtration
International Nuclear Information System (INIS)
Devienne, F.M.
1975-01-01
Electrostatic filtration of secondary ions of mass m in a given mass ratio to a primary ion of mass M, which has formed the secondary ions by dissociation, is carried out by a method which consists in forming a singly charged primary ion of the substance of molecular mass M and extracting the ion at a voltage V₁ with respect to ground. The primary ion crosses a potential barrier V₂, producing dissociation of the ion into at least two fragments, and the fragment ion of mass m is extracted at the voltage V₂. Filtration is carried out in an electrostatic analyzer through which only ions of energy eV″ are permitted to pass, and the filtered ions are detected. The mass m of the ions is such that M/m = (V₁ − V₂)/(V″ − V₂).
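The mass relation at the end of the abstract can be solved for the transmitted fragment mass. A quick numerical check, where the voltage settings are arbitrary illustrative values:

```python
def fragment_mass(M, V1, V2, Vpp):
    """Fragment mass m selected by the analyzer, from the relation
    M/m = (V1 - V2) / (V'' - V2)  =>  m = M * (V'' - V2) / (V1 - V2)."""
    return M * (Vpp - V2) / (V1 - V2)

# A primary ion of mass 100 u extracted at 10 kV, dissociating at a
# 2 kV barrier, with the analyzer set to pass 4 keV ions:
m = fragment_mass(100.0, 10_000.0, 2_000.0, 4_000.0)  # 25.0 u
```

The energy bookkeeping behind the relation: the fragment keeps the fraction m/M of the kinetic energy gained before the barrier, plus the energy eV₂ it regains afterward, and the analyzer passes it only when that total equals eV″.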
Lubrication method and apparatus
Energy Technology Data Exchange (ETDEWEB)
McCarty, R.S.
1988-05-03
In a combustion turbine engine comprising a bearing member journaling a rotatable component, and compressor means providing pressurized air, the method of providing liquid lubricant to the bearing member is described comprising the steps of: providing the liquid lubricant sealed within a collapsible and penetrable bladder member; enclosing the bladder member and lubricant within a substantially closed housing sealingly cooperating with the bladder member to define a pair of chambers; arranging a penetrating lance member in one of the pair of chambers in confronting relationship with the bladder member; providing communication of the pressurized air with the other of the pair of chambers to force the bladder member into impaled sealing relationship with the lance member; communicating the lubricant to the bearing member via the lance member; and utilizing the pressurized air within the other chamber to collapse the bladder member, simultaneously flowing the lubricant to the bearing member.
International Nuclear Information System (INIS)
Miyahara, Shuji.
1990-01-01
An ultrasonic generator and a liquid supply nozzle are opposed to an object to be ground, and a pump is started in this state to supply an organic solvent. Contaminants that adhere to the surface of the object and are difficult to remove by mechanical means alone can be loosened beforehand by the surface-active effect of the organic solvent, such as ethanol, prior to operating the ultrasonic generator. Subsequently, when the ultrasonic generator is operated, the loosened scale can be removed easily. Further, since the organic solvent can penetrate and provide its surface-active effect even in narrow portions of the surface that the tip of the ultrasonic generator cannot reach, the decontamination treatment can be applied to such narrow portions as well. (T.M.)
Development of partitioning method
International Nuclear Information System (INIS)
Kobayashi, Tsutomu; Shirahashi, Koichi; Kubota, Masumitsu
1989-11-01
Precipitation behavior of elements in high-level liquid waste (HLW) was studied using simulated liquid waste, in which the transuranic element group was precipitated and separated as oxalate from HLW generated by the reprocessing of spent nuclear fuel. The results showed that over 90% of the strontium and barium precipitated when oxalic acid was added directly to the HLW to precipitate the transuranic element group, and that the precipitated fractions of these elements were affected by molybdenum and/or zirconium. Therefore, a method of adding oxalic acid to the filtrate after first removing molybdenum and zirconium as a precipitate by denitrating the HLW was studied, and it was found that the precipitated fractions of strontium and barium could be suppressed to about 10%. Adding oxalic acid in the co-existence of ascorbic acid is effective for quantitative precipitation of neptunium in HLW. In this case, it was found that adding ascorbic acid had little influence on the precipitation behavior of the other elements except palladium. (author)
Microencapsulation system and method
Morrison, Dennis R. (Inventor)
2009-01-01
A microencapsulation apparatus is provided which is configured to form co-axial multi-lamellar microcapsules from materials discharged from first and second microsphere dispensers of the apparatus. A method of fabricating and processing microcapsules is also provided which includes forming distinct droplets comprising one or more materials and introducing the droplets directly into a solution bath to form a membrane around the droplets such that a plurality of microcapsules are formed. A microencapsulation system is provided which includes a microcapsule production unit, a fluidized passage for washing and harvesting microcapsules dispensed from the microcapsule production unit and a flow sensor for sizing and counting the microcapsules. In some embodiments, the microencapsulation system may further include a controller configured to simultaneously operate the microcapsule production unit, fluidized passage and flow sensor to process the microcapsules in a continuous manner.
Energy Technology Data Exchange (ETDEWEB)
Kellerman, Peter
2013-12-21
The Floating Silicon Method (FSM) project at Applied Materials (formerly Varian Semiconductor Equipment Associates) has been funded, in part, by the DOE under a “Photovoltaic Supply Chain and Cross Cutting Technologies” grant (number DE-EE0000595) for the past four years. The original intent of the project was to develop the FSM process from concept to a commercially viable tool. This new manufacturing equipment would support the photovoltaic industry in the following ways: eliminate kerf losses and the consumable costs associated with wafer sawing, allow optimal photovoltaic efficiency by producing high-quality silicon sheets, reduce the cost of assembling photovoltaic modules by creating large-area silicon cells which are free of micro-cracks, and serve as a drop-in replacement in existing high-efficiency cell production processes, thereby allowing rapid fan-out into the industry.
Theurich, Gordon R.
1976-01-01
1. In a method of separating isotopes in a high speed gas centrifuge wherein a vertically oriented cylindrical rotor bowl is adapted to rotate about its axis within an evacuated chamber, and wherein an annular molecular pump having an intake end and a discharge end encircles the uppermost portion of said rotor bowl, said molecular pump being attached along its periphery in a leak-tight manner to said evacuated chamber, and wherein end cap closure means are affixed to the upper end of said rotor bowl, and a process gas withdrawal and insertion system enters said bowl through said end cap closure means, said evacuated chamber, molecular pump and end cap defining an upper zone at the discharge end of said molecular pump, said evacuated chamber, molecular pump and rotor bowl defining a lower annular zone at the intake end of said molecular pump, a method for removing gases from said upper and lower zones during centrifuge operation with a minimum loss of process gas from said rotor bowl, comprising, in combination: continuously measuring the pressure in said upper zone, pumping gas from said lower zone from the time the pressure in said upper zone equals a first preselected value until the pressure in said upper zone is equal to a second preselected value, said first preselected value being greater than said second preselected value, and continuously pumping gas from said upper zone from the time the pressure in said upper zone equals a third preselected value until the pressure in said upper zone is equal to a fourth preselected value, said third preselected value being greater than said first, second and fourth preselected values.
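The claim describes a hysteresis-style control: the lower-zone pump runs from the time the upper-zone pressure reaches a first value until it falls to a second, and the upper-zone pump likewise between a third and a fourth value. A minimal sketch of that logic, where the threshold names (P1..P4) and all numeric values are invented for illustration; the claim only requires P1 > P2 and P3 greater than P1, P2 and P4:

```python
# Hysteresis sketch of the two-zone pumping logic described in the claim.
# Threshold values are assumptions for illustration only.

def update_pumps(p_upper, state):
    """Update pump on/off state from the measured upper-zone pressure."""
    P1, P2, P3, P4 = 1e-3, 1e-4, 5e-3, 5e-4  # illustrative pressures
    if p_upper >= P1:
        state["lower_pump"] = True    # start pumping the lower zone
    elif p_upper <= P2:
        state["lower_pump"] = False   # stop once pressure has recovered
    if p_upper >= P3:
        state["upper_pump"] = True    # start pumping the upper zone
    elif p_upper <= P4:
        state["upper_pump"] = False
    return state

state = {"lower_pump": False, "upper_pump": False}
state = update_pumps(2e-3, state)   # above P1 but below P3
print(state)  # {'lower_pump': True, 'upper_pump': False}
```

Because each pump turns on at a higher pressure than it turns off, the pumps are not toggled rapidly around a single set point, which is the usual reason for this kind of two-threshold scheme.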
Section for qualitative methods (Letter)
Todd, Z.; Madill, A.
2004-01-01
Qualitative research methods are increasingly used in all areas of psychology. We have proposed a new Section – the Qualitative Methods in Psychology Section – for anyone with an interest in using these research methods.
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N.
2004-01-01
Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression (AR) method by including non-linearity (the NAR method). As a starting point we chose the non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type
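A minimal sketch of the model-selection step described above: fit a linear AR(1) model and a polynomial NAR(1) model by least squares and compare their BIC values. The synthetic data, model orders, and the Gaussian-residual BIC formula are illustrative assumptions, not the authors' implementation:

```python
# Compare a linear AR(1) model x_t = a*x_{t-1} with a polynomial NAR(1)
# model x_t = a*x_{t-1} + b*x_{t-1}^2 via BIC, on synthetic AR(1) data.
import numpy as np

def fit_bic(X, y):
    """Least-squares fit; return (coeffs, BIC) assuming Gaussian residuals."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = X.shape
    rss = float(np.sum((y - X @ coef) ** 2))
    bic = n * np.log(rss / n) + k * np.log(n)
    return coef, bic

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):                      # synthetic linear AR(1) series
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)

prev, y = x[:-1], x[1:]
coef_ar, bic_ar = fit_bic(prev[:, None], y)                       # AR model
coef_nar, bic_nar = fit_bic(np.column_stack([prev, prev**2]), y)  # NAR model
print("AR preferred:", bic_ar < bic_nar)
```

Since the synthetic data are genuinely linear, the BIC penalty on the extra polynomial term should usually favor the AR model, mirroring the paper's conclusion for type I ELMs.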
Detection methods for irradiated food
International Nuclear Information System (INIS)
Stevenson, M.H.
1993-01-01
The plenary lecture gives a brief historical review of the development of methods for the detection of irradiated food and defines the demands on such methods. The methods described in detail are as follows: 1) Physical methods: as examples of luminescence methods, thermoluminescence and chemiluminescence are mentioned; ESR spectroscopy is discussed in detail by means of individual examples (crustaceans, fruits and vegetables, spices and herbs, nuts). 2) Chemical methods: examples given are methods that make use of radiation-induced alterations in lipids (formation of long-chain hydrocarbons, formation of 2-alkylcyclobutanones) and radiation-induced alterations in the DNA. 3) Microbiological methods. An extensive bibliography is appended. (VHE) [de
Methods and Technologies Branch (MTB)
The Methods and Technologies Branch focuses on methods to address epidemiologic data collection, study design and analysis, and to modify technological approaches to better understand cancer susceptibility.
Energy Technology Data Exchange (ETDEWEB)
Leote, Joao [Department of Neurosurgery, Hospital Garcia de Orta, Almada (Portugal); Institute of Biophysics and Biomedical Engineering, Faculty of Sciences of the University of Lisbon, Lisboa (Portugal); Nunes, Rita; Cerqueira, Luis; Ferreira, Hugo Alexandre [Institute of Biophysics and Biomedical Engineering, Faculty of Sciences of the University of Lisbon, Lisboa (Portugal)
2015-05-18
Recently, DKI-based tractography has been developed, showing improved crossing-fiber resolution in comparison to deterministic DTI-based tractography in healthy subjects. In this work, DTI and DKI-based tractography methods were compared regarding the assessment of the corticospinal tract in patients presenting space-occupying brain lesions near cortical motor areas. Nine patients (4 males), aged 23 to 62 years, with space-occupying brain lesions (e.g. tumors) were studied for pre-surgical planning using a 1.5T MRI scanner and a 12-channel head coil. In 5 patients diffusion data was acquired along 64 directions and in 4 patients along 32 directions, both with b-values of 0, 1000 and 2000 s/mm². Corticospinal tracts were estimated using deterministic DTI and DKI methods and also using probabilistic DTI. The superior cerebellar peduncles and the motor cortical areas, ipsilateral and contralateral to the lesions, were used as seed regions-of-interest for fiber tracking. Tract courses and volumes were documented and compared between methods. Results showed that it was possible to estimate fiber tracts using the deterministic DTI and DKI methods in 8/9 patients, and using the probabilistic DTI method in all patients. Overall, it was observed that DKI-based tractography showed more voluminous fiber tracts than deterministic DTI. The DKI method also showed curvilinear fibers, mainly above lesion margins, which were not visible with deterministic DTI in 5 patients. Similar tracts were observed when using probabilistic DTI in 3 of those patients. Results suggest that the DKI method contributes additional information about the corticospinal tract course in comparison with the DTI method, especially with subcortical lesions and near lesions’ margins. Therefore, this study suggests that DKI-based tractography could be useful in MRI and hybrid PET-MRI pre-surgical planning protocols for improved corticospinal tract evaluation.
Comparative law as method and the method of comparative law
Hage, J.C.; Adams, M.; Heirbaut, D.
2014-01-01
This article addresses both the justificatory role of comparative law within legal research (comparative law as method) and the method of comparative law itself. In this connection two questions will be answered: 1. Is comparative law a method, or a set of methods, for legal research? 2. Does
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided in this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual
The State of Deterministic Thinking among Mothers of Autistic Children
Directory of Open Access Journals (Sweden)
Mehrnoush Esbati
2011-10-01
Full Text Available Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavioral education on decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers of Tehran and whose children’s disorder had been diagnosed at least by a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavioral education decreased deterministic thinking among mothers of autistic children; it also decreased four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
Progress in nuclear well logging modeling using deterministic transport codes
International Nuclear Information System (INIS)
Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.
2002-01-01
Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time. Real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.
Simiu, Emil
2002-01-01
The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...
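For reference, the Melnikov function in its standard textbook form (not necessarily the book's own notation) for a planar system \(\dot{x} = f(x) + \epsilon g(x, t)\) with an unperturbed homoclinic orbit \(x_h(t)\) is

```latex
M(t_0) = \int_{-\infty}^{+\infty} f\bigl(x_h(t)\bigr) \wedge g\bigl(x_h(t),\, t + t_0\bigr)\, \mathrm{d}t
```

Simple zeros of \(M(t_0)\) indicate transverse intersections of the stable and unstable manifolds, and hence chaotic transitions; the extension described above replaces or augments the deterministic excitation \(g\) with stochastic terms.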
International Nuclear Information System (INIS)
Timothy, J.G.; Bybee, R.L.
1978-01-01
A detector array and method are described in which sets of electrode elements are provided. Each set consists of a number of linearly extending parallel electrodes. The sets of electrode elements are disposed at an angle (preferably orthogonal) with respect to one another so that the individual elements intersect and overlap the individual elements of the other sets. Electrical insulation is provided between the overlapping elements. The detector array is exposed to a source of charged particles, which in accordance with one embodiment comprise electrons derived from a microchannel array plate exposed to photons. Amplifier and discriminator means are provided for each individual electrode element. Detection means are provided to sense pulses on the individual electrode elements in the sets, with coincidence of pulses on intersecting electrode elements being indicative of charged particle impact at the intersection of those elements. Electronic readout means provide an indication of coincident events and of the location where the charged particle or particles impacted. Display means are provided for generating appropriate displays representative of the intensity and location of charged particles impacting on the detector array
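The readout principle above can be sketched in a few lines: an impact site is reported wherever a pulse on a "row" electrode coincides in time with a pulse on a "column" electrode. The electrode numbering and pulse sets below are invented for illustration:

```python
# Toy sketch of the crossed-electrode coincidence readout: report the
# (row, column) intersections where both electrodes pulsed in the same
# time window. Electrode indices are hypothetical.

def coincident_hits(row_pulses, col_pulses):
    """Return (row, col) intersections of electrodes that pulsed together."""
    return [(r, c) for r in sorted(row_pulses) for c in sorted(col_pulses)]

# One pulse on row electrode 3 coinciding with pulses on column
# electrodes 5 and 7 yields two candidate impact sites.
print(coincident_hits({3}, {5, 7}))  # [(3, 5), (3, 7)]
```

A real readout would also resolve ambiguities when several rows and columns fire at once, which this sketch does not attempt.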
Music acupuncture stimulation method.
Brătilă, F; Moldovan, C
2007-01-01
Harmonic Medicine is a model based on the theory that body rhythms, synchronized to an external rhythm applied for therapeutic purposes, can restore the energy balance in acupuncture channels and organs and a condition of well-being. The purpose of this work was to demonstrate the role played by harmonic sounds in the stimulation of the Lung (LU) Meridian (Shoutaiyin Feijing) and of the Kidney (KI) Meridian (Zushaoyin Shenjing). An original method was used that included: measurement and electronic sound stimulation of the meridian entry point, measurement of the meridian exit point, computer data processing, and bio-feedback adjustment of the music stimulation parameters. After data processing, it was found that the optimal sound stimulation frequency for the Lung Meridian lies between 122 Hz and 128 Hz, with an average of 124 Hz (87% of the subjects), and for the Kidney Meridian from 118 Hz to 121 Hz, with an average of 120 Hz (67% of the subjects). The acupuncture stimulation was more intense for female subjects (>7%) than for male ones. We preliminarily consider that an informational resonance phenomenon can develop between the acupuncture music stimulation frequency and the cellular dipole frequency, constituting a true "resonant frequency signature" of an acupoint. The harmonic generation and the electronic excitation or low-excitation status of an acupuncture point may be considered a resonance mechanism. By this kind of acupunctural stimulation, a symphony may act and play a healing role.
International Nuclear Information System (INIS)
1977-01-01
Methods are described for measuring catecholamine levels in human and animal body fluids and tissues using the catechol-O-methyl-transferase (COMT) radioassay. The assay involves incubating the biological sample with COMT and the tritiated methyl donor, S-adenosyl-L-methionine-(³H)methyl. The O-methylated (³H)epinephrine and/or norepinephrine are extracted and oxidised to vanillin-³H, which in turn is extracted and its radioactivity counted. When analysing dopamine levels the assay is extended by removing the vanillin-³H and raising the pH of the aqueous periodate phase, from which the O-methylated (³H)dopamine is extracted and counted. The assay may be modified depending on whether measurements of undifferentiated total endogenous catecholamine levels or differential analyses of the catecholamine levels are being performed. The sensitivity of the assay can be as low as 5 picograms for norepinephrine and epinephrine and 12 picograms for dopamine. The assembly of the essential components of the assay into a kit for use in laboratories is also described. (U.K.)
International Nuclear Information System (INIS)
Maeda, Katsuji.
1982-01-01
Purpose: To prevent stress corrosion cracking of stainless steels caused by hydrogen peroxide, by controlling the hydrogen peroxide concentration in the reactor water upon reactor start-up. Method: A heat exchanger equipped with a heat source for applying external heat is disposed in the coolant recycling system. Upon reactor start-up, the coolant is heated by the heat exchanger until it reaches a temperature at which hydrogen peroxide decomposes faster than it forms in the coolant, and nuclear heating is started only after this temperature is reached. The reactor water temperature is raised in this manner and, when it reaches 140°C, withdrawal of the control elements is started and the heat source of the heat exchanger is shut off simultaneously. In this way, spikes in the hydrogen peroxide concentration upon reactor start-up are suppressed, thereby decreasing stress corrosion cracking of stainless steels. (Horiuchi, T.)
Jones, E.M. Jr.
1985-03-12
A method is described for producing tertiary ethers from C4 or C5 streams containing isobutene and isoamylene, respectively, in a process wherein an acidic cation exchange resin is used as the catalyst and as a distillation structure in a distillation reactor column. The improvement is the operation of the catalytic distillation in two zones at different pressures: the first zone, containing the catalyst packing, is operated at a higher pressure, in the range of 100 to 200 psig in the case of C4 and 15 to 100 psig in the case of C5, which favors the etherification reaction; the second zone is a distillation operated at a lower pressure, in the range of 0 to 100 psig in the case of C4 and 0 to 15 psig in the case of C5, wherein a first overhead from the first zone is fractionated to remove a portion of the unreacted alcohol from the first overhead, to return a condensed portion containing said alcohol to the first zone, and to produce a second overhead having less alcohol than said first overhead. 3 figs.
International Nuclear Information System (INIS)
Osumi, Katsumi; Miki, Minoru.
1979-01-01
Purpose: To prevent stress corrosion cracking by decreasing the dissolved oxygen and hydrogen peroxide concentrations in the coolant within the reactor vessel during transient operation such as start-up or shutdown of BWR type reactors. Method: After the condenser has been evacuated, a deaeration operation is conducted with the main steam drain line open, as well as the main steam isolation valve and a by-pass valve in a turbine by-pass line connecting the main steam line and the condenser without passing through the turbine; the reactor is then started up by withdrawal of control rods after the dissolved oxygen concentration in the cooling water within the pressure vessel has decreased below a predetermined value. Nuclear heating is started after the reactor water has been raised to about 150°C by pump heating at the end of the deaeration operation, to prevent the hydrogen peroxide and oxygen concentrations in the reactor water from temporarily increasing immediately after start-up. The corrosive atmosphere in the reactor vessel can thus be moderated. (Horiuchi, T.)
Directory of Open Access Journals (Sweden)
M.H.R. Ghoreishy
2008-02-01
Full Text Available This research work is devoted to the footprint analysis of a steel-belted radial tyre (185/65R14) under vertical static load using the finite element method. Two models have been developed: in the first model the tread patterns were replaced by simple ribs, while the second model included the details of the tread blocks. Linear elastic and hyperelastic (Arruda-Boyce) material models were selected to describe the mechanical behavior of the reinforcing and rubbery parts, respectively. The above two finite element models of the tyre were analyzed under inflation pressure and vertical static loads. The second model (with detailed tread patterns) was analyzed with and without the effect of friction between the tread and the contact surface. At every stage of the analysis, the results were compared with experimental data to confirm the accuracy and applicability of the model. Results showed that neglecting the tread pattern design not only reduces the computational cost and effort, but also leaves the computed deformations essentially unchanged. However, more complicated variables, such as the shape and area of the footprint zone and the contact pressure, are affected considerably by the finite element model selected for the tread blocks. In addition, the inclusion of friction, even in the static state, changes these variables significantly.
Methods of channeling simulation
International Nuclear Information System (INIS)
Barrett, J.H.
1989-06-01
Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs
Methods for geochemical analysis
Baedecker, Philip A.
1987-01-01
The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.
International Nuclear Information System (INIS)
Nakajima, Takeshi
1988-01-01
Purpose: To minimize the power change due to the build-up of xenon and the change in power distribution after reaching rated power when using fresh fuel not requiring conditioning operation, thereby starting the nuclear reactor quickly and stably. Method: When control rods are inserted solely to compensate reactivity in a xenon-unsaturated state, such as upon start-up of the reactor, peaking is generated in the lower portion of the reactor core. Therefore, it is necessary to insert the control rods that additionally suppress the peaking in the lower portion of the core to a relatively shallow level. In view of the above, the control rods are divided into a first control rod group, inserted last in the rated power state, and a second control rod group comprising the others. The power is first raised to the rated level using an intermediate control rod pattern in which the ratio of the total withdrawal amounts of the first and second control rod groups is kept constant. Then the control rods are withdrawn stepwise, keeping this ratio constant, in accordance with the change in the accumulated amount of xenon. (Kamimura, M.)
Standing footprint diagnostic method
Fan, Y. F.; Fan, Y. B.; Li, Z. Y.; Newman, T.; Lv, C. S.; Fan, Y. Z.
2013-10-01
Center of pressure is commonly used to evaluate standing balance. Even though it is incomplete, no better evaluation method has been presented. We designed our experiment with three standing postures: standing with feet together, standing with feet shoulder width apart, and standing with feet slightly wider than shoulder width. Our platform-based pressure system collected the instantaneous plantar pressure (standing footprint). A physical quantity, the instantaneous standing footprint principal axis, was defined and used to construct an index to evaluate standing balance. Comparison between results from our newly established index and those from the center of pressure index in evaluating the stability of different standing postures revealed that the standing footprint principal axis index responds better to changes in standing posture than the existing one. Analysis indicated that changes in the relative position of the feet and in standing posture, to which the center of pressure responds insensitively, could be better detected by the standing footprint principal axis index. This suggests a wide application of the standing footprint principal axis index in evaluating standing balance.
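One common way to extract a principal axis from a pressure map is the dominant eigenvector of the pressure-weighted covariance of the contact-point coordinates. The abstract does not specify how its index is built, so the following is only a sketch of that generic construction, with invented contact points:

```python
# Sketch: principal axis of a plantar-pressure map as the dominant
# eigenvector of the pressure-weighted covariance of contact coordinates.
# The points and pressures below are invented example data.
import numpy as np

def principal_axis(points, pressures):
    """Return the unit vector of the pressure-weighted principal axis."""
    w = pressures / pressures.sum()
    mean = (points * w[:, None]).sum(axis=0)     # weighted centroid
    d = points - mean
    cov = (d * w[:, None]).T @ d                 # weighted covariance
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]              # dominant eigenvector

# Contact points spread mainly along y (the heel-to-toe direction):
pts = np.array([[0.0, 0.0], [0.1, 1.0], [-0.1, 2.0], [0.0, 3.0]])
axis = principal_axis(pts, np.ones(4))
print(axis)  # dominant direction lies close to the y axis (sign arbitrary)
```

The eigenvector's sign is arbitrary, so any index built on it should use only orientation, not direction.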
Remmel, Jeffrey; Shore, Richard; Sweedler, Moss; Progress in Computer Science and Applied Logic
1993-01-01
The twenty-six papers in this volume reflect the wide and still expanding range of Anil Nerode's work. A conference on Logical Methods was held in honor of Nerode's sixtieth birthday (4 June 1992) at the Mathematical Sciences Institute, Cornell University, 1-3 June 1992. Some of the conference papers are here, but others are from students, co-workers and other colleagues. The intention of the conference was to look forward, and to see the directions currently being pursued, in the development of work by, or with, Nerode. Here is a brief summary of the contents of this book. We give a retrospective view of Nerode's work. A number of specific areas are readily discerned: recursive equivalence types, recursive algebra and model theory, the theory of Turing degrees and r.e. sets, polynomial-time computability and computer science. Nerode began with automata theory and has also taken a keen interest in the history of mathematics. All these areas are represented. The one area missing is Nerode's applied mathematica...
International Nuclear Information System (INIS)
Oudenhoven, M.S.
1983-01-01
A method is disclosed of breaking rock from a free surface which uses hydrofracturing to induce rock failure. Initially, a hole is cut in the rock face to a depth suitable for spalling by a high pressure water jet drill. Next, at the bottom of this hole a thin circular slot is hydraulically cut into the rock. The slot's circular axis is cut parallel to the transverse axis of the hole and the slot is made larger than the hole diameter. Following this step, a high pressure packer, with a high pressure tube passing through its center, is inserted into the drill hole. This packer is placed near the bottom of the hole above the slot and inflated. A fluid, like water, under high pressure is pumped down the hole past the packer into the slotted area. This high pressure fluid initiates a tensile fracture in the rock at the circular periphery of the slot. Tension is induced in the rock at this peripheral location due to the small radius of curvature existing there. This circular tensile fracture propagates outward away from the drill hole and upward to the free rock surface. After the rock fragment is broken free, pressure is released from the packer and it is withdrawn from the hole letting the fragment drop. To advance through the rock, the process is continuously repeated with the high pressure fluid being applied to the slotted area over a very short time period