Diffusion-synthetic acceleration methods for discrete-ordinates problems
International Nuclear Information System (INIS)
Larsen, E.W.
1984-01-01
The diffusion-synthetic acceleration (DSA) method is an iterative procedure for obtaining numerical solutions of discrete-ordinates problems. The DSA method is operationally more complicated than the standard source-iteration (SI) method, but if encoded properly it converges much more rapidly, especially for problems with diffusion-like regions. In this article we describe the basic ideas behind the DSA method and give a (roughly chronological) review of its long development. We conclude with a discussion which covers additional topics, including some remaining open problems and the status of current efforts aimed at solving these problems.
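The speedup DSA provides over SI can be illustrated with the classic one-group Fourier analysis (continuous in space, isotropic scattering, total cross section normalized to 1). The sketch below is a generic illustration of that analysis, not code from the article; the grid of Fourier frequencies and the scattering ratio are arbitrary choices:

```python
import math

def si_symbol(c, lam):
    # A transport sweep attenuates a Fourier error mode exp(i*lam*x) by
    # f(lam) = atan(lam)/lam (one-group, isotropic scattering, sigma_t = 1).
    f = math.atan(lam) / lam if lam != 0.0 else 1.0
    return c * f

def dsa_symbol(c, lam):
    # DSA adds a diffusion-based correction with symbol
    # c*(c*f - 1) / (1 - c + lam^2/3); at lam = 0 the error is removed exactly.
    f = math.atan(lam) / lam if lam != 0.0 else 1.0
    return c * f + c * (c * f - 1.0) / (1.0 - c + lam * lam / 3.0)

c = 0.99  # scattering ratio near unity: the regime where SI is slowest
lams = [0.01 * k for k in range(1, 3001)]
rho_si = max(abs(si_symbol(c, lam)) for lam in lams)
rho_dsa = max(abs(dsa_symbol(c, lam)) for lam in lams)
print(rho_si, rho_dsa)  # SI stalls near c; DSA stays below about 0.2247*c
```

The flat (lam near 0) modes that make SI stall are exactly the ones the diffusion correction removes, which is why the DSA spectral radius is bounded well away from 1 even as c approaches unity.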
A transport synthetic acceleration method for transport iterations
International Nuclear Information System (INIS)
Ramone, G.L.; Adams, M.L.
1997-01-01
A family of transport synthetic acceleration (TSA) methods for iteratively solving within-group scattering problems is presented. A single iteration in these schemes consists of a transport sweep followed by a low-order calculation, which is itself a simplified transport problem. The method for isotropic-scattering problems in X-Y geometry is described. The Fourier analysis of a model problem for equations with no spatial discretization shows that a previously proposed TSA method is unstable in two dimensions but that the proposed modifications make it stable and rapidly convergent. The same procedure for discretized transport equations, using the step characteristic and two bilinear discontinuous methods, shows that discretization enhances TSA performance. A conjugate gradient algorithm for the low-order problem is described, a crude quadrature set for the low-order problem is proposed, and the number of low-order iterations per high-order sweep is limited to a relatively small value. These features lead to simple and efficient improvements to the method. TSA is tested on a series of problems, and a set of parameters is proposed for which the method behaves especially well. TSA achieves a substantial reduction in computational cost over source iteration, regardless of discretization parameters or material properties, and this reduction increases with the difficulty of the problem.
International Nuclear Information System (INIS)
Khattab, K.M.
1998-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. The method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented. (author). 9 refs., 2 tabs., 5 figs.
International Nuclear Information System (INIS)
Khattab, K.M.
1997-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented
Synthetic acceleration methods for linear transport problems with highly anisotropic scattering
International Nuclear Information System (INIS)
Khattab, K.M.
1989-01-01
One of the iterative methods used to solve the discretized transport equation is the Source Iteration (SI) method. The SI method converges very slowly for problems with optically thick regions and scattering ratios (σs/σt) near unity. The Diffusion-Synthetic Acceleration (DSA) method is one of the methods devised to improve the convergence rate of the SI method. The DSA method is a good tool for accelerating the SI method if the particles being transported are neutrons, because the scattering process for neutrons is not severely anisotropic. However, if the particles are charged particles (electrons), DSA becomes ineffective as an acceleration device because the scattering process is severely anisotropic. To improve the DSA algorithm for electron transport, the author approaches the problem in two different ways in this thesis. He develops the first approach by accelerating more angular moments (φ0, φ1, φ2, φ3, ...) than is done in DSA; he calls this approach the Modified PN Synthetic Acceleration (MPSA) method. In the second approach he modifies the definition of the transport sweep, using the physics of the scattering; he calls this approach the Modified Diffusion Synthetic Acceleration (MDSA) method. In general, he has developed, analyzed, and implemented the MPSA and MDSA methods in this thesis and has shown that for a high-order quadrature set and mesh widths of about 1.0 cm, they are each about 34 times faster (clock time) than the DSA method. Also, he has found that the MDSA spectral radius decreases as the mesh size increases. This makes the MDSA method a better choice for large spatial meshes.
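The slow SI convergence for scattering ratios near unity is easy to reproduce in a toy infinite-medium model, where the error shrinks by exactly a factor of c per sweep. The sketch below is illustrative only; the tolerance and source value are arbitrary:

```python
def si_iterations(c, tol=1e-6, q=1.0):
    # Infinite-medium model of source iteration: phi_{l+1} = c*phi_l + q.
    # The error shrinks by exactly a factor c per sweep, so the spectral
    # radius of the iteration is the scattering ratio c.
    exact = q / (1.0 - c)
    phi, n = 0.0, 0
    while abs(phi - exact) > tol * exact:
        phi = c * phi + q
        n += 1
    return n

# Roughly 20 sweeps at c = 0.5 versus well over a thousand at c = 0.99.
print(si_iterations(0.5), si_iterations(0.99))
```

The iteration count grows like ln(tol)/ln(c), which diverges as c approaches 1; this is the regime the synthetic acceleration methods in these records are designed to handle.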
Synthetic acceleration methods for linear transport problems with highly anisotropic scattering
International Nuclear Information System (INIS)
Khattab, K.M.; Larsen, E.W.
1992-01-01
The diffusion synthetic acceleration (DSA) algorithm effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analysis that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented. (author). 10 refs., 7 figs., 5 tabs
Synthetic acceleration methods for linear transport problems with highly anisotropic scattering
International Nuclear Information System (INIS)
Khattab, K.M.; Larsen, E.W.
1991-01-01
This paper reports that the diffusion synthetic acceleration (DSA) algorithm effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analyses that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented.
Diffusion-synthetic acceleration methods for the discrete-ordinates equations
International Nuclear Information System (INIS)
Larsen, E.W.
1983-01-01
The diffusion-synthetic acceleration (DSA) method is an iterative procedure for obtaining numerical solutions of discrete-ordinates problems. The DSA method is operationally more complicated than the standard source-iteration (SI) method, but if encoded properly it converges much more rapidly, especially for problems with diffusion-like regions. In this article we describe the basic ideas behind the DSA method and give a (roughly chronological) review of its long development. We conclude with a discussion which covers additional topics, including some remaining open problems and the status of current efforts aimed at solving these problems.
Krylov iterative methods and synthetic acceleration for transport in binary statistical media
International Nuclear Information System (INIS)
Fichtl, Erin D.; Warsa, James S.; Prinja, Anil K.
2009-01-01
In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Second, atomic mix synthetic acceleration is applied to the inner material iteration and S2 synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.
Effectiveness of various transport synthetic acceleration methods with and without GMRES
International Nuclear Information System (INIS)
Chang, J.H.; Adams, M.L.
2005-01-01
We explore the effectiveness of three types of transport synthetic acceleration (TSA) methods as stand-alone methods and as preconditioners within the GMRES Krylov solver. The three types are β-TSA, 'stretched' TSA, and 'stretched and filtered' (SF) TSA. We analyzed the effectiveness of these algorithms using Fourier mode analysis of model two-dimensional problems with periodic boundary conditions, including problems with alternating layers of different materials. The analyses revealed that both β-TSA and stretched TSA can diverge for fairly weak heterogeneities. Performance of SF TSA, even with the optimum filtering parameter, degrades with heterogeneity. However, with GMRES, all TSA methods are convergent. SF TSA with the optimum filtering parameter was the most effective method. Numerical results support our Fourier mode analysis. (authors)
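The key observation above, that a Krylov solver converges even where the corresponding stationary iteration diverges, can be demonstrated on a small linear system. The sketch below is a generic illustration: it uses a naive full-subspace Krylov projection in place of GMRES, and the matrix and right-hand side are made up:

```python
# 3x3 test system: A is well conditioned, but rho(I - A) > 1, so the plain
# Richardson iteration diverges while a Krylov projection (here a naive
# full-subspace least-squares solve; production codes use GMRES) still
# recovers the solution.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 3.0]]
b = [1.0, 2.0, 3.0]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def solve3(M, r):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(r)
    M = [row[:] for row in M]
    r = r[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        r[k], r[p] = r[p], r[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            r[i] -= m * r[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (r[i] - s) / M[i][i]
    return x

# Richardson: x <- x + (b - A x); diverges because rho(I - A) > 1.
x = [0.0, 0.0, 0.0]
for _ in range(10):
    Ax = matvec(A, x)
    x = [x[i] + b[i] - Ax[i] for i in range(3)]
rich_res = max(abs(b[i] - matvec(A, x)[i]) for i in range(3))

# Krylov projection: minimize ||A W y - b|| over the basis W = [b, Ab, A^2 b]
# via the normal equations (exact here, since the Krylov space fills R^3).
W = [b, matvec(A, b), matvec(A, matvec(A, b))]
AW = [matvec(A, w) for w in W]
G = [[sum(AW[i][k] * AW[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
rhs = [sum(AW[i][k] * b[k] for k in range(3)) for i in range(3)]
y = solve3(G, rhs)
xk = [sum(y[i] * W[i][k] for i in range(3)) for k in range(3)]
kry_res = max(abs(b[i] - matvec(A, xk)[i]) for i in range(3))
print(rich_res, kry_res)  # Richardson residual blows up; Krylov residual ~ 0
```

GMRES behaves the same way in principle: it minimizes the residual over the Krylov subspace, so a divergent TSA preconditioner can still yield a convergent overall scheme.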
International Nuclear Information System (INIS)
Coppa, G.G.M.; Ravetto, P.; Colombo, V.
1996-01-01
The present work concerns some aspects of the optimization of synthetic acceleration techniques in neutron transport. The importance of non-asymptotic convergence velocity as a theoretical means to characterize and optimize acceleration methods is discussed in detail for isotropic as well as highly anisotropic scattering cases; this shows the inaccuracy of results based only on the usual asymptotic analysis. A detailed study of convergence-velocity behaviour for space-discretized schemes and multidimensional problems is also presented. Finally, various kinds of theoretically evaluated convergence velocities are reported to study the effective behaviour of some modifications of the classic DSA technique recently proposed to counter its loss of effectiveness and optimize performance when dealing with highly anisotropic scattering; comparisons with results of already assessed DSA modification techniques are reported for various scattering cross-section configurations. (Author)
International Nuclear Information System (INIS)
Warsa, James S.; Wareing, Todd A.; Morel, Jim E.
2004-01-01
A loss in the effectiveness of diffusion synthetic acceleration (DSA) schemes has been observed with certain SN discretizations on two-dimensional Cartesian grids in the presence of material discontinuities. We will present more evidence supporting the conjecture that DSA effectiveness will degrade for multidimensional problems with discontinuous total cross sections, regardless of the particular physical configuration or spatial discretization. Fourier analysis and numerical experiments help us identify a set of representative problems for which established DSA schemes are ineffective, focusing on diffusive problems for which DSA is most needed. We consider a lumped, linear discontinuous spatial discretization of the SN transport equation on three-dimensional, unstructured tetrahedral meshes and look at a fully consistent and a 'partially consistent' DSA method for this discretization. The effectiveness of both methods is shown to degrade significantly. A Fourier analysis of the fully consistent DSA scheme in the limit of decreasing cell optical thickness supports the view that the DSA itself is failing when material discontinuities are present in a problem. We show that a Krylov iterative method, preconditioned with DSA, is an effective remedy that can be used to efficiently compute solutions for this class of problems. We show that as a preconditioner to the Krylov method, a partially consistent DSA method is more than adequate. In fact, it is preferable to a fully consistent method because the partially consistent method is based on a continuous finite element discretization of the diffusion equation that can be solved relatively easily. The Krylov method can be implemented in terms of the original SN source iteration coding with only slight modification. Results from numerical experiments show that replacing source iteration with a preconditioned Krylov method can efficiently solve problems that are virtually intractable with accelerated source iteration.
Transport synthetic acceleration with opposing reflecting boundary conditions
Energy Technology Data Exchange (ETDEWEB)
Zika, M R; Adams, M L
2000-02-01
The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
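The algebraic equivalence noted above, that source iteration is exactly a stationary Richardson iteration on the system (I - cM)φ = q, can be checked directly. In the sketch below, M is a small made-up contraction matrix standing in for the transport sweep operator; it is an illustration of the identity, not the paper's operators:

```python
# Source iteration:  phi_{l+1} = c*M*phi_l + q
# Richardson:        x_{l+1}   = x_l + (q - A*x_l)  with  A = I - c*M
# The two updates are algebraically identical, so the iterates coincide
# (up to floating-point rounding).
c = 0.9
M = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]  # stand-in sweep
q = [1.0, 0.0, 2.0]

def mv(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

phi = [0.0, 0.0, 0.0]   # source-iteration iterate
x = [0.0, 0.0, 0.0]     # Richardson iterate
for _ in range(200):
    phi = [c * v + q[i] for i, v in enumerate(mv(M, phi))]
    Ax = [x[i] - c * mv(M, x)[i] for i in range(3)]   # (I - c*M) x
    x = [x[i] + q[i] - Ax[i] for i in range(3)]
diff = max(abs(phi[i] - x[i]) for i in range(3))
print(diff)  # agreement to rounding error
```

A synthetic method then amounts to replacing the identity preconditioner in the Richardson update with an approximate inverse of I - cM, which is the low-order operator's role.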
S2 synthetic acceleration scheme for the one-dimensional Sn equations
International Nuclear Information System (INIS)
Lorence, L.J. Jr.; Larsen, E.W.; Morel, J.E.
1986-01-01
The authors have developed an S2 synthetic acceleration method for the one-dimensional Sn equations with linear-discontinuous (LD) spatial differencing, and implemented it in a new version of the ONETRAN code. As in the diffusion-synthetic acceleration (DSA) method of Morel, both the zeroth and first moments of the scattering source are accelerated. This is done by using the S2 equations with Gauss quadrature, rather than the diffusion equation, as the low-order operator in the synthetic acceleration scheme.
Synthetic seismic acceleration time-histories and their acceptance criteria
International Nuclear Information System (INIS)
Xu Hong
1996-01-01
In seismic dynamic response analysis of structures and equipment, time-history analysis is now widely used. The 3-D seismic acceleration time-histories or 3-D seismic displacement time-histories are required in 3-D seismic dynamic response analysis as the seismic excitation input data. Because of the lack of actual acceleration time-histories for the site where the structures or equipment are installed, the general practice is to use synthetic seismic acceleration time-histories, derived from the design seismic response spectra of the site, as the seismic excitation input data. However, from one specified design response spectrum, infinitely many acceleration time-histories can be derived, depending on the values of the input parameters. Not all the derived synthetic time-histories can be used as seismic excitation input data; only those which meet the acceptance criteria can be used. The factors (input parameters) which affect the time-history solution derived from a specified seismic response spectrum, and the acceptance criteria, are discussed.
Methods for preparing synthetic freshwaters.
Smith, E J; Davison, W; Hamilton-Taylor, J
2002-03-01
Synthetic solutions that emulate the major ion compositions of natural waters are useful in experiments aimed at understanding biogeochemical processes. Standard recipes exist for preparing synthetic analogues of seawater, with its relatively constant composition, but, due to the diversity of freshwaters, a range of compositions and recipes is required. Generic protocols are developed for preparing synthetic freshwaters of any desired composition. The major problems encountered in preparing hard and soft waters include dissolving sparingly soluble calcium carbonate, ensuring that the ionic components of each concentrated stock solution cannot form an insoluble salt, and dealing with the supersaturation of calcium carbonate in many hard waters. For acidic waters the poor solubility of aluminium salts requires attention. These problems are overcome by preparing concentrated stock solutions according to carefully designed reaction paths that were tested using a combination of experiment and equilibrium modeling. These stock solutions must then be added in a prescribed order to prepare a final solution that is brought into equilibrium with the atmosphere. Example calculations for preparing hard, soft, and acidic freshwater surrogates, with major ion compositions matching published analyses, are presented in a generalized fashion that should allow preparation of any synthetic freshwater according to its known analysis.
Transport synthetic acceleration for long-characteristics assembly-level transport problems
Energy Technology Data Exchange (ETDEWEB)
Zika, M R; Adams, M L
2000-02-01
The authors apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, the authors take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. The main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. The authors devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, they define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. They implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; they prove that the long-characteristics discretization yields an SPD matrix. They present results of the acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly.
Transport synthetic acceleration for long-characteristics assembly-level transport problems
International Nuclear Information System (INIS)
Zika, M.R.; Adams, M.L.
2000-01-01
The authors apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, the authors take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. The main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. The authors devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, they define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. They implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; they prove that the long-characteristics discretization yields an SPD matrix. They present results of the acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly
Transport Synthetic Acceleration for Long-Characteristics Assembly-Level Transport Problems
International Nuclear Information System (INIS)
Zika, Michael R.; Adams, Marvin L.
2000-01-01
We apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, we take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. Our main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. We devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, we define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. We implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; we prove that the long-characteristics discretization yields an SPD matrix. We present results of our acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly.
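The restriction and prolongation operations mentioned above transfer boundary data between the high- and low-order grids. A minimal generic sketch, assuming a simple 2:1 grid ratio for illustration (the paper's operators are specific to long-characteristics boundary angular data and are not reproduced here):

```python
def restrict(fine):
    # Restriction: average each pair of fine-grid values into one coarse value.
    return [(fine[2 * i] + fine[2 * i + 1]) / 2.0 for i in range(len(fine) // 2)]

def prolong(coarse):
    # Prolongation: inject each coarse value back to its two fine-grid points.
    out = []
    for v in coarse:
        out.extend([v, v])
    return out

fine = [1.0, 3.0, 2.0, 6.0]
coarse = restrict(fine)
print(coarse, prolong(coarse))  # [2.0, 4.0] [2.0, 2.0, 4.0, 4.0]
```

Note that prolongation after restriction preserves the grid total, the discrete analogue of conserving the quantity being transferred between grids.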
Transport synthetic acceleration scheme for multi-dimensional neutron transport problems
Energy Technology Data Exchange (ETDEWEB)
Modak, R S; Kumar, Vinod; Menon, S.V.G. [Theoretical Physics Div., Bhabha Atomic Research Centre, Mumbai (India)]; Gupta, Anurag [Reactor Physics Design Div., Bhabha Atomic Research Centre, Mumbai (India)]
2005-09-15
The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI; the most prominent is the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA) which is much more robust and easy to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
Transport synthetic acceleration scheme for multi-dimensional neutron transport problems
International Nuclear Information System (INIS)
Modak, R.S.; Vinod Kumar; Menon, S.V.G.; Gupta, Anurag
2005-09-01
The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI; the most prominent is the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA) which is much more robust and easy to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
Synthetic Self-Healing Methods
Energy Technology Data Exchange (ETDEWEB)
Bello, Mollie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-06-02
Given enough time, pressure, temperature fluctuation, and stress, any material will fail. Currently, synthesized materials make up a large part of our everyday lives and are used in a number of important applications such as space travel, underwater devices, precise instrumentation, transportation, and infrastructure. Structural failure of these materials can lead to expensive and dangerous consequences. In an attempt to prolong the life spans of specific materials and reduce the effort put into repairing them, biologically inspired self-healing systems have been extensively investigated. The current review explores recent advances in three methods of synthetic self-healing: capsule-based, vascular, and intrinsic. Ideally, self-healing materials require no human intervention to promote healing, are capable of surviving all the steps of polymer processing, and heal the same location repeatedly. Only the vascular method satisfies all of these ideals.
The Source Equivalence Acceleration Method
International Nuclear Information System (INIS)
Everson, Matthew S.; Forget, Benoit
2015-01-01
Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long-sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine-group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine-group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher-order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse-group and fine-group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361-group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power
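The building block that recondensation-type methods refine is flux-weighted group condensation: collapsing fine-group cross sections so the coarse group preserves the total reaction rate. The sketch below illustrates that generic step, not SEAM itself, and uses made-up data:

```python
# Flux-weighted condensation of fine-group cross sections onto one coarse
# group, chosen so that sigma_coarse * phi_coarse equals the summed
# fine-group reaction rate.  The numbers are illustrative, not from the paper.
sigma_fine = [12.0, 4.5, 1.8, 0.9]   # fine-group cross sections
phi_fine = [0.1, 0.6, 1.2, 2.1]      # fine-group scalar fluxes

phi_coarse = sum(phi_fine)
sigma_coarse = sum(s * p for s, p in zip(sigma_fine, phi_fine)) / phi_coarse

rate_fine = sum(s * p for s, p in zip(sigma_fine, phi_fine))
rate_coarse = sigma_coarse * phi_coarse
print(sigma_coarse, rate_fine, rate_coarse)  # reaction rate is preserved
```

The difficulty the abstract alludes to is that the weighting flux is the unknown fine-group solution, which is why recondensation iterates between the coarse- and fine-group problems.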
Algebraic collapsing acceleration of the characteristics method with anisotropic scattering
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.; Roy, R.
2004-01-01
In this paper, the characteristics solvers implemented in the lattice code Dragon are extended to allow a complete anisotropic treatment of the collision operator. An efficient synthetic acceleration method, called Algebraic Collapsing Acceleration (ACA), is presented. Tests show that this method can substantially speed up the convergence of scattering source iterations. The effect of boundary conditions, either specular or white reflections, on anisotropic scattering lattice-cell problems is also considered. (author)
Proposed guidelines for synthetic accelerogram generation methods
International Nuclear Information System (INIS)
Shaw, D.E.; Rizzo, P.C.; Shukla, D.K.
1975-01-01
With the advent of high-speed digital computers and discrete structural analysis techniques, it has become attractive to use synthetically generated accelerograms as input in the seismic design and analysis of structures. Several procedures are currently available which generate accelerograms matching a given design response spectrum while paying little attention to other properties of seismic accelerograms. This paper studies currently available artificial time-history generation techniques from the standpoint of various properties of seismic time histories: 1. Response spectra; 2. Peak ground acceleration; 3. Total duration; 4. Time-dependent enveloping functions defining the rise time to strong motion, the duration of significant shaking, and the decay of the significant-shaking portion of the seismic record; 5. Fourier amplitude and phase spectra; 6. Ground motion parameters; 7. Apparent frequency; with the aim of providing guidelines for the time-history parameters based on historic strong-motion seismic records. (Auth.)
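The sum-of-sinusoids construction that such guidelines evaluate can be sketched in a few lines: a stationary sum of sines with random phases, shaped by a rise/strong-motion/decay envelope. The envelope shape, frequency grid, and flat amplitudes below are illustrative assumptions, not the paper's values, and the response-spectrum matching iteration (which would adjust the amplitudes) is omitted.

```python
import math
import random

def envelope(t, rise=2.0, strong=8.0, decay_rate=0.4):
    """Rise / strong-motion / decay shaping function (illustrative shape)."""
    if t < rise:
        return (t / rise) ** 2          # parabolic rise to full intensity
    if t < rise + strong:
        return 1.0                      # significant-shaking plateau
    return math.exp(-decay_rate * (t - rise - strong))  # exponential decay

def accelerogram(duration=20.0, dt=0.01, seed=0):
    rng = random.Random(seed)
    freqs = [0.5 + 0.5 * k for k in range(40)]           # 0.5 .. 20 Hz grid
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    amps = [1.0] * len(freqs)  # flat; spectrum matching would iterate on these
    n = int(round(duration / dt))
    ts = [i * dt for i in range(n)]
    acc = [envelope(t) * sum(a * math.sin(2.0 * math.pi * f * t + p)
                             for a, f, p in zip(amps, freqs, phases))
           for t in ts]
    return ts, acc

ts, acc = accelerogram()
```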
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
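The fission-matrix idea in the thesis above can be illustrated with a toy calculation. The 2x2 matrix and its entries below are hypothetical; the sketch shows only why a high dominance ratio slows fission source convergence, not the thesis's actual acceleration scheme.

```python
def power_iteration(H, tol=1e-10, max_it=100000):
    """Plain power iteration on a fission matrix H; returns (k_eff, iterations).

    H[i][j] = expected fission neutrons produced in region i per fission
    neutron born in region j; k_eff is the dominant eigenvalue of H.
    """
    n = len(H)
    s = [1.0] + [0.0] * (n - 1)        # deliberately poor initial source
    k = 0.0
    for it in range(max_it):
        t = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(t)                     # source is normalized to sum to 1
        s_new = [x / k for x in t]
        if max(abs(a - b) for a, b in zip(s_new, s)) < tol:
            return k, it + 1
        s = s_new
    return k, max_it

# Hypothetical 2x2 fission matrix for two loosely coupled regions:
# eigenvalues are 1.0 and 0.9, i.e. k_eff = 1.0 with dominance ratio 0.9.
H = [[0.95, 0.05],
     [0.05, 0.95]]
k_eff, iters = power_iteration(H)
# The source error decays like 0.9**n, so convergence takes on the order of
# log(tol)/log(0.9) ~ 200 iterations; a fission-matrix eigensolve (or a
# diffusion-based correction like FDSA) avoids paying a transport sweep or
# Monte Carlo cycle for each of them.
```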
Effectiveness of a consistently formulated diffusion-synthetic acceleration differencing approach
International Nuclear Information System (INIS)
Khalil, H.
1988-01-01
A consistently formulated differencing approach is applied to the diffusion-synthetic acceleration of discrete ordinates calculations based on various spatial differencing schemes. The diffusion 'coupling' equations derived for each scheme are contrasted to conventional coupling relations and are shown to permit derivation of either point- or box-centered diffusion difference equations. The resulting difference equations are shown to be mathematically equivalent, in slab geometry, to equations derived by applying Larsen's four-step procedure to the S_2 equations. Fourier stability analysis of the acceleration method applied to slab model problems is used to demonstrate that, for any S_n differencing scheme (a) the upper bound on the spectral radius of the method occurs in the fine-mesh limit and equals that of the spatially continuous case (0.22466), and (b) the spectral radius decreases with increasing mesh size to an asymptotic value <0.13135. This model problem performance is somewhat superior to that of Larsen's approach, for which the spectral radius is bounded by 0.25 in the wide-mesh limit. Numerical results of multidimensional, heterogeneous, scattering-dominated problems are also presented to demonstrate the rapid convergence of accelerated discrete ordinates calculations using various spatial differencing schemes
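The spectral radius 0.22466 quoted in this abstract comes from the standard continuous slab-geometry Fourier analysis, which is easy to reproduce numerically. A minimal sketch using the textbook iteration error modes (not Khalil's discretized analysis): source iteration attenuates a Fourier mode of frequency L by c*atan(L)/L, and the P1-diffusion synthetic correction changes this to c*[(1+L^2/3)*atan(L)/L - 1]/(1+L^2/3-c); the spectral radius is the supremum over L.

```python
import math

def omega_si(lam, c):
    """Error-mode attenuation of source iteration at Fourier frequency lam."""
    return c * math.atan(lam) / lam

def omega_dsa(lam, c):
    """Attenuation after the diffusion-synthetic (P1 diffusion) correction."""
    d = 1.0 + lam * lam / 3.0
    return c * (d * math.atan(lam) / lam - 1.0) / (d - c)

c = 1.0                                         # scattering ratio, worst case
lams = [0.001 * i for i in range(1, 100001)]    # frequencies 0.001 .. 100
rho_si = max(omega_si(l, c) for l in lams)      # approaches c = 1: SI stalls
rho_dsa = max(omega_dsa(l, c) for l in lams)    # about 0.2247, as quoted
```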
Fourier analysis of a new P1 synthetic acceleration for Sn transport equations
International Nuclear Information System (INIS)
Turcksin, B.; Ragusa, J. C.
2010-10-01
In this work, a new P1 synthetic acceleration scheme (P1SA) is derived for the S_N transport equation, and its convergence properties are analyzed by means of a Fourier analysis. The Fourier analysis is carried out for both the continuous (i.e., not spatially discretized) S_N equations and a linear discontinuous FEM discretization. The continuous analysis shows that the scheme is unstable when the anisotropy is important (mean scattering cosine μ̄ > 0.5). However, the discrete analysis shows that when cells are large in comparison to the mean free path, the spectral radius decreases and the acceleration scheme becomes effective, even for highly anisotropic scattering. In charged-particle transport, scattering is highly anisotropic and mean free paths are very small, so this scheme could be of interest there. To use the P1SA when cells are small and anisotropy is important, the scheme is modified by altering the update of the accelerated flux or by using K transport sweeps before each application of P1SA. The modified update scheme performs well as long as μ̄ < 0.9; beyond this it is unstable. The multiple-transport-sweeps scheme converges for arbitrary μ̄, but its spectral radius increases when scattering is isotropic; as anisotropy increases, the frequency of use of the acceleration scheme needs to be decreased. Even if the P1SA is used less often, the spectral radius is significantly smaller than that of a method that does not use it for high anisotropy (μ̄ ≥ 0.5). It is interesting to notice that using P1SA every two iterations gives the same spectral radius as the update method when μ̄ ≥ 0.5, but it is much less efficient when μ̄ < 0.5. (Author)
Setterbo, Jacob J; Garcia, Tanya C; Campbell, Ian P; Reese, Jennifer L; Morgan, Jessica M; Kim, Sun Y; Hubbard, Mont; Stover, Susan M
2009-10-01
To compare hoof acceleration and ground reaction force (GRF) data among dirt, synthetic, and turf surfaces in Thoroughbred racehorses. 3 healthy Thoroughbred racehorses. Forelimb hoof accelerations and GRFs were measured with an accelerometer and a dynamometric horseshoe during trot and canter on dirt, synthetic, and turf track surfaces at a racecourse. Maxima, minima, temporal components, and a measure of vibration were extracted from the data. Acceleration and GRF variables were compared statistically among surfaces. The synthetic surface often had the lowest peak accelerations, mean vibration, and peak GRFs. Peak acceleration during hoof landing was significantly smaller for the synthetic surface (mean ± SE, 28.5g ± 2.9g) than for the turf surface (42.9g ± 3.8g). Hoof vibrations during hoof landing were lower for the synthetic surface than for the dirt and turf surfaces. Peak GRF for the synthetic surface (11.5 ± 0.4 N/kg) was 83% and 71% of those for the dirt (13.8 ± 0.3 N/kg) and turf (16.1 ± 0.7 N/kg) surfaces, respectively. The relatively low hoof accelerations, vibrations, and peak GRFs associated with the synthetic surface evaluated in the present study indicated that synthetic surfaces have potential for injury reduction in Thoroughbred racehorses. However, because of the unique material properties and different nature of individual dirt, synthetic, and turf racetrack surfaces, extending the results of this study to encompass all track surfaces should be done with caution.
International Nuclear Information System (INIS)
Rosa, M.; Warsa, J. S.; Chang, J. H.
2007-01-01
A Fourier analysis is conducted in two-dimensional (2D) Cartesian geometry for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. The results for the un-accelerated algorithm show that convergence of PBJ can degrade, leading in particular to stagnation of GMRES(m) in problems containing optically thin sub-domains. The results for the accelerated algorithm indicate that TSA can be used to efficiently precondition an iterative method in the optically thin case when implemented in the 'modified' version MTSA, in which only the scattering in the low order equations is reduced by some non-negative factor β<1. (authors)
New Synthetic Methods for Hypericum Natural Products
Energy Technology Data Exchange (ETDEWEB)
Jeon, Insik [Iowa State Univ., Ames, IA (United States)
2006-01-01
Organic chemistry has served as a solid foundation for interdisciplinary research areas, such as molecular biology and medicinal chemistry. An understanding of the biological activities and structural elucidations of natural products can lead to the development of clinically valuable therapeutic options. The advancements of modern synthetic methodologies allow for more elaborate and concise natural product syntheses. The theme of this study centers on the synthesis of natural products with particularly challenging structures and interesting biological activities. The synthetic expertise developed here will be applicable to analog syntheses and to other research problems.
Lasers and new methods of particle acceleration
International Nuclear Information System (INIS)
Parsa, Z.
1998-02-01
There has been great progress in the development of high-power laser technology. Harnessing its potential for particle accelerators is a challenge, and of great interest for the development of future high-energy colliders. The author discusses some of the advances and new methods of acceleration, including plasma-based accelerators. The exponential increase in the sophistication and power of all aspects of accelerator development and operation has been remarkable. This success has been driven by the inherent interest in gaining a new and deeper understanding of the universe around us. Given the limitations of conventional technology, it may not be possible to meet the requirements of future accelerators, with their demands for ever higher energies and luminosities. It is believed that with existing technology one can build a linear collider with about 1 TeV center-of-mass energy. However, it would be very difficult (or impossible) to build linear colliders with energies much above one or two TeV without a new method of acceleration. Laser-driven high-gradient accelerators are becoming more realistic and are expected to provide an alternative (more compact and more economical) to conventional accelerators in the future. The author discusses some of the new methods of particle acceleration, including laser- and particle-beam-driven plasma-based accelerators, and near- and far-field accelerators. He also discusses the enhanced IFEL (Inverse Free Electron Laser) and NAIBEA (Nonlinear Amplification of Inverse-Beamstrahlung Electron Acceleration) schemes, laser-driven photo-injectors, and the high energy physics requirements
Discontinuous diffusion synthetic acceleration for Sn transport on 2D arbitrary polygonal meshes
International Nuclear Information System (INIS)
Turcksin, Bruno; Ragusa, Jean C.
2014-01-01
In this paper, a Diffusion Synthetic Acceleration (DSA) technique applied to the S n radiation transport equation is developed using Piece-Wise Linear Discontinuous (PWLD) finite elements on arbitrary polygonal grids. The discretization of the DSA equations employs an Interior Penalty technique, as is classically done for the stabilization of the diffusion equation using discontinuous finite element approximations. The penalty method yields a system of linear equations that is Symmetric Positive Definite (SPD). Thus, solution techniques such as Preconditioned Conjugate Gradient (PCG) can be effectively employed. Algebraic MultiGrid (AMG) and Symmetric Gauss–Seidel (SGS) are employed as conjugate gradient preconditioners for the DSA system. AMG is shown to be significantly more efficient than SGS. Fourier analyses are carried out and we show that this discontinuous finite element DSA scheme is always stable and effective at reducing the spectral radius for iterative transport solves, even for grids with high-aspect ratio cells. Numerical results are presented for different grid types: quadrilateral, hexagonal, and polygonal grids as well as grids with local mesh adaptivity
Cathodic Delamination Accelerated Life Test Method
National Research Council Canada - National Science Library
Ramotowski, Thomas S
2007-01-01
A method for conducting an accelerated life test of a polymer coated metallic sample includes placing the sample below the water surface in a test tank containing water and an oxygen containing gas...
Accelerated Test Method for Corrosion Protective Coatings
National Aeronautics and Space Administration — This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as...
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the S_N transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent
International Nuclear Information System (INIS)
Wareing, T.A.
1993-01-01
New methods are presented for diffusion-synthetic acceleration of the S_N equations in slab and x-y geometries with the corner balance spatial differencing scheme. With the standard diffusion-synthetic acceleration method, the discretized diffusion problem is derived from the discretized S_N problem to ensure stability through consistent differencing. The major difference between our new methods and standard diffusion-synthetic acceleration is that the discretized diffusion problem is derived from a discretization of the P_1 equations, independently of the discretized S_N problem. We present theoretical and numerical results to show that these new methods are unconditionally efficient in slab and x-y geometries with rectangular spatial meshes and isotropic scattering. (orig.)
Cyclotron method for heavy ion acceleration
International Nuclear Information System (INIS)
Gikal, B.N.; Gul'bekyan, G.G.; Kutner, V.B.; Oganesyan, R.Ts.
1984-01-01
Studies of heavy-ion beams over a wide range of masses (up to uranium) and energies reveal substantial potential for the solution of both fundamental scientific and significant economic problems. A cyclotron method for heavy-ion acceleration is considered, and the development of low- and medium-energy heavy-ion accelerators is reviewed. The design of a complex comprising two isochronous cyclotrons, planned for construction at the JINR, is described. The cyclotron complex includes the U-400 and U-400M cyclotrons and is intended for the acceleration of both 35-20 MeV/nucleon superheavy ions such as Xe-U and 120 MeV/nucleon light ions. Certain systems of the accelerators are described. Prospects for the development of the U-400 and U-400M are presented
International Nuclear Information System (INIS)
Rosa, M.; Warsa, J. S.; Chang, J. H.
2006-01-01
A Fourier analysis is conducted for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. Both 'traditional' TSA (TTSA) and a 'modified' TSA (MTSA), in which only the scattering in the low-order equations is reduced by some non-negative factor β < 1, are considered. The results for the un-accelerated algorithm show that convergence of the PBJ algorithm can degrade. The PBJ algorithm with TTSA can be effective provided the β parameter is properly tuned for a given scattering ratio c, but is potentially unstable. Compared to TTSA, MTSA is less sensitive to the choice of β, more effective for the same computational effort, and unconditionally stable. (authors)
Moments method in the theory of accelerators
International Nuclear Information System (INIS)
Perel'shtejn, Eh.A.
1984-01-01
The moments method is widely used for the solution of various physical and computational problems in the theory of accelerators, magnetic optics, and the dynamics of high-current beams. Techniques using moments of the second order (mean-square characteristics of charged-particle beams) are shown to be the most developed. The moments method is convenient, and sometimes the only technique applicable, for computerized problems on the optimization of accelerating structures, beam transport channels, matching systems, and other systems with account taken of the beam space charge
International Nuclear Information System (INIS)
Alonso-Vargas, G.
1991-01-01
A computer program has been developed which uses a diffusion synthetic acceleration technique based on analytical schemes. Analytical schemes were used in both the diffusion equation and the transport equation, allowing a substantial saving in the number of iterations required by the source iteration method to obtain k_eff. The program, ASD (Synthetic Diffusion Acceleration), was written in FORTRAN and can be executed on a personal computer with a hard disk and mathematical co-processor. The program is unlimited as to the number of regions and energy groups. The k_eff results obtained by the ASD program are almost completely concordant with those obtained using the ANISN-PC code for the various analytical-type problems in this work. The ASD program allowed an approximate solution of the neutron transport equation to be obtained with a relatively low number of inner iterations and good precision. Among its applications would be the direct determination of the axial neutron flux distribution in a fuel assembly, as well as the computation of the effective multiplication factor. (Author)
Acceleration techniques for the discrete ordinate method
International Nuclear Information System (INIS)
Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas
2013-01-01
In this paper we analyze several acceleration techniques for the discrete ordinate method with matrix exponential and the small-angle modification of the radiative transfer equation. These techniques include the left-eigenvector-matrix approach for computing the inverse of the right eigenvector matrix, the telescoping technique, and the method of false discrete ordinates. The numerical simulations have shown that, on average, the relative speedups of the left-eigenvector-matrix approach and the telescoping technique are about 15% and 30%, respectively. -- Highlights: ► We presented the left eigenvector matrix approach. ► We analyzed the method of false discrete ordinate. ► The telescoping technique is applied for the matrix operator method. ► The considered techniques accelerate the computations by about 20% on average.
Validated method for the detection and quantitation of synthetic ...
African Journals Online (AJOL)
These methods were applied to postmortem cases from the Johannesburg Forensic Pathology Services Medicolegal Laboratory (FPS-MLL) to assess the prevalence of these synthetic cannabinoids amongst the local postmortem population. Urine samples were extracted utilizing a solid phase extraction (SPE) method, ...
Spectral analysis of an algebraic collapsing acceleration for the characteristics method
International Nuclear Information System (INIS)
Le Tellier, R.; Hebert, A.
2005-01-01
A spectral analysis of a diffusion synthetic acceleration called Algebraic Collapsing Acceleration (ACA) was carried out in the context of the characteristics method to solve the neutron transport equation. Two analyses were performed in order to assess the ACA performance: both a standard Fourier analysis in a periodic, infinite slab geometry and a direct spectral analysis for a finite slab geometry were investigated. In order to evaluate its performance, ACA was compared with two competing techniques used to accelerate the convergence of the characteristics method, the Self-Collision Re-balancing technique and the Asymptotic Synthetic Acceleration. In the restricted framework of 1-dimensional slab geometries, we conclude that ACA offers a good compromise between the reduction of the spectral radius of the iterative matrix and the resources needed to construct, store, and solve the corrective system. A comparison on a monoenergetic 2-dimensional benchmark was performed and tends to confirm these conclusions. (authors)
Evolutionary optimization methods for accelerator design
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
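As a flavor of the evolutionary search described above, here is a minimal (mu+lambda) evolution strategy applied to a test function. This is not GATool: the population size, mutation scale, and decay rate are illustrative assumptions.

```python
import random

def evolve(f, dim, pop_size=20, children=80, sigma=0.3, decay=0.98,
           gens=200, seed=1):
    """Minimal (mu+lambda) evolution strategy minimizing f over R^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        # Each child is a Gaussian perturbation of a randomly chosen parent.
        offspring = [[x + rng.gauss(0.0, sigma) for x in rng.choice(pop)]
                     for _ in range(children)]
        # Elitist selection: parents compete with their children.
        pop = sorted(pop + offspring, key=f)[:pop_size]
        sigma *= decay                 # slowly anneal the mutation scale
    return pop[0]

# Smoke test on the sphere function; its global minimum is the origin.
best = evolve(lambda x: sum(v * v for v in x), dim=3)
```

Note the modest requirements on the objective function that the abstract mentions: `f` is only ever evaluated, never differentiated.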
Benefits of EMU Participation : Estimates using the Synthetic Control Method
Verstegen, Loes; van Groezen, Bas; Meijdam, Lex
2017-01-01
This paper investigates quantitatively the benefits from participation in the Economic and Monetary Union for individual Euro area countries. Using the synthetic control method, we estimate how real GDP per capita would have developed for the EMU member states if those countries had not joined the EMU.
International Nuclear Information System (INIS)
Brown, P.; Chang, B.
1998-01-01
The linear Boltzmann transport equation (BTE) is an integro-differential equation arising in deterministic models of neutral and charged particle transport. In slab (one-dimensional Cartesian) geometry and certain higher-dimensional cases, Diffusion Synthetic Acceleration (DSA) is known to be an effective algorithm for the iterative solution of the discretized BTE. Fourier and asymptotic analyses have been applied to various idealizations (e.g., problems on infinite domains with constant coefficients) to obtain sharp bounds on the convergence rate of DSA in such cases. While DSA has been shown to be a highly effective acceleration (or preconditioning) technique in one-dimensional problems, it has been observed to be less effective in higher dimensions. This is due in part to the expense of solving the related diffusion linear system. We investigate here the effectiveness of a parallel semicoarsening multigrid (SMG) solution approach to DSA preconditioning in several three dimensional problems. In particular, we consider the algorithmic and implementation scalability of a parallel SMG-DSA preconditioner on several types of test problems
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied to biobricks datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated to discriminate biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks could be discriminated and inappropriately labeled biobricks could be determined, which could help to assess crowdsourcing-based synthetic biology databases' quality, and make biobricks selection.
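The normalized edit distance that the paper plugs into Isomap and Laplacian Eigenmaps can be sketched directly. Normalizing the Levenshtein distance by the longer sequence is one common convention and is an assumption here; the paper's exact normalization may differ.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

def normalized_edit_distance(a, b):
    """Scale to [0, 1] by the longer sequence so lengths are comparable."""
    if not a and not b:
        return 0.0
    return edit_distance(a, b) / max(len(a), len(b))
```

With such a pairwise distance in hand, Isomap or Laplacian Eigenmaps only needs the resulting distance matrix, not vector-space coordinates, which is why the enhancement makes them applicable to sequence data.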
Acceleration methods and models in Sn calculations
International Nuclear Information System (INIS)
Sbaffoni, M.M.; Abbate, M.J.
1984-01-01
In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The commonly used models for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (a two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es
Method for accelerated leaching of solidified waste
International Nuclear Information System (INIS)
Fuhrmann, M.; Heiser, J.H.; Pietrzak, R.F.; Franz, E.M.; Colombo, P.
1990-11-01
An accelerated leach test method has been developed to determine the maximum leachability of solidified waste. The approach we have taken is to use a semi-dynamic leach test; that is, the leachant is sampled and replaced periodically. Parameters such as temperature, leachant volume, and specimen size are used to obtain releases that are accelerated relative to other standard leach tests and to the leaching of full-scale waste forms. The data obtained with this test can be used to model releases from waste forms, or to extrapolate from laboratory-scale to full-scale waste forms if diffusion is the dominant leaching mechanism. Diffusion can be confirmed as the leaching mechanism by using a computerized mathematical model for diffusion from a finite cylinder. We have written a computer program containing several models including diffusion to accompany this test. The program and a Users' Guide that gives screen-by-screen instructions on the use of the program are available from the authors. 14 refs., 4 figs., 1 tab
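The diffusion mechanism mentioned above is commonly checked by plotting cumulative fraction leached against the square root of time. A hedged sketch with illustrative numbers rather than the report's data, using the short-time semi-infinite approximation instead of the authors' finite-cylinder model:

```python
import math

def cfl_diffusion(De, t, S_over_V):
    """Cumulative fraction leached at time t (s), semi-infinite diffusion:
    CFL = 2*(S/V)*sqrt(De*t/pi)."""
    return 2.0 * S_over_V * math.sqrt(De * t / math.pi)

def fit_De(times, cfls, S_over_V):
    """Effective diffusivity from the slope of CFL versus sqrt(t)
    (least squares through the origin)."""
    xs = [math.sqrt(t) for t in times]
    slope = sum(x * y for x, y in zip(xs, cfls)) / sum(x * x for x in xs)
    return math.pi * (slope / (2.0 * S_over_V)) ** 2

S_over_V = 1.2        # cylinder surface-to-volume ratio, 1/cm (illustrative)
De_true = 1e-9        # effective diffusivity, cm^2/s (illustrative)
times = [3600.0 * h for h in (2, 5, 24, 48, 120)]   # sampling schedule, s
cfls = [cfl_diffusion(De_true, t, S_over_V) for t in times]
De_fit = fit_De(times, cfls, S_over_V)  # recovers De_true on noiseless data
```

A straight CFL-versus-sqrt(t) line is the usual evidence that diffusion dominates, which is the condition the test method gives for extrapolating from laboratory-scale to full-scale waste forms.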
Directory of Open Access Journals (Sweden)
Anna Maria Manferdini
2010-06-01
Full Text Available Traditionally materials have been associated with a series of physical properties that can be used as inputs to production and manufacturing. Recently we witnessed an interest in materials considered not only as ‘true matter’, but also as new breeds where geometry, texture, tooling and finish are able to provoke new sensations when they are applied to a substance. These artificial materials can be described as synthetic because they are the outcome of various qualities that are not necessarily true to the original matter, but they are the combination of two or more parts, whether by design or by natural processes. The aim of this paper is to investigate the potential of architectural surfaces to produce effects through the invention of new breeds of artificial matter, using micro-scale details derived from Nature as an inspiration.
International Nuclear Information System (INIS)
Cho, Nam Zin; Park, Chang Je
2001-01-01
An additive angular-dependent re-balance (AADR) factor acceleration method is described to accelerate the source iteration of discrete ordinates transport calculations. The formulation of the AADR method follows that of the angular-dependent re-balance (ADR) method in that the re-balance factor is defined only on the cell interface and in that the low-order equation is derived by integrating the transport equation (high-order equation) over angular subspaces; the re-balance factor, however, is applied additively. While the AADR method is similar to boundary projection acceleration and alpha-weighted linear acceleration, it is more general and has distinct features. The method is easily extendible to DPN and low-order SN re-balancing, and it does not require consistent discretizations between the high- and low-order equations, as diffusion synthetic acceleration does. We find by Fourier analysis and numerical results that the AADR method with a chosen form of weighting functions is unconditionally stable and very effective, and that there exists an optimal weighting parameter leading to the smallest spectral radius. The AADR acceleration method is simple to implement and uses a physically based weighting function with an optimal parameter, giving a spectral radius of ρ < 0.1865, compared to ρ < 0.2247 for DSA. Application of the AADR acceleration method with the LMB scheme to a test problem shows encouraging results.
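The need for acceleration can be seen in a toy model of unaccelerated source iteration: in an infinite medium the scalar-flux iterate contracts by the scattering ratio c per sweep, so the sweep count blows up as c approaches 1. A hedged sketch (this is the baseline that schemes like AADR improve on, not the AADR algorithm itself):

```python
def source_iteration(c, q, tol=1e-8, max_it=100000):
    """Toy infinite-medium model of unaccelerated source iteration:
    phi_{n+1} = c*phi_n + q, which contracts toward q/(1 - c) by a
    factor of the scattering ratio c each sweep."""
    phi, n = 0.0, 0
    exact = q / (1.0 - c)
    while abs(phi - exact) > tol * exact and n < max_it:
        phi = c * phi + q
        n += 1
    return phi, n

phi_a, n_a = source_iteration(0.5, 1.0)    # error halves each sweep
phi_b, n_b = source_iteration(0.99, 1.0)   # diffusive regime: very slow
```

A spectral radius of ρ < 0.1865, as quoted for AADR, corresponds to error reduction by a factor of roughly 5 per sweep regardless of c.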
Scalable fast multipole accelerated vortex methods
Hu, Qi
2014-05-01
The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based simulations of incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures that distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead, and it scales to large clusters with both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
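The kernel that the FMM accelerates is the O(N*M) direct Biot-Savart sum. A pure-Python sketch with a vortex-blob smoothing radius (the sign convention and the regularization form are assumptions for illustration, not taken from the paper):

```python
import math

def biot_savart_direct(targets, sources, strengths, eps=1e-3):
    """O(N*M) direct Biot-Savart sum that FMM-based codes accelerate:
    u(x) = sum_j Gamma_j cross (x - y_j) / (4*pi*(|x - y_j|^2 + eps^2)^1.5),
    where eps is a vortex-blob smoothing radius (assumed convention)."""
    out = []
    for x in targets:
        u = [0.0, 0.0, 0.0]
        for y, g in zip(sources, strengths):
            r = [x[k] - y[k] for k in range(3)]
            d = (r[0] ** 2 + r[1] ** 2 + r[2] ** 2 + eps ** 2) ** 1.5
            s = 1.0 / (4.0 * math.pi * d)
            u[0] += (g[1] * r[2] - g[2] * r[1]) * s   # Gamma cross r
            u[1] += (g[2] * r[0] - g[0] * r[2]) * s
            u[2] += (g[0] * r[1] - g[1] * r[0]) * s
        out.append(u)
    return out

# a z-directed vortex element at the origin induces swirl about the z-axis
vel = biot_savart_direct([[1.0, 0.0, 0.0]],
                         [[0.0, 0.0, 0.0]],
                         [[0.0, 0.0, 1.0]])
```

At unit distance the induced swirl velocity has magnitude about 1/(4*pi); the FMM replaces this all-pairs loop with a hierarchical far-field expansion.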
A synthetic method of solar spectrum based on LED
Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian
2017-10-01
A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected with Origin software; the total number of LEDs for each central band was then calculated from the relation between luminance and illuminance, using least-squares curve fitting in Matlab. Finally, the spectral curve of the AM1.5 standard solar spectrum was obtained. The results met the technical requirements of solar spectral matching within ±20% and a solar constant greater than 0.5.
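The spectral-synthesis step can be illustrated with ordinary least squares: model each LED channel as a Gaussian band and solve the normal equations for channel weights so that their weighted sum reproduces a target spectrum. A self-contained sketch (the Gaussian band shapes, bandwidths, and wavelength grid are illustrative assumptions; a real design would also enforce nonnegative integer LED counts):

```python
import math

def gaussian(lam, center, fwhm):
    """Gaussian band model for one LED channel."""
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    return math.exp(-0.5 * ((lam - center) / sigma) ** 2)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_led_weights(centers, fwhm, wavelengths, target):
    """Least-squares channel weights: minimize ||G w - target||^2
    via the normal equations G^T G w = G^T target."""
    G = [[gaussian(l, c, fwhm) for c in centers] for l in wavelengths]
    m = len(centers)
    GtG = [[sum(row[i] * row[j] for row in G) for j in range(m)]
           for i in range(m)]
    Gtb = [sum(row[i] * t for row, t in zip(G, target)) for i in range(m)]
    return solve(GtG, Gtb)

# sanity check: recover known channel weights from the spectrum they generate
wavelengths = [400.0 + 5.0 * i for i in range(61)]       # 400..700 nm
centers = [450.0, 550.0, 650.0]
target = [1.0 * gaussian(l, 450.0, 30.0) + 2.0 * gaussian(l, 550.0, 30.0)
          + 3.0 * gaussian(l, 650.0, 30.0) for l in wavelengths]
weights = fit_led_weights(centers, 30.0, wavelengths, target)
```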
Directory of Open Access Journals (Sweden)
A M Poursaleh
2017-08-01
Full Text Available In this paper, the feasibility of a new method of RF power coupling to the acceleration cavity of a charged-particle accelerator is evaluated. In this method a slit is created around the accelerator cavity, and RF power amplifier modules are connected directly to the acceleration cavity. In effect, the cavity acts both as an acceleration cavity and as an RF power combiner. The benefits of this method are avoiding the use of RF vacuum tubes, transmission lines, high-power combiners and couplers. In this research, cylindrical and coaxial cavities were studied, and a small sample coaxial cavity was built by this method. The results of the research showed that compact, economical and safe RF accelerators can be achieved by the proposed method.
Grisham, Larry R
2013-12-17
The present invention provides systems and methods for the magnetic insulation of accelerator electrodes in electrostatic accelerators. Advantageously, the systems and methods of the present invention improve the practically obtainable performance of these electrostatic accelerators by addressing, among other things, voltage holding problems and conditioning issues. The problems and issues are addressed by flowing electric currents along these accelerator electrodes to produce magnetic fields that envelope the accelerator electrodes and their support structures, so as to prevent very low energy electrons from leaving the surfaces of the accelerator electrodes and subsequently picking up energy from the surrounding electric field. In various applications, this magnetic insulation must only produce modest gains in voltage holding capability to represent a significant achievement.
Use of the preconditioned conjugate gradient method to accelerate S/sub n/ iterations
International Nuclear Information System (INIS)
Derstine, K.L.; Gelbard, E.M.
1985-01-01
It is well known that specially tailored diffusion difference equations are required in the synthetic method. The tailoring process is not trivial, and for some S/sub n/ schemes (e.g., in hexagonal geometry) tailored diffusion operators are not available. The need for alternative acceleration methods has been noted by Larsen, who has, in fact, proposed two alternatives. The proposed methods, however, do not converge to the S/sub n/ solution, and their accuracy is still largely unknown. The Los Alamos acceleration methods are required to converge for any mesh, no matter how coarse. Since negative-flux fix-ups (normally invoked when mesh widths are large) may impede convergence, it is not clear that such a strict condition is really practical. Here a lesser objective is chosen: the authors develop an acceleration method useful for a wide (though finite) range of mesh widths, while avoiding the use of special diffusion difference equations. It is shown that the conjugate gradient (CG) method, with the standard box-centered (BC) diffusion equation as a preconditioner, yields an algorithm that, for fixed-source problems with isotropic scattering, is mechanically very similar to the synthetic method; in two-dimensional test problems in various geometries, however, the CG method is substantially more stable.
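A generic preconditioned CG iteration can be sketched as follows; here simple Jacobi scaling on a 1-D diffusion-like tridiagonal system stands in for the paper's box-centered diffusion preconditioner (all parameters illustrative):

```python
def pcg(A, b, M_inv, tol=1e-10, max_it=200):
    """Preconditioned conjugate gradients for a symmetric positive
    definite matrix A (dense, list of lists). M_inv applies the
    preconditioner inverse; below it is plain Jacobi scaling."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = M_inv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_it):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1-D diffusion-like tridiagonal system with Jacobi preconditioning
n = 20
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
jacobi = lambda r: [ri / 2.0 for ri in r]
x = pcg(A, b, jacobi)
```

In the paper's setting, `M_inv` would be the solve with the box-centered diffusion operator, making one CG step mechanically similar to one synthetic-method iteration.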
A statistical comparison of accelerated concrete testing methods
Directory of Open Access Journals (Sweden)
Denny Meyer
1997-01-01
Full Text Available Accelerated curing results, obtained after only 24 hours, are used to predict the 28 day strength of concrete. Various accelerated curing methods are available. Two of these methods are compared in relation to the accuracy of their predictions and the stability of the relationship between their 24 hour and 28 day concrete strength. The results suggest that Warm Water accelerated curing is preferable to Hot Water accelerated curing of concrete. In addition, some other methods for improving the accuracy of predictions of 28 day strengths are suggested. In particular the frequency at which it is necessary to recalibrate the prediction equation is considered.
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
Delayless acceleration measurement method for motion control applications
Energy Technology Data Exchange (ETDEWEB)
Vaeliviita, S.; Ovaska, S.J. [Helsinki University of Technology, Otaniemi (Finland). Institute of Intelligent Power Electronics
1997-12-31
Delayless and accurate sensing of angular acceleration can improve the performance of motion control in motor drives. Acceleration control is, however, seldom implemented in practical drive systems due to prohibitively high costs or unsatisfactory results of most acceleration measurement methods. In this paper we propose an efficient and accurate acceleration measurement method based on direct differentiation of the corresponding velocity signal. Polynomial predictive filtering is used to smooth the resulting noisy signal without delay. This type of prediction is justified by noticing that a low-degree polynomial can usually be fitted into the primary acceleration curve. No additional hardware is required to implement the procedure if the velocity signal is already available. The performance of the acceleration measurement method is evaluated by applying it to a demanding motion control application. (orig.) 12 refs.
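The core idea - fit a low-degree polynomial to the most recent velocity samples and differentiate it at the newest sample, so the estimate carries no filter delay - can be sketched as follows (the window length, polynomial degree, and sampling rate are illustrative, not the paper's tuning):

```python
def accel_estimate(v, dt, degree=2):
    """Estimate acceleration at the newest velocity sample by fitting a
    low-degree polynomial to the last len(v) samples (least squares)
    and differentiating it at t = 0, i.e. at the present sample, so
    the estimate carries no filter delay."""
    n = len(v)
    ts = [(i - (n - 1)) * dt for i in range(n)]   # newest sample at t = 0
    m = degree + 1
    # normal equations for the polynomial coefficients c0..c_degree
    A = [[sum(t ** (i + j) for t in ts) for j in range(m)] for i in range(m)]
    rhs = [sum(vk * tk ** i for vk, tk in zip(v, ts)) for i in range(m)]
    # Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - b * f for a, b in zip(M[r], M[c])]
    coef = [M[i][m] / M[i][i] for i in range(m)]
    return coef[1]   # d/dt of c0 + c1*t + c2*t^2 + ... at t = 0

# velocity ramp with constant acceleration 3.0, sampled at 1 kHz
dt = 1e-3
samples = [10.0 + 3.0 * k * dt for k in range(8)]
a_hat = accel_estimate(samples, dt)
```

Because the polynomial is evaluated at the newest sample rather than at the window center, smoothing does not introduce the group delay of an ordinary low-pass differentiator.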
Synthetic methods in phase equilibria: A new apparatus and error analysis of the method
DEFF Research Database (Denmark)
Fonseca, José; von Solms, Nicolas
2014-01-01
of the equipment was confirmed through several tests, including measurements along the three phase co-existence line for the system ethane + methanol, the study of the solubility of methane in water, and of carbon dioxide in water. An analysis regarding the application of the synthetic isothermal method...
Glutarimides: Biological activity, general synthetic methods and physicochemical properties
Directory of Open Access Journals (Sweden)
Popović-Đorđević Jelena B.
2015-01-01
Full Text Available Glutarimides (2,6-dioxopiperidines) are compounds that rarely occur in natural sources, but those isolated so far exert widespread pharmacological activities, which makes them valuable as potential pharmacotherapeutics. Glutarimides act as androgen receptor antagonists, anti-inflammatory agents, anxiolytics, antibacterials, and tumor-suppressing agents. Some synthetic glutarimide derivatives are already in use as immunosuppressive and sedative (e.g., thalidomide) or anxiolytic (buspirone) drugs. The wide applicability of this class of compounds justifies the interest of scientists in exploring new pathways for their synthesis. General methods for the synthesis of the six-membered imide ring are presented in this paper. These methods include: (a) reaction of dicarboxylic acids with ammonia or a primary amine; (b) cyclization reactions of amido-acids, diamides, dinitriles, nitrilo-acids, amido-nitriles, amido-esters, amidoacyl chlorides or diacyl chlorides; (c) addition of carbon monoxide to α,β-unsaturated amides; (d) oxidation reactions; (e) Michael addition of active methylene compounds to methacrylamide or conjugated amides. Some of the described methods are used for closing the glutarimide ring in syntheses of pharmacologically active compounds such as sesbanimide and aldose reductase inhibitors (ARI). Analyses of the geometry, as well as spectroscopic analyses (NMR and FT-IR), of some glutarimides are presented because of their broad spectrum of pharmacological activity. To elucidate the structures of glutarimides, geometrical parameters of the newly synthesized tert-pentyl 1-benzyl-4-methylglutarimide-3-carboxylate (PBMG) are analyzed and compared with experimental X-ray data for glutarimide. Moreover, the molecular electrostatic potential (MEP) surface, plotted over the optimized geometry to elucidate the reactivity of the PBMG molecule, is analyzed. The electronic properties of glutarimide derivatives are explained using thalidomide as an example. The Frontier Molecular Orbital
Phase-of-flight method for setting the accelerating fields in the ion linear accelerator
International Nuclear Information System (INIS)
Dvortsov, S.V.; Lomize, L.G.
1983-01-01
For setting the amplitudes and phases of accelerating fields in multiresonator ion accelerators, the Δt-procedure is presently used. The two unknown RF-field parameters (amplitude and phase) in the n-th resonator are determined and set from two experimentally measured increments of the particle time of flight: the change Δt1 of the time of flight in the n-th resonator when the field in that resonator is switched on, and the change Δt2 of the time of flight in the (n+1)-th resonator, carrying no RF field, when the accelerating field in the n-th resonator is switched on. Toward the accelerator exit the particle energy increases, the relative energy increment decreases, and the setting accuracy deteriorates. To enhance the accuracy of setting the accelerating fields in a linear ion accelerator, a phase-of-flight method has been developed, in which only the measured time-of-flight increment Δt in the one resonator whose amplitude and phase are being adjusted is used. Results of simulating point-bunch motion in the IYaI AN USSR linear accelerator are presented.
Accelerated Test Method for Corrosion Protective Coatings Project
Falker, John; Zeitlin, Nancy; Calle, Luz
2015-01-01
This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.
Chubar, Natalia; Gilmour, Robert; Gerda, Vasyl; Mičušík, Matej; Omastova, Maria; Heister, Katja; Man, Pascal; Fraissard, Jacques; Zaitsev, Vladimir
2017-07-01
This work is the first report that critically reviews the properties of layered double hydroxides (LDHs) at the level of speciation in the context of water treatment applications and dynamic adsorption conditions, as well as the first report to associate these properties with the synthetic methods used for LDH preparation. Increasingly stringent maximum allowable concentrations (MAC) of various contaminants in drinking water and liquid foodstuffs require regular upgrades of purification technologies, which might also be useful in the extraction of valuable substances for reuse in accordance with modern sustainability strategies. Adsorption is the main separation technology that allows the selective extraction of target substances from multicomponent solutions. Inorganic anion exchangers arrived in the water business relatively recently, to achieve the newly approved standards for arsenic levels in drinking water. LDHs (or hydrotalcites, HTs) are theoretically the best anion exchangers due to their potential to host anions in their interlayer space, which increases their anion removal capacity considerably. This potential of the interlayer space to host additional amounts of target aqueous anions makes LDHs superior to bulk anion exchangers. The other unique advantage of these layered materials is the flexibility of the chemical composition of the metal oxide-based layers and the interlayer anions. However, until now, this group of "classical" anion exchangers has not found application in adsorption and catalysis at the industrial scale. To accelerate the application of LDHs in water treatment at the industrial scale, the authors critically reviewed recent scientific and technological knowledge of the properties of LDHs and of adsorptive removal of contaminants from water by LDHs at the fundamental science level. This also includes a review of the research tools useful to reveal the adsorption mechanism and the material properties beyond the nanoscale. Further, these properties are
Generation method of synthetic training data for mobile OCR system
Chernyshova, Yulia S.; Gayer, Alexander V.; Sheshkus, Alexander V.
2018-04-01
This paper addresses one of the fundamental problems of machine learning - training data acquisition. Obtaining enough natural training data is rather difficult and expensive. In recent years the use of synthetic images has become more beneficial, as it saves human time and provides a huge number of images that would otherwise be difficult to obtain. However, for successful learning on an artificial dataset one should try to reduce the gap between the natural and synthetic data distributions. In this paper we describe an algorithm for creating artificial training datasets for OCR systems, using the Russian passport as a case study.
Digestive ripening: a synthetic method par excellence for core–shell ...
Indian Academy of Sciences (India)
persity of nanoparticles. An even more remarkable feature of digestive ripening exemplified here is that it can be exercised as a synthetic method towards various heterostructured materials like core–shell particles, nanoalloys, and nanocomposites, in combination with the solvated metal atom dispersion synthetic method.
Synthetic Environments as visualization method for product design
Meijer, F.; van den Broek, Egon; Schouten, Theo E.; Damgrave, Roy Gerhardus Johannes; Damgrave, Roy G.J.; de Ridder, Huib; Rogowitz, Bernice E.; Pappas, Thrasyvoulos N.
2010-01-01
In this paper, we explored the use of low fidelity Synthetic Environments (SE; i.e., a combination of simulation techniques) for product design. We explored the usefulness of low fidelity SE to make design problems explicit. In particular, we were interested in the influence of interactivity on user
Accelerated gradient methods for constrained image deblurring
International Nuclear Information System (INIS)
Bonettini, S; Zanella, R; Zanni, L; Bertero, M
2008-01-01
In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and give convergence results under standard assumptions. We then consider the deblurring problem, where the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
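A projected-gradient step of this flavor is easy to sketch on a toy 1-D problem. Here a least-squares data term stands in for the paper's Poisson (maximum likelihood) objective, and the projection enforces nonnegativity; the kernel, step size, and iteration count are illustrative assumptions:

```python
def conv(x, k):
    """'Same'-size 1-D correlation with zero padding (toy blur H;
    the kernel used below is symmetric, so H is self-adjoint)."""
    n, m = len(x), len(k)
    h = m // 2
    return [sum(x[i + j - h] * k[j] for j in range(m)
                if 0 <= i + j - h < n) for i in range(n)]

def gradient_projection_deblur(y, k, steps=500, alpha=0.5):
    """Projected gradient descent for min 0.5*||H x - y||^2 s.t. x >= 0,
    a least-squares stand-in for the maximum-likelihood objective.
    alpha must stay below 2/||H^T H|| for convergence."""
    x = y[:]                       # start from the blurred data
    for _ in range(steps):
        r = conv(x, k)
        grad = conv([ri - yi for ri, yi in zip(r, y)], k[::-1])  # H^T(Hx-y)
        x = [max(0.0, xi - alpha * g) for xi, g in zip(x, grad)]
    return x

kernel = [0.25, 0.5, 0.25]                     # symmetric blur kernel
truth = [0.0, 0.0, 4.0, 0.0, 0.0, 2.0, 0.0, 0.0]
blurred = conv(truth, kernel)
restored = gradient_projection_deblur(blurred, kernel)
```

The scaling strategies and steplength rules of the paper replace the fixed `alpha` here; the projection step is what keeps the iterates feasible.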
Toker, Salih; Boone-Kukoyi, Zainab; Thompson, Nishone; Ajifa, Hillary; Clement, Travis; Ozturk, Birol; Aslan, Kadir
2016-01-01
Physical stability of synthetic skin samples during their exposure to microwave heating was investigated to demonstrate the use of the metal-assisted and microwave-accelerated decrystallization (MAMAD) technique for potential biomedical applications. In this regard, optical microscopy and temperature measurements were employed for the qualitative and quantitative assessment of damage to synthetic skin samples during 20 s intermittent microwave heating using a monomode microwave source (at 8 G...
Method for accelerated aging under combined environmental stress conditions
International Nuclear Information System (INIS)
Gillen, K.T.
1979-01-01
An accelerated aging method that can be used to simulate aging in combined-stress environments is described, and it is shown how the assumptions of the method can be tested experimentally. Aging data for a chloroprene cable-jacketing material in single and combined radiation and temperature environments are analyzed, and it is shown that these data offer evidence for the validity of the method.
Linear electron accelerator body and method of its manufacture
International Nuclear Information System (INIS)
Landa, V.; Maresova, V.; Lucek, J.; Prusa, F.
1988-01-01
The accelerator body consists of a hollow casing made of a metal of high electrical conductivity. The inside is partitioned with a system of resonators. The resonator body is made of one piece of the same metal as the casing or a related one (e.g., copper-copper, silver-copper, copper-copper alloy). The accelerator body is manufactured by a cathodic process on the periphery of a system of metal partitions and negative models of the resonator cavities fitted to a metal pin. The pin is then removed from the system and the soluble models of the cavities are dissolved in a solvent. The advantage of the design and the method of manufacture is that the result is a compact, perfectly tight body with a perfectly lustrous surface. The casing wall can be very thin, which improves accelerator performance. The claimed method can also be used in manufacturing miniature accelerators. (E.J.). 1 fig
Method Accelerates Training Of Some Neural Networks
Shelton, Robert O.
1992-01-01
Three-layer networks are trained faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. The method is based on a modified version of the back-propagation method.
A simple eigenfunction convergence acceleration method for Monte Carlo
International Nuclear Information System (INIS)
Booth, Thomas E.
2011-01-01
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k_2/k_1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method. (author)
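The k2/k1 convergence rate of plain power iteration is easy to demonstrate on a toy matrix (a deterministic stand-in for the Monte Carlo setting; the dominance ratio 0.95 is illustrative):

```python
def power_iteration(A, iters):
    """Plain power iteration: the error in the eigenvector estimate
    shrinks by roughly k2/k1 (the dominance ratio) per iteration."""
    n = len(A)
    x = [1.0] * n
    k = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        k = max(abs(v) for v in y)      # eigenvalue estimate
        x = [v / k for v in y]
    return k, x

# diagonal test matrix: k1 = 1.0, k2 = 0.95 -> dominance ratio 0.95
A = [[1.0, 0.0], [0.0, 0.95]]
k, x = power_iteration(A, 200)
k50, x50 = power_iteration(A, 50)   # after 50 sweeps, still ~8% contaminated
```

With a dominance ratio near 1, hundreds of iterations are needed before the subdominant mode dies out, which is precisely what acceleration schemes like the one described aim to avoid.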
Fringe counting method for synthetic phase with frequency-modulated laser diodes
International Nuclear Information System (INIS)
Onodera, Ribun; Sakuyama, Munechika; Ishii, Yukihiro
2007-01-01
A fringe-counting method with laser diodes (LDs) for displacement measurement has been constructed. Two LDs are frequency-modulated by mutually inverted sawtooth currents in an unbalanced two-beam interferometer. The mutually inverted sawtooth-current modulation of the LDs produces interference fringe signals with opposite signs for the respective wavelengths. The two fringe signals are fed to an electronic mixer to produce a synthetic fringe signal whose sensitivity is reduced to that of the synthetic wavelength. Synthetic fringe pulses derived from the synthetic fringe signal make a fringe-counting system possible for faster movement of the tested mirror.
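The synthetic wavelength produced by mixing two nearby LD wavelengths is Lambda = lam1*lam2/|lam1 - lam2|, much larger than either optical wavelength. A quick sketch (the diode wavelengths and the half-wavelength-per-fringe counting convention are illustrative assumptions, not values from the paper):

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic (equivalent) wavelength of a two-wavelength
    interferometer: Lambda = lam1 * lam2 / |lam1 - lam2|."""
    return lam1 * lam2 / abs(lam1 - lam2)

# two laser diodes a few nanometres apart (illustrative values)
lam1, lam2 = 780e-9, 785e-9            # m
Lam = synthetic_wavelength(lam1, lam2) # ~122 um: coarse, easy to count
fringes = 100                          # counted synthetic fringe pulses
displacement = fringes * Lam / 2.0     # half a wavelength per fringe
```

Because each synthetic fringe corresponds to a much larger mirror displacement than an optical fringe, the counter can follow a faster-moving mirror without missing counts.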
Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware
International Nuclear Information System (INIS)
Nakata, Susumu
2008-01-01
This article describes a parallel computational technique for accelerating the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure on the analysis target. In the presented technique, the computation process is divided into small processes suitable for the parallel architecture of the graphics hardware, in a single-instruction multiple-data manner.
A systematic design method for robust synthetic biology to satisfy design specifications.
Chen, Bor-Sen; Wu, Chih-Hung
2009-06-30
Synthetic biology is foreseen to have important applications in biotechnology and medicine, and is expected to contribute significantly to a better understanding of the functioning of complex biological systems. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to intrinsic parameter uncertainties, external disturbances and functional variations of intra- and extra-cellular environments. The design method for a robust synthetic gene network that works properly in a host cell under these intrinsic parameter uncertainties and external disturbances is the most important topic in synthetic biology. In this study, we propose a stochastic model that includes parameter fluctuations and external disturbances to mimic the dynamic behaviors of a synthetic gene network in the host cell. Then, based on this stochastic model, four design specifications are introduced to guarantee that a synthetic gene network can achieve its desired steady state behavior in spite of parameter fluctuations, external disturbances and functional variations in the host cell. We propose a systematic method to select a set of appropriate design parameters for a synthetic gene network that will satisfy these design specifications so that the intrinsic parameter fluctuations can be tolerated, the external disturbances can be efficiently filtered, and most importantly, the desired steady states can be achieved. Thus the synthetic gene network can work properly in a host cell under intrinsic parameter uncertainties, external disturbances and functional variations. Finally, a design procedure for the robust synthetic gene network is developed and a design example is given in silico to confirm the performance of the proposed method. Based on four design specifications, a systematic design procedure is developed for designers to engineer a robust synthetic biology network that can achieve its desired steady state behavior
An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior
Directory of Open Access Journals (Sweden)
Yong-Hoon Kim
2008-05-01
Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on the Bayesian statistical concept is reported. The entropy of the image is used as a “prior” distribution and, instead of the additive form used in conventional acceleration methods, an exponent form of the relaxation constant is used for acceleration. The proposed method is therefore called adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in the early stages and ensures stability at later stages of iteration. The AAMAPE method also incorporates the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as prior, and gives an analytical analysis of the superresolution and noise-amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations than the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than state-of-the-art methods.
Neutron Transport Methods for Accelerator-Driven Systems
International Nuclear Information System (INIS)
Nicholas Tsoulfanidis; Elmer Lewis
2005-01-01
The objective of this project has been to develop computational methods that will enable more effective analysis of Accelerator Driven Systems (ADS). The work is centered at the University of Missouri at Rolla, with a subcontract at Northwestern University, and close cooperation with the Nuclear Engineering Division at Argonne National Laboratory. The work has fallen into three categories. First, the treatment of the source for neutrons originating from the spallation target which drives the neutronics calculations of the ADS. Second, the generalization of the nodal variational method to treat the R-Z geometry configurations frequently needed for scoping calculations in Accelerator Driven Systems. Third, the treatment of void regions within variational nodal methods as needed to treat the accelerator beam tube
Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2014-03-01
A powerful conformational sampling method for accelerating structural transitions of proteins, "Fluctuation Flooding Method (FFM)," is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extractions of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generations of initial velocities when re-starting MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition with the 6 ns simulation starting solely from the open state, although the 1-μs canonical MD simulation failed to sample such a rare event.
FDTD method for electrodynamic simulation of resonator accelerating structures
International Nuclear Information System (INIS)
Vorogushin, M.F.; Svistunov, Yu.A.; Chetverikov, I.O.; Malyshev, V.N.; Malyukhov, M.V.
2000-01-01
The finite-difference time-domain (FDTD) method makes it possible to model both stationary and nonstationary processes arising from the interaction of the beam and the field. The capabilities of the method for modeling fields in resonant accelerating structures are demonstrated. Besides solving the problem of determining the frequencies and spatial distributions of the resonators' eigenmodes, the ability to treat transition processes is important. The program presented makes it possible to obtain practical results for modeling accelerating structures on personal computers.
Libraries of Synthetic TALE-Activated Promoters: Methods and Applications.
Schreiber, T; Tissier, A
2016-01-01
The discovery of proteins with programmable DNA-binding specificities triggered a whole array of applications in synthetic biology, including genome editing, regulation of transcription, and epigenetic modifications. Among those, transcription activator-like effectors (TALEs), due to their natural function as transcription regulators, are especially well suited for the development of orthogonal systems for the control of gene expression. We describe here the construction and testing of libraries of synthetic TALE-activated promoters which are under the control of a single TALE with a given DNA-binding specificity. These libraries consist of a fixed DNA-binding element for the TALE, a TATA box, and variable sequences of 19 bases upstream and 43 bases downstream of the DNA-binding element. The libraries were cloned using a Golden Gate strategy, making them usable as standard parts in a modular cloning system. The broad range of promoter activities detected and the versatility of these promoter libraries make them valuable tools for fine-tuning expression in metabolic engineering projects or in the design and implementation of regulatory circuits. © 2016 Elsevier Inc. All rights reserved.
Acceleration methods for assembly-level transport calculations
International Nuclear Information System (INIS)
Adams, Marvin L.; Ramone, Gilles
1995-01-01
A family of acceleration methods for the iterations that arise in assembly-level transport calculations is presented. A single iteration in these schemes consists of a transport sweep followed by a low-order calculation, which is itself a simplified transport problem. It is shown that a previously proposed method fitting this description is unstable in two and three dimensions. A new family of methods is presented, and some members are shown to be unconditionally stable. (author). 8 refs, 4 figs, 4 tabs
International Nuclear Information System (INIS)
Santos, Frederico P.; Xavier, Vinicius S.; Alves Filho, Hermes; Barros, Ricardo C.
2011-01-01
The scattering source iterative (SI) scheme is traditionally applied to converge fine-mesh numerical solutions of fixed-source discrete ordinates (SN) neutron transport problems. The SI scheme is very simple to implement from a computational viewpoint. However, it may show a very slow convergence rate, mainly for diffusive media (low absorption) several mean free paths in extent. In this work we describe an acceleration technique based on an improved initial guess for the scattering source distribution within the slab. That is, we use as the initial guess for the fine-mesh scattering source the coarse-mesh solution of the neutron diffusion equation with special boundary conditions that account for the classical SN prescribed boundary conditions, including vacuum boundary conditions. We first implement a spectral nodal method that generates a coarse-mesh diffusion solution completely free from spatial truncation errors, then reconstruct this coarse-mesh solution within each spatial cell of the discretization grid to yield the initial guess for the fine-mesh scattering source in the first SN transport sweep (μm > 0 and μm < 0, m = 1:N) across the spatial grid. We consider a number of numerical experiments to illustrate the efficiency of the proposed diffusion synthetic acceleration (DSA) technique. (author)
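The payoff of a diffusion-informed initial guess for source iteration can be sketched with a deliberately simplified model: in an infinite homogeneous medium the SI update collapses to a scalar fixed-point iteration whose convergence rate equals the scattering ratio c. The 95% starting guess below is an assumption standing in for the coarse-mesh diffusion solution, not a result from the paper.

```python
# Minimal sketch (assumption: infinite homogeneous medium, one energy group).
# The SI update reduces to phi <- c*phi + q with scattering ratio c and fixed
# source q; the exact solution is q / (1 - c), and the error shrinks by a
# factor c per sweep, which is slow when c is close to 1 (diffusive media).

def source_iteration(c, q, phi0, tol=1e-8, max_iters=100000):
    """Iterate phi <- c*phi + q until successive updates differ by < tol."""
    phi = phi0
    for k in range(1, max_iters + 1):
        phi_new = c * phi + q
        if abs(phi_new - phi) < tol:
            return phi_new, k
        phi = phi_new
    raise RuntimeError("SI did not converge")

c, q = 0.99, 1.0                    # highly scattering (diffusive) medium
exact = q / (1.0 - c)

# Flat (zero) initial guess vs. a guess already close to the exact solution
# (95% of it, a stand-in for the coarse-mesh diffusion reconstruction).
_, iters_flat = source_iteration(c, q, phi0=0.0)
_, iters_dsa = source_iteration(c, q, phi0=0.95 * exact)
print(iters_flat, iters_dsa)        # the informed guess needs fewer sweeps
```

Note that the per-sweep convergence rate is unchanged by the initial guess; only the starting error shrinks, which is why full DSA schemes additionally modify the iteration itself.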
Equipment and methods for synthetic aperture anatomic and flow imaging
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Nikolov, Svetoslav; Misaridis, Thanassis
2002-01-01
Conventional ultrasound imaging is done by sequentially probing in each image direction. The frame rate is, thus, limited by the speed of sound and the number of lines necessary to form an image. This is especially limiting in flow imaging, since multiple lines are used for flow estimation. Another problem is that each receiving transducer element must be connected to a receiver, which makes the expansion of the number of receive channels expensive. Synthetic aperture (SA) imaging is a radical change from the sequential image formation. Here ultrasound is emitted in all directions and the image is formed in all directions simultaneously over a number of acquisitions. SA images can therefore be perfectly focused in both transmit and receive for all depths, thus significantly improving image quality. A further advantage is that very fast imaging can be done, since only a few emissions are needed...
A Method to Design Synthetic Cell-Cycle Networks
International Nuclear Information System (INIS)
Ke-Ke, Miao
2009-01-01
The interactions among proteins, DNA and RNA in an organism form elaborate cell-cycle networks which govern cell growth and proliferation. Understanding the common structure of cell-cycle networks will be of great benefit to scientific research. Here, inspired by the intensively studied cell-cycle regulatory network of yeast, we focus on small networks with 11 nodes, equivalent to the cell-cycle regulatory network used by Li et al. [Proc. Natl. Acad. Sci. USA 101 (2004) 4781]. Using a Boolean model, we study the correlation between structure and function, and a possible common structure. It is found that cascade-like networks with a great number of interactions between nodes are stable. Based on these findings, we are able to construct synthetic networks that have the same functions as the cell-cycle regulatory network. (condensed matter: structure, mechanical and thermal properties)
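The Boolean threshold dynamics underlying such models can be sketched on a hypothetical three-node cascade. The update rule follows the convention of Li et al. (switch on for positive weighted input, off for negative, hold at zero); the network itself is an illustration, not one of the paper's synthetic networks.

```python
# Boolean threshold update: node i turns on if its weighted input sum is
# positive, off if negative, and keeps its current state when the sum is zero.

def step(state, W):
    new = []
    for i in range(len(state)):
        total = sum(W[i][j] * state[j] for j in range(len(state)))
        if total > 0:
            new.append(1)
        elif total < 0:
            new.append(0)
        else:
            new.append(state[i])   # no net input: hold current state
    return tuple(new)

# Hypothetical cascade A -> B -> C, with A repressing itself (a common motif).
W = [[-1, 0, 0],    # A represses A
     [ 1, 0, 0],    # A activates B
     [ 0, 1, 0]]    # B activates C

state = (1, 0, 0)                  # start with only A on
trajectory = [state]
for _ in range(5):
    state = step(state, W)
    trajectory.append(state)
print(trajectory)                  # the signal cascades A -> B -> C and settles
```

Running the full 11-node yeast network amounts to the same loop with the published weight matrix; attractors of the dynamics correspond to the biological stationary states.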
International Nuclear Information System (INIS)
Eisenhardt, W.A. Jr.; Hedaya, E.; Theodoropulos, S.
1981-01-01
This patent claim, on behalf of Union Carbide Corporation, relates to a method of carrying out a competitive binding radioassay of a compound of interest in a clinical sample, using isocyanates labelled with radioiodine as synthetic antigens. (U.K.)
Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander
2016-05-01
Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthesis to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).
Convergence analysis of CMADR acceleration for the method of characteristics
International Nuclear Information System (INIS)
Park, Young Ryong; Cho, Nam Zin
2005-01-01
As the nuclear reactor core becomes more complex, heterogeneous, and geometrically irregular, the method of characteristics (MOC) is gaining wide use in neutron transport calculations. However, the long computer times require good acceleration methods. In our previous paper, the concept of coarse-mesh angular dependent rebalance (CMADR) acceleration was described and applied to MOC calculations. The method is based on angular dependent rebalance factors defined on the coarse-mesh boundaries; a coarse mesh consists of several fine meshes that may be (1) heterogeneous and (2) of mixed geometries with irregular or unstructured mesh shapes. In addition, (3) the coarse-mesh boundaries need not coincide with the structural interfaces of the problem and can be chosen artificially for convenience. A CMADR acceleration method on the MOC scheme that enables the very desirable features (1), (2), and (3) above is new in the neutron transport literature, to the best of the authors' knowledge. In this paper, we analyze the convergence of CMADR acceleration for MOC calculations in x-y-z (infinite) geometry by using Fourier analysis.
An accelerated test method for efflorescence in clay bricks
International Nuclear Information System (INIS)
Beggan, John Edward
1998-01-01
An investigation into the creation of accelerated efflorescence in clay bricks was undertaken with a view to creating a viable test procedure for determining efflorescence potential. The testing programme incorporated ambient conditions similar to those which promote efflorescence growth in bricks in use. Theoretical investigations into the physical mechanism underlying the creation of efflorescence directed the attempts to accelerate the process. It was found that calcium sulphate efflorescence could not be accelerated sufficiently for a useful efflorescence test procedure to be proposed. The inability to produce accelerated efflorescence in brick samples was attributed to limitations associated with time-dependent salt diffusion in the efflorescence mechanism. The preliminary testing undertaken into the creation of efflorescence prompted the use of acid-assisted methods to accelerate efflorescence. The acid-assisted method adopted to provide a possible indication of efflorescence potential relies upon the transformation of low-solubility calcium to a more soluble form. The movement of the transformed salt is then induced by cyclic temperature exposure at temperatures similar to those experienced in spring. The appearance of the transformed calcium salt on the surface of the brick specimen provides an indication of the efflorescence potential. Brick piers constructed on an exposed site and monitored over a 12 month period provided information on the validity of the acid-assisted test method. The efflorescence observed on the piers correlated well with that predicted by the acid-assisted test, suggesting that the new test has the potential to accurately predict the efflorescence potential of clay bricks. Relationships between other properties such as air permeability, sorptivity and tensile strength were investigated such that an alternative method of predicting efflorescence could be achieved. It was found that (within the bounds of the
Acceleration of Multidimensional Discrete Ordinates Methods Via Adjacent-Cell Preconditioners
International Nuclear Information System (INIS)
Azmy, Y.Y.
2000-01-01
The adjacent-cell preconditioner (AP) formalism originally derived in slab geometry is extended to multidimensional Cartesian geometry for generic fixed-weight, weighted diamond difference neutron transport methods. This is accomplished for the thick-cell regime (KAP) and thin-cell regime (NAP). A spectral analysis of the resulting acceleration schemes demonstrates their excellent spectral properties for model problem configurations, characterized by a uniform mesh of infinite extent and homogeneous material composition, each in its own cell-size regime. Thus, the spectral radius of KAP vanishes as the computational cell size approaches infinity, but it exceeds unity for very thin cells, thereby implying instability. In contrast, NAP is stable and robust for all cell sizes, but its spectral radius vanishes more slowly as the cell size increases. For this reason, and to avoid potential complication in the case of cells that are thin in one dimension and thick in another, NAP is adopted in the remainder of this work. The most important feature of AP for practical implementation in production level codes is that it is cell centered, reducing the size of the algebraic system comprising the acceleration stage compared to face-centered schemes. Boundary conditions for finite extent problems and a mixing formula across material and cell-size discontinuity are derived and used to implement NAP in a test code, AHOT, and a production code, TORT. Numerical testing for algebraically linear iterative schemes for the cases embodied in Burre's Suite of Test Problems demonstrates the high efficiency of the new method in reducing the number of iterations required to achieve convergence, especially for optically thick cells where acceleration is most needed. Also, for algebraically nonlinear (adaptive) methods, AP generally performs better than the partial current rebalance method in TORT and the diffusion synthetic acceleration method in TWODANT. Finally, application of the AP
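The general effect of a preconditioner on iteration counts, which motivates schemes like the adjacent-cell preconditioner, can be sketched generically (this is a Richardson iteration with a diagonal preconditioner on a diffusion-like tridiagonal system, not the AP operator itself; the matrix and step sizes are illustrative assumptions):

```python
# Richardson iteration x <- x + omega * M_inv * (b - A x): the closer M
# approximates A, the fewer iterations are needed to reach a given residual.
import numpy as np

def richardson(A, b, M_inv, omega, tol=1e-8, max_iters=10000):
    x = np.zeros_like(b)
    for k in range(1, max_iters + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + omega * (M_inv @ r)
    return x, max_iters

# 1D diffusion-like tridiagonal system with a strongly varying diagonal.
n = 50
A = (np.diag(np.linspace(4.0, 40.0, n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)

I = np.eye(n)
D_inv = np.diag(1.0 / np.diag(A))
_, iters_plain = richardson(A, b, I, omega=0.045)   # omega < 2/lambda_max(A)
_, iters_prec = richardson(A, b, D_inv, omega=1.0)  # diagonal preconditioning
print(iters_plain, iters_prec)  # preconditioning cuts the iteration count
```

A cell-centered preconditioner plays the role of `M_inv` here, with the key practical property noted in the abstract: its algebraic system is smaller than that of face-centered alternatives.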
Electrodeless plasma acceleration system using rotating magnetic field method
Directory of Open Access Journals (Sweden)
T. Furukawa
2017-11-01
Full Text Available We have proposed the Rotating Magnetic Field (RMF) acceleration method as one of the electrodeless plasma acceleration schemes. In our experimental scheme, plasma generated by an rf (radio frequency) antenna is accelerated by RMF antennas, which consist of two pairs of opposed, facing coils; these antennas are outside the discharge tube, so there is no electrode wear degrading the propulsion performance. Here, we introduce the RMF acceleration system we have developed, including the experimental device, e.g., external antennas, a tapered quartz tube, a vacuum chamber, external magnets, and a pumping system. In addition, we can change the RMF operation parameters (RMF applied current IRMF and RMF current phase difference ϕ), focusing on the RMF current frequency fRMF by adjusting the RMF matching conditions, and investigate the dependencies of the plasma parameters (electron density ne and ion velocity vi); e.g., larger increases of ne and vi (∼360% and ∼55%, respectively) than in previous experimental results were obtained by decreasing fRMF from 5 MHz to 0.7 MHz, for which the RMF penetration condition was better according to Milroy's expression. Moreover, the time-varying component of the RMF has been measured directly to survey the penetration condition experimentally.
Synthetic polymers and methods of making and using the same
Daily, Michael D.; Grate, Jay W.; Mo, Kai-For
2016-06-14
Monomer embodiments that can be used to make polymers, such as homopolymers and heteropolymers, and that can be used in particular embodiments to make sequence-defined polymers, are described. Also described are methods of making polymers using such monomer embodiments, as well as methods of using the polymers.
GPU accelerated manifold correction method for spinning compact binaries
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the purely CPU-based execution of the codes. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times compared with the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
A Validated Method for the Detection and Quantitation of Synthetic ...
African Journals Online (AJOL)
NICOLAAS
A LC-HRMS (liquid chromatography coupled with high resolution mass spectrometry) method for the ... its ease of availability, from head shops (shops selling predomi- ..... cannabinoids in whole blood in plastic containers with several common ...
Toker, Salih; Boone-Kukoyi, Zainab; Thompson, Nishone; Ajifa, Hillary; Clement, Travis; Ozturk, Birol; Aslan, Kadir
2016-11-30
Physical stability of synthetic skin samples during their exposure to microwave heating was investigated to demonstrate the use of the metal-assisted and microwave-accelerated decrystallization (MAMAD) technique for potential biomedical applications. In this regard, optical microscopy and temperature measurements were employed for the qualitative and quantitative assessment of damage to synthetic skin samples during 20 s intermittent microwave heating using a monomode microwave source (at 8 GHz, 2-20 W) up to 120 s. The extent of damage to synthetic skin samples, assessed by the change in the surface area of skin samples, was negligible for microwave power of ≤7 W and more extensive damage (>50%) to skin samples occurred when exposed to >7 W at initial temperature range of 20-39 °C. The initial temperature of synthetic skin samples significantly affected the extent of change in temperature of synthetic skin samples during their exposure to microwave heating. The proof of principle use of the MAMAD technique was demonstrated for the decrystallization of a model biological crystal (l-alanine) placed under synthetic skin samples in the presence of gold nanoparticles. Our results showed that the size (initial size ∼850 μm) of l-alanine crystals can be reduced up to 60% in 120 s without damage to synthetic skin samples using the MAMAD technique. Finite-difference time-domain-based simulations of the electric field distribution of an 8 GHz monomode microwave radiation showed that synthetic skin samples are predicted to absorb ∼92.2% of the microwave radiation.
High power ring methods and accelerator driven subcritical reactor application
Energy Technology Data Exchange (ETDEWEB)
Tahar, Malek Haj [Univ. of Grenoble (France)
2016-08-07
High power proton accelerators make it possible to provide, by spallation reactions, the neutron fluxes necessary for the synthesis of fissile material, starting from Uranium 238 or Thorium 232. This is the basis of the concept of sub-critical operation of a reactor, for energy production or nuclear waste transmutation, with the objective of achieving a cleaner, safer and more efficient process than today's technologies allow. Designing, building and operating a proton accelerator in the 500-1000 MeV energy range, CW regime, MW power class still remains a challenge today. A limited number of installations at present achieve beam characteristics in that class, e.g., PSI in Villigen (590 MeV CW beam from a cyclotron) and SNS in Oak Ridge (1 GeV pulsed beam from a linear accelerator), in addition to projects such as the ESS in Europe (a 5 MW beam from a linear accelerator). Furthermore, coupling an accelerator to a sub-critical nuclear reactor is a challenging proposition: some of the key issues/requirements are the design of a spallation target to withstand high power densities as well as ensuring the safety of the installation. These two domains are the grounds of this PhD work: the focus is on high power ring methods in the frame of the KURRI FFAG collaboration in Japan, where an upgrade of the installation towards high intensity is crucial to demonstrate the high beam power capability of FFAGs. Thus, modeling of the beam dynamics and benchmarking of different codes were undertaken to validate the simulation results. Experimental results revealed major losses that need to be understood and eventually overcome. By developing analytical models that account for field defects, major sources of imperfection in the design of scaling FFAGs were identified, explaining the important tune variations that result in the crossing of several betatron resonances. A new formula is derived to compute the tunes, and properties are established that characterize the effect of the field imperfections on the
Accelerated gradient methods for total-variation-based CT image reconstruction
Energy Technology Data Exchange (ETDEWEB)
Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology
2011-07-01
Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
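The Barzilai-Borwein step-size rule at the heart of GPBB can be sketched on a toy quadratic, a stand-in for the much larger, nonsmooth TV objective (the test matrix and the initial step size are assumptions for illustration):

```python
# Gradient descent with the Barzilai-Borwein (BB2) step size on
# f(x) = 0.5*x^T A x - b^T x, whose minimizer solves A x = b.
import numpy as np

def grad(A, b, x):
    return A @ x - b

def bb_gradient(A, b, x0, iters=50):
    x = x0.copy()
    g = grad(A, b, x)
    alpha = 1e-2                       # initial step size (assumed)
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(A, b, x_new)
        s, y = x_new - x, g_new - g
        if y @ y > 0:
            alpha = (s @ y) / (y @ y)  # BB2 step: a secant-based curvature guess
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # small SPD test system
b = np.array([1.0, 1.0])
x = bb_gradient(A, b, np.zeros(2))
print(np.allclose(x, np.linalg.solve(A, b), atol=1e-6))
```

The BB step adapts to local curvature at the cost of one extra inner product per iteration, which is why it pairs naturally with a nonmonotone line search in GPBB.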
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
Acceleration and parallelization calculation of EFEN-SP_3 method
International Nuclear Information System (INIS)
Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao
2013-01-01
Because the exponential function expansion nodal SP_3 (EFEN-SP_3) method needs further improvement in computational efficiency to routinely carry out PWR whole-core pin-by-pin calculations, coarse-mesh acceleration and spatial parallelization were investigated in this paper. The coarse-mesh acceleration was built by considering a discontinuity factor on each coarse-mesh interface and preserving neutron balance within each coarse mesh in space, angle and energy. The spatial parallelization, based on MPI, was implemented by guaranteeing load balancing and minimizing communication costs to take full advantage of modern computing and storage abilities. Numerical results based on a commercial nuclear power reactor demonstrate a speedup ratio of about 40 for the coarse-mesh acceleration and a parallel efficiency of higher than 60% with 40 CPUs for the spatial parallelization. With these two improvements, the EFEN code can complete a PWR whole-core pin-by-pin calculation with 289 × 289 × 218 meshes and 4 energy groups within 100 s by using 48 CPUs (2.40 GHz frequency). (authors)
An accelerated training method for back propagation networks
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time/expense of training the system.
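The idea of re-expressing inputs along orthogonal directions of maximal spread can be sketched with a small SVD-based routine (a generic PCA-style construction for illustration, not the patent's exact procedure; the toy data set is an assumption):

```python
# Find orthogonal directions ordered by the spread (standard deviation) of
# the projected input vectors, then re-express the inputs in that basis.
import numpy as np

def principal_directions(X):
    """Return (V, s): columns of V are orthogonal directions ordered by the
    spread of the projections of the rows of X; s are the singular values."""
    Xc = X - X.mean(axis=0)              # center the inputs
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt.T, s

# Toy input set: points spread mostly along the (1, 1) direction.
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]])
V, s = principal_directions(X)
Z = (X - X.mean(axis=0)) @ V             # inputs in the new basis
print(s)  # the first singular value dominates: most spread on axis 0
```

Feeding `Z` (possibly truncated to the leading columns) to the network in place of `X` is the kind of simplified input representation the patent describes.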
Shelf life prediction of apple brownies using accelerated method
Pulungan, M. H.; Sukmana, A. D.; Dewi, I. A.
2018-03-01
The aim of this research was to determine the shelf life of apple brownies. Shelf life was determined with the Accelerated Shelf Life Testing method and the Arrhenius equation. The experiment was conducted at 25, 35, and 45°C for 30 days. Every five days, the sample was analysed for free fatty acids (FFA), water activity (Aw), and organoleptic acceptance (flavour, aroma, and texture). The shelf lives of the apple brownies based on FFA were 110, 54, and 28 days at temperatures of 25, 35, and 45°C, respectively.
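The Arrhenius extrapolation behind such shelf-life estimates can be sketched from the reported numbers (the fitted activation energy and the 30 °C prediction are computed here for illustration, not values from the paper):

```python
# Fit ln(k) = ln(A) - Ea/(R*T) to rates inferred from the reported shelf
# lives (rate ~ 1/shelf_life), then predict shelf life at another temperature.
import math

R = 8.314  # gas constant, J/(mol K)

temps_K = [298.15, 308.15, 318.15]      # 25, 35, 45 degC
shelf_days = [110.0, 54.0, 28.0]        # from the abstract
ln_k = [math.log(1.0 / d) for d in shelf_days]
inv_T = [1.0 / T for T in temps_K]

# Least-squares line ln(k) = b + m*(1/T); Ea = -m*R.
n = len(inv_T)
mx = sum(inv_T) / n
my = sum(ln_k) / n
m = sum((x - mx) * (y - my) for x, y in zip(inv_T, ln_k)) / \
    sum((x - mx) ** 2 for x in inv_T)
b = my - m * mx
Ea_kJ = -m * R / 1000.0                 # apparent activation energy, kJ/mol

# Interpolated shelf life at 30 degC, for illustration only.
T = 303.15
pred_days = 1.0 / math.exp(b + m / T)
print(Ea_kJ, pred_days)
```

The prediction at an intermediate temperature should fall between the measured 25 °C and 35 °C shelf lives, which is a quick sanity check on the fit.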
CERN. Geneva
2001-01-01
The talk summarizes the principles of particle acceleration and addresses problems related to storage rings like LEP and LHC. Special emphasis will be given to orbit stability, long term stability of the particle motion, collective effects and synchrotron radiation.
A tuning method for nonuniform traveling-wave accelerating structures
International Nuclear Information System (INIS)
Gong Cunkui; Zheng Shuxin; Shao Jiahang; Jia Xiaoyu; Chen Huaibi
2013-01-01
The tuning method for uniform traveling-wave structures based on non-resonant perturbation field distribution measurement has been widely used in tuning both constant-impedance and constant-gradient structures. In this paper, a method for tuning nonuniform structures is proposed on the basis of the above theory. The internal reflection coefficient of each cell is obtained by analyzing the normalized voltage distribution. A numerical simulation of the tuning process according to coupled-cavity-chain theory has been performed, and the result shows that each cell has the correct phase advance after tuning. The method will be used in the tuning of a disk-loaded traveling-wave structure being developed at the Accelerator Laboratory, Tsinghua University. (authors)
Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long
2018-01-01
With the development of satellite payload technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
International Nuclear Information System (INIS)
Lakhov, V.M.; Gerling, V.Eh.; Il'ina, L.K.; Trojnina, G.G.; Galisheva, Eh.P.
1987-01-01
Papers on the development and application of synthetic standard samples (SS), imitating the element composition of substances and materials (rocks, ores) and intended for calibration, testing and certification of equipment, as well as for checking the results of neutron-activation, X-ray spectral, X-ray radiometric, X-ray fluorescence and other nuclear-physical methods of analysis, are reviewed. It is shown that the choice of SS preparation method is determined by the peculiarities of the analysis method for which the calibration SS is designed. Experience in the application of SS imitators of element composition in interlaboratory comparisons testifies to the potential of synthetic SS for calibration in different methods of analysis, including nuclear-physical ones
Patel Satish A; Hariyani Kaushik P
2012-01-01
The present manuscript describes a simple, sensitive, rapid, accurate, precise and economical spectrophotometric method for the simultaneous determination of Diclofenac sodium and Tolperisone hydrochloride in bulk and in a synthetic mixture. The method is based on simultaneous equations for the analysis of both drugs, using methanol as the solvent. Diclofenac sodium has an absorbance maximum at 281 nm and Tolperisone hydrochloride has an absorbance maximum at 255 nm in methanol. The linearity was obtained in...
Patel Paresh U; Patel Sejal K; Patel Umang J
2012-01-01
The present manuscript describes a simple, sensitive, rapid, accurate, precise and economical spectrophotometric method for the simultaneous determination of diclofenac sodium and Eperisone hydrochloride in bulk and in a synthetic mixture. The method is based on simultaneous equations for the analysis of both drugs, using methanol as the solvent. Diclofenac sodium has an absorbance maximum at 281 nm and Eperisone hydrochloride has an absorbance maximum at 255 nm in methanol. The linearity was obtained in the...
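The simultaneous-equation (Vierordt) calculation underlying these two-drug spectrophotometric methods can be sketched as a 2 × 2 linear solve; the absorptivity values below are invented for illustration, and real values would come from calibration in methanol:

```python
import numpy as np

# Sketch of the simultaneous-equation (Vierordt) method: absorbance at each
# wavelength is a linear combination of the two drug concentrations.
# Rows: wavelengths (281 nm, 255 nm); columns: (drug A, drug B).
# These absorptivity values are hypothetical, for illustration only.
E = np.array([[0.030, 0.008],
              [0.011, 0.042]])

# Simulate a mixture of 10 and 20 ug/mL and "measure" its absorbances.
true_conc = np.array([10.0, 20.0])
absorbance = E @ true_conc

# Recover the concentrations by solving the 2x2 linear system.
conc = np.linalg.solve(E, absorbance)
print(np.round(conc, 6))   # -> [10. 20.]
```

In practice the absorbances at the two maxima are measured, and the same solve recovers both concentrations from a single mixture spectrum.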
Accelerated weight histogram method for exploring free energy landscapes
Energy Technology Data Exchange (ETDEWEB)
Lindahl, V.; Lidmar, J.; Hess, B. [Department of Theoretical Physics and Swedish e-Science Research Center, KTH Royal Institute of Technology, 10691 Stockholm (Sweden)
2014-07-28
Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
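The adaptive-bias idea can be sketched in a deliberately simplified, Wang-Landau-like form (this is not the AWH update rule itself; the potential, bin layout and increments are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of adaptive-bias sampling: a bias g(x) is raised wherever the
# walker sits, so the biased walk ends up sampling the coordinate nearly
# uniformly, and -g (up to a constant) estimates the free energy.
def U(x):
    return (x**2 - 1.0)**2            # double well: minima at +/-1, barrier 1 kT

edges = np.linspace(-1.8, 1.8, 37)    # 36 bins along the reaction coordinate
g = np.zeros(edges.size - 1)          # adaptive bias (log weight) per bin

def bin_of(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, g.size - 1))

x = 0.0
for step in range(300_000):
    delta = 0.05 if step < 150_000 else 0.005   # shrink the update late on
    xn = x + rng.normal(0.0, 0.2)
    if edges[0] < xn < edges[-1]:
        # Metropolis step on the biased potential U(x) + g(bin(x)).
        dE = (U(xn) + g[bin_of(xn)]) - (U(x) + g[bin_of(x)])
        if dE <= 0 or rng.random() < np.exp(-dE):
            x = xn
    g[bin_of(x)] += delta             # raise the bias where we currently are

F = -g - (-g).min()                   # free energy estimate (kT), min at 0
barrier = F[bin_of(0.0)] - F[bin_of(1.0)]
print(round(float(barrier), 2))       # roughly 1, the true barrier height
```

AWH proper differs in using a probability weight histogram and a target distribution (possibly free-energy dependent, as in the paper), but the feedback loop between visits and bias is the shared core idea.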
Acceleration of monte Carlo solution by conjugate gradient method
International Nuclear Information System (INIS)
Toshihisa, Yamamoto
2005-01-01
The conjugate gradient (CG) method was applied to accelerate Monte Carlo solutions of fixed-source problems. The equilibrium-model-based formulation enables the use of the CG scheme, as well as an initial guess, to maximize computational performance. The method is applicable to arbitrary geometry provided that the neutron source distribution in each subregion can be regarded as flat. Even when this is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual error is estimated by Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme, and their performance was compared on a simple 1-D slab heterogeneous test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by a factor of four in the test problem. Although direct estimation of the method's efficiency is impossible, mainly because of the strong problem dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)
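A minimal deterministic CG sketch of the kind being accelerated may help; the Monte Carlo variant of the paper replaces the exact residual with a statistically sampled one, and the tridiagonal test matrix below is merely an illustrative stand-in for the transport operator:

```python
import numpy as np

# Minimal conjugate-gradient solver for a symmetric positive-definite system.
def cg(A, b, x0=None, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                     # residual (sampled statistically in
    p = r.copy()                      # the Monte Carlo variant of the paper)
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new search direction, A-conjugate
        rs = rs_new
    return x

# 1-D diffusion-like tridiagonal test system (SPD), purely illustrative.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print(float(np.max(np.abs(A @ x - b))) < 1e-8)   # -> True
```

A preconditioner, such as the Sparse-M option mentioned in the abstract, would enter by replacing `r` with a preconditioned residual in the direction updates.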
International Nuclear Information System (INIS)
Azmy, Y.Y.
1999-01-01
The author proposes preconditioning as a viable acceleration scheme for the inner iterations of transport calculations in slab geometry. In particular he develops Adjacent-Cell Preconditioners (AP) that have the same coupling stencil as cell-centered diffusion schemes. For lowest order methods, e.g., Diamond Difference, Step, and the 0-order Nodal Integral Method (ONIM), cast in a Weighted Diamond Difference (WDD) form, he derives AP for thick (KAP) and thin (NAP) cells that for model problems are unconditionally stable and efficient. For the First-Order Nodal Integral Method (INIM) he derives a NAP that possesses similarly excellent spectral properties for model problems. The two most attractive features of the new technique are: (1) its cell-centered coupling stencil, which makes it better suited than the standard edge-centered or point-centered Diffusion Synthetic Acceleration (DSA) methods for extension to multidimensional, higher-order situations; and (2) its spectral radius decreases with increasing cell thickness, to the extent that immediate pointwise convergence, i.e., convergence in one iteration, can be achieved for problems with sufficiently thick cells. He implemented these methods, augmented with appropriate boundary conditions and mixing formulas for material heterogeneities, in the test code APID, which he uses to successfully verify the analytical spectral properties for homogeneous problems. Furthermore, he conducts numerical tests to demonstrate the robustness of the KAP and NAP in the presence of sharp mesh or material discontinuities. He shows that the AP for WDD is highly resilient to such discontinuities, but for INIM a few cases occur in which the scheme does not converge; however, when it converges, AP greatly reduces the number of iterations required to achieve convergence
Synthetic methods for beam to beam power balancing capability of large laser facilities
International Nuclear Information System (INIS)
Chen Guangyu; Zhang Xiaomin; Zhao Runchang; Zheng Wanguo; Yang Xiaoyu; You Yong; Wang Chengcheng; Shao Yunfei
2011-01-01
To assess the output power balancing capability of large laser facilities, a synthetic method based on the beam-to-beam root-mean-square is presented. Firstly, a conversion process for the facilities from original beam-power data to regular data is given. The regular data approximately follow a normal distribution, and a correspondingly simple root-mean-square measure of beam-to-beam power balancing capability is given. Secondly, based on the theory of total control charts and cause-selecting control charts, control charts with root-mean-square are established, which show the short-term variation of the power balancing capability of the facilities. A mean rate of failure occurrence is also defined and used to describe the long-term trend of the global balancing capabilities of the facilities. Finally, the intuitive and efficient diagnostic advantages of the synthetic methods are illustrated by analysis of experimental data. (authors)
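The basic beam-to-beam root-mean-square figure can be sketched as follows; the sample beam powers are invented for illustration:

```python
import numpy as np

# Sketch of a beam-to-beam root-mean-square balance figure: normalize each
# beam's output power by the facility mean and take the RMS of the relative
# deviations. The sample powers below are hypothetical.
powers = np.array([2.05, 1.98, 2.10, 1.95, 2.02, 1.90, 2.08, 2.00])  # kJ

mean = powers.mean()
rel_dev = (powers - mean) / mean              # relative deviation per beam
rms_imbalance = np.sqrt(np.mean(rel_dev**2))  # beam-to-beam RMS

print(round(float(rms_imbalance) * 100, 2))   # percent imbalance
```

A control chart would then track this RMS figure shot by shot, flagging excursions beyond statistically derived limits.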
Acceleration methods for multi-physics compressible flow
Peles, Oren; Turkel, Eli
2018-04-01
In this work we investigate the Runge-Kutta (RK)/implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems, including turbulent, reactive and two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/implicit smoother for time-dependent problems and for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/implicit smoother requires an approximation of the source-term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation
Computer codes and methods for simulating accelerator driven systems
International Nuclear Information System (INIS)
Sartori, E.; Byung Chan Na
2003-01-01
A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different information centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be applied to studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given of the effects of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the use of these methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)
Omics Methods for Probing the Mode of Action of Natural and Synthetic Phytotoxins
Duke, Stephen O.; Bajsa, Joanna; Pan, Zhiqiang
2013-01-01
For a little over a decade, omics methods (transcriptomics, proteomics, metabolomics, and physionomics) have been used to discover and probe the mode of action of both synthetic and natural phytotoxins. For mode of action discovery, the strategy for each of these approaches is to generate an omics profile for phytotoxins with known molecular targets and to compare this library of responses to the responses of compounds with unknown modes of action. Using more than one omics approach enhances ...
Generalized Coarse-Mesh Rebalance Method for Acceleration of Neutron Transport Calculations
International Nuclear Information System (INIS)
Yamamoto, Akio
2005-01-01
This paper proposes a new acceleration method for neutron transport calculations: the generalized coarse-mesh rebalance (GCMR) method. The GCMR method is a unified scheme of the traditional coarse-mesh rebalance (CMR) and the coarse-mesh finite difference (CMFD) acceleration methods. Namely, by using an appropriate acceleration factor, the formulation of the GCMR method becomes identical to that of the CMR or CMFD method. This also indicates that the convergence property of the GCMR method can be controlled by the acceleration factor, since the convergence properties of the CMR and CMFD methods are generally different. In order to evaluate the convergence property of the GCMR method, a linearized Fourier analysis was carried out for a one-group homogeneous medium, and the results clarified the relationship between the acceleration factor and the spectral radius. It was also shown that the spectral radius of the GCMR method is smaller than those of the CMR and CMFD methods. Furthermore, the Fourier analysis showed that when an appropriate acceleration factor was used, the spectral radius of the GCMR method did not exceed unity in this study, in contrast to the results of the CMR or CMFD method. Application of the GCMR method to practical calculations will be easy when CMFD acceleration is already adopted in a transport code. By multiplying a coefficient (D_FD) of the finite difference formulation by a suitable acceleration factor, one can mitigate the numerical instability of the CMFD acceleration method
Screening method for piping wall loss by flow accelerated corrosion
International Nuclear Information System (INIS)
Ryu, Kyung Ha; Hwang, Il Soon; Lee, Na Young; Oh, Young Jin; Kim, Ji Hyun; Park, Jin Ho; Sohn, Chang Ho
2008-01-01
Flow accelerated corrosion (FAC) phenomenon has persisted in its impact on plant reliability and personnel safety. Unless we change the operation condition drastically, most parameters affecting FAC will not be effectively controlled. In order to help expand piping inspection coverage, we have developed a screening approach to monitor wall thinning by the direct current potential drop (DCPD) technique. To improve the applicability to complex piping networks such as the secondary cooling water system in PWRs, we devised the equipotential control method that can eliminate undesired leakage currents outside a measurement section. In this paper, we present Wide Range Monitoring (WiRM) and Narrow Range Monitoring (NaRM) with the Equipotential Switching Direct Current Potential Drop (ES-DCPD) method to rapidly monitor the thinning of piping. Based on the WiRM results, susceptible locations can be identified for further inspection by ultrasonic technique (UT). On-line monitoring of a thinned location can be made by NaRM. Finite element analysis results and a closed-form resistance model are developed for comparison with wall thinning measured by the developed DCPD technique. Verification experiments were conducted using UT as the reference. The results show that model predictions and the experimental results agree well, confirming that both WiRM and NaRM based on ES-DCPD can be applicable to FAC management efforts
Screening method for piping wall loss by flow accelerated corrosion
International Nuclear Information System (INIS)
Ryu, K.H.; Hwang, I.S.; Lee, N.Y.; Oh, Y.J.; Park, J.H.; Sohn, C.H.
2007-01-01
Flow accelerated corrosion (FAC) phenomenon has persisted in its impact on plant reliability and personnel safety. Unless we change the operation condition drastically, most parameters affecting FAC will not be effectively controlled. In order to help expand piping inspection coverage, we have developed a screening approach to monitor the wall thinning by a Direct Current Potential drop (DCPD) technique. To improve the applicability to the complex piping network such as the secondary cooling water system in PWR's, we devised the equipotential control method that can eliminate undesired leakage currents outside a measurement section. In this paper, we present Wide Range Monitoring (WiRM) and Narrow Range Monitoring (NaRM) with Equipotential Switching Direct Current Potential Drop (ES-DCPD) method to rapidly monitor the thinning of piping. Based on the WiRM results, susceptible locations can be identified for further inspection by Ultrasonic Technique (UT). On-line monitoring of a thinned location can be made by NaRM. Finite element analysis results and a closed-form resistance model are developed for the comparison with measured wall thinning by the developed DCPD technique. Verification experiments were conducted using UT as the reference. The result shows that model predictions and the experimental results agree well to confirm that both WiRM and NaRM based on ES-DCPD can be applicable to FAC management efforts. (author)
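The closed-form resistance reasoning behind DCPD thinning estimates can be sketched as follows; all dimensions and material values below are illustrative, not the paper's:

```python
import math

# Sketch of a closed-form resistance model for DCPD wall-thinning screening:
# for a fixed injected current, the potential drop across a pipe segment is
# V = I * rho * L / A, with conduction area A ~ (circumference) x (wall
# thickness) in a thin-wall approximation. Thinning therefore shows up as a
# proportional rise in V, so thickness can be tracked as t = t0 * V0 / V.
rho = 1.7e-7        # resistivity of carbon steel [ohm*m], approximate
L = 0.5             # monitored span between potential taps [m]
D = 0.2             # pipe diameter [m]
I = 10.0            # injected direct current [A]

def potential_drop(t):
    """Potential drop over the span for wall thickness t [m]."""
    area = math.pi * D * t          # thin-wall conduction area
    return I * rho * L / area

t0 = 8.0e-3                         # nominal wall thickness: 8 mm
V0 = potential_drop(t0)             # baseline reading
V = potential_drop(6.0e-3)          # reading after thinning to 6 mm

t_est = t0 * V0 / V                 # inferred thickness from the two readings
print(round(t_est * 1000, 3))       # -> 6.0  (mm)
```

The papers' finite element analysis refines this simple proportionality for real geometries and leakage-current paths, which is what the equipotential switching addresses.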
On some Aitken-like acceleration of the Schwarz method
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We illustrate this highly desirable property of our algorithm with large-scale computing experiments.
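The underlying Aitken delta-squared extrapolation, on which the Aitken-Schwarz procedure builds, can be shown on a toy linearly convergent scalar iteration (the fixed point below is merely a stand-in for a Schwarz interface iteration):

```python
# Sketch of Aitken delta-squared extrapolation: for an iteration with a
# linear rate of convergence, x_{n+1} - x* ~ r * (x_n - x*), three
# consecutive iterates determine the limit exactly in the purely linear
# scalar case, and approximately otherwise.
def aitken(x0, x1, x2):
    """Aitken extrapolation from three consecutive iterates."""
    denom = (x2 - x1) - (x1 - x0)
    return x2 - (x2 - x1) ** 2 / denom

# Linearly convergent scalar iteration x_{n+1} = 0.5*x_n + 1 (limit 2).
x0 = 0.0
x1 = 0.5 * x0 + 1.0
x2 = 0.5 * x1 + 1.0
print(aitken(x0, x1, x2))   # -> 2.0, the exact limit from just 3 iterates
```

The Schwarz variant applies this idea to the interface traces of the subdomain iteration, which is why only a few expensive subdomain solves (and little network traffic) are needed.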
Accelerated molecular dynamics methods: introduction and recent developments
International Nuclear Information System (INIS)
Uberuaga, Blas Pedro; Voter, Arthur F.; Perez, Danny; Shim, Y.; Amar, J.G.
2009-01-01
reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway - i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group
Accelerated molecular dynamics methods: introduction and recent developments
Energy Technology Data Exchange (ETDEWEB)
Uberuaga, Blas Pedro [Los Alamos National Laboratory; Voter, Arthur F [Los Alamos National Laboratory; Perez, Danny [Los Alamos National Laboratory; Shim, Y [UNIV OF TOLEDO; Amar, J G [UNIV OF TOLEDO
2009-01-01
reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway - i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group.
An, L.; Zhang, J.; Gong, L.
2018-04-01
Playing an important role in gathering information on damage to social infrastructure, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR processing is not always achievable. Developing a method for building damage detection that uses post-seismic data only is therefore of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysing classification method for building damage recognition.
VALU, AVX and GPU acceleration techniques for parallel FDTD methods
Yu, Wenhua
2013-01-01
This book introduces a general hardware acceleration technique that can significantly speed up FDTD simulations and their applications to engineering problems without requiring any additional hardware devices. This acceleration of complex problems can be efficient in saving both time and money, and once learned, these new techniques can be used repeatedly.
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
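The repeated-averaging principle behind PTAM can be illustrated on the classic slowly convergent oscillatory integral of sin(x)/x over (0, inf), whose value is pi/2; the quadrature rule and averaging depth below are illustrative choices:

```python
import math

# Sketch of repeated averaging: partial values of a slowly convergent
# oscillatory integral, recorded at successive zeros of the oscillation
# (between which the partial sums peak and trough), are averaged repeatedly;
# each pass damps the alternating error and accelerates convergence.
def simpson(f, a, b, n=200):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda x: math.sin(x) / x if x else 1.0

# Partial integrals up to successive zeros of sin(x): the sequence
# oscillates above and below the limit pi/2.
partials, total = [], 0.0
for k in range(1, 12):
    total += simpson(f, (k - 1) * math.pi, k * math.pi)
    partials.append(total)

# Repeated averaging: each pass replaces the sequence by midpoints of
# neighbours, collapsing the alternating error.
seq = partials
for _ in range(8):
    seq = [(a + b) / 2 for a, b in zip(seq, seq[1:])]

print(abs(seq[-1] - math.pi / 2) < 1e-3)   # -> True
```

PTAM applies the same kind of averaging to the oscillatory wavenumber integrand of the seismogram, using its peaks and troughs instead of analytic zero locations.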
A Novel Method for Vertical Acceleration Noise Suppression of a Thrust-Vectored VTOL UAV.
Li, Huanyu; Wu, Linfeng; Li, Yingjie; Li, Chunwen; Li, Hangyu
2016-12-02
Acceleration is of great importance in motion control for unmanned aerial vehicles (UAVs), especially during the takeoff and landing stages. However, the measured acceleration is inevitably polluted by severe noise. Therefore, a proper noise suppression procedure is required. This paper presents a novel method to reduce the noise in the measured vertical acceleration for a thrust-vectored tail-sitter vertical takeoff and landing (VTOL) UAV. In the new procedure, a Kalman filter is first applied to estimate the UAV mass by using the information in the vertical thrust and measured acceleration. The UAV mass is then used to compute an estimate of UAV vertical acceleration. The estimated acceleration is finally fused with the measured acceleration to obtain the minimum variance estimate of vertical acceleration. By doing this, the new approach incorporates the thrust information into the acceleration estimate. The method is applied to the data measured in a VTOL UAV takeoff experiment. Two other denoising approaches developed by former researchers are also tested for comparison. The results demonstrate that the new method is able to suppress the acceleration noise substantially. It also maintains the real-time performance in the final estimated acceleration, which is not seen in the former denoising approaches. The acceleration treated with the new method can be readily used in the motion control applications for UAVs to achieve improved accuracy.
A Novel Method for Vertical Acceleration Noise Suppression of a Thrust-Vectored VTOL UAV
Directory of Open Access Journals (Sweden)
Huanyu Li
2016-12-01
Acceleration is of great importance in motion control for unmanned aerial vehicles (UAVs), especially during the takeoff and landing stages. However, the measured acceleration is inevitably polluted by severe noise. Therefore, a proper noise suppression procedure is required. This paper presents a novel method to reduce the noise in the measured vertical acceleration for a thrust-vectored tail-sitter vertical takeoff and landing (VTOL) UAV. In the new procedure, a Kalman filter is first applied to estimate the UAV mass by using the information in the vertical thrust and measured acceleration. The UAV mass is then used to compute an estimate of UAV vertical acceleration. The estimated acceleration is finally fused with the measured acceleration to obtain the minimum variance estimate of vertical acceleration. By doing this, the new approach incorporates the thrust information into the acceleration estimate. The method is applied to the data measured in a VTOL UAV takeoff experiment. Two other denoising approaches developed by former researchers are also tested for comparison. The results demonstrate that the new method is able to suppress the acceleration noise substantially. It also maintains the real-time performance in the final estimated acceleration, which is not seen in the former denoising approaches. The acceleration treated with the new method can be readily used in the motion control applications for UAVs to achieve improved accuracy.
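The two-stage idea (mass estimation, then minimum-variance fusion) can be sketched with a scalar Kalman filter on simulated data; every number and model detail below is invented for illustration and is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: (1) a scalar extended Kalman filter estimates vehicle mass
# from vertical thrust and noisy measured acceleration; (2) the thrust-based
# acceleration T/m - g is fused with a measurement by inverse-variance
# (minimum-variance) weighting.
g = 9.81
true_mass = 2.0                          # kg
thrust = 25.0                            # N, held constant in this toy run
true_acc = thrust / true_mass - g        # vertical acceleration [m/s^2]

R = 1.0                                  # variance of the measurement noise
m_hat, P = 1.5, 1.0                      # initial mass estimate and variance

for _ in range(500):
    a_meas = true_acc + rng.normal(0.0, np.sqrt(R))
    # Measurement model: a_meas = thrust/m - g, linearized about m_hat.
    h = thrust / m_hat - g               # predicted measurement
    H = -thrust / m_hat**2               # d(a)/d(m)
    S = H * P * H + R
    K = P * H / S                        # Kalman gain
    m_hat += K * (a_meas - h)
    P *= (1 - K * H)

a_model = thrust / m_hat - g             # thrust-based acceleration estimate
var_model = (thrust / m_hat**2) ** 2 * P # its (approximate) variance

# Fuse the model value with a fresh measurement, weighting by variances.
a_meas = true_acc + rng.normal(0.0, np.sqrt(R))
w = R / (R + var_model)                  # weight on the model estimate
a_fused = w * a_model + (1 - w) * a_meas

print(round(float(m_hat), 2), round(float(a_fused), 2))
```

Because the thrust-based estimate carries almost no high-frequency noise once the mass has converged, the fused signal stays both smooth and unbiased, which is the property the paper exploits for real-time control.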
Complexity analysis of accelerated MCMC methods for Bayesian inversion
International Nuclear Information System (INIS)
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M
2013-01-01
approximation methods used, in order for the accelerations of MCMC resulting from these strategies to lead to complexity reductions over ‘plain’ MCMC algorithms for the Bayesian inversion of PDEs. (paper)
A synthetic-eddy-method for generating inflow conditions for large-eddy simulations
International Nuclear Information System (INIS)
Jarrin, N.; Benhamadouche, S.; Laurence, D.; Prosser, R.
2006-01-01
The generation of inflow data for spatially developing turbulent flows is one of the challenges that must be addressed before LES can be applied to industrial flows and complex geometries. A new method for generating synthetic turbulence, suitable for complex geometries and unstructured meshes, is presented herein. The method is based on the classical view of turbulence as a superposition of coherent structures. It is able to reproduce prescribed first- and second-order one-point statistics, characteristic length and time scales, and the shape of coherent structures. The ability of the method to produce realistic inflow conditions is demonstrated for the test cases of spatially decaying homogeneous isotropic turbulence and of a fully developed turbulent channel flow. The method is systematically compared to other methods of generating inflow conditions (precursor simulation, spectral methods and algebraic methods)
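The superposition-of-eddies construction can be sketched in one dimension; the tent-shaped eddy, box size and normalization below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D sketch of the synthetic-eddy idea: fluctuations are built as a
# superposition of compact "eddies" with random positions and signs, with a
# normalization chosen so a prescribed second-order statistic (here, unit
# variance) is recovered on average.
N = 2000                      # number of eddies
sigma = 0.1                   # eddy length scale
L = 10.0                      # periodic box length
x = np.linspace(0.0, L, 500, endpoint=False)

xj = rng.uniform(0.0, L, N)               # eddy centres
eps = rng.choice([-1.0, 1.0], N)          # random intensities

def phi(s):
    # Compact tent shape, scaled so the integral of phi^2 over |s|<1 is 1.
    return np.sqrt(1.5) * np.maximum(1.0 - np.abs(s), 0.0)

u = np.zeros_like(x)
for c, e in zip(xj, eps):
    r = (x - c + L / 2) % L - L / 2       # periodic distance to eddy centre
    u += e * np.sqrt(L / sigma) * phi(r / sigma)
u /= np.sqrt(N)

print(round(float(u.var()), 2))           # close to the prescribed variance 1
```

The full method extends this to three dimensions and time, and shapes the eddy amplitudes with a Cholesky factor of the prescribed Reynolds-stress tensor to match anisotropic second-order statistics.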
Demonstration recommendations for accelerated testing of concrete decontamination methods
Energy Technology Data Exchange (ETDEWEB)
Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.
1995-12-01
A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are 137Cs, 238U (and its daughters), 60Co, 90Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10^8 ft^2, or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.
Energy Technology Data Exchange (ETDEWEB)
Shlikhter, E B; Khor' kov, A V; Zhorov, Yu M
1980-11-01
Promising methods for obtaining synthetic liquid fuel from coal are surveyed and described: thermal dissolution of coal by means of a hydrogen-donor solvent; hydrogenation; gasification with subsequent synthesis; and pyrolysis. A technological and economic assessment of the above processes is given. Emphasis is placed on methods employing catalytic conversion of methanol into hydrocarbon fuels. On the basis of thermodynamic calculations of the process for obtaining high-calorific liquid fuel from methanol, the possibility of obtaining diesel fractions as well as gasoline is demonstrated. (12 refs.) (In Russian)
Third order TRANSPORT with MAD [Methodical Accelerator Design] input
International Nuclear Information System (INIS)
Carey, D.C.
1988-01-01
This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix
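The transfer-matrix formalism behind codes such as TRANSPORT composes per-element matrices to map ray coordinates through a beamline. The first-order (2×2, one transverse plane) version can be sketched as follows; the drift lengths and focal length are hypothetical, and a real TRANSPORT calculation carries these maps to second and third order.

```python
import numpy as np

def drift(L):
    # First-order (x, x') transfer matrix of a field-free drift of length L.
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_lens(f):
    # Thin-lens focusing element with focal length f.
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Beamline: drift - lens - drift; matrices compose right-to-left.
M = drift(1.0) @ thin_lens(1.0) @ drift(1.0)

# A ray entering parallel to the axis at x = 1 crosses the axis at the
# focal plane, one focal length downstream of the lens.
x_out, xp_out = M @ np.array([1.0, 0.0])
```

Higher-order codes extend each element map with second- and third-order aberration terms, but the composition rule stays the same.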
THE SYNTHETIC-OVERSAMPLING METHOD: USING PHOTOMETRIC COLORS TO DISCOVER EXTREMELY METAL-POOR STARS
Energy Technology Data Exchange (ETDEWEB)
Miller, A. A., E-mail: amiller@astro.caltech.edu [Jet Propulsion Laboratory, 4800 Oak Grove Drive, MS 169-506, Pasadena, CA 91109 (United States)
2015-09-20
Extremely metal-poor (EMP) stars ([Fe/H] ≤ −3.0 dex) provide a unique window into understanding the first generation of stars and early chemical enrichment of the universe. EMP stars are exceptionally rare, however, and the relatively small number of confirmed discoveries limits our ability to exploit these near-field probes of the first ∼500 Myr after the Big Bang. Here, a new method to photometrically estimate [Fe/H] from only broadband photometric colors is presented. I show that the method, which utilizes machine-learning algorithms and a training set of ∼170,000 stars with spectroscopically measured [Fe/H], produces a typical scatter of ∼0.29 dex. This performance is similar to what is achievable via low-resolution spectroscopy, and outperforms other photometric techniques, while also being more general. I further show that a slight alteration to the model, wherein synthetic EMP stars are added to the training set, yields the robust identification of EMP candidates. In particular, this synthetic-oversampling method recovers ∼20% of the EMP stars in the training set, at a precision of ∼0.05. Furthermore, ∼65% of the false positives from the model are very metal-poor stars ([Fe/H] ≤ −2.0 dex). The synthetic-oversampling method is biased toward the discovery of warm (∼F-type) stars, a consequence of the targeting bias from the Sloan Digital Sky Survey/Sloan Extension for Galactic Understanding survey. This EMP selection method represents a significant improvement over alternative broadband optical selection techniques. The models are applied to >12 million stars, with an expected yield of ∼600 new EMP stars, which promises to open new avenues for exploring the early universe.
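The core of the synthetic-oversampling idea, generating extra training examples for a rare class by interpolating between real members, can be sketched in numpy. This is a SMOTE-style simplification, not the paper's actual pipeline, and the two-dimensional "colour" features are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def oversample_minority(X_min, n_new):
    """SMOTE-style synthetic oversampling: draw points on line segments
    between random pairs of minority-class samples."""
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_min), size=n_new)
    t = rng.random((n_new, 1))
    return X_min[i] + t * (X_min[j] - X_min[i])

# Hypothetical 2-D photometric-colour features for a rare (EMP-like) class.
X_rare = rng.normal(loc=[-3.2, 0.5], scale=0.1, size=(20, 2))
X_synth = oversample_minority(X_rare, n_new=500)
```

The synthetic points lie within the convex hull of the real samples, so they rebalance the class counts without inventing feature values outside the observed range.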
International Nuclear Information System (INIS)
Valdes Parra, J.J.
1986-01-01
One of the main problems in reactor physics is to determine the neutron distribution in the reactor core, since knowing it one can calculate the rates of the various nuclear reactions occurring inside the core. Among the theories of nuclear reactor physics, neutron transport theory is the one whose equations govern the exact behavior of the neutron distribution. Even within transport theory there exist different solution methods, each an approximation to the exact solution; moreover, in pursuit of greater precision, most methods have turned to numerical solution so as to take advantage of modern computers, and for this reason a great deal of effort is devoted to the numerical solution of the neutron transport equations. Accordingly, in this work a computer program has been developed that uses a relatively new technique known as 'diffusion synthetic acceleration', applied to solve the neutron transport equation with classical schemes of spatial integration, obtaining results with a smaller number of iterations than those obtained without such acceleration (Author)
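In linear-algebra terms, the synthetic-acceleration idea behind this and the preceding records can be viewed as a Richardson iteration preconditioned by a cheap low-order operator. The sketch below uses an illustrative model system, not an actual discretized transport operator: plain "source iteration" converges at the scattering-ratio rate c, while a low-order synthetic solve removes the slowly converging, flat (diffusion-like) error mode.

```python
import numpy as np

n = 40
c = 0.99                       # scattering ratio: near-diffusive, SI is slow
J = np.ones((n, n)) / n        # flat averaging: the cheap "low-order" operator
eps = 0.1
S = (1 - eps) * J + eps * np.eye(n)   # stand-in for the scattering operator
A = np.eye(n) - c * S          # fixed-source problem A x = q
q = np.linspace(1.0, 2.0, n)
x_exact = np.linalg.solve(A, q)

def richardson(M_inv, tol=1e-9, max_it=50_000):
    # x <- x + M^{-1}(q - A x): source iteration when M = I,
    # synthetic acceleration when M is a low-order approximation of A.
    x = np.zeros(n)
    for k in range(1, max_it + 1):
        r = q - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + M_inv @ r
    return x, max_it

# Plain source iteration: error decays only like c^k ~ 0.99^k.
x_si, it_si = richardson(np.eye(n))
# Synthetic acceleration: the low-order solve with I - cJ removes the
# flat error mode, leaving a fast rate of roughly c * eps.
x_syn, it_syn = richardson(np.linalg.inv(np.eye(n) - c * J))
```

Both iterations reach the same answer, but the preconditioned one needs orders of magnitude fewer sweeps, which is the behavior DSA exhibits on diffusive transport problems.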
Envelope method for determination of the ion linear accelerator acceptance
International Nuclear Information System (INIS)
Sharshanov, A.A.; Goncharenko, I.I.; Revutskij, E.I.
1974-01-01
The acceptance defined by the slit u² ≤ a² in the space (u, ν, z) (u = coordinate of the accelerated particle in the direction perpendicular to the accelerator axis, ν = ratio of the transverse particle velocity component to the longitudinal component, z = accelerator axis, a = dimension of the slit) represents a convex curvilinear polygon with centre of symmetry at the origin of the co-ordinates. The sides of the polygon are sections of ellipses and straight lines, the ellipses being part of an envelope to the set of prototypes of all cross-sections of the slit in the planes z = ξ, where 0 ≤ ξ ≤ Z and Z is the length of the accelerator, and the straight lines are tangents to the ends of the envelope. In the paper the equations of the ellipses forming the sides of the polygon are written using an elementary variable matrix of the accelerator structure, and the co-ordinates of the polygon apexes are found. A numerical value is derived for the area of the polygon for one transverse co-ordinate of the particular accelerator, the pre-stripping section of the LUMZI-10. (author)
Energy Technology Data Exchange (ETDEWEB)
Guida, Mateus Rodrigues; Alves Filho, Hermes; Barros, Ricardo C., E-mail: mguida@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-Graduacao em Modelagem Computacional
2015-07-01
The scattering source iterative (SI) scheme is traditionally applied to converge fine-mesh numerical solutions of fixed-source discrete ordinates (S_N) neutron transport problems with linearly anisotropic scattering. The SI scheme is very simple to implement from a computational viewpoint. However, it may show a very slow convergence rate, mainly for diffusive media (low absorption) several mean free paths in extent. In this work we describe two acceleration techniques based on improved initial guesses for the SI scheme, wherein we initialize the scattering source distribution within the slab using the P_1 and P_3 approximations. In order to estimate these initial guesses, we use the coarse-mesh solution of the P_N equations with special boundary conditions that account for the classical S_N prescribed boundary conditions, including vacuum boundary conditions. To apply this coarse-mesh P_N solution in the accelerated scheme, we first perform within-node spatial reconstruction, and then we determine the fine-mesh average scalar flux and total current to initialize the linearly anisotropic scattering source terms for the SI scheme. We consider a number of numerical experiments to illustrate the efficiency of the proposed P_N synthetic acceleration (P_NSA) technique based on this initial guess. (author)
Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best-case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
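The weight-fitting step at the heart of SCM can be sketched as a constrained least-squares problem: choose nonnegative donor weights summing to one that best reproduce the treated unit's pre-intervention outcomes. The projected-gradient sketch below uses hypothetical data; real SCM implementations also match covariates and solve a nested optimization over predictor weights.

```python
import numpy as np

rng = np.random.default_rng(7)

def project_simplex(v):
    # Euclidean projection onto {w : w >= 0, sum(w) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / k > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def scm_weights(Y_donors, y_treated, steps=2000, lr=0.01):
    # Projected gradient descent on ||Y w - y||^2 over the simplex.
    m = Y_donors.shape[1]
    w = np.full(m, 1.0 / m)
    for _ in range(steps):
        grad = 2.0 * Y_donors.T @ (Y_donors @ w - y_treated)
        w = project_simplex(w - lr * grad)
    return w

# Hypothetical pre-intervention outcomes: 40 periods x 5 donor units.
Y = rng.random((40, 5))
# Treated unit built as a known mixture of donors, so a near-exact fit exists.
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y = Y @ w_true
w_hat = scm_weights(Y, y)
synthetic_control = Y @ w_hat
```

The simplex constraint is what keeps the synthetic control an interpretable, non-extrapolating mixture of actual donor jurisdictions.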
International Nuclear Information System (INIS)
Laraufie, Romain; Deck, Sébastien
2013-01-01
Highlights: • Presents various Reynolds-stress reconstruction methods from a RANS-SA flow field. • Quantifies the accuracy of the reconstruction methods for a wide range of Reynolds numbers. • Evaluates the capabilities of the overall process (reconstruction + SEM). • Provides practical guidelines to realize a streamwise RANS/LES (or WMLES) transition. -- Abstract: Hybrid or zonal RANS/LES approaches are recognized as the most promising way to accurately simulate complex unsteady flows under current computational limitations. One still-open issue concerns the transition from a RANS to an LES or WMLES resolution in the streamwise direction when near-wall turbulence is involved. Turbulence content then has to be prescribed at the transition to prevent turbulence decay, which can lead to flow relaminarization. The present paper proposes an efficient way to generate this switch within the flow, based on a synthetic turbulence inflow condition named the Synthetic Eddy Method (SEM). As knowledge of the full Reynolds-stress tensor is often missing, the scope of this paper is focused on generating the quantities required at the SEM inlet from a RANS calculation, namely the first- and second-order statistics of the aerodynamic field. Three different methods based on two different approaches are presented and their capability to accurately generate the needed aerodynamic values is investigated. Then, the ability of the SEM + reconstruction combination to manufacture well-behaved turbulence is demonstrated on spatially developing flat-plate turbulent boundary layers. At the same time, important intrinsic features of the Synthetic Eddy Method are pointed out. The necessity of introducing accurate data into the SEM with regard to the outer part of the boundary layer is illustrated. Finally, user guidelines are given depending on the Reynolds number based on the momentum thickness, since one method is suitable for low Reynolds number while the
International Nuclear Information System (INIS)
Fujimura, Toichiro; Okumura, Keisuke
2002-11-01
A prototype version of a diffusion code has been developed to analyze hexagonal cores, such as that of a reduced-moderation reactor, and the applicability of some acceleration methods to speeding up the convergence of the iterative solution method has been investigated. In the three-dimensional code MOSRA-Prism the hexagonal core is divided into regular triangular prisms, and a polynomial expansion nodal method is applied to approximate the neutron flux distribution by a cubic polynomial. The multi-group diffusion equation is solved iteratively with ordinary inner and outer iterations, and the effectiveness of the acceleration methods is ascertained by applying an adaptive acceleration method and a neutron source extrapolation method, respectively. The formulation of the polynomial expansion nodal method is outlined in the report, and the local and global effectiveness of the acceleration methods is discussed with various sample calculations. A new general expression of the vacuum boundary condition, derived in the formulation, is also described. (author)
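The "source extrapolation" idea mentioned above resembles Aitken delta-squared extrapolation of a linearly converging sequence of iterates. A scalar sketch, with illustrative numbers not taken from the report, shows the effect: for an exactly linear error decay, three successive iterates suffice to recover the limit.

```python
# A scalar fixed-point iteration x_{k+1} = g(x_k) with linear convergence
# rate rho, standing in for a slowly converging outer (source) iteration.
rho = 0.95
g = lambda x: rho * x + 1.0          # fixed point x* = 1/(1 - rho) = 20

def iterate(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

xs = iterate(0.0, 12)
x_plain = xs[-1]                     # still far from 20 after 12 sweeps

# Aitken delta-squared extrapolation from the last three iterates:
# for an exactly linear error decay this recovers the limit immediately.
x0, x1, x2 = xs[-3], xs[-2], xs[-1]
x_extrap = x2 - (x2 - x1) ** 2 / (x2 - 2.0 * x1 + x0)
```

In a real outer iteration the error is only asymptotically linear (dominated by the slowest eigenmode), so extrapolation is applied periodically rather than once.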
Laser-driven ion acceleration: methods, challenges and prospects
Badziak, J.
2018-01-01
The recent development of laser technology has resulted in the construction of short-pulse lasers capable of generating fs light pulses with PW powers and intensities exceeding 10²¹ W/cm², and has laid the basis for the multi-PW lasers, now being built in Europe, that will produce fs pulses of ultra-relativistic intensities ~10²³–10²⁴ W/cm². The interaction of such an intense laser pulse with a dense target can result in the generation of collimated beams of ions of multi-MeV to GeV energies, of sub-ps duration, and of extremely high beam intensities and ion fluences, barely attainable with conventional RF-driven accelerators. Ion beams with such unique features have the potential for application in various fields of scientific research as well as in medical and technological developments. This paper provides a brief review of the state of the art in laser-driven ion acceleration, with a focus on basic ion acceleration mechanisms and the production of ultra-intense ion beams. The challenges facing laser-driven ion acceleration studies, in particular those connected with potential applications of laser-accelerated ion beams, are also discussed.
Development of wide area environment accelerator operation and diagnostics method
Uchiyama, Akito; Furukawa, Kazuro
2015-08-01
Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
Prospects and technical and economic evaluation of methods for obtaining synthetic liquid from coal
Energy Technology Data Exchange (ETDEWEB)
Shlikhter, E B; Khor' kov, A V; Zhorov, Y M
1980-11-01
Rising oil prices and the exhaustion of cheap organic fuels point to the need for chemical processing of coal to obtain synthetic liquid fuels. Added importance for such development in the USSR is dictated by the remote location of many coal deposits, such as the Kansko-Achinsk basin. The synthesis methods described include thermal dissolution in a hydrogen-donor solvent, hydrogenation, and gasification with subsequent synthesis and pyrolysis. The need for improved technology is stressed. Cost factors are related to the chemical process involved, rather than to losses in fuel quantities, and the methanol produced is readily transported by pipeline. It can be used for both gasoline and diesel fuels.
Method for In-vivo Synthetic Aperture B-flow Imaging
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2004-01-01
B-flow techniques introduced in commercial scanners have been useful in visualizing places of flow. The method is relatively independent of flow angle and can give a good perception of vessel location and turbulence. This paper introduces a technique for making a synthetic aperture B-flow system. The signal received by the 64 elements closest to the emission is sampled at 40 MHz and 12 bits at a pulse repetition frequency of 3 kHz. A full second of data is acquired from a healthy 29-year-old male volunteer from the carotid artery. The data is beamformed, combined, and echo canceled off-line. High-pass ...
Wang, Harris H; Church, George M
2011-01-01
Engineering at the scale of whole genomes requires fundamentally new molecular biology tools. Recent advances in recombineering using synthetic oligonucleotides enable the rapid generation of mutants at high efficiency and specificity and can be implemented at the genome scale. With these techniques, libraries of mutants can be generated, from which individuals with functionally useful phenotypes can be isolated. Furthermore, populations of cells can be evolved in situ by directed evolution using complex pools of oligonucleotides. Here, we discuss ways to utilize these multiplexed genome engineering methods, with special emphasis on experimental design and implementation. Copyright © 2011 Elsevier Inc. All rights reserved.
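The oligonucleotide-design step of such multiplexed genome-engineering workflows can be sketched as simple sequence manipulation: build a single-stranded oligo carrying the desired point mutation flanked by homology arms. The genome, position, and flank length below are all hypothetical toy values; real designs must also consider strand choice (lagging strand), secondary structure, and mismatch-repair evasion.

```python
import random

def mutagenic_oligo(genome, pos, new_base, flank=45):
    """Return a single-stranded oligo carrying a point mutation at `pos`,
    with `flank` bases of homology on each side."""
    left = genome[pos - flank:pos]
    right = genome[pos + 1:pos + 1 + flank]
    return left + new_base + right

def reverse_complement(seq):
    # Used when the oligo must target the opposite (lagging) strand.
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

# Toy genome and a hypothetical single-base substitution to G at position 100.
random.seed(3)
genome = "".join(random.choice("ACGT") for _ in range(200))
oligo = mutagenic_oligo(genome, 100, "G")
```

A library of such oligos, one per target site, is what gets pooled for the multiplexed recombineering cycles the chapter describes.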
Development of a fast voltage control method for electrostatic accelerators
International Nuclear Information System (INIS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2014-01-01
The concept of a novel fast voltage control loop for tandem electrostatic accelerators is described. This control loop utilises high-frequency components of the ion beam current intercepted by the image slits to generate a correction voltage that is applied to the first few gaps of the low- and high-energy acceleration tubes adjoining the high-voltage terminal. New techniques for the direct measurement of the transfer function of an ultra-high-impedance structure, such as an electrostatic accelerator, have been developed. For the first time, the transfer function for the fast feedback loop has been measured directly. Slow voltage variations are stabilised with the common corona control loop, and the relationship between the transfer functions for the slow and new fast control loops required for optimum operation is discussed. The main source of terminal voltage instabilities, which are due to variation of the charging current caused by mechanical oscillations of the charging chains, has been analysed
Directory of Open Access Journals (Sweden)
Junyi Li
2017-01-01
Full Text Available A BP (backpropagation) neural network method is employed to address a problem in present-day processing of the synthetic characteristic curves of hydroturbines: most studies are concerned only with data in the high-efficiency, large guide-vane-opening area, which can hardly meet the requirements of transition-process research, especially in large-fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine to torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the sample data obtained can be extended successfully to cover wider working areas under different operation conditions. Another major contribution of this paper is the resampling technique proposed to overcome the limitation of sample-period simulation. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head-pressure calculation. Actual applications verify that the methods proposed in this paper give better simulation results, closer to the field data, and provide a new perspective for fitting and modeling the synthetic characteristic curves of hydroturbines.
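The BP-network curve-fitting idea can be sketched with a one-hidden-layer network trained by plain backpropagation. The target function below is a generic smooth curve standing in for a turbine characteristic, and the network size and learning rate are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a hydroturbine characteristic curve:
# a smooth 1-D function sampled on a grid.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(2.0 * x)

# One-hidden-layer tanh network trained by full-batch backpropagation.
H = 16
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)          # forward pass
    y_hat = h @ W2 + b2
    err = y_hat - y
    losses.append(float(np.mean(err ** 2)))
    g2 = 2.0 * err / len(x)           # backward pass: MSE gradient
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ g2); b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (x.T @ g1); b1 -= lr * g1.sum(axis=0)
```

Once trained, evaluating the network is cheap, which is why a fitted surrogate of the characteristic curve can be queried directly inside a real-time transient simulation.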
The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation
Directory of Open Access Journals (Sweden)
Bing-Yuan Pu
2013-01-01
Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
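The baseline such aggregation-plus-extrapolation schemes accelerate is the standard PageRank power iteration on the Google matrix, which can be sketched in a few lines. The 4-page link graph and damping factor below are hypothetical toy values.

```python
import numpy as np

# Column-stochastic link matrix of a tiny 4-page web (hypothetical graph):
# entry P[i, j] is the probability of following a link from page j to page i.
P = np.array([[0.0, 0.5, 0.0, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 1.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])

alpha = 0.85                  # damping factor
n = 4
# Google matrix: link-following with probability alpha, teleport otherwise.
G = alpha * P + (1 - alpha) * np.ones((n, n)) / n

# Power iteration for the stationary probability vector pi = G pi.
pi = np.full(n, 1.0 / n)
for _ in range(200):
    pi = G @ pi
    pi /= pi.sum()            # keep pi a probability vector
```

Power iteration converges at rate alpha per step; the multilevel aggregation and vector-extrapolation machinery in the paper exists precisely to beat this rate on large, slowly mixing webs.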
Kerr, W; Pierce, S G; Rowe, P
2016-12-01
Synthetic aperture imaging methods have been employed widely in recent research in non-destructive testing (NDT), but uptake has been more limited in medical ultrasound imaging. Typically offering superior focussing power over more traditional phased array methods, these techniques have been employed in NDT applications to locate and characterise small defects within large samples, but have rarely been used to image surfaces. A desire to ultimately employ ultrasonic surface imaging for bone surface geometry measurement prior to surgical intervention motivates this research, and results are presented for initial laboratory trials of a surface reconstruction technique based on global thresholding of ultrasonic 3D point cloud data. In this study, representative geometry artefacts were imaged in the laboratory using two synthetic aperture techniques: the Total Focusing Method (TFM) and the Synthetic Aperture Focusing Technique (SAFT), employing full and narrow synthetic apertures, respectively. Three high-precision metallic samples of known geometries (cuboid, sphere and cylinder), which featured a range of elementary surface primitives, were imaged using a 5 MHz, 128-element 1D phased array employing both SAFT and TFM approaches. The array was manipulated around the samples using a precision robotic positioning system, allowing repeatable ultrasound-derived 3D surface point clouds to be created. A global thresholding technique was then developed that allowed the extraction of the surface profiles, and these were compared with the known geometry samples to provide a quantitative measure of error of 3D surface reconstruction. The mean errors achieved with optimised SAFT imaging for the cuboidal, spherical and cylindrical samples were 1.3 mm, 2.9 mm and 2.0 mm respectively, while those for TFM imaging were 3.7 mm, 3.0 mm and 3.1 mm, respectively. These results were contrary to expectations given the higher information content associated with the TFM images. However, it was
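The Total Focusing Method itself is a delay-and-sum over every transmit/receive element pair of a full-matrix capture. The sketch below simulates a single point scatterer with idealized Gaussian echoes (no carrier, no attenuation) for a hypothetical 16-element array, then forms a TFM image; all geometry and signal parameters are illustrative.

```python
import numpy as np

c = 1500.0                      # speed of sound, m/s
fs = 50e6                       # sampling rate, Hz
n_el = 16
pitch = 0.5e-3
elems = (np.arange(n_el) - (n_el - 1) / 2) * pitch   # element x-positions, z = 0

# Ground-truth point scatterer and simulated full-matrix-capture data:
# for every tx/rx pair, a Gaussian echo at the round-trip delay.
xs, zs = 2.0e-3, 10.0e-3
d = np.sqrt((elems - xs) ** 2 + zs ** 2)             # element-scatterer distances
t = np.arange(2048) / fs
sigma = 0.2e-6                                       # echo envelope width, s
fmc = np.exp(-((t[None, None, :]
                - (d[:, None] + d[None, :])[:, :, None] / c) / sigma) ** 2)

# TFM: at each pixel, coherently sum every tx/rx pair at its focal delay.
x_grid = np.arange(-2e-3, 6.0001e-3, 0.2e-3)
z_grid = np.arange(6e-3, 14.0001e-3, 0.2e-3)
image = np.zeros((len(z_grid), len(x_grid)))
for iz, z in enumerate(z_grid):
    for ix, x in enumerate(x_grid):
        dist = np.sqrt((elems - x) ** 2 + z ** 2)
        delay = (dist[:, None] + dist[None, :]) / c  # round-trip time per pair
        idx = np.round(delay * fs).astype(int)       # nearest-sample lookup
        image[iz, ix] = np.abs(fmc[np.arange(n_el)[:, None],
                                   np.arange(n_el)[None, :], idx].sum())

iz_pk, ix_pk = np.unravel_index(image.argmax(), image.shape)
```

The image peaks at the scatterer location because only there do the computed focal delays align all 256 echoes; a SAFT variant would restrict the sum to a narrower subset of element pairs.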
LOO: a low-order nonlinear transport scheme for acceleration of method of characteristics
International Nuclear Information System (INIS)
Li, Lulu; Smith, Kord; Forget, Benoit; Ferrer, Rodolfo
2015-01-01
This paper presents a new physics-based multi-grid nonlinear acceleration method: the low-order operator method, or LOO. LOO uses a coarse space-angle multi-group method of characteristics (MOC) neutron transport calculation to accelerate the fine space-angle MOC calculation. LOO is designed to capture more angular effects than diffusion-based acceleration methods through a transport-based low-order solver. LOO differs from existing transport-based acceleration schemes in that it emphasizes simplified coarse space-angle characteristics and preserves physics in quadrant phase-space. The details of the method, including the restriction step, the low-order iterative solver and the prolongation step are discussed in this work. LOO shows comparable convergence behavior to coarse mesh finite difference on several two-dimensional benchmark problems while not requiring any under-relaxation, making it a robust acceleration scheme. (author)
Method and apparatus for accelerating a solid mass
International Nuclear Information System (INIS)
Tidman, D.A.; Goldstein, Y.A.
1984-01-01
An axi-symmetrical projectile, having a mass ranging from fractions of a gram to kilograms, is accelerated to velocities in the range of 10^5 to 10^7 centimeters per second by a propelling force produced by a plasma resulting from an electric discharge. The discharge is imploded against the projectile surface so that the lines of the magnetic field are approximately azimuthal around the projectile axis. The projectile is tapered so that it experiences a net, stable axial accelerating force along the accelerator axis by the combined action of the magnetic field producing radially directed momentum and pressure on the plasma, the interaction of the magnetic field with ions induced by the plasma on the surface, as well as material the plasma ablates from the surface. The plasma discharge is initiated either in a low-density background gas between the anode and cathode of a discharge module, or along an insulator surface between the electrodes in a low-density background gas. Alternatively, in either of these situations the discharge can be initiated in a gas which is produced by ablation of the projectile surface. In a further alternative, the projectile acts as a switch for triggering discharges. Eddy-current heating of the projectile is minimized by shaping the discharge current pulse so that the plasma has a relatively weak magnetic field when it arrives at the surface, or by making the projectile electrically non-conducting. To provide a long acceleration path, a series of modules is aligned. In one embodiment, the position of the projectile, as it advances between modules, is sensed and discharges are switched on sequentially in the modules
Development of the cybernetic methods in the new generation of superhigh-energy accelerators
International Nuclear Information System (INIS)
Vasil'ev, A.A.; Berezhnoj, V.A.
1985-01-01
The problems related to the use of cybernetic methods in the development of control systems for superhigh-energy accelerators, particularly for the control of the parameters which determine betatron particle oscillations, are discussed. It is pointed out that the development of a 1 TeV cybernetic accelerating complex, consisting of a linear accelerator-injector, a booster and a main accelerator, was started in the early 1960s. The conclusion is drawn that with the increase of accelerator energy, the increase of ring magnet perimeter and the decrease of vacuum chamber aperture, as well as the growing complexity of accelerating complexes and their operational modes and the increase of particle beam intensity, the use of cybernetic methods and of fully automated control systems built on their basis becomes ever more pressing
Ultrahigh impedance method to assess electrostatic accelerator performance
Directory of Open Access Journals (Sweden)
Nikolai R. Lobanov
2015-06-01
Full Text Available This paper describes an investigation of problem-solving procedures to troubleshoot electrostatic accelerators. A novel technique to diagnose issues with high-voltage components is described. The main application of this technique is noninvasive testing of electrostatic accelerator high-voltage grading systems, measuring insulation resistance, or determining the volume and surface resistivity of insulation materials used in column posts and acceleration tubes. In addition, this technique allows verification of the continuity of the resistive divider assembly as a complete circuit, revealing if an electrical path exists between equipotential rings, resistors, tube electrodes, and column post-to-tube conductors. It is capable of identifying and locating a “microbreak” in a resistor and of experimentally validating the transfer function of the high-impedance energy-control element. A simple and practical fault-finding procedure has been developed based on fundamental principles. The experimental distributions of relative resistance deviations (ΔR/R) for both accelerating tubes and posts were collected during five scheduled accelerator maintenance tank openings in 2013 and 2014. Components with measured |ΔR/R| > 2.5% were considered faulty and put through a detailed examination, with faults categorized. In total, thirty-four unique fault categories were identified and most would not be identifiable without the new technique described. The most common failure mode was permanent and irreversible insulator current leakage that developed after being exposed to the ambient environment. As a result of efficient in situ troubleshooting and fault-elimination techniques, the maximum values of |ΔR/R| are kept below 2.5% at the conclusion of maintenance procedures. The acceptance margin could be narrowed even further by a factor of 2.5 by increasing the test voltage from 40 V up to 100 V. Based on experience over the last two years, resistor and
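The ΔR/R screening step described above can be sketched as follows. This is a hypothetical helper, not code from the paper; only the 2.5% acceptance tolerance comes from the abstract, and the deviation is taken relative to the mean resistance of the divider chain as an illustrative choice.

```python
def flag_faulty(resistances, tolerance=0.025):
    """Return indices of divider resistors whose relative deviation
    (R - mean) / mean exceeds the tolerance (2.5% in the abstract)."""
    mean_r = sum(resistances) / len(resistances)
    return [i for i, r in enumerate(resistances)
            if abs((r - mean_r) / mean_r) > tolerance]
```

For example, a chain of nine nominal 100-ohm sections and one drifted 104-ohm section would flag only the drifted section for detailed examination.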
International Nuclear Information System (INIS)
Yin haihua; Yao Zhigang
2014-01-01
This article describes the environmental impact assessment methods for the radiation generated by a running medical linear accelerator. The material and thickness of the shielding walls and protective doors of the linear accelerator were already known; therefore, the radiation generated by the running medical linear accelerator can be evaluated against the normal range of the national standard by calculating the annual effective radiation dose received by the surrounding personnel. (authors)
PETSTEP: Generation of synthetic PET lesions for fast evaluation of segmentation methods
Berthon, Beatrice; Häggström, Ida; Apte, Aditya; Beattie, Bradley J.; Kirov, Assen S.; Humm, John L.; Marshall, Christopher; Spezi, Emiliano; Larsson, Anne; Schmidtlein, C. Ross
2016-01-01
Purpose This work describes PETSTEP (PET Simulator of Tracers via Emission Projection): a faster and more accessible alternative to Monte Carlo (MC) simulation generating realistic PET images, for studies assessing image features and segmentation techniques. Methods PETSTEP was implemented within Matlab as open source software. It allows generating three-dimensional PET images from PET/CT data or synthetic CT and PET maps, with user-drawn lesions and user-set acquisition and reconstruction parameters. PETSTEP was used to reproduce images of the NEMA body phantom acquired on a GE Discovery 690 PET/CT scanner, and simulated with MC for the GE Discovery LS scanner, and to generate realistic Head and Neck scans. Finally the sensitivity (S) and Positive Predictive Value (PPV) of three automatic segmentation methods were compared when applied to the scanner-acquired and PETSTEP-simulated NEMA images. Results PETSTEP produced 3D phantom and clinical images within 4 and 6 min respectively on a single core 2.7 GHz computer. PETSTEP images of the NEMA phantom had mean intensities within 2% of the scanner-acquired image for both background and largest insert, and 16% larger background Full Width at Half Maximum. Similar results were obtained when comparing PETSTEP images to MC simulated data. The S and PPV obtained with simulated phantom images were statistically significantly lower than for the original images, but led to the same conclusions with respect to the evaluated segmentation methods. Conclusions PETSTEP allows fast simulation of synthetic images reproducing scanner-acquired PET data and shows great promise for the evaluation of PET segmentation methods. PMID:26321409
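The sensitivity (S) and positive predictive value (PPV) used above to compare segmentation methods can be computed from binary lesion masks as follows. This is an illustrative helper under the standard definitions S = TP/(TP+FN) and PPV = TP/(TP+FP); it is not part of the PETSTEP package itself.

```python
import numpy as np

def sensitivity_ppv(segmented, truth):
    """Sensitivity and positive predictive value for binary lesion masks.

    segmented -- boolean-like array from the segmentation method
    truth     -- boolean-like ground-truth lesion mask
    """
    seg = np.asarray(segmented, dtype=bool)
    ref = np.asarray(truth, dtype=bool)
    tp = int(np.sum(seg & ref))    # voxels correctly labelled lesion
    fn = int(np.sum(~seg & ref))   # lesion voxels that were missed
    fp = int(np.sum(seg & ~ref))   # background voxels labelled lesion
    return tp / (tp + fn), tp / (tp + fp)
```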
Thermal decomposition of synthetic antlerite prepared by microwave-assisted hydrothermal method
Energy Technology Data Exchange (ETDEWEB)
Koga, Nobuyoshi [Chemistry Laboratory, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima 739-8524 (Japan)], E-mail: nkoga@hiroshima-u.ac.jp; Mako, Akira; Kimizu, Takaaki; Tanaka, Yuu [Chemistry Laboratory, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima 739-8524 (Japan)
2008-01-30
Copper(II) hydroxide sulfate was synthesized by a microwave-assisted hydrothermal method from a mixed solution of CuSO4 and urea. Needle-like crystals of ca. 20-30 μm in length, precipitated by irradiating with microwaves for 1 min, were characterized as Cu3(OH)4SO4, corresponding to the mineral antlerite. The reaction pathway and kinetics of the thermal decomposition of the synthetic antlerite Cu3(OH)4SO4 were investigated by means of thermoanalytical techniques complemented by powder X-ray diffractometry and microscopic observations. The thermal decomposition of Cu3(OH)4SO4 proceeded via two separate reaction steps of dehydroxylation and desulfation to produce CuO, where crystalline phases of Cu2OSO4 and CuO appeared as the intermediate products. The kinetic characteristics of the respective steps were discussed in comparison with those of the synthetic brochantite Cu4(OH)6SO4 reported previously.
Tien, Shin-Ming; Hsu, Chih-Yuan; Chen, Bor-Sen
2016-01-01
Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.
Method of predicting air pollution of coal mines with use of new synthetic materials
Energy Technology Data Exchange (ETDEWEB)
Sukhanov, V V; Putilina, O.N.
1988-08-01
Presents a methodological approach that enables, on the basis of laboratory experiments, a hygienic evaluation of synthetic materials used in coal mines to harden coal and rock masses, to prevent rock falls and caving, and for hermetization of ventilation equipment. Polyurethane, carbamidoformaldehyde and phenolformaldehyde plastic foams are studied in an experiment that examined the quantitative emission of substances from their original components in the process of forming contaminants. Synthetics in a beaker are placed in an exsiccator and mixed with air; samples of volatile particles are collected and the dynamics of their emission are calculated using regression and linear equations. Amounts of 2,4-toluenediisocyanate and diethylamine produced by polyurethane, and of formaldehyde and methanol from carbamidoformaldehyde, did not exceed maximum permissible concentrations; phenolformaldehyde plastic foam produced amounts of phenols and formaldehyde significantly higher than the maximum permissible concentrations. The laboratory procedure and the use of the formulae were confirmed by testing air in a Donetsugol' mine. Polyurethane and carbamidoformaldehyde did not contaminate air above hygienically safe limits, while phenolformaldehyde plastic foam exceeded safety limits, proving the need for hygienic measures to protect miners from its contaminants. The adequacy of the laboratory-mathematical method for evaluating emissions of harmful chemicals from resins under mining conditions shows the value of laboratory testing of many resins for safety in mine use. 4 refs.
International Nuclear Information System (INIS)
Laengstroem, B.; Sjoeberg, S.; Ragnarsson, U.
1981-01-01
11C-labelling of methionine residues in a synthetic peptide via the preparation of the corresponding protected, pure homocysteine peptide has been investigated. Complete deprotection of the peptide and specific methylation of the homocysteine residue can be performed in one step in liquid ammonia. As a first application of this method, the synthesis of the tripeptide Z-Gly-L-Hcy(Bzl)-Gly-O-Bzl, and its conversion to Gly-Met-Gly and the corresponding labelled Gly-([11C]-methyl)-Met-Gly, is reported. Starting with the protected peptide, the labelling was performed in 20 ± 5 min (starting from 11CO2), yielding the labelled peptide in 92 ± 5% radiochemical yield. Analyses and preparative LC can be performed within 6 min. (author)
Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images
Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong
2018-01-01
Ships on synthetic aperture radar (SAR) images will be severely defocused and their energy will disperse into numerous resolution cells under long SAR integration time. Therefore, the image intensity of ships is weak and sometimes even overwhelmed by sea clutter on SAR image. Consequently, it is hard to detect the ships from SAR intensity images. A ship detection method based on local region power spectrum of SAR complex image is proposed. Although the energies of the ships are dispersed on SAR intensity images, their spectral energies are rather concentrated or will cause the power spectra of local areas of SAR images to deviate from that of sea surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectra distortion of local areas of SAR images. The local region power spectrum of a moving target on SAR image is analyzed and the way to obtain the detection threshold through the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized and the detection results are also illustrated. Results show that the proposed method can well detect the unfocused ships, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, by comparing with some other algorithms, it indicates that the proposed method performs better under long SAR integration time. Finally, the applicability of the proposed method and the way of parameters selection are also discussed.
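The key idea above — flagging local areas whose power spectra deviate from the diffuse sea-surface background — can be illustrated with a toy detector. This sketch is not the paper's method: the paper derives its threshold from the probability density function of the power spectrum, whereas here a simple peak-to-mean spectral ratio and a hand-picked threshold stand in for that statistic, and all parameter values are illustrative.

```python
import numpy as np

def spectral_distortion(patch):
    """Peak-to-mean ratio of the local 2-D power spectrum. Energy of a
    (possibly defocused) moving target stays concentrated in a few spectral
    bins, giving a large ratio; diffuse clutter gives a small one."""
    power = np.abs(np.fft.fft2(patch)) ** 2
    return power.max() / power.mean()

def detect_ships(image, patch=16, threshold=50.0):
    """Tile a complex SAR image and flag tiles whose local power spectrum
    deviates strongly from the background (illustrative threshold)."""
    hits = []
    for i in range(0, image.shape[0] - patch + 1, patch):
        for j in range(0, image.shape[1] - patch + 1, patch):
            if spectral_distortion(image[i:i + patch, j:j + patch]) > threshold:
                hits.append((i, j))
    return hits
```

A pure complex exponential (the idealized return of a uniformly moving scatterer) concentrates all its energy in one spectral bin, so its ratio equals the number of bins in the tile, far above the clutter-only level of roughly log(N).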
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing times and storage capacity. In this paper, we propose a modified method for the sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slices reconstruction problem by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with existing 3-D sparse imaging method, performs better in reconstruction quality and the reconstruction time.
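A minimal sketch of the SL0 family with the tanh surrogate described above follows. This is not the authors' implementation: the paper's Newton-direction refinement is replaced here by a plain gradient step for brevity, and the sigma schedule, step size and iteration counts are illustrative choices.

```python
import numpy as np

def sl0_tanh(A, b, sigmas=(1.0, 0.5, 0.2, 0.1, 0.05), inner=20, mu=0.8):
    """Sparse recovery, SL0 style, using sum(tanh(x^2 / (2 sigma^2))) as a
    smooth surrogate for the l0 norm (it tends to ||x||_0 as sigma -> 0)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                          # minimum-l2 feasible start
    for sigma in sigmas:                    # graduated schedule: sigma -> 0
        for _ in range(inner):
            u = x**2 / (2 * sigma**2)
            grad = x * (1.0 - np.tanh(u)**2) / sigma**2   # d/dx tanh(u)
            x = x - mu * sigma**2 * grad    # shrink entries small vs. sigma
            x = x - A_pinv @ (A @ x - b)    # project back onto {x : Ax = b}
    return x
```

The projection keeps every iterate exactly feasible, while the surrogate-descent step drives entries much smaller than the current sigma toward zero; tightening sigma gradually is what lets the scheme escape the dense minimum-l2 starting point.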
International Nuclear Information System (INIS)
Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.
1999-01-01
Service life prediction of nuclear power plants depends on the service history of structures, field inspection and testing, and the development of laboratory acceleration tests together with their analysis methods and predictive models. In this study, laboratory acceleration test methods for service life prediction of concrete structures and the application of experimental test results are introduced. This study is concerned with the environmental conditions of concrete structures and aims to develop acceleration test methods for durability factors of concrete structures, e.g. carbonation, sulfate attack, freeze-thaw cycles and shrinkage-expansion, etc
Omics methods for probing the mode of action of natural and synthetic phytotoxins.
Duke, Stephen O; Bajsa, Joanna; Pan, Zhiqiang
2013-02-01
For a little over a decade, omics methods (transcriptomics, proteomics, metabolomics, and physionomics) have been used to discover and probe the mode of action of both synthetic and natural phytotoxins. For mode of action discovery, the strategy for each of these approaches is to generate an omics profile for phytotoxins with known molecular targets and to compare this library of responses to the responses of compounds with unknown modes of action. Using more than one omics approach enhances the probability of success. Generally, compounds with the same mode of action generate similar responses with a particular omics method. Stress and detoxification responses to phytotoxins can be much clearer than effects directly related to the target site. Clues to new modes of action must be validated with in vitro enzyme effects or genetic approaches. Thus far, the only new phytotoxin target site discovered with omics approaches (metabolomics and physionomics) is that of cinmethylin and structurally related 5-benzyloxymethyl-1,2-isoxazolines. These omics approaches pointed to tyrosine amino-transferase as the target, which was verified by enzyme assays and genetic methods. In addition to being a useful tool of mode of action discovery, omics methods provide detailed information on genetic and biochemical impacts of phytotoxins. Such information can be useful in understanding the full impact of natural phytotoxins in both agricultural and natural ecosystems.
International Nuclear Information System (INIS)
Yang, Liang; Li, Saiyi
2015-01-01
The synthetic driving force (SDF) molecular dynamics method, which imposes crystalline orientation-dependent driving forces for grain boundary (GB) migration, has been considered deficient in many cases. In this work, we revealed the cause of the deficiency and proposed a modified method by introducing a new technique to distinguish atoms in grains and GB such that the driving forces can be imposed properly. This technique utilizes cross-reference order parameter (CROP) to characterize local lattice orientations in a bicrystal and introduces a CROP-based definition of interface region to minimize interference from thermal fluctuations in distinguishing atoms. A validation of the modified method was conducted by applying it to simulate the migration behavior of Ni 〈1 0 0〉 and Al 〈1 1 2〉 symmetrical tilt GBs, in comparison with the original method. The discrepancies between the migration velocities predicted by the two methods are found to be proportional to their differences in distinguishing atoms. For the Al 〈1 1 2〉 GBs, the modified method predicts a negative misorientation dependency for both the driving pressure threshold for initiating GB movement and the mobility, which agree with experimental findings and other molecular dynamics computations but contradict those predicted using the original method. Last, the modified method was applied to evaluate the mobility of Ni Σ5 〈1 0 0〉 symmetrical tilt GB under different driving pressure and temperature conditions. The results reveal a strong driving pressure dependency of the mobility at relatively low temperatures and suggest that the driving pressure should be as low as possible but large enough to trigger continuous migration.
Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I
2017-12-01
The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for LGK Perfexion. Finally, the PTW microDiamond M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm^2 field size with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The Lozanov Method for Accelerating the Learning of Foreign Languages.
Stanton, H. E.
1978-01-01
Discusses the Lozanov Method of teaching foreign languages developed by Lozanov in Bulgaria. This method (also known as Suggestopedia) uses various techniques such as physical relaxation exercises, mental concentration, classical music, and ego-enhancing suggestions. (CFM)
Application of normal form methods to the analysis of resonances in particle accelerators
International Nuclear Information System (INIS)
Davies, W.G.
1992-01-01
The transformation to normal form in a Lie-algebraic framework provides a very powerful method for identifying and analysing non-linear behaviour and resonances in particle accelerators. The basic ideas are presented and illustrated. (author). 4 refs
Limitations and Strengths of the Fourier Transform Method to Detect Accelerating Targets
National Research Council Canada - National Science Library
Thayaparan, Thayananthan
2000-01-01
.... In using a Pulse Doppler Radar to detect a non-accelerating target in additive white Gaussian noise and to estimate its radial velocity, the Fourier method provides an output signal-to-noise ratio (SNR...
International Nuclear Information System (INIS)
Gel'fand, E.K.; Komochkov, M.M.; Man'ko, B.V.; Salatskaya, M.I.; Sychev, B.S.
1980-01-01
Calculations of the regularities governing the formation of radiation dose beyond proton accelerator shielding are carried out. Numerical data on a photographic monitoring dosemeter in the radiation fields investigated are obtained. It is shown how to determine the total equivalent dose of radiation fields beyond proton accelerator shielding by means of the photographic monitoring method, by introducing into the nuclear-emulsion examination procedure a division of particle tracks into black and grey ones. A comparison of experimental and calculated data has shown the applicability of the calculation method used for modelling dose radiation characteristics beyond proton accelerator shielding [ru
A new method of measuring gravitational acceleration in an undergraduate laboratory program
Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan
2018-01-01
This paper presents a high accuracy method to measure gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at a constant speed. The water surface forms a paraboloid whose focal length is related to rotational period and gravitational acceleration. This experimental setup avoids classical source errors in determining the local value of gravitational acceleration, so prevalent in the common simple pendulum and inclined plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics and geometric optics, offering the opportunity for lateral as well as project-based learning.
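The relation between focal length, rotational period and g used above can be made explicit. In the rotating frame the water surface satisfies z = w^2 r^2 / (2g); matching this to the standard parabola z = r^2 / (4f) gives f = g / (2 w^2), hence g = 2 f w^2 = 8 pi^2 f / T^2. A minimal sketch of the resulting calculation (function name and argument units are illustrative, not from the paper):

```python
import math

def gravitational_acceleration(focal_length, period):
    """g from the focal length f (m) of the rotating water paraboloid and
    the rotation period T (s).

    Surface: z = w^2 r^2 / (2 g)  <=>  z = r^2 / (4 f) with f = g / (2 w^2),
    so g = 2 f w^2 = 8 pi^2 f / T^2.
    """
    omega = 2.0 * math.pi / period   # angular speed from the period
    return 2.0 * focal_length * omega ** 2
```

Measuring the focal length optically (e.g. from the reflection of a light source in the spinning surface) and the period with a timer then yields g without the drag and release-timing errors of pendulum or inclined-plane setups.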
Measured emittance dependence on injection method in laser plasma accelerators
Barber, Samuel; van Tilborg, Jeroen; Schroeder, Carl; Lehe, Remi; Tsai, Hai-En; Swanson, Kelly; Steinke, Sven; Nakamura, Kei; Geddes, Cameron; Benedetti, Carlo; Esarey, Eric; Leemans, Wim
2017-10-01
The success of many laser plasma accelerator (LPA) based applications relies on the ability to produce electron beams with excellent 6D brightness, where brightness is defined as the ratio of charge to the product of the three normalized emittances. As such, parametric studies of the emittance of LPA generated electron beams are essential. Profiting from a stable and tunable LPA setup, combined with a carefully designed single-shot transverse emittance diagnostic, we present a direct comparison of charge dependent emittance measurements of electron beams generated by two different injection mechanisms: ionization injection and shock induced density down-ramp injection. Notably, the measurements reveal that ionization injection results in significantly higher emittance. With the down-ramp injection configuration, emittances less than 1 micron at spectral charge densities up to 2 pC/MeV were measured. This work was supported by the U.S. DOE under Contract No. DE-AC02-05CH11231, by the NSF under Grant No. PHY-1415596, by the U.S. DOE NNSA, DNN R&D (NA22), and by the Gordon and Betty Moore Foundation under Grant ID GBMF4898.
Computer control of large accelerators design concepts and methods
International Nuclear Information System (INIS)
Beck, F.; Gormley, M.
1984-05-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references
Bismuth-ceramic nanocomposites through ball milling and liquid crystal synthetic methods
Dellinger, Timothy Michael
Three methods were developed for the synthesis of bismuth-ceramic nanocomposites, which are of interest due to possible use as thermoelectric materials. In the first synthetic method, high-energy ball milling of bismuth metal with either MgO or SiO2 was found to produce nanostructured bismuth dispersed on a ceramic material. The morphology of the resulting bismuth depended on its wetting behavior with respect to the ceramic: the metal wet the MgO, but did not wet the SiO2. Differential Scanning Calorimetry measurements on these composites revealed unusual thermal stability, with nanostructure retained after multiple cycles of heating and cooling through the metal's melting point. The second synthesis methodology was based on the use of lyotropic liquid crystals. These mixtures of water and amphiphilic molecules self-assemble to form periodic structures with nanometer-scale hydrophilic and hydrophobic domains. A novel shear-mixing methodology was developed for bringing together reactants which were added to the liquid crystals as dissolved salts. The liquid crystals served to mediate synthesis by acting as nanoreactors to confine chemical reactions within the nanoscale domains of the mesophase, and resulted in the production of nanoparticles. By synthesizing lead sulfide (PbS) and bismuth (Bi) particles as proof of concept, it was shown that nanoparticle size could be controlled by controlling the dimensionality of the nanoreactors through control of the liquid crystalline phase. Particle size was shown to decrease upon going from three-dimensionally percolating nanoreactors, to two-dimensional sheet-like nanoreactors, to one-dimensional rod-like nanoreactors. Additionally, particle size could be controlled by varying the precursor salt concentration. Since the nanoparticles did not agglomerate in the liquid crystal immediately after synthesis, bismuth-ceramic nanocomposites could be prepared by synthesizing Bi nanoparticles and mixing in SiO2 particles which
Accelerated in-vitro release testing methods for extended-release parenteral dosage forms.
Shen, Jie; Burgess, Diane J
2012-07-01
This review highlights current methods and strategies for accelerated in-vitro drug release testing of extended-release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in-situ depot-forming systems and implants. Extended-release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, 'real-time' in-vitro release tests for these dosage forms are often run over a long time period. Accelerated in-vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in-vitro release methods using United States Pharmacopeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended-release parenteral dosage forms, along with the accelerated in-vitro release testing methods currently employed are discussed. Accelerated in-vitro release testing methods with good discriminatory ability are critical for quality control of extended-release parenteral products. Methods that can be used in the development of in-vitro-in-vivo correlation (IVIVC) are desirable; however, for complex parenteral products this may not always be achievable. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.
MR-based synthetic CT generation using a deep convolutional neural network method.
Han, Xiao
2017-04-01
Interest has been growing rapidly in the field of radiotherapy in replacing CT with magnetic resonance imaging (MRI), due to the superior soft-tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies the clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images. The proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1-weighted MR images are used as experimental data and a sixfold cross-validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel-by-voxel basis. Comparison is also made with respect to an atlas-based approach that involves deformable atlas registration and patch-based atlas fusion. The proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU for all subjects, which was found to be significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas-based method. The DCNN
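The voxel-by-voxel evaluation described above reduces to a mean absolute error in Hounsfield units. A minimal sketch of that metric, using small invented arrays rather than patient volumes (the array sizes, noise level, and the optional body mask are assumptions for illustration only):

```python
import numpy as np

def mae_hu(synthetic_ct, real_ct, mask=None):
    """Mean absolute voxel error in HU, optionally restricted to a body mask."""
    diff = np.abs(synthetic_ct.astype(float) - real_ct.astype(float))
    if mask is not None:
        diff = diff[mask]
    return diff.mean()

rng = np.random.default_rng(0)
real = rng.integers(-1000, 1500, size=(8, 8, 8))           # toy CT volume, HU
synthetic = real + rng.normal(0.0, 80.0, size=real.shape)  # toy sCT estimate
score = mae_hu(synthetic, real)
print(round(score, 1))  # on the same scale as the ~85 HU reported above
```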
Energy Technology Data Exchange (ETDEWEB)
Shinde, Pravin V.; Shingate, Bapurao B.; Shingare, Murlidhar S. [Babasaheb Ambedkar Marathwada University, Aurangabad (India)
2011-04-15
In the present work, the successful implementation of ultrasound irradiation for the rapid synthesis of 1,5-benzodiazepine derivatives under solvent-free conditions is demonstrated. The use of a novel catalyst, camphorsulfonic acid, in combination with the ultrasound technique is reported for the first time. A comparative study of the synthesis of 1,5-benzodiazepines by the conventional and the ultrasonication methods is also presented.
Methods of Phase and Power Control in Magnetron Transmitters for Superconducting Accelerators
Energy Technology Data Exchange (ETDEWEB)
Kazakevich, G. [MUONS Inc., Batavia]; Johnson, R. [MUONS Inc., Batavia]; Neubauer, M. [MUONS Inc., Batavia]; Lebedev, V. [Fermilab]; Schappert, W. [Fermilab]; Yakovlev, V. [Fermilab]
2017-05-01
Various methods of phase and power control in magnetron RF sources of superconducting accelerators intended for ADS-class projects were recently developed and studied with conventional 2.45 GHz, 1 kW, CW magnetrons operating in pulsed and CW regimes. Magnetron transmitters excited by a resonant (injection-locking) phase-modulated signal can provide phase and power control with the rates required for precise stabilization of the phase and amplitude of the accelerating field in Superconducting RF (SRF) cavities of intensity-frontier accelerators. An innovative technique that can significantly increase the magnetron transmitter efficiency at the wide-range power control required for superconducting accelerators was developed and verified with the 2.45 GHz magnetrons operating in CW and pulsed regimes. High-efficiency magnetron transmitters of this type can significantly reduce the capital and operating costs of ADS-class accelerator projects.
Directory of Open Access Journals (Sweden)
Shi Jun
2015-02-01
Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by the sizes of platforms, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to the blurring of Three-Dimensional (3D) images in the linear array direction, and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on the sparse recovery technique. In this case, the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Simulation experiments show that the variational model is better suited to DEM enhancement over all kinds of terrain than the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.
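As a hedged illustration of the variational idea (minimizing a data-fidelity term plus a regularizer by gradient descent), here is a 1D toy analogue with an invented elevation profile; the penalty weight `lam`, step size, and noise level are arbitrary choices for the sketch, not the paper's model:

```python
import numpy as np

def variational_smooth(obs, lam=2.0, iters=500, step=0.1):
    """Minimize sum (h - obs)^2 + lam * sum (h[i+1] - h[i])^2 by gradient descent."""
    h = obs.copy()
    for _ in range(iters):
        grad = 2.0 * (h - obs)                    # data-fidelity term
        grad[:-1] += 2.0 * lam * (h[:-1] - h[1:])  # smoothness term, left node
        grad[1:] += 2.0 * lam * (h[1:] - h[:-1])   # smoothness term, right node
        h -= step * grad
    return h

rng = np.random.default_rng(3)
truth = np.linspace(0.0, 10.0, 50)                # smooth slope (true "DEM")
obs = truth + rng.normal(0.0, 1.0, truth.shape)   # noisy observation
est = variational_smooth(obs)
# mean square error before and after the variational reconstruction
print(round(np.mean((obs - truth) ** 2), 2), round(np.mean((est - truth) ** 2), 2))
```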
Solvothermal and electrochemical synthetic method of HKUST-1 and its methane storage capacity
Wahyu Lestari, Witri; Adreane, Marisa; Purnawan, Candra; Fansuri, Hamzah; Widiastuti, Nurul; Budi Rahardjo, Sentot
2016-02-01
A comparative synthesis of the Metal-Organic Framework HKUST-1 {[Cu3(BTC)2]} (BTC = 1,3,5-benzene-tricarboxylate; HKUST = Hong Kong University of Science and Technology) via solvothermal and electrochemical methods in ethanol:water (1:1) has been conducted. The obtained material was analyzed using powder X-ray diffraction, Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR), Thermo-Gravimetric Analysis (TGA) and Surface Area Analysis (SAA). The voltage in the electrochemical method was varied from 12 to 15 V. The results show that at 15 V the material has the best degree of crystallinity, comparable with the solvothermal product; this is indicated by the XRD data and supported by SEM images of the morphology. The thermal stability of the synthesized compounds is up to 320 °C. The shape of the nitrogen sorption isotherm corresponds to type I of the IUPAC adsorption isotherm classification for microporous materials, with BET surface areas of 629.2 and 324.3 m2/g (for the solvothermal and electrochemical products, respectively), promising for gas storage applications. Herein, the methane storage capacities of these compounds are also tested.
Method for validating radiobiological samples using a linear accelerator
International Nuclear Information System (INIS)
Brengues, Muriel; Liu, David; Korn, Ronald; Zenhausern, Frederic
2014-01-01
There is an immediate need for rapid triage of the population in case of a large-scale exposure to ionizing radiation. Knowing the dose absorbed by the body will allow clinicians to administer medical treatment for the best chance of recovery for the victim. In addition, today's radiotherapy treatment could benefit from additional information regarding the patient's sensitivity to radiation before starting the treatment. As of today, there is no system in place to respond to this demand. This paper describes specific procedures to mimic the effects of human exposure to ionizing radiation, creating the tools for optimization of administered radiation dosimetry for radiotherapy and/or for estimating the doses of radiation received accidentally during a radiation event that could pose a danger to the public. In order to obtain irradiated biological samples to study ionizing radiation absorbed by the body, we performed ex-vivo irradiation of human blood samples using a linear accelerator (LINAC). The LINAC was implemented and calibrated for irradiating human whole blood samples. To test the calibration, a 2 Gy test run was successfully performed on a tube filled with water, with an accuracy of 3% in dose distribution. To validate our technique, the blood samples were irradiated ex vivo and the results were analyzed using a gene expression assay to follow the effect of the ionizing irradiation by characterizing dose-responsive biomarkers from radiobiological assays. The response of 5 genes was monitored, with expression increasing with the dose of radiation received. The blood samples treated with the LINAC can provide effective irradiated blood samples suitable for molecular profiling to validate radiobiological measurements via gene-expression based biodosimetry tools. (orig.)
Planetary method to measure the neutron spectrum in linear accelerators for medical use
International Nuclear Information System (INIS)
Vega C, H. R.; Benites R, J. L.
2014-08-01
A novel procedure to measure the neutron spectrum originating in a linear accelerator for medical use has been developed. The method uses a passive Bonner sphere spectrometer; its main advantage is that it requires only a single shot of the accelerator. When such a spectrometer is used around a linear accelerator, it is normally necessary to operate the accelerator under the same conditions as many times as there are spheres in the spectrometer, an activity that consumes considerable time. The developed procedure consists of positioning all the spheres of the spectrometer at the same time and taking the readings with a single shot. With this method the photoneutron spectrum produced by a Varian iX linear accelerator at 15 MV was determined at 100 cm from the isocenter; from the spectrum, the total fluence and the ambient dose equivalent were determined. (Author)
ACCELERATED METHODS FOR ESTIMATING THE DURABILITY OF PLAIN BEARINGS
Directory of Open Access Journals (Sweden)
Myron Czerniec
2014-09-01
The paper presents methods for determining the durability of slide bearings. The developed methods speed up the calculation by as much as 100,000 times compared to the accurate solution obtained with the generalized cumulative model of wear. The paper determines the accuracy of the results for estimating the durability of bearings depending on the size of the blocks of constant conditions of contact interaction between a shaft with small out-of-roundness and a bush with a circular contour. The paper also gives an approximate dependence for determining accurate durability using either a more accurate or an additional method.
International Nuclear Information System (INIS)
Yan, Shiju; Qian, Wei; Guan, Yubao; Zheng, Bin
2016-01-01
Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques and develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation (K-fold cross-validation) method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061, when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifiers to yield improved prediction accuracy.
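The SMOTE step above can be sketched in a few lines: new minority-class samples are interpolated between existing minority cases and their nearest minority-class neighbours. This is a minimal sketch with invented feature vectors, not the study's QI/CB data, and the simple weighted-average `fuse` function is likewise an assumed illustration of score fusion, not the fusion rule used in the paper:

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Synthetic Minority Oversampling TEchnique: create synthetic samples by
    interpolating between a minority sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(seed)
    m = np.asarray(minority, dtype=float)
    d = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                 # exclude self
    nbrs = np.argsort(d, axis=1)[:, :k]                         # k nearest neighbours
    out = []
    for _ in range(n_new):
        i = rng.integers(len(m))
        j = nbrs[i, rng.integers(k)]
        out.append(m[i] + rng.random() * (m[j] - m[i]))  # interpolate toward neighbour
    return np.vstack(out)

def fuse(score_qi, score_cb, w=0.5):
    """Assumed score fusion: weighted average of the two classifier scores."""
    return w * score_qi + (1.0 - w) * score_cb

# toy minority class: 20 'recurrence' cases in a 3-feature space
minority = np.random.default_rng(1).random((20, 3))
synthetic = smote(minority, n_new=54)  # balance 74 DFS cases vs 20 recurrences
print(synthetic.shape)                 # (54, 3)
```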
An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism
DEFF Research Database (Denmark)
Zhang, Tian; Tremblay, Pier-Luc
2018-01-01
Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights on molecular mechanisms responsible for specific phenotypes. ALE...... autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge on the autotrophic metabolism of S. ovata as well as other acetogenic bacteria....
A GPU-accelerated implicit meshless method for compressible flows
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
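The rainbow-colouring idea above can be sketched with a simple greedy graph colouring: paint each point with the smallest colour not used by any of its neighbours, so that points of one colour share no data dependency and can be updated concurrently in the LU-SGS sweep. The grid connectivity below is a made-up 2 × 3 example, not the paper's meshless point clouds:

```python
# Greedy graph colouring sketch of the "rainbow colouring" used to break
# the LU-SGS data dependency for GPU parallelism.
def greedy_coloring(neighbors):
    """neighbors: dict mapping point id -> list of adjacent point ids."""
    color = {}
    for p in sorted(neighbors):
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:  # smallest colour not taken by a coloured neighbour
            c += 1
        color[p] = c
    return color

# 2x3 structured grid with 4-point stencil connectivity
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
        3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
colors = greedy_coloring(grid)
print(colors)  # neighbouring points always receive different colours
```

Points sharing a colour form one parallel batch; the sweep then proceeds colour by colour.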
A novel accelerated oxidative stability screening method for pharmaceutical solids.
Zhu, Donghua Alan; Zhang, Geoff G Z; George, Karen L S T; Zhou, Deliang
2011-08-01
Despite the fact that oxidation is the second most frequent degradation pathway for pharmaceuticals, means of evaluating the oxidative stability of pharmaceutical solids, especially effective stress testing, are still lacking. This paper describes a novel experimental method for peroxide-mediated oxidative stress testing on pharmaceutical solids. The method utilizes urea-hydrogen peroxide, a molecular complex that undergoes solid-state decomposition and releases hydrogen peroxide vapor at elevated temperatures (e.g., 30°C), as a source of peroxide. The experimental setting for this method is simple, convenient, and can be operated routinely in most laboratories. The fundamental parameter of the system, that is, hydrogen peroxide vapor pressure, was determined using a modified spectrophotometric method. The feasibility and utility of the proposed method in solid form selection have been demonstrated using various solid forms of ephedrine. No degradation was detected for ephedrine hydrochloride after exposure to the hydrogen peroxide vapor for 2 weeks, whereas both anhydrate and hemihydrate free base forms degraded rapidly under the test conditions. In addition, both the anhydrate and the hemihydrate free base degraded faster when exposed to hydrogen peroxide vapor at 30°C under dry condition than at 30°C/75% relative humidity (RH). A new degradation product was also observed under the drier condition. The proposed method provides more relevant screening conditions for solid dosage forms, and is useful in selecting optimal solid form(s), determining potential degradation products, and formulation screening during development. Copyright © 2011 Wiley-Liss, Inc.
Accelerated in vitro release testing method for naltrexone loaded PLGA microspheres.
Andhariya, Janki V; Choi, Stephanie; Wang, Yan; Zou, Yuan; Burgess, Diane J; Shen, Jie
2017-03-30
The objective of the present study was to develop a discriminatory and reproducible accelerated release testing method for naltrexone loaded parenteral polymeric microspheres. The commercially available naltrexone microsphere product (Vivitrol®) was used as the testing formulation in the in vitro release method development, and both sample-and-separate and USP apparatus 4 methods were investigated. Following an in vitro drug stability study, frequent media replacement and addition of anti-oxidant in the release medium were used to prevent degradation of naltrexone during release testing at "real-time" (37°C) and "accelerated" (45°C) conditions, respectively. The USP apparatus 4 method was more reproducible than the sample-and-separate method. In addition, the accelerated release profile obtained using USP apparatus 4 had a shortened release duration (within seven days), and good correlation with the "real-time" release profile. Lastly, the discriminatory ability of the developed accelerated release method was assessed using compositionally equivalent naltrexone microspheres with different release characteristics. The developed accelerated USP apparatus 4 release method was able to detect differences in the release characteristics of the prepared naltrexone microspheres. Moreover, a linear correlation was observed between the "real-time" and accelerated release profiles of all the formulations investigated, suggesting that the release mechanism(s) may be similar under both conditions. These results indicate that the developed accelerated USP apparatus 4 method has the potential to be an appropriate fast quality control tool for long-acting naltrexone PLGA microspheres. Copyright © 2017 Elsevier B.V. All rights reserved.
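The linear correlation between accelerated and "real-time" profiles can be checked with an ordinary least-squares fit. The cumulative-release numbers below are invented for illustration; they are not the Vivitrol or prepared-microsphere data:

```python
import numpy as np

# per cent drug released at matched fractions of each test's total duration
real_time = np.array([5.0, 20.0, 45.0, 70.0, 88.0, 95.0])    # ~28 d at 37 °C
accelerated = np.array([6.0, 22.0, 44.0, 69.0, 90.0, 96.0])  # ~7 d at 45 °C

# least-squares line accelerated = slope * real_time + intercept
slope, intercept = np.polyfit(real_time, accelerated, 1)
resid = accelerated - (slope * real_time + intercept)
r_squared = 1.0 - resid.var() / accelerated.var()
print(round(slope, 2), round(r_squared, 3))  # slope near 1, R^2 near 1
```

A slope close to one with a high R² is what supports the "similar release mechanism" reading in the abstract.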
A novel method to accelerate orthodontic tooth movement
Buyuk, S. Kutalmış; Yavuz, Mustafa C.; Genc, Esra; Sunar, Oguzhan
2018-01-01
This clinical case report presents fixed orthodontic treatment of a patient with moderately crowded teeth. It was performed with a new technique called ‘discision’. The discision method, described here for the first time by the present authors, yielded predictable outcomes, and orthodontic treatment was completed in a short period of time. The total duration of orthodontic treatment was 4 months. Class I molar and canine relationships were established at the end of the treatment. Moreover, crowding in the mandible and maxilla was corrected, and optimal overjet and overbite were established. No scar tissue was observed in any gingival region on which discision was performed. The discision technique was developed as a minimally invasive alternative to the piezocision technique, and the authors suggest that this new method yields good outcomes in achieving rapid tooth movement. PMID:29436571
Advanced FDTD methods parallelization, acceleration, and engineering applications
Yu, Wenhua
2011-01-01
The finite-difference time-domain (FDTD) method has revolutionized antenna design and electromagnetics engineering. Here's a cutting-edge book that focuses on the performance optimization and engineering applications of FDTD simulation systems. Covering the latest developments in this area, this unique resource offers you expert advice on the FDTD method, hardware platforms, and network systems. Moreover, the book offers guidance in distinguishing between the many different electromagnetics software packages on the market today. You also find a complete chapter dedicated to large multi-scale pro
An accelerated test method of luminous flux depreciation for LED luminaires and lamps
International Nuclear Information System (INIS)
Qian, C.; Fan, X.J.; Fan, J.J.; Yuan, C.A.; Zhang, G.Q.
2016-01-01
Light Emitting Diode (LED) luminaires and lamps are energy-saving and environmentally friendly alternatives to traditional lighting products. However, the current luminous flux depreciation test at luminaire and lamp level requires a minimum of 6000 h of testing, which is even longer than the product development cycle time. This paper develops an accelerated test method for luminous flux depreciation that reduces the test time to within 2000 h at an elevated temperature. The method is based on a lumen maintenance boundary curve, obtained from a collection of LED source lumen depreciation data, known as LM-80 data. The exponential decay model and the Arrhenius acceleration relationship are used to determine the new threshold of lumen maintenance and the acceleration factor. The proposed method has been verified by a number of simulation studies and by experimental data for a wide range of LED luminaire and lamp types from both internal and external experiments. The qualification results obtained by the accelerated test method agree well with traditional 6000 h tests. - Highlights: • We develop an accelerated test method for LED luminaires and lamps. • The method is proposed based on a “Boundary Curve” concept. • The parameters of the boundary curve are extracted from LM-80 test reports. • Qualification results from the proposed method agree with ES requirements.
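The exponential decay model and the Arrhenius relationship mentioned above combine into an acceleration factor between the use and test temperatures. The activation energy and temperatures below are illustrative assumptions, not the paper's fitted values:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_acc_c):
    """Arrhenius ratio of lumen-decay rates at the accelerated vs use temperature."""
    t_use = t_use_c + 273.15
    t_acc = t_acc_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_acc))

def lumen_maintenance(t_hours, alpha, beta=1.0):
    """Exponential decay model for lumen maintenance: Phi(t) = beta * exp(-alpha * t)."""
    return beta * math.exp(-alpha * t_hours)

# assumed values: Ea = 0.4 eV, use at 55 °C, accelerated test at 105 °C
af = acceleration_factor(ea_ev=0.4, t_use_c=55.0, t_acc_c=105.0)
# 2000 h at the elevated temperature then covers af * 2000 equivalent hours
print(round(af, 2), round(af * 2000))
```

With these assumed numbers the factor exceeds three, so a 2000 h accelerated run covers more than the 6000 equivalent hours of the traditional test.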
Gao, He-Gang; Gong, Wen-Jie; Zhao, Yong-Gang
2015-01-01
Synthetic pigments are still used instead of natural pigments in many foods, and their residues in food can be an important risk to human health. A simple and rapid analytical method combining a low-cost extraction protocol with ultra-fast liquid chromatography-tandem quadrupole mass spectrometry (UFLC-MS/MS) was developed for the simultaneous determination of seven synthetic pigments used in colored Chinese steamed buns. For the first time, ethanol/ammonia solution/water (7:2:1, v/v/v) was used as the extraction solution for the synthetic pigments in colored Chinese steamed buns. The results showed that the extraction solution used in this method was more effective than the citric acid solution used in the polyamide adsorption method. The limits of quantification for the seven synthetic pigments ranged from 0.15 to 0.50 μg/kg. The present method was successfully applied to samples of colored Chinese steamed buns for food-safety risk monitoring in Zhejiang Province, China; sunset yellow was found in six of 300 colored Chinese steamed buns (at levels from 0.50 to 32.6 μg/kg).
International Nuclear Information System (INIS)
Zheng Youqi; Wu Hongchun; Cao Liangzhi
2013-01-01
This paper describes the stability analysis of the coarse mesh finite difference (CMFD) acceleration used in the wavelet expansion method. The nonlinear CMFD acceleration scheme is linearized and the Fourier ansatz is introduced into the linearized formulae. The spectral radius is defined as the stability criterion, which is the least upper bound (LUB) of the largest eigenvalue of the Fourier analysis matrix. The stability analysis considers the effects of mesh size (spectral length), coarse mesh division and scattering ratio. The results show that for the wavelet expansion method, the CMFD acceleration is conditionally stable. A small fine-mesh size brings stability and fast convergence; as the mesh size increases, the stability becomes worse. The scattering ratio does not noticeably impact the stability, which makes the CMFD acceleration highly efficient in the strong scattering case. The results of the Fourier analysis are verified by numerical tests based on a homogeneous slab problem.
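The stability criterion above (spectral radius below one) can be illustrated numerically. The 2 × 2 matrix family below is an invented stand-in for the linearized error-propagation operator, not the wavelet-CMFD Fourier matrix from the paper; it only shows how one scans Fourier modes for the least upper bound of the spectral radius:

```python
import numpy as np

def spectral_radius(m):
    """Largest eigenvalue magnitude of an iteration matrix."""
    return max(abs(np.linalg.eigvals(m)))

def worst_case_rho(c, n_modes=64):
    """LUB of the spectral radius over sampled Fourier frequencies omega,
    for an invented iteration matrix depending on the scattering ratio c."""
    rhos = []
    for omega in np.linspace(0.01, np.pi, n_modes):
        m = np.array([[c * np.cos(omega), 0.1 * c],
                      [0.1 * c, c * np.sin(omega) ** 2]])
        rhos.append(spectral_radius(m))
    return max(rhos)

radii = [worst_case_rho(c) for c in (0.3, 0.6, 0.9)]
print([round(r, 3) for r in radii])  # rho < 1 for every c: the scheme converges
```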
Novel methods for evaluation of the Reynolds number of synthetic jets
Czech Academy of Sciences Publication Activity Database
Kordík, Jozef; Broučková, Zuzana; Vít, T.; Pavelka, Miroslav; Trávníček, Zdeněk
2014-01-01
Vol. 55, No. 6 (2014), 1757_1-1757_16 ISSN 0723-4864 R&D Projects: GA ČR GPP101/12/P556 Institutional support: RVO:61388998 Keywords: synthetic jet * synthetic jet actuator * Reynolds number Subject RIV: BK - Fluid Dynamics Impact factor: 1.670, year: 2014 http://link.springer.com/article/10.1007%2Fs00348-014-1757-x
Fast multipole acceleration of the MEG/EEG boundary element method
International Nuclear Information System (INIS)
Kybic, Jan; Clerc, Maureen; Faugeras, Olivier; Keriven, Renaud; Papadopoulo, Theo
2005-01-01
The accurate solution of the forward electrostatic problem is an essential first step before solving the inverse problem of magneto- and electroencephalography (MEG/EEG). The symmetric Galerkin boundary element method is accurate but cannot be used for very large problems because of its computational complexity and memory requirements. We describe a fast multipole-based acceleration for the symmetric boundary element method (BEM). It creates a hierarchical structure of the elements and approximates far interactions using spherical harmonics expansions. The accelerated method is shown to be as accurate as the direct method, yet for large problems it is both faster and more economical in terms of memory consumption.
Monte Carlo burnup codes acceleration using the correlated sampling method
International Nuclear Information System (INIS)
Dieudonne, C.
2013-01-01
For several years, Monte Carlo burnup/depletion codes have coupled Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time caused by the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid these repetitive and computationally expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. We first present this method and give details on the perturbative technique used, namely correlated sampling. We then develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Next, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to obtain an important speed-up of the depletion calculation. We validate and optimize the perturbed depletion scheme with the calculation of a REP-like fuel cell depletion, and then use the technique to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
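The correlated-sampling idea, reusing one set of Monte Carlo histories to estimate a perturbed tally by reweighting instead of resampling, can be sketched in a deliberately simplified scalar setting (a toy, not TRIPOLI-4):

```python
import numpy as np

# Correlated sampling sketch: draw histories once from an unperturbed
# density p, then estimate a tally under a perturbed density p' by
# reweighting each history with the likelihood ratio p'(x)/p(x).
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)          # histories from unperturbed p = N(0, 1)

mu = 0.1                              # perturbation: p' = N(mu, 1)
w = np.exp(mu * x - 0.5 * mu**2)      # likelihood ratio p'(x)/p(x)

tally_perturbed = np.mean(w * x)      # estimates E_{p'}[x] = mu, no resampling
```

Because the same histories serve every perturbed step, the perturbed and unperturbed estimates are strongly correlated, which is what makes the burnup-step speed-up possible.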
Accelerated H-LBP-based edge extraction method for digital radiography
Energy Technology Data Exchange (ETDEWEB)
Qiao, Shuang; Zhao, Chen-yi; Huang, Ji-peng [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jia-ning, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China)
2015-01-11
With the goal of achieving real-time, efficient edge extraction for digital radiography, an accelerated H-LBP-based edge extraction method (AH-LBP) is presented in this paper by improving the existing framework of local binary pattern with the H function (H-LBP). Since the proposed method avoids computationally expensive operations with no loss of quality, it has much lower computational complexity than H-LBP. Experimental results on real radiographs show the desirable performance of our method. - Highlights: • An accelerated H-LBP method for edge extraction on digital radiography is proposed. • The novel AH-LBP relies on numerical analysis of the existing H-LBP method. • Aiming at acceleration, H-LBP is reformulated as direct binary processing. • AH-LBP provides the same edge extraction result as H-LBP does. • AH-LBP has low computational complexity satisfying real-time requirements.
On accelerated flow of MHD Powell-Eyring fluid via the homotopy analysis method
Salah, Faisal; Viswanathan, K. K.; Aziz, Zainal Abdul
2017-09-01
The aim of this article is to obtain an approximate analytical solution for incompressible magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid induced by an accelerated plate. Both constant and variable acceleration cases are investigated. The approximate analytical solution in each case is obtained using the Homotopy Analysis Method (HAM), and the resulting nonlinear analysis generates the series solution. Finally, graphical outcomes for different values of the material constant parameters on the velocity field are discussed and analyzed.
CMFD and GPU acceleration on method of characteristics for hexagonal cores
International Nuclear Information System (INIS)
Han, Yu; Jiang, Xiaofeng; Wang, Dezhong
2014-01-01
Highlights: • A merged hex-mesh CMFD method solved via tri-diagonal matrix inversion. • Alternative hardware acceleration using an inexpensive GPU. • A hex-core benchmark with solution to confirm the two acceleration methods. - Abstract: Coarse Mesh Finite Difference (CMFD) has been widely adopted as an effective way to accelerate the source iteration of transport calculations. However, in a core with hexagonal assemblies there are non-hexagonal meshes around the edges of assemblies, which causes a problem for CMFD if the CMFD equations are still to be solved via tri-diagonal matrix inversion by simply scanning the whole-core meshes in different directions. To solve this problem, we propose an unequal-mesh CMFD formulation that combines the non-hexagonal cells on the boundary of neighboring assemblies into non-regular hexagonal cells. We also investigated the alternative hardware acceleration of using a graphics processing unit (GPU) with a graphics card in a personal computer. The tool CUDA is employed, a parallel computing platform and programming model invented by NVIDIA for harnessing the power of the GPU. To investigate and implement these two acceleration methods, a 2-D hexagonal core transport code using the method of characteristics (MOC) is developed. A hexagonal mini-core benchmark problem is established to confirm the accuracy of the MOC code and to assess the effectiveness of CMFD and GPU parallel acceleration. For this benchmark problem, the CMFD acceleration increases the speed 16 times while the GPU acceleration speeds it up 25 times. When used simultaneously, they provide a speed gain of 292 times.
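The tri-diagonal solves at the heart of the CMFD formulation above can be illustrated with the standard Thomas algorithm (a generic O(n) sketch, not the paper's hex-mesh code):

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tri-diagonal system in O(n) by forward elimination and
    back substitution (the Thomas algorithm). `lower` and `upper` are the
    sub- and super-diagonals (length n-1), `diag` the main diagonal."""
    n = len(diag)
    d = np.array(diag, dtype=float)
    b = np.array(rhs, dtype=float)
    for i in range(1, n):                 # forward elimination
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        b[i] -= m * b[i - 1]
    x = np.empty(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x
```

Scanning a structured mesh line by line reduces each sweep of the CMFD system to a chain of such solves, which is why irregular boundary cells (as in a hexagonal core) complicate the approach.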
A Method for Accelerating the Maturation of Toxocara cati Eggs
Directory of Open Access Journals (Sweden)
B Sarkari
2007-04-01
Background: The effect of temperature and humidity on the maturation of Toxocara cati eggs in an in vitro system was investigated. Methods: Suspensions of Toxocara cati eggs in 5% formalin/saline or 2.5% formalin/Ringer's solution were prepared and maintained at 37 °C under 40% humidity or at 25 °C under 98% humidity for 3 weeks for egg development. Results: The suspension prepared with 2.5% formalin/Ringer's solution and maintained at 25 °C and 98% humidity allowed the eggs of Toxocara cati to become fully embryonated within 3 weeks. Conclusion: The main advantages of this method are the increased recovery and the reduced egg maturation time.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
Energy Technology Data Exchange (ETDEWEB)
Kunz, Josiah [Anderson U.; Snopok, Pavel [Fermilab; Berz, Martin [Michigan State U.; Makino, Kyoko [Michigan State U.
2018-03-28
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
Externbrink, Anna; Eggenreich, Karin; Eder, Simone; Mohr, Stefan; Nickisch, Klaus; Klein, Sandra
2017-01-01
Accelerated drug release testing is a valuable quality control tool for long-acting non-oral extended release formulations. Currently, several intravaginal ring candidates designed for the long-term delivery of steroids or anti-infective drugs are in the development pipeline. The present article addresses the demand for accelerated drug release methods for these formulations. We describe the development and evaluation of accelerated release methods for a steroid-releasing matrix-type intravaginal ring. The drug release properties of the formulation were evaluated under real-time and accelerated test conditions. Under real-time test conditions, drug release from the intravaginal ring was strongly affected by the steroid's solubility in the release medium. Under sufficient sink conditions, provided in release media containing surfactants, drug release was Fickian diffusion driven. Both elevated temperature and hydro-organic dissolution media were successfully employed to accelerate drug release from the formulation. Drug release could be further increased by combining the temperature effect with the application of a hydro-organic release medium. The formulation continued to exhibit diffusion-controlled release kinetics under the investigated accelerated conditions. Moreover, the accelerated methods were able to differentiate between prototypes of the intravaginal ring that exhibited different release profiles under real-time test conditions. Overall, the results of the present study indicate that both temperature and hydro-organic release media are valid parameters for accelerating drug release from the intravaginal ring. Variation of either a single parameter or both yielded release profiles that correlated well with real-time release. Copyright © 2016 Elsevier B.V. All rights reserved.
Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue
2017-06-24
With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
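The core trade-off behind the word-length analysis, fractional bits versus quantization error, can be sketched independently of the paper's ASIC (the Q-format below is a generic assumption, not the authors' design):

```python
import numpy as np

def quantize(x, frac_bits):
    """Round-to-nearest fixed-point quantization with `frac_bits`
    fractional bits (e.g. frac_bits=15 approximates a Q15 format).
    The worst-case error is half an LSB, i.e. 2**-(frac_bits + 1)."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

signal = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1024))
q15 = quantize(signal, 15)
max_err = np.max(np.abs(q15 - signal))   # bounded by 2**-16
```

An error-propagation model like the paper's tracks how such per-sample bounds grow through the FFT butterflies, which is what fixes the minimum word length.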
International Nuclear Information System (INIS)
Nowak, P.F.
1993-01-01
A grey diffusion acceleration (GDA) method is presented and is shown, by Fourier analysis and test calculations, to be effective in accelerating radiative transfer calculations. The spectral radius is bounded by 0.9 for the continuous equations, but is significantly smaller for the discretized equations, especially in the optically thick regimes characteristic of radiation transport problems. The GDA method is more efficient than the multigroup DSA method because its slightly higher iteration count is more than offset by the much lower cost per iteration. A wide range of test calculations confirms the efficiency of GDA compared to multifrequency DSA. (orig.)
International Nuclear Information System (INIS)
Saito, H.; Nakane, S.; Ikari, S.; Fujiwara, A.
1992-01-01
Development of a deterioration model for cementitious materials is important in assessing the long-term integrity of nuclear waste repositories. The authors made a preliminary examination of a new test method for accelerating the aging of mortar specimens by applying electrical potential gradients, and observed whether the method could throw light on the deterioration process of cementitious materials under repository conditions. It was concluded that the application of a potential gradient to a mortar specimen may be useful as an accelerated test method for assessing the deterioration behavior of cementitious materials due to leaching. (orig.)
Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers
Danby, Gordon T.; Jackson, John W.
1991-01-01
A method is provided for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing. Correction windings are attached at selected positions on the housing and are energized by transformer action from secondary coils that are inductively coupled to the poles of the electromagnets powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies with variations in the power supplied by the particle-accelerating rf field to the beam, so the current in the energized correction coils cancels the eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
International Nuclear Information System (INIS)
Han, X
2016-01-01
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
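Patch-based non-local weighted fusion, stripped to its core: each patient patch receives a pseudo-CT value as a similarity-weighted average of atlas CT values. The sketch below is a toy version of that step only (the Gaussian bandwidth h and array shapes are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def fuse_patch(patient_patch, atlas_mr_patches, atlas_ct_values, h=0.5):
    """Non-local weighted patch fusion sketch: weight each atlas patch by a
    Gaussian of its MR intensity distance to the patient patch, then average
    the corresponding atlas CT values (in HU) with those weights."""
    d2 = np.sum((atlas_mr_patches - patient_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h ** 2))
    w /= np.sum(w)
    return float(np.dot(w, atlas_ct_values))

rng = np.random.default_rng(2)
mr_atlas = rng.normal(size=(20, 9))        # 20 atlas patches, 3x3 flattened
ct_atlas = rng.uniform(-1000, 2000, 20)    # a CT (HU) value per atlas patch
sct = fuse_patch(mr_atlas[3], mr_atlas, ct_atlas)
```

The paper's speed-up comes from doing this fusion once in the average-atlas space (with a PatchMatch-style search), rather than registering every atlas to the patient.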
Formulation of nonlinear chromaticity in circular accelerators by canonical perturbation method
International Nuclear Information System (INIS)
Takao, Masaru
2005-01-01
The formulation of nonlinear chromaticity in circular accelerators based on the canonical perturbation method is presented. Since the canonical perturbation method directly relates the tune shift to the perturbation Hamiltonian, it greatly simplifies the calculation of the nonlinear chromaticity. The obtained integral representation of the nonlinear chromaticity can be systematically extended to higher orders.
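In outline (a standard first-order canonical-perturbation result, sketched here as a reminder rather than a reproduction of the paper's derivation): averaging the perturbation Hamiltonian over the angle variable and differentiating with respect to the action gives the lowest-order tune shift,

```latex
\Delta\nu \;=\; \frac{\partial \langle H_1 \rangle_\theta}{\partial J},
\qquad
\langle H_1 \rangle_\theta \;=\; \frac{1}{2\pi}\int_0^{2\pi} H_1(J,\theta)\,\mathrm{d}\theta ,
```

and expanding this tune shift in powers of the momentum deviation yields the chromaticity order by order.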
Directory of Open Access Journals (Sweden)
Ian Hamerton
A number of historical texts are investigated to ascertain the optimum conditions for the preparation of synthetic ultramarine, using preparative methods that would have been available to alchemists and colour chemists of the nineteenth century. The effect of varying the proportion of sulphur in the starting material on the colour of the final product is investigated. The optimum preparation involves heating a homogenised, pelletised mixture of kaolin (100 parts), sodium carbonate (100 parts), bitumen emulsion (or any 'sticky' carbon source) (12 parts) and sulphur (60 parts) at 750°C for ca. 4 hours. At this stage the ingress of air should be limited. The sample is allowed to cool in the furnace to 500°C, the ingress of air is permitted, and additional sulphur (30 parts) is introduced before a second calcination step is undertaken at 500°C for two hours. The products obtained from the optimum synthesis have CIE ranges of x = 0.2945-0.3125, y = 0.2219-0.2617, Y = 0.4257-0.4836, L* = 3.8455-4.3682, a* = 4.2763-7.6943, b* = -7.6772-(-3.3033), L = 3.8455-4.3682, C = 5.3964-10.8693, h = 315.0636-322.2562. The values are calculated from UV/visible/near-infrared spectra using Lazurite [1], under D65 illumination and the 1931 2° observer.
A simple method for the determination of synthetic spirit in some alcoholic beverages
International Nuclear Information System (INIS)
Majerova, P.; Fiser, B.; Leseticky, L.
2002-01-01
Measurement of carbon-14 can be used to distinguish between natural and synthetic alcohol. Natural ethanol produced by fermentation of sugar contains approximately 16.13 DPM (0.27 Bq) per gram of carbon, whereas synthetic ethanol should contain no carbon-14. The natural C-14 content can be determined precisely and conveniently by liquid scintillation counting. Various scintillation cocktails were tested and the best results were achieved with PCS. The optimum measurement conditions were also identified. Samples of spirits were fractionated on a short distillation column and the resulting 96% ethanol was measured. A 35% aqueous solution of natural ethanol was also distilled and measured for comparison. The natural-to-synthetic ethanol ratio was obtained for a series of commercial spirits. (P.A.)
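The natural-to-synthetic ratio follows from a simple activity balance: if fully natural ethanol shows about 16.13 DPM per gram of carbon and synthetic ethanol shows none, the measured specific activity scales linearly with the natural fraction. A minimal sketch (counting-efficiency and background corrections deliberately omitted):

```python
# Specific C-14 activity of fully natural (fermentation) ethanol,
# in disintegrations per minute per gram of carbon (from the abstract).
NATURAL_DPM_PER_G_CARBON = 16.13

def natural_fraction(measured_dpm_per_g_carbon):
    """Fraction of natural ethanol in a natural/synthetic blend, assuming
    synthetic ethanol carries no C-14 and activities mix linearly with the
    carbon contributed by each component."""
    return measured_dpm_per_g_carbon / NATURAL_DPM_PER_G_CARBON

blend = natural_fraction(8.065)   # a 50/50 blend would read ~8.065 DPM/g C
```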
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role during the accelerator design phase as well as during operation. After describing the nature of non-linear effects and their impact on the performance parameters of different categories of particle accelerators, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
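At the heart of frequency map analysis is an accurate tune (frequency) determination from turn-by-turn tracking data; comparing the tune between data segments then reveals tune diffusion, a signature of chaos. A bare-bones FFT version of the tune step (real NAFF-style implementations refine the peak far beyond this) might look like:

```python
import numpy as np

def estimate_tune(x):
    """Estimate the dominant fractional frequency (betatron tune) of a
    turn-by-turn signal from the FFT peak. Frequency map analysis would
    refine this estimate and track its drift between segments."""
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    peak = int(np.argmax(spectrum))
    return peak / len(x)   # bin k of an N-point FFT sits at frequency k/N

turns = np.arange(2048)
x = np.cos(2 * np.pi * 0.31 * turns)   # regular (non-chaotic) motion, tune 0.31
nu = estimate_tune(x)
```

For regular motion the tune estimated from the first and second halves of the data agrees to high precision; a measurable difference flags chaotic trajectories on the frequency map.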
Hackbusch, Sven
This dissertation encompasses work related to synthetic methods for the formation of ester linkages in organic compounds, as well as the investigation of the conformational influence of the ester functional group on the flexibility of inter-saccharide linkages, specifically, and the solution-phase structure of ester-containing carbohydrate derivatives, in general. Stereoselective reactions are an important part of the field of asymmetric synthesis and an understanding of their underlying mechanistic principles is essential for rational method development. Here, the exploration of a diastereoselective O-acylation reaction on a trans-2-substituted cyclohexanol scaffold is presented, along with possible reasons for the observed reversal of stereoselectivity dependent on the presence or absence of an achiral amine catalyst. In particular, this work establishes a structure-activity relationship with regard to the trans-2-substituent and its role as a chiral auxiliary in the reversal of diastereoselectivity. In the second part, the synthesis of various ester-linked carbohydrate derivatives and their conformational analysis are presented. Using multidimensional NMR experiments and computational methods, the compounds' solution-phase structures were established and the effect of the ester functional group on the molecules' flexibility and three-dimensional (3D) structure was investigated and compared to ether or glycosidic linkages. To aid in this, a novel Karplus equation for the C(sp2)OCH angle in ester-linked carbohydrates was developed on the basis of a model ester-linked carbohydrate. This equation describes the sinusoidal relationship between the C(sp2)OCH dihedral angle and the corresponding 3JCH coupling constant that can be determined from a J-HMBC NMR experiment. The insights from this research will be useful in describing the 3D structure of naturally occurring and lab-made ester-linked derivatives of carbohydrates, as well as guiding the de novo design of
Nicolleau, FCGA; Redondo, J-M
2012-01-01
This book contains a collection of the main contributions from the first five workshops held by the ERCOFTAC Special Interest Group on Synthetic Turbulence Models (SIG42). It is intended as an illustration of the SIG's activities and of the latest developments in the field. This volume investigates the use of Kinematic Simulation (KS) and other synthetic turbulence models for the particular application to environmental flows. It offers the best syntheses of the research status in KS, which is widely used in various domains, including Lagrangian aspects in turbulence mixing/stirring, partic
Kwaramba, Farai Brian
This Ph.D. deals with the integration of nanotechnology with organometallic/organic synthetic technologies. The first part of this research sought to develop a library of novel molecular gears programmed to exploit photo-switching and electrostatic repulsion to control the molecular rotation of covalently linked triptypyrazines. Incorporating these two modes allows for control of triptycene-based gear systems by unexplored external methods. The triptypyrazine was an attractive scaffold because of its intrinsic pH and electrochemical activity, thus providing a novel construct for controlling molecular motion. This design finds relevance in the fabrication of nano-electromechanical devices and in understanding controlled molecular motion. This Ph.D. also sought to address the need to generate and recycle low-cost hydrosilylation catalysts. Metal nanoparticle catalysts can potentially meet this need due to their high surface area and reactivity. Their morphology and surface texture provide avenues for selectivity in reactions. Metal nanoparticles on a silicon matrix can be formed by reducing metal salts with silicon hydrides. Investigations towards iron-nanoparticle-catalyzed hydrosilylation of unsaturated bonds were conducted. Furthermore, this research sought to develop highly functionalized silanes as guiding scaffolds for generating chiral silicon hydrides. Fabrication of metal-nanoparticle catalysts with the same could install surface definition on these heterogeneous green catalysts, thus allowing selectivity in their catalysis. A bottom-up approach to nanofabrication started with the generation of a library of highly functionalized alkynyl-silane building blocks using the hydrosilylation reaction. Hydrosilylation of carbon-carbon and carbon-heteroatom unsaturated bonds has proven to be an important reaction in organic syntheses. Additionally, silicon tethers have been utilized in complex organic syntheses as a way to increase reaction rates, and
Tire crumb rubber from recycled tires is widely used as infill material in synthetic turf fields in the United States. Recycled crumb rubber is a complex and potentially variable matrix with many metal, VOC, and SVOC constituents, presenting challenges for characterization and ex...
An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods
International Nuclear Information System (INIS)
Zhang Hongbo; Wu Hongchun; Cao Liangzhi
2011-01-01
Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strongly scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate AutoMOC, a 2D generalized-geometry characteristics solver. In this technique, a linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction on the geometry treatment, it is suitable for accelerating an arbitrary-geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome this problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES, and adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are treated adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains conveniently on demand. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering
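The GMRES building block used above can be sketched with a plain Arnoldi-based implementation (a generic full-memory version with x0 = 0 and no restarts; not AutoMOC's solver):

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """Minimal full GMRES: build an orthonormal Arnoldi basis of the Krylov
    subspace span{b, Ab, A^2 b, ...} and, at each step, pick the member of
    that subspace minimizing the residual norm (via a small least squares)."""
    n = len(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):               # modified Gram-Schmidt step
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y                 # current minimal-residual iterate
        if np.linalg.norm(A @ x - b) < tol * beta:
            return x
    return x
```

In the accelerated scheme, A is the (matrix-free) operator for angular flux moments and boundary fluxes, so only products A @ v are ever needed, which is what makes the Krylov approach attractive for MOC.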
A reproducible accelerated in vitro release testing method for PLGA microspheres.
Shen, Jie; Lee, Kyulim; Choi, Stephanie; Qu, Wen; Wang, Yan; Burgess, Diane J
2016-02-10
The objective of the present study was to develop a discriminatory and reproducible accelerated in vitro release method for long-acting PLGA microspheres with inner structure/porosity differences. Risperidone was chosen as a model drug. Qualitatively and quantitatively equivalent PLGA microspheres with different inner structure/porosity were obtained using different manufacturing processes. Physicochemical properties as well as degradation profiles of the prepared microspheres were investigated. Furthermore, in vitro release testing of the prepared risperidone microspheres was performed using the most common in vitro release methods (i.e., sample-and-separate and flow through) for this type of product. The obtained compositionally equivalent risperidone microspheres had similar drug loading but different inner structure/porosity. When microsphere particle size appeared similar, porous risperidone microspheres showed faster microsphere degradation and drug release compared with less porous microspheres. Both in vitro release methods investigated were able to differentiate risperidone microsphere formulations with differences in porosity under real-time (37 °C) and accelerated (45 °C) testing conditions. Notably, only the accelerated USP apparatus 4 method showed good reproducibility for highly porous risperidone microspheres. These results indicated that the accelerated USP apparatus 4 method is an appropriate fast quality control tool for long-acting PLGA microspheres (even with porous structures). Copyright © 2015 Elsevier B.V. All rights reserved.
Epsilon topological accelerating algorithms for difference method for initial-value problems
International Nuclear Information System (INIS)
Hristea, V.; Posirca, M.
1992-01-01
Linear and nonlinear parabolic equations can be solved by discretization methods which lead to linear and nonlinear algebraic systems. The iterative methods (e.g., Gauss–Seidel) show very slow convergence, and instability in the case of nonlinear equations. This paper proposes an ε topological algorithm for accelerating the slow iterative methods used in the thermohydraulic code COBRA and the dynamic code ADEP. The results show an execution time approximately ten times lower than that of the original algorithms. (Author)
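The ε family of accelerators referred to above is exemplified by Wynn's scalar epsilon algorithm. The following sketch (a generic illustration, not the COBRA/ADEP implementation) applies it to the slowly converging partial sums of the alternating series for ln 2:

```python
import math

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm; returns the last even-column estimate."""
    prev = [0.0] * (len(seq) + 1)        # the epsilon_{-1} column (all zeros)
    curr = list(seq)                     # the epsilon_0 column (raw sequence)
    best = curr[-1]
    col = 0
    while len(curr) > 1:
        nxt = [prev[i + 1] + 1.0 / (curr[i + 1] - curr[i])
               for i in range(len(curr) - 1)]
        prev, curr = curr, nxt
        col += 1
        if col % 2 == 0:                 # even columns approximate the limit
            best = curr[-1]
    return best

# Partial sums of the alternating harmonic series, which converge to ln 2
partial, s = [], 0.0
for k in range(1, 11):
    s += (-1) ** (k + 1) / k
    partial.append(s)

raw_err = abs(partial[-1] - math.log(2))
acc_err = abs(wynn_epsilon(partial) - math.log(2))
print(raw_err, acc_err)  # the accelerated error is smaller by many orders
```

Ten raw terms leave an error of a few percent; the epsilon table squeezes the same ten terms down to an error many orders of magnitude smaller, which is the effect exploited when accelerating slow fixed-point iterations.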
An FDTD method with FFT-accelerated exact absorbing boundary conditions
Sirenko, Kostyantyn
2011-07-01
An accurate and efficient finite-difference time-domain (FDTD) method for analyzing axially symmetric structures is presented. The method achieves its accuracy and efficiency using exact absorbing conditions (EACs) for terminating the computation domain and a blocked-FFT based scheme for accelerating the computation of the temporal convolutions present in non-local EACs. The method is shown to be especially useful in characterization of long-duration resonant wave interactions. © 2011 IEEE.
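The speed-up from evaluating the EAC temporal convolutions with FFTs rests on the standard convolution identity; the blocked scheme in the paper partitions the history, but the underlying equivalence can be shown in a few lines (invented data, not the paper's kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
kernel = rng.standard_normal(256)   # stand-in for the EAC temporal kernel
signal = rng.standard_normal(256)   # stand-in for the boundary-field history

# Direct O(N^2) temporal convolution
direct = np.convolve(signal, kernel)

# FFT-based O(N log N) convolution: zero-pad to the linear-convolution length
n = len(signal) + len(kernel) - 1
fast = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

print(np.allclose(direct, fast))    # the two results agree
```

Zero-padding to length N+M-1 is what turns the FFT's circular convolution into the linear convolution the absorbing condition actually needs.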
A chain-of-states acceleration method for the efficient location of minimum energy paths
International Nuclear Information System (INIS)
Hernández, E. R.; Herrero, C. P.; Soler, J. M.
2015-01-01
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C 60
Implementing Expertise-Based Training Methods to Accelerate the Development of Peer Academic Coaches
Blair, Lisa
2016-01-01
The field of expertise studies offers several models from which to develop training programs that accelerate the development of novice performers in a variety of domains. This research study implemented two methods of expertise-based training in a course to develop undergraduate peer academic coaches through a ten-week program. An existing…
Directory of Open Access Journals (Sweden)
В.Т. Чемерис
2006-04-01
Full Text Available This article elaborates a simplified calculation method and the choice of design parameters, with the corresponding justification, for the induction system of an electron-beam sterilizer based on a linear induction accelerator, taking into account the parameters of the magnetic material used for the cores and the parameters of the pulsed voltage.
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
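The micro-level ingredient of such slow-fast schemes is an ensemble of short stochastic trajectories; a plain Euler–Maruyama step over many paths can be sketched as below (a generic illustration with invented parameters; no macro-level extrapolation, which is the paper's contribution, is attempted here):

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
# dX = -theta * X dt + sigma dW, with stationary variance sigma^2 / (2 theta)
rng = np.random.default_rng(42)
theta, sigma = 2.0, 0.5
dt, steps, paths = 0.01, 2000, 2000

x = np.zeros(paths)                 # ensemble of micro trajectories
for _ in range(steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

# The macroscopic function of interest: the ensemble variance
print(x.var())                      # close to sigma**2 / (2 * theta) = 0.0625
```

The macro time-scale quantity (here the variance) equilibrates far more slowly than individual increments evolve, which is precisely the gap a micro-macro acceleration method exploits.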
International Nuclear Information System (INIS)
Park, Young Ryong; Cho, Nam Zin
2005-01-01
As the nuclear reactor core becomes more complex, heterogeneous, and geometrically irregular, the method of characteristics (MOC) is gaining its wide use in the neutron transport calculations. However, the long computing times require good acceleration methods. In this paper, the concept of coarse-mesh angular dependent re-balance (CMADR) acceleration is described and applied to the MOC calculation in x-y-z (z-infinite, uniform) geometry. The method is based on the angular dependent re-balance factors defined only on the coarse-mesh boundaries; a coarse-mesh consists of several fine meshes that may be heterogeneous and of mixed geometries with irregular or unstructured mesh shapes. In addition, the coarse-mesh boundaries may not coincide with the structural interfaces of the problem and can be chosen artificially for convenience. CMADR acceleration is tested on several test problems and the results show that CMADR is very effective in reducing the number of iterations and computing times of MOC calculations. Fourier analysis is also provided to investigate convergence of the CMADR method analytically and the results show that CMADR acceleration is unconditionally stable. (authors)
Effect of processing method on accelerated weathering of wood-flour/HDPE composites
Nicole M. Stark; Laurent M. Matuana; Craig M. Clemons
2003-01-01
Wood-plastic lumber is promoted as a low maintenance high-durability product. When exposed to accelerated weathering, however, wood-plastic composites may experience a color change and/or loss in mechanical properties. Different methods of manufacturing wood-plastic composites lead to different surface characteristics, which can influence weathering, In this study, 50...
The Experimental Stand for Research of Wakefield Method of Charged Particles Acceleration
International Nuclear Information System (INIS)
Kiselev, V.A.; Linnik, A.F.; Onishchenko, I.N.; Onishchenko, N.I.; Sotnikov, G.V.; Uskov, V.V.
2006-01-01
The experimental installation and diagnostic equipment intended for various studies of the wakefield method of charged-particle acceleration, both in plasma and in dielectric structures, are described. The main parameters of a sequence of short relativistic electron bunches and the values of the physical characteristics of the slow-wave structures are presented
Toward an optimal inversion method for synthetic aperture radar wind retrieval
Portabella, M.; Stoffelen, A.; Johannessen, Johnny A.
2002-01-01
In recent years, particular efforts have been made to derive wind fields over the oceans from synthetic aperture radar (SAR) images. In contrast with the scatterometer, the SAR has a higher spatial resolution and therefore has the potential to provide higher resolution wind information. Since there are at least two geophysical parameters (wind speed and wind direction) modulating the single SAR backscatter measurements, the inversion of wind fields from SAR observations has an inherent proble...
International Nuclear Information System (INIS)
Cheng, Z.D.; He, Y.L.; Cui, F.Q.
2013-01-01
This paper presents an axisymmetric steady-state computational fluid dynamics model, and further studies, of the complex coupled radiation–convection–conduction heat transfer in the pressurized volumetric receiver (PVR), combining the Finite Volume Method (FVM) and the Monte Carlo Ray-Trace (MCRT) method. On this basis, the effects of the geometric parameters of the compound parabolic concentrator (CPC) and of the properties of the porous absorber on the overall characteristics and performance of the photo-thermal conversion process in the PVR are analyzed and discussed in detail. It is found that the solar flux density distributions are always very heterogeneous, with large nonuniformities, and that the variation trends of the corresponding temperature distributions are very similar to those of the flux, but with a much lower order of magnitude. The CPC shape determined by the CPC exit aperture has a much larger effect on the overall characteristics and performance of the PVR than that of the CPC entry aperture with a constant acceptance angle. A suitable or optimal thickness of the porous absorber can be determined by examining where drastic decreasing trends occur in the curves of the overall characteristics and performance against the porosity. - Highlights: ► An axisymmetric steady-state CFD model of the PVR is presented with the MCRT–FVM method. ► The complex coupled heat transfer and overall performance of the PVR are studied. ► The effects of geometric parameters and porous properties are analyzed and discussed. ► Solar flux and temperature in the PVR are very heterogeneous with large nonuniformities. ► An optimal absorber thickness can be determined by examining the effects of porosity.
Rashid, Nur Shahidah Abdul; Sarmani, Sukiman; Majid, Amran Ab.; Mohamed, Faizal; Siong, Khoo Kok
2015-04-01
238U radionuclide is a naturally occurring radioactive material that can be found in soil. In this study, the solubility of 238U radionuclide obtained from various types of soil in synthetic gastrointestinal fluids was analysed by the USP in vitro digestion method. The synthetic gastrointestinal fluids were added to the samples in a well-ordered manner, mixed thoroughly and incubated according to the physiology of the human digestive system. The concentration of 238U radionuclide in the solutions extracted from the soil was measured using inductively coupled plasma mass spectrometry (ICP-MS). The concentration of 238U radionuclide from the soil samples in synthetic gastrointestinal fluids showed different values due to the differing homogeneity of the soil types and the chemical reactions of 238U radionuclide. In general, the solubility of 238U radionuclide in gastric fluid was higher (0.050 - 0.209 ppm) than in gastrointestinal fluids (0.024 - 0.050 ppm). It can be concluded that the USP in vitro digestion method is practical for estimating the solubility of 238U radionuclide from soil materials and could be useful for monitoring and risk-assessment purposes applied to environmental, health and contaminated-soil samples.
Directory of Open Access Journals (Sweden)
Abdalla Ahmed Abdel-Ghaly
2016-06-01
Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT) method as a goodness-of-fit (GOF) technique in the field of accelerated life testing (ALT), specifically for validating the underlying distributional assumption in the accelerated failure time (AFT) model. The method is based on transforming the data into independent and identically distributed (i.i.d.) Uniform(0, 1) random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributional assumptions in the AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in the case of exponential and lognormal distributions. Finally, a real-life example is provided to illustrate the application of the proposed procedure.
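The probability integral transform at the heart of the CPIT idea is easy to demonstrate: if the assumed model is correct, transformed samples are Uniform(0, 1). The sketch below checks uniformity with a simple Kolmogorov–Smirnov distance rather than the modified Watson statistic used in the paper; the data and rate parameter are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.5
x = rng.exponential(1 / lam, 5000)      # "failure times" from the true model

# Probability integral transform: U = F(X) should be Uniform(0, 1)
u = 1.0 - np.exp(-lam * x)

# One-sample Kolmogorov-Smirnov distance against Uniform(0, 1)
u_sorted = np.sort(u)
n = len(u)
grid = np.arange(1, n + 1) / n
ks = np.max(np.maximum(grid - u_sorted, u_sorted - (grid - 1 / n)))
print(ks)   # of order 1/sqrt(n) when the distributional assumption holds
```

Feeding the same test samples transformed with a wrong CDF (e.g., a mismatched rate) inflates the statistic well past its critical value, which is how the GOF decision is made.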
Method for pulse to pulse dose reproducibility applied to electron linear accelerators
International Nuclear Information System (INIS)
Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.
2002-01-01
An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and irradiation process control, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10, of 6.23 MeV and 82 W, and ALID-7, of 5.5 MeV and 670 W, built in NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and the magnetron (45 kV, 100 A, and 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists of controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method
International Nuclear Information System (INIS)
Dragt, A.J.
1987-01-01
A review is given of elementary Lie algebraic methods for treating Hamiltonian systems. This review is followed by a brief exposition of advanced Lie algebraic methods including resonance bases and conjugacy theorems. Finally, applications are made to the design of third-order achromats for use in accelerators, to the design of subangstroem resolution electron microscopes, and to the classification and study of high order aberrations in light optics. (orig.)
An improved method for statistical analysis of raw accelerator mass spectrometry data
International Nuclear Information System (INIS)
Gutjahr, A.; Phillips, F.; Kubik, P.W.; Elmore, D.
1987-01-01
Hierarchical statistical analysis is an appropriate method for statistical treatment of raw accelerator mass spectrometry (AMS) data. Using Monte Carlo simulations we show that this method yields more accurate estimates of isotope ratios and analytical uncertainty than the generally used propagation of errors approach. The hierarchical analysis is also useful in design of experiments because it can be used to identify sources of variability. 8 refs., 2 figs
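A hierarchical (nested) treatment of raw ratio data can be sketched with a balanced one-way random-effects ANOVA, separating between-run from within-run variability; the point is that the run level is an identifiable source of variance that plain error propagation ignores. This is a generic illustration on simulated data, not the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
runs, reps = 8, 10
sigma_between, sigma_within = 0.05, 0.02     # invented variance components
true_ratio = 1.0

# Simulated isotope-ratio measurements: run offset + within-run noise
run_offsets = rng.normal(0, sigma_between, runs)
data = true_ratio + run_offsets[:, None] + rng.normal(0, sigma_within, (runs, reps))

run_means = data.mean(axis=1)
grand = run_means.mean()                     # hierarchical ratio estimate

# Balanced one-way random-effects ANOVA variance components
ms_within = data.var(axis=1, ddof=1).mean()          # pooled within-run MS
ms_between = reps * run_means.var(ddof=1)            # between-run MS
var_within = ms_within
var_between = max((ms_between - ms_within) / reps, 0.0)

print(grand, var_between, var_within)
```

Recovering both components (here roughly 2.5e-3 between runs versus 4e-4 within runs) is what lets the analysis flag the run-to-run level as the dominant source of variability.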
A new method for fluid input into a hybrid synthetic jet actuator
Directory of Open Access Journals (Sweden)
Kordík J.
2014-03-01
Full Text Available A new principle of flow rectification for hybrid synthetic jet actuators is introduced in this paper. As is well known, flow rectification can best be accomplished by means of fluidic diodes. The novelty of the present study lies in fluidic diodes with two mutually opposed nozzles. Interaction between the periodic jet flows from the nozzles causes a difference between the blowing and suction strokes, resulting in a particularly efficient rectification effect. The distance between the nozzle exits as well as the oscillation frequency were the parameters varied during hot-wire measurements. The combination of those parameters achieving the highest volumetric efficiency was identified.
Synthetic Study of 2.5-D ATEM Based on Finite Element Method
DEFF Research Database (Denmark)
Qiang, Jianke; Zhou, Junjie; Cai, Hongzhu
2013-01-01
be amplified in the process of Laplace transform and Fourier transform. In order to get accurate result, the error should be well controlled in every procedure. The induced electromagnetic force can be computed accurately from vertical magnetic component by applying Lagrange interpolation. The synthetic model...... to the anomalous field which can avoid the singularity problem caused by the source which can excite the anomalous EM field. The EM source can be imposed to our process by incorporate the background EM field. The computation error can be accumulated due to the large variation of EM field and it can also...
International Nuclear Information System (INIS)
Chang Liyun; Ho, S.-Y.; Du, Y.-C.; Lin, C.-M.; Chen Tainsong
2007-01-01
The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for the radiotherapy linear accelerator. In this study, we propose a new and practical method, which uses only the digital level, V-film, and general solid phantoms. By taking the star shot only, we can accurately calculate the true gantry angle according to the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11 deg. compared with the digital indicator, and the standard deviation was within 0.05 deg. This method can also be used for the simulator. In conclusion, this proposed method could be adopted as an annual QA item for mechanical QA of the accelerator
A quality control method for detecting energy changes of medical accelerators
International Nuclear Information System (INIS)
McGinley, P.H.
2000-01-01
A description is presented of a simple and sensitive method for detecting a change in the energy of the electrons bombarding the target of medical accelerators. This technique is useful for x-ray beams with end point energy in the range of 15.7 to 25 MeV. The method is based on the photoactivation of 16 O and 14 N in a small sample of ammonium nitrate. It was found that the ratio of the activity induced in the oxygen divided by that produced in the nitrogen can be used as a quality control technique to detect a change in the energy of the electrons that bombard the target of the accelerator. An electron energy change of the order of 0.2 MeV can be determined using this method. (author)
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...
Simulation of isothermal multi-phase fuel-coolant interaction using MPS method with GPU acceleration
Energy Technology Data Exchange (ETDEWEB)
Gou, W.; Zhang, S.; Zheng, Y. [Zhejiang Univ., Hangzhou (China). Center for Engineering and Scientific Computation
2016-07-15
The energetic fuel-coolant interaction (FCI) has been one of the primary safety concerns in nuclear power plants. A graphical processing unit (GPU) implementation of the moving particle semi-implicit (MPS) method is presented and used to simulate the fuel-coolant interaction problem. The governing equations are discretized with the particle interaction model of MPS. Detailed implementation on a single GPU is introduced. The three-dimensional broken dam is simulated to verify the developed GPU-accelerated MPS method. The proposed GPU acceleration algorithm and developed code are then used to simulate the FCI problem. In summary, the developed GPU-MPS method showed good agreement with the experimental observation and theoretical prediction.
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' Carlos Haya' , Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario ' Virgen de las Nieves' , Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, to investigate the 'hot' regions of the accelerator, an information which is basic to develop a source model for this therapy tool.
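Of the two variance reduction techniques named, Russian roulette is simple to show in isolation. Below is a generic weight-window sketch on an invented slab-attenuation toy problem, combining implicit capture with roulette; splitting, which would symmetrically cap large weights, is omitted for brevity and none of this reproduces the accelerator simulation itself.

```python
import numpy as np

# Toy deep-penetration problem: a particle must survive 10 layers, each with
# survival probability p = 0.5; the exact answer is 0.5**10 ≈ 9.8e-4.
rng = np.random.default_rng(11)
layers, p, n_hist = 10, 0.5, 20000

total = 0.0
for _ in range(n_hist):
    w = 1.0
    alive = True
    for _ in range(layers):
        w *= p                       # implicit capture: carry survival as weight
        if w < 0.05:                 # below the weight window: play roulette
            if rng.random() < 0.5:
                w *= 2.0             # survivor's weight doubled -> unbiased
            else:
                alive = False        # killed outright
                break
    if alive:
        total += w

print(total / n_hist)                # ≈ 0.5**10
```

The roulette step keeps the estimator unbiased (the survival probability exactly compensates the weight increase) while avoiding the waste of tracking many particles of negligible weight, which is the same economy sought in the accelerator-head simulation.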
International Nuclear Information System (INIS)
Kong Xiaoxiao; Li Quanfeng
2003-01-01
A synthesis technique for the preliminary design of convergent Pierce electron guns, which has a number of advantages over traditional methods, is briefly introduced. A thermal-cathode electron gun used in an accelerator for radiation sterilization is redesigned with the synthesis method, and the validity of the method is demonstrated. Based on the preliminary design parameters given by the synthesis method, the simulation program EGUN was used in the numerical design of the focusing electrode and the anode. The final results meet the engineering requirements: a current of 1 A, a normalized emittance of less than 4 mm·mrad, and a uniform final current density
An iterative method for accelerated degradation testing data of smart electricity meter
Wang, Xiaoming; Xie, Jinzhe
2017-01-01
In order to evaluate the performance of a smart electricity meter (SEM), a great deal of time must be spent monitoring its status. For example, assessing the metering stability of a SEM takes several years at least, according to the standards. Accelerated degradation testing (ADT) is therefore a useful method for assessing the performance of the SEM. As is well known, the Wiener process is a prevalent model for interpreting performance degradation. This paper proposes an iterative method for the ADT data of SEMs. A simulation study verifies the applicability of the proposed model and its superiority over other ADT methods.
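The Wiener degradation model mentioned above writes the drift as the stress-dependent quantity to be estimated; a minimal simulation-and-estimation sketch is given below (invented parameters, plain increment-based maximum likelihood rather than the paper's iterative scheme):

```python
import numpy as np

# Wiener degradation model X(t) = mu*t + sigma*B(t); under accelerated
# stress the drift mu is larger, so degradation data arrive faster.
rng = np.random.default_rng(5)
mu, sigma, dt, n = 0.8, 0.3, 0.1, 500

increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
path = np.cumsum(increments)         # the observed degradation path

# MLE from increments: mu_hat = mean(dX)/dt, sigma2_hat = var(dX)/dt
mu_hat = increments.mean() / dt
sigma2_hat = increments.var(ddof=1) / dt
print(mu_hat, sigma2_hat)            # close to 0.8 and 0.09
```

With the drift estimated at several elevated stress levels, an acceleration relation (e.g., a log-linear law in stress) can then extrapolate the drift, and hence the first-passage lifetime, back to use conditions.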
Szcześ, Aleksandra; Yan, Yingdi; Chibowski, Emil; Hołysz, Lucyna; Banach, Marcin
2018-03-01
Surface free energy is one of the parameters accompanying interfacial phenomena, occurring also in the biological systems. In this study the thin layer wicking method was used to determine surface free energy and its components for synthetic hydroxyapatite (HA) and natural one obtained from pig bones. The Raman, FTIR and X-Ray photoelectron spectroscopy, X-ray diffraction techniques and thermal analysis showed that both samples consist of carbonated hydroxyapatite without any organic components. Surface free energy and its apolar and polar components were found to be similar for both investigated samples and equalled γSTOT = 52.4 mJ/m2, γSLW = 40.2 mJ/m2 and γSAB = 12.3 mJ/m2 for the synthetic HA and γSTOT = 54.6 mJ/m2, γSLW = 40.3 mJ/m2 and γSAB = 14.3 mJ/m2 for the natural one. Both HA samples had different electron acceptor (γs+) and electron donor (γs-) parameters. The higher value of the electron acceptor was found for the natural HA whereas the electron donor one was higher for the synthetic HA
Toyoda, Tetsuro
2011-01-01
Synthetic biology requires both engineering efficiency and compliance with safety guidelines and ethics. Focusing on the rational construction of biological systems based on engineering principles, synthetic biology depends on a genome-design platform to explore the combinations of multiple biological components or BIO bricks for quickly producing innovative devices. This chapter explains the differences among various platform models and details a methodology for promoting open innovation within the scope of the statutory exemption of patent laws. The detailed platform adopts a centralized evaluation model (CEM), computer-aided design (CAD) bricks, and a freemium model. It is also important for the platform to support the legal aspects of copyrights as well as patent and safety guidelines because intellectual work including DNA sequences designed rationally by human intelligence is basically copyrightable. An informational platform with high traceability, transparency, auditability, and security is required for copyright proof, safety compliance, and incentive management for open innovation in synthetic biology. GenoCon, which we have organized and explained here, is a competition-styled, open-innovation method involving worldwide participants from scientific, commercial, and educational communities that aims to improve the designs of genomic sequences that confer a desired function on an organism. Using only a Web browser, a participating contributor proposes a design expressed with CAD bricks that generate a relevant DNA sequence, which is then experimentally and intensively evaluated by the GenoCon organizers. The CAD bricks that comprise programs and databases as a Semantic Web are developed, executed, shared, reused, and well stocked on the secure Semantic Web platform called the Scientists' Networking System or SciNetS/SciNeS, based on which a CEM research center for synthetic biology and open innovation should be established. Copyright © 2011 Elsevier Inc
Real-time 3D imaging methods using 2D phased arrays based on synthetic focusing techniques.
Kim, Jung-Jun; Song, Tai-Kyong
2008-07-01
A fast 3D ultrasound imaging technique using a 2D phased array transducer based on the synthetic focusing method for nondestructive testing or medical imaging is proposed. In the proposed method, each column of the 2D array is fired successively to produce transverse fan beams focused at a fixed depth along a given longitudinal direction, and the resulting pulse echoes are received at all elements of the 2D array. After firing all column arrays, a high-resolution image frame along a given longitudinal direction is obtained, with dynamic focusing employed in the longitudinal direction on receive and in the transverse direction on both transmit and receive. The volume rate of the proposed method can be made much higher than that of conventional 2D array imaging by employing an efficient sparse array technique. A simple modification to the proposed method can further increase the volume scan rate significantly. The proposed methods are verified through computer simulations.
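The receive-side synthetic focusing logic can be mimicked in miniature with a delay-and-sum reconstruction on single-scatterer data. This is a 1D receive-only toy, not the proposed 2D-array scheme; all geometry and pulse parameters are invented, and one-way delays are used for simplicity.

```python
import numpy as np

c, fs, f0 = 1500.0, 20e6, 2e6            # sound speed (m/s), sampling, pulse freq
elems = np.linspace(-5e-3, 5e-3, 32)     # 32 element x-positions (m)
target = np.array([1e-3, 20e-3])         # true (x, z) of a point scatterer

t = np.arange(0, 60e-6, 1 / fs)
def pulse(tau):                          # short Gaussian-modulated pulse at delay tau
    return np.exp(-((t - tau) * f0 * 2) ** 2) * np.cos(2 * np.pi * f0 * (t - tau))

# Synthesize the per-element receive traces
dists = np.hypot(elems - target[0], target[1])
traces = np.array([pulse(d / c) for d in dists])

def delay_and_sum(point):
    """Coherently sum each trace at the geometric delay for a focal point."""
    delays = np.hypot(elems - point[0], point[1]) / c
    idx = np.round(delays * fs).astype(int)
    return abs(traces[np.arange(len(elems)), idx].sum())

# Scan candidate focal points along a lateral line at the target depth
xs = np.linspace(-3e-3, 3e-3, 61)
vals = [delay_and_sum(np.array([x, 20e-3])) for x in xs]
best_x = xs[int(np.argmax(vals))]
print(best_x)                            # close to the true lateral position 1e-3
```

Dynamic focusing amounts to repeating this coherent summation for every image point, with per-point delays; the sparse-array variant simply drops elements from the sum.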
Lathrop, J. W.
1985-01-01
If thin film cells are to be considered a viable option for terrestrial power generation, their reliability attributes will need to be explored and confidence in their stability obtained through accelerated testing. Development of a thin film accelerated test program will be more difficult than was the case for crystalline cells because of the monolithic construction nature of the cells. Specially constructed test samples will need to be fabricated, requiring commitment to the concept of accelerated testing by the manufacturers. A new test schedule appropriate to thin film cells will need to be developed which will be different from that used in connection with crystalline cells. Preliminary work has been started to seek thin film schedule variations to two of the simplest tests: unbiased temperature and unbiased temperature humidity. Still to be examined are tests which involve the passage of current during temperature and/or humidity stress, either by biasing in the forward (or reverse) directions or by the application of light during stress. Investigation of these current (voltage) accelerated tests will involve development of methods of reliably contacting the thin conductive films during stress.
Energy Technology Data Exchange (ETDEWEB)
Jansen, S.; Friedemann, S. [VKTA, Dresden (Germany); Enghardt, W. [Technische Univ. Dresden (Germany). OncoRay
2016-07-01
The process of clearance of radioactive materials according to paragraph 29 StrlSchV comprises comparing the activity with the mass and surface area of the material. To avoid an unbounded assessment of the activation activity, it is often necessary to use combined measurement methods: clearance of the collimators of a medical proton accelerator (energy range up to 200 MeV) requires knowledge of the nuclide composition. The energy and geometry calibration was established by experiments. The clearance was based on measurements of the activity distribution, gamma-spectroscopic verification of the nuclide composition, and gamma-counting detectors. Parts of medical electron accelerators achieved their clearance by a combination of the above-mentioned methods and in-situ gamma spectroscopy. Activated and contaminated lead stones achieved their clearance by measurements of fluence, laboratory evaluation of samples, in-situ gamma spectroscopy and wipe tests.
Acceleration of step and linear discontinuous schemes for the method of characteristics in DRAGON5
Directory of Open Access Journals (Sweden)
Alain Hébert
2017-09-01
Full Text Available The applicability of the algebraic collapsing acceleration (ACA) technique to the method of characteristics (MOC) in cases with scattering anisotropy and/or linear sources was investigated. Previously, the ACA was proven successful in cases with isotropic scattering and uniform (step) sources. A presentation is first made of the MOC implementation available in the DRAGON5 code. Two categories of schemes are available for integrating the propagation equations: (1) the first category is based on exact integration and leads to the classical step characteristics (SC) and linear discontinuous characteristics (LDC) schemes, and (2) the second category leads to diamond differencing schemes of various orders in space. The focus was the acceleration of these MOC schemes using a combination of the generalized minimal residual [GMRES(m)] method preconditioned with the ACA technique. Numerical results are provided for a two-dimensional (2D) eight-symmetry pressurized water reactor (PWR) assembly mockup in the context of the DRAGON5 code.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
Energy Technology Data Exchange (ETDEWEB)
Kong Chaocheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China)]. E-mail: kongchaocheng@tsinghua.org.cn; Li Quanfeng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Chen Huaibi [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Du Taibin [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Cheng Cheng [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Tang Chuanxiang [Department of Engineering Physics, Tsinghua University Beijing 100084 (China); Zhu Li [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Zhang Hui [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Pei Zhigang [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China); Ming Shenjin [Laboratory of Radiation and Environmental Protection, Tsinghua University, Beijing 100084 (China)
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. The results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the results given by the empirical formulas. The effect on the skyshine dose of different accelerator head structures is also discussed in this paper.
New method to extract radial acceleration of target from short-duration signal at low SNR
Institute of Scientific and Technical Information of China (English)
2008-01-01
In order to extract target radial acceleration from a radar echo signal at low SNR (signal-to-noise ratio), this paper employed the FRFT (fractional Fourier transform) to analyze short-duration radar echoes and studied the relations between signal convergence peaks in the matched transformation domain and the signal duration and modulated frequency. For a specified signal duration, a method was presented that multiplies the sampled signal by a known frequency-modulated signal to alter the modulated frequency, generating a new signal with larger convergence peaks in the matched transformation domain than the initial signal. Thus, the radial acceleration of a radar target can be successfully estimated at low SNR. Simulations were conducted to show the feasibility and effectiveness of the method.
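The dechirping idea the abstract describes (multiply the echo by a known frequency-modulated reference so that the matched component concentrates into a sharp peak) can be sketched as a search over candidate chirp rates. The signal model, chirp rate and noise level below are invented for illustration; the radial acceleration would map to the chirp rate through the radar wavelength.

```python
import numpy as np

# Echo from an accelerating target modeled as a linear-FM signal with chirp
# rate mu_true (Hz/s); additive complex Gaussian noise gives a low SNR.
fs = 1000.0                       # sampling rate, Hz
t = np.arange(0.0, 0.5, 1.0 / fs) # short observation window
mu_true = 400.0
rng = np.random.default_rng(0)
echo = np.exp(1j * np.pi * mu_true * t**2)
echo += 0.5 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

def estimate_chirp_rate(sig, t, candidates):
    """Return the candidate chirp rate whose dechirped spectrum peaks highest."""
    peaks = []
    for mu in candidates:
        # Multiplying by the reference chirp cancels the quadratic phase
        # when mu matches, collapsing the energy into one FFT bin.
        dechirped = sig * np.exp(-1j * np.pi * mu * t**2)
        peaks.append(np.abs(np.fft.fft(dechirped)).max())
    return candidates[int(np.argmax(peaks))]

candidates = np.arange(0.0, 800.0, 10.0)
mu_est = estimate_chirp_rate(echo, t, candidates)
```

This brute-force search stands in for the FRFT's continuous rotation angle; both exploit the same quadratic-phase matching.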
A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes
CSIR Research Space (South Africa)
Haelterman, R
2018-02-01
Research on GPU-accelerated algorithm in 3D finite difference neutron diffusion calculation method
International Nuclear Information System (INIS)
Xu Qi; Yu Ganglin; Wang Kan; Sun Jialong
2014-01-01
In this paper, the adaptability of the neutron diffusion numerical algorithm to GPUs was studied, and a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. The IAEA 3D PWR benchmark problem was calculated in the numerical test. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. (authors)
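A minimal single-group sketch shows why the finite-difference diffusion update maps naturally onto a GPU: every mesh cell updates independently from its neighbors' previous values. The cross sections, mesh and plain Jacobi iteration below are illustrative assumptions, not the multi-group code described in the paper.

```python
import numpy as np

# Single-group 3D diffusion: -D*laplacian(phi) + sig_a*phi = src on a
# uniform mesh with zero-flux boundaries (ghost cells held at zero).
n, h = 20, 1.0                  # interior cells per axis, mesh pitch (cm)
D, sig_a, src = 1.0, 0.05, 1.0  # illustrative diffusion data
phi = np.zeros((n + 2, n + 2, n + 2))

for _ in range(500):
    # Sum of the six face neighbors of every interior cell. Pure Jacobi:
    # the whole right-hand side uses the previous iterate, so all cells
    # could be updated in parallel (the GPU-friendly property).
    nb = (phi[:-2, 1:-1, 1:-1] + phi[2:, 1:-1, 1:-1]
          + phi[1:-1, :-2, 1:-1] + phi[1:-1, 2:, 1:-1]
          + phi[1:-1, 1:-1, :-2] + phi[1:-1, 1:-1, 2:])
    phi[1:-1, 1:-1, 1:-1] = (src * h**2 + D * nb) / (6.0 * D + sig_a * h**2)
```

The update formula follows from the standard 7-point discretization of the diffusion operator; a real code would use a better solver and multigroup coupling, but the per-cell independence is the same.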
Directory of Open Access Journals (Sweden)
Seiichiro Fujisawa
2007-02-01
Full Text Available The radical-scavenging activities of the synthetic antioxidants 2-allyl-4-X-phenol (X = NO2, Cl, Br, OCH3, COCH3, CH3, t-(CH3)3, C6H5) and 2,4-dimethoxyphenol, and the natural antioxidants eugenol and isoeugenol, were investigated using differential scanning calorimetry (DSC) by measuring their anti-1,1-diphenyl-2-picrylhydrazyl (DPPH) radical activity and the induction period for polymerization of methyl methacrylate (MMA) initiated by thermal decomposition of 2,2'-azobisisobutyronitrile (AIBN) and benzoyl peroxide (BPO). 2-Allyl-4-methoxyphenol and 2,4-dimethoxyphenol scavenged not only oxygen-centered radicals (PhCOO·) derived from BPO but also carbon-centered radicals (R·) derived from AIBN, as well as the DPPH radical, much more efficiently than eugenol and isoeugenol. 2-Allyl-4-methoxyphenol may also be useful for its lower prooxidative activity.
The Cysteine S-Alkylation Reaction as a Synthetic Method to Covalently Modify Peptide Sequences.
Calce, Enrica; De Luca, Stefania
2017-01-05
Synthetic methodologies to chemically modify peptide molecules have long been investigated for their impact in the field of chemical biology. They allow the introduction of biochemical probes useful for studying protein functions, for manipulating peptides with therapeutic potential, and for structure-activity relationship investigations. The most commonly used approach is derivatization of an amino acid side chain. In this regard cysteine, owing to its unique reactivity, has been widely employed as the substrate for such modifications. Herein, we report on methodologies developed to modify the cysteine thiol group through the S-alkylation reaction. Some procedures alkylate cysteine derivatives in order to prepare building blocks for use during peptide synthesis, while others selectively modify peptide sequences containing a cysteine residue with a free thiol group, both in solution and in the solid phase. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Comak, Gurbuz; Foltran, Stéphanie; Ke, Jie; Pérez, Eduardo; Sánchez-Vicente, Yolanda; George, Michael W.; Poliakoff, Martyn
2016-01-01
Highlights: • A synthetic method using ATR-FTIR spectroscopy has been developed to measure the solubility of water in CO_2. • New data have been obtained for the dew point of water at 4.05 MPa, 5.05 MPa and 6.03 MPa. • These data fill a gap in the literature and could be of significance for CO_2 transport in pipelines for CCS technology. - Abstract: A new synthetic method for studying phase behaviour is described using Attenuated Total Reflection (ATR) spectroscopy. The method has been developed to provide relevant information on the solubility of water in CO_2. The dew point of water has been determined at three different pressures, viz. (4.05, 5.05 and 6.03) MPa, with mole fractions of water between 0.01 and 0.04. The data obtained fill a gap in the literature in these regions of pressure and temperature and could be of high importance in the context of Carbon Capture and Storage (CCS) technology. Indeed, the presence of water in the captured CO_2 could damage the pipeline used for CO_2 transport. Hence, it is very important to have a full understanding of the behaviour of (CO_2 + H_2O) mixtures over the wide range of temperatures relevant to CCS.
Directory of Open Access Journals (Sweden)
Abdulrhman M. Dhabbah
2015-12-01
Full Text Available If there is a suspicion of arson, analysis of fire debris and identification of potential accelerants is considered one of the most essential examinations of the investigation. The existence of any traces of potential accelerants in a sample taken from the fire scene is crucial in determining whether the fire was started deliberately or not. This study is divided into four parts: the first part describes the most important ignition accelerants used in arson fires in Saudi Arabia. The second part is devoted to determining the methods used to collect and store trace evidence from fire scenes in Saudi Arabia when there is a suspicion that accelerants have been used to ignite the fire. The most important techniques used in the extraction and analysis of ignitable liquid residue (ILR) in arson cases are presented in the third part. Finally, the fourth part discusses the problems and difficulties that both experts and employees in the General Department of Forensic Evidence in Saudi Arabia face when collecting and sampling traces, as well as some recommendations to address these issues. The results obtained from this study indicate that the most common accelerant used to start fires is gasoline, specifically ‘Octane 91’, followed by kerosene, then diesel and finally paint thinner. Experts also agree on the difficulty of obtaining evidence from this type of crime scene, especially after the fire has been extinguished and the scene is released for investigation by the Civil Defense. They also agree that the best technique for extracting and analyzing ignitable liquid residue (ILR) in the solid phase is gas chromatography coupled with headspace sampling (GC-Headspace). For liquid samples, either gas chromatography coupled with mass spectrometry (GC-MS) or Fourier transform infrared spectroscopy (FT-IR) can be used.
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving for the induced electric field in high-resolution anatomical models of the human body exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden, so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards is about 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large-dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure in a large number of cases at a resolution that meets the requirements of international dosimetry guidelines.
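The COCG iteration differs from ordinary CG only in using the unconjugated bilinear form r^T r, which is what exploits complex symmetry (A equals its transpose, without conjugation). A minimal sketch follows, with a random complex-symmetric test matrix standing in for the admittance network; the shift, size and tolerances are illustrative assumptions.

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=500):
    """COCG for complex symmetric A: CG recurrences with unconjugated dot products."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                        # unconjugated: sum of r_i**2, not |r_i|**2
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)         # also unconjugated
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

rng = np.random.default_rng(1)
n = 40
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T + 20.0 * np.eye(n)         # complex symmetric (A == A.T), shifted
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = cocg(A, b)
```

Note that `np.dot`/`@` on complex vectors does not conjugate, which is exactly the bilinear form COCG needs; ordinary Hermitian CG would use `np.vdot` instead.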
Novel methods in the Particle-In-Cell accelerator Code-Framework Warp
Energy Technology Data Exchange (ETDEWEB)
Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-26
The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including, for example, the study of electron-cloud effects and laser wakefield acceleration. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz-invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion, and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the application of these methods to the abovementioned fields are given.
Track detectors in particle accelerator environment: an overview on existing and new methods
International Nuclear Information System (INIS)
Tripathy, S.P.; Sarkar, P.K.
2011-01-01
The advent of high-energy, high-intensity particle accelerators, with increasing applications in various fields, has led to the involvement of more users and operators. The complex (secondary) radiation field in an accelerator environment, generated by the primary beam hitting a target, is highly directional, dynamic, pulsed and mixed in nature, which poses a unique challenge for radiological safety, especially since neutrons contribute a significant dose even beyond the shields. Solid polymeric track detectors (SPTDs), due to their insensitivity to low-LET radiations and the integrating nature of their signal registration, are found to be effective and convenient for neutron measurements. This paper reviews some of the existing and frequently used methods of neutron spectrometry and dosimetry using SPTDs and explores new approaches as well. The paper elaborates on the extended energy response and rapid etching techniques of SPTDs, along with some new results. An overview of the recently introduced microwave-induced chemical etching (MICE) technique is also presented. (author)
The fingerprint method for characterization of radioactive waste in hadron accelerators
Magistris, M
2008-01-01
Beam losses are responsible for material activation in most of the components of particle accelerators. The activation is caused by several nuclear processes and varies with the irradiation history and the characteristics of the material (namely chemical composition and size). Once at the end of their operational lifetime, these materials require radiological characterization. The radionuclide inventory depends on the particle spectrum, the irradiation history and the chemical composition of the material. As long as these factors are known and the material cross-sections are available, the induced radioactivity can be calculated analytically. However, these factors vary widely among different items of waste and sometimes they are only partially known. The European Laboratory for Particle Physics (CERN, Geneva) has been operating accelerators for high-energy physics for 50 years. Different methods for the evaluation of the radionuclide inventory are currently under investigation at CERN, including the so-calle...
Method for the mechanical axis alignment of the linear induction accelerator
International Nuclear Information System (INIS)
Li Hong; China Academy of Engineering Physics, Mianyang; Yao Jin; Liu Yunlong; Zhang Linwen; Deng Jianjun
2004-01-01
Accurate mechanical axis alignment is a basic requirement for assembling a linear induction accelerator (LIA). The total length of an LIA is usually over thirty or even fifty meters, and it consists of many induction cells. Using a laser tracker, a new method of mechanical axis alignment for the LIA was established to achieve high accuracy. This paper introduces the method and gives the implementation steps and the point-position measurement errors of the mechanical axis alignment. During the alignment process a 55 m long alignment control survey net was built, and the theoretical revision of the coordinates of the control survey net is presented. (authors)
Multi-level nonlinear diffusion acceleration method for multigroup transport k-Eigenvalue problems
International Nuclear Information System (INIS)
Anistratov, Dmitriy Y.
2011-01-01
The nonlinear diffusion acceleration (NDA) method is an efficient and flexible transport iterative scheme for solving reactor-physics problems. This paper presents a fast iterative algorithm for solving multigroup neutron transport eigenvalue problems in 1D slab geometry. The proposed method is defined by a multi-level system of equations that includes multigroup and effective one-group low-order NDA equations. The eigenvalue is evaluated in the exact projected solution space of smallest dimensionality, namely, by solving the effective one-group eigenvalue transport problem. Numerical results that illustrate the performance of the new algorithm are presented. (author)
Shahbazi, Sara; Zamanian, Ali; Pazouki, Mohammad; Jafari, Yaser
2018-05-01
A new fully biomimetic technique based on both water uptake and degradation processes is introduced in this study, providing an interesting procedure to fabricate a bioactive and biodegradable synthetic scaffold with good mechanical and structural properties. The optimization of the parameters affecting scaffold fabrication was done by response surface methodology/central composite design (CCD). With this method, a synthetic scaffold was fabricated that has a uniform, open, interconnected porous structure with the largest pore size of 100-200 μm. The obtained compressive ultimate strength of ~35 MPa and compression modulus of 58 MPa are similar to those of some trabecular bone. The pore morphology, size, and distribution of the scaffold were characterized using a scanning electron microscope and mercury porosimetry. Fourier transform infrared spectroscopy, EDAX and X-ray diffraction analyses were used to determine the chemical composition, the Ca/P element ratio of the mineralized microparticles, and the crystal structure of the scaffolds, respectively. The optimum biodegradable synthetic scaffold, based on its raw materials of polypropylene fumarate, hydroxyethyl methacrylate and nano bioactive glass (PPF/HEMA/nanoBG) at 70/30 wt/wt%, 20 wt%, and 1.5 wt/wt% (PHB.732/1.5), with the desired porosity, pore size, and geometry, was created by 4 weeks of immersion in SBF. This scaffold showed considerable biocompatibility, ranging from 86 to 101% in the indirect and direct contact tests, and good osteoblast cell attachment when studied with bone-like cells. Copyright © 2018 Elsevier B.V. All rights reserved.
A chain-of-states acceleration method for the efficient location of minimum energy paths
Energy Technology Data Exchange (ETDEWEB)
Hernández, E. R., E-mail: Eduardo.Hernandez@csic.es; Herrero, C. P. [Instituto de Ciencia de Materiales de Madrid (ICMM–CSIC), Campus de Cantoblanco, 28049 Madrid (Spain); Soler, J. M. [Departamento de Física de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, 28049 Madrid (Spain)
2015-11-14
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.
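The discrete bookkeeping behind the acceleration parametrization can be illustrated as follows: interior second differences of the states give the accelerations, and the states are recovered from the accelerations plus the two fixed endpoints by inverting the tridiagonal second-difference operator. This sketch shows only that correspondence on a 1D test path, not the authors' MEP optimization.

```python
import numpy as np

def accelerations(path, dt):
    """Interior second differences a_i = (x_{i+1} - 2 x_i + x_{i-1}) / dt**2."""
    return (path[2:] - 2.0 * path[1:-1] + path[:-2]) / dt**2

def path_from_accelerations(a, x0, x1, dt):
    """Invert the second-difference operator with fixed endpoints x0, x1."""
    m = len(a)                          # number of interior states
    L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / dt**2
    rhs = np.asarray(a, float).copy()
    rhs[0] -= x0 / dt**2                # move known endpoint terms to the RHS
    rhs[-1] -= x1 / dt**2
    interior = np.linalg.solve(L, rhs)
    return np.concatenate(([x0], interior, [x1]))

t = np.linspace(0.0, 1.0, 11)
path = t**2                             # simple quadratic test path
a = accelerations(path, t[1] - t[0])    # constant for a quadratic path
rec = path_from_accelerations(a, path[0], path[-1], t[1] - t[0])
```

Taking the accelerations as the unknowns automatically keeps the endpoints pinned, which is one practical appeal of the formulation.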
Kim, Jongsik; McNamara, Nicholas D; Her, Theresa H; Hicks, Jason C
2013-11-13
This work describes a novel method for the preparation of titanium oxide nanoparticles supported on amorphous carbon with nanoporosity (Ti/NC) via the post-synthetic modification of an amine-functionalized Zn-based MOF, IRMOF-3, with titanium isopropoxide, followed by carbothermal pyrolysis. This material exhibited high purity, high surface area (>1000 m(2)/g), and a high dispersion of metal oxide nanoparticles while maintaining a small particle size (~4 nm). The material was shown to be a promising catalyst for oxidative desulfurization of diesel, using dibenzothiophene as a model compound, as it exhibited enhanced catalytic activity compared with titanium oxide supported on activated carbon via the conventional incipient wetness impregnation method. A formation mechanism of Ti/NC was also proposed based on results obtained when the carbothermal reduction temperature was varied.
Nikazad, T; Davidi, R; Herman, G T
2012-03-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches with similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.
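A minimal, unaccelerated member of this family is the Kaczmarz-type sweep, which projects the iterate onto each equation's hyperplane in turn; for a consistent system the iterates converge to a solution, as the abstract states. The matrix below is a random stand-in, not tomographic projection data, and the relaxation parameter is an illustrative choice.

```python
import numpy as np

def kaczmarz(A, b, sweeps=500, relax=1.0):
    """Row-action projections: x <- x + relax * (b_i - a_i.x) / ||a_i||^2 * a_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                      # consistent system by construction
x = kaczmarz(A, b)
```

Block versions project onto groups of rows at once, and the paper's acceleration and perturbation-resilience results build on exactly this kind of row-action scheme.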
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is reduced by a factor of 38.9 with a GTX 580 graphics card.
Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC
International Nuclear Information System (INIS)
She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin
2011-01-01
The probability-neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation, and it is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, large amounts of time are spent finding out which cell a particle is located in. The traditional way is to search the cells one by one in a fixed, predefined sequence. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM optimizes the searching sequence, i.e., the cells with larger probability are searched preferentially. The PNM is implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations of complicated systems; the method is especially effective in delta-tracking simulation. (author)
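The PNM idea reduces, in essence, to ordering cell-containment tests by estimated entry probability. A toy sketch with invented cell names and probabilities compares the expected number of tests for a fixed search order against the probability-sorted order:

```python
import random

# Illustrative cells and their (estimated) probabilities of containing a particle.
cells = ["fuel", "clad", "moderator", "reflector"]
prob = {"fuel": 0.55, "clad": 0.05, "moderator": 0.35, "reflector": 0.05}

# PNM-style ordering: test the most probable cells first.
search_order = sorted(cells, key=lambda c: prob[c], reverse=True)

def locate(particle_cell, order):
    """Return the number of containment tests needed to find the cell."""
    for tests, cell in enumerate(order, start=1):
        if cell == particle_cell:
            return tests
    raise ValueError("particle escaped the geometry")

rng = random.Random(3)
samples = rng.choices(cells, weights=[prob[c] for c in cells], k=10000)
fixed = sum(locate(s, cells) for s in samples) / len(samples)
tuned = sum(locate(s, search_order) for s in samples) / len(samples)
```

Here the fixed order averages about 1.9 tests per lookup while the sorted order averages about 1.6; in a real geometry with thousands of cells the gap is far larger, which is where the reported savings come from.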
Directory of Open Access Journals (Sweden)
Wahab Adebayo Salami
2017-01-01
Full Text Available This paper presents the development of runoff hydrographs for selected rivers in the Ogun-Osun river catchment, south-west Nigeria, using the Snyder and Soil Conservation Service (SCS) methods of synthetic unit hydrograph to determine the ordinates. The SCS Curve Number method was used to estimate the excess rainfall from storms of different return periods. The peak runoff hydrographs were determined by convoluting the unit hydrograph ordinates with the excess rainfall, and the peak flows obtained by both the Snyder and SCS methods were observed to vary from one river watershed to another. For the eight watersheds, the peak runoff hydrograph flows based on the unit hydrograph ordinates determined with the Snyder method for the 20-yr, 50-yr, 100-yr, 200-yr and 500-yr return periods ranged from 112.63 m3/s to 13364.30 m3/s, while those based on the SCS method ranged from 304.43 m3/s to 6466.84 m3/s. The percentage difference between the peak flows obtained with the Snyder and SCS methods varies from 13.14% to 63.30%. The SCS method is recommended for estimating the ordinates required for the development of peak runoff hydrographs in these river watersheds, because it utilizes additional morphometric parameters such as the watershed slope and the Curve Number (CN), which is a function of the properties of the soil and vegetation cover of the watershed.
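The excess-rainfall step relies on the standard SCS Curve Number relations, which are compact enough to state directly. The storm depth and CN below are illustrative, not values from the Ogun-Osun study, and the function name is ours.

```python
# SCS Curve Number runoff (SI units, depths in millimetres):
#   S  = 25400 / CN - 254        potential maximum retention
#   Ia = 0.2 * S                 initial abstraction
#   Q  = (P - Ia)^2 / (P + 0.8 S)   for P > Ia, else Q = 0
def scs_excess_rainfall(p_mm, cn):
    """Excess rainfall Q from storm depth P and Curve Number CN."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

q = scs_excess_rainfall(100.0, 80.0)   # 100 mm storm on a CN = 80 watershed
```

Convoluting the resulting excess-rainfall increments with the unit hydrograph ordinates then yields the peak runoff hydrograph described in the abstract.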
Detection methods of pulsed X-rays for transmission tomography with a linear accelerator
International Nuclear Information System (INIS)
Glasser, F.
1988-07-01
Appropriate detection methods are studied for the development of a high-energy tomograph using a linear accelerator for nondestructive testing of bulky objects. The aim is the selection of detectors adapted to a pulsed X-ray source and with good behavior under X-ray radiation of several MeV. The performance of semiconductors (HgI2, Cl-doped CdTe, GaAs, Bi12GeO20) and of a scintillator (Bi4Ge3O12) is examined. A prototype tomograph gave images that show the validity of the detectors for the analysis of medium-size equipment such as a concrete drum of 60 cm in diameter [fr]
Optical Flow of Small Objects Using Wavelets, Bootstrap Methods, and Synthetic Discriminant Filters
National Research Council Canada - National Science Library
Hewer, Gary
1997-01-01
...) targets in highly cluttered and noisy environments. In this paper, we present a novel wavelet detection algorithm which incorporates adaptive CFAR detection statistics using the bootstrap method...
Beam transient analyses of Accelerator Driven Subcritical Reactors based on neutron transport method
Energy Technology Data Exchange (ETDEWEB)
He, Mingtao; Wu, Hongchun [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Wang, Kunpeng [Nuclear and Radiation Safety Center, PO Box 8088, Beijing 100082 (China); Li, Xunzhao; Zhou, Shengcheng [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China)
2015-12-15
Highlights: • A transport-based kinetics code for Accelerator Driven Subcritical Reactors is developed. • The performance of different kinetics methods adapted to the ADSR is investigated. • The impacts of neutronic parameters deteriorating with fuel depletion are investigated. - Abstract: The Accelerator Driven Subcritical Reactor (ADSR) is almost entirely external-source dominated, since there is no additional reactivity control mechanism in most designs. This paper focuses on beam-induced transients, studied with an in-house developed dynamic analysis code. The performance of different kinetics methods adapted to the ADSR is investigated, including the point kinetics approximation and space-time kinetics methods. Then, the transient responses to beam trip and beam overpower are calculated and analyzed for an ADSR design dedicated to minor actinide transmutation. The impacts of some safety-related neutronics parameters that deteriorate with fuel depletion are also investigated. The results show that the power distribution varying with burnup leads to large differences in temperature responses during transients, while the impacts of kinetic parameters and feedback coefficients are not very obvious. Classification: Core physics.
Kurniadi, M.; Bintang, R.; Kusumaningrum, A.; Nursiwi, A.; Nurhikmat, A.; Susanto, A.; Angwar, M.; Triwiyono; Frediansyah, A.
2017-12-01
Research on shelf-life prediction of canned fried rice using the Accelerated Shelf-life Test (ASLT) with the Arrhenius model has been conducted. The aim of this research was to predict the shelf life of canned fried rice products. Lethality values at 121°C for 15 and 20 minutes and total plate count methods were used to determine the time and temperature of the sterilization process. The storage temperatures of the ASLT Arrhenius method were 35, 45 and 55°C over 35 days. Rancidity is one of the quality-degradation modes of canned fried rice; in this research, samples of canned fried rice were therefore tested using the thiobarbituric acid (TBA) rancidity value, measured periodically once a week. The use of cans for fried rice without any chemical preservative is one of the advantages of the product; additionally, the use of physical parameters such as temperature and pressure during processing can extend the shelf life and reduce microbial contamination. No comparable study has previously been done for fried rice as a ready-to-eat meal. The results showed that the optimum sterilization conditions were 121°C for 15 minutes, with a total plate count of 9.3 × 10¹ CFU/ml. The lethality value of canned fried rice at 121°C for 15 minutes was 3.63 minutes. The shelf life of canned fried rice calculated using the ASLT Arrhenius method was 10.3 months.
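The ASLT/Arrhenius calculation amounts to fitting ln k against 1/T for the degradation rates measured at the elevated temperatures and extrapolating the rate to the storage temperature. The rate constants, quality limits and zero-order kinetics below are synthetic assumptions for illustration; they are not the paper's data.

```python
import numpy as np

# Synthetic TBA-increase rates measured at the three accelerated temperatures.
T = np.array([35.0, 45.0, 55.0]) + 273.15   # K
k = np.array([0.020, 0.045, 0.095])         # TBA units per day (invented)

# Arrhenius: ln k = ln A - Ea / (R T), a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * 8.314                          # activation energy, J/mol

# Extrapolate the rate to the storage temperature and convert to shelf life,
# assuming zero-order growth of TBA from tba0 up to an acceptability limit.
T_store = 27.0 + 273.15
k_store = np.exp(intercept + slope / T_store)
tba_limit, tba0 = 1.0, 0.1                   # illustrative quality bounds
shelf_life_days = (tba_limit - tba0) / k_store
```

The same two-step fit-then-extrapolate logic underlies the paper's 10.3-month figure, with the real weekly TBA measurements in place of the synthetic rates here.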
Blokland, M.H.; Tricht, van E.F.; Ginkel, van L.A.; Sterk, S.S.
2017-01-01
A robust LC–MS/MS method was developed to quantify a large number of phase I and phase II steroids in urine. The decision limit is below 1 ng ml−1 for most compounds, with a measurement uncertainty smaller than 30%. The method is fully validated and was applied to assess the influence of
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong Sung; Kim, Yong Woo [Sunchon National University, Suncheon (Korea, Republic of)
2014-10-15
Two acceleration methods, an effective force method (or inertia method) and a large mass method, have been applied for performing time history seismic analysis. The acceleration methods for uncracked structures have been verified via previous studies. However, no study has identified the validity of these acceleration methods for cracked piping. In this study, the validity of the acceleration methods for through-wall cracked piping is assessed via time history implicit dynamic elastic seismic analysis from the viewpoint of linear elastic fracture mechanics. As a result, it is identified that both acceleration methods show the same results for cracked piping if a large mass magnitude and maximum time increment are adequately selected.
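The two techniques can be compared on a single-degree-of-freedom oscillator: the effective force method applies -m·a_g(t) in relative coordinates, while the large mass method drives a very heavy support mass with F = M_big·a_g(t). With a sufficiently large mass and a small time step the relative responses coincide, mirroring the abstract's conclusion. All parameters below are illustrative, and the integrator is a simple explicit central-difference scheme of our own choosing.

```python
import numpy as np

m, k, c = 1.0, 400.0, 0.8            # structure mass, stiffness, damping
dt, nstep = 0.001, 4000
t = np.arange(nstep) * dt
ag = np.sin(2.0 * np.pi * 2.0 * t)   # illustrative ground acceleration history

def central_difference(M, C, K, F, dt):
    """Explicit central-difference integration of M u'' + C u' + K u = F(t)."""
    u = np.zeros((F.shape[0], M.shape[0]))
    v = np.zeros(M.shape[0])
    a = np.linalg.solve(M, F[0] - C @ v - K @ u[0])
    u_prev = u[0] - dt * v + 0.5 * dt**2 * a
    for i in range(F.shape[0] - 1):
        a = np.linalg.solve(M, F[i] - C @ v - K @ u[i])
        u_next = 2.0 * u[i] - u_prev + dt**2 * a
        v = (u_next - u_prev) / (2.0 * dt)
        u_prev = u[i].copy()
        u[i + 1] = u_next
    return u

# Effective force method: relative coordinate, load -m * ag(t).
u_eff = central_difference(np.array([[m]]), np.array([[c]]), np.array([[k]]),
                           (-m * ag)[:, None], dt)[:, 0]

# Large mass method: DOF 0 is the support carrying mass M_big, DOF 1 the
# structure; the support is driven by F = M_big * ag(t).
M_big = 1.0e6 * m
M2 = np.diag([M_big, m])
K2 = np.array([[k, -k], [-k, k]])
C2 = np.array([[c, -c], [-c, c]])
F2 = np.column_stack([M_big * ag, np.zeros(nstep)])
u2 = central_difference(M2, C2, K2, F2, dt)
u_rel = u2[:, 1] - u2[:, 0]          # relative response of the structure
```

The agreement degrades if the mass ratio is too small or the step too coarse, which is the abstract's caveat about selecting the mass magnitude and time increment.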
International Nuclear Information System (INIS)
Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin
2013-01-01
Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from field conditions; thus, to predict field failure, one needs to calibrate the prediction made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to account for the difference between the lab and field conditions, so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedure are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated by two examples, together with a sensitivity analysis with respect to the prior distribution assumption
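The calibration-factor idea can be sketched with a toy model (the model, prior, and data below are illustrative assumptions, not the paper's formulation): the lab ADT analysis predicts a mean field lifetime, field observations are modeled around a scaled version of that prediction, and a random-walk Metropolis sampler draws the posterior of the calibration factor.

```python
import numpy as np

# Toy Bayesian lab-to-field calibration: lab ADT predicts mean lifetime
# mu_lab; field lifetimes ~ Normal(kappa * mu_lab, sigma); the calibration
# factor kappa has prior Normal(1, 0.5). All values are synthetic.
rng = np.random.default_rng(0)
mu_lab, sigma = 1000.0, 50.0
field = rng.normal(0.8 * mu_lab, sigma, size=30)   # synthetic field data

def log_post(kappa):
    lp = -0.5 * ((kappa - 1.0) / 0.5) ** 2                    # prior
    ll = -0.5 * np.sum(((field - kappa * mu_lab) / sigma) ** 2)  # likelihood
    return lp + ll

# random-walk Metropolis sampler for the posterior of kappa
kappa, chain = 1.0, []
for it in range(5000):
    prop = kappa + 0.02 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(kappa):
        kappa = prop
    chain.append(kappa)
post = np.array(chain[1000:])
print(post.mean())   # posterior mean of the calibration factor, near 0.8
```

With enough field data the posterior concentrates near the true factor; with sparse field data the prior (the "lab knowledge") dominates, which is the fusion behavior the abstract describes.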
Energy Technology Data Exchange (ETDEWEB)
Shin, Jong Kook; Yoon, Cheon Seog [Dept. of Mechanical Engineering, Hannam University, Daejeon (Korea, Republic of); Kim, Hong Suk [Engine Research Center, Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)
2015-11-15
Among the various ammonium salts and metal ammine chlorides used as solid ammonia sources for solid SCR in lean NOx reduction, magnesium ammine chloride was selected for this study because of its ease of handling and safety. Lab-scale synthesis of magnesium ammine chloride was studied for different durations, temperatures, and pressures with the proper amount of ammonia gas charged, evaluated in terms of the ammonia gas adsorption rate (%). To understand the material characteristics of the lab-made magnesium ammine chloride, DA, IC, FT-IR, XRD, and SDT analyses were performed and compared with published data available in the literature. From the analytical results, the water content in the lab-made magnesium ammine chloride can be determined. A new test procedure for water removal was proposed, by which the adsorption rate of the lab-made sample was found to be approximately 100%.
International Nuclear Information System (INIS)
Shin, Jong Kook; Yoon, Cheon Seog; Kim, Hong Suk
2015-01-01
Among the various ammonium salts and metal ammine chlorides used as solid ammonia sources for solid SCR in lean NOx reduction, magnesium ammine chloride was selected for this study because of its ease of handling and safety. Lab-scale synthesis of magnesium ammine chloride was studied for different durations, temperatures, and pressures with the proper amount of ammonia gas charged, evaluated in terms of the ammonia gas adsorption rate (%). To understand the material characteristics of the lab-made magnesium ammine chloride, DA, IC, FT-IR, XRD, and SDT analyses were performed and compared with published data available in the literature. From the analytical results, the water content in the lab-made magnesium ammine chloride can be determined. A new test procedure for water removal was proposed, by which the adsorption rate of the lab-made sample was found to be approximately 100%
International Nuclear Information System (INIS)
Moraes, Pedro Gabriel B.; Leite, Michel C.A.; Barros, Ricardo C.
2013-01-01
In this work we developed software to model one-dimensional neutron transport problems in the multigroup energy formulation and to present the results in tables and graphs. The numerical method we use to solve the neutron diffusion problem is analytic, thus eliminating the truncation errors that appear in classical numerical methods, e.g., the finite difference method. This analytical numerical method increases computational efficiency, since no refined spatial discretization is necessary: for any spatial grid used, the numerical result generated at a given point of the domain remains unchanged, apart from the rounding errors of finite computer arithmetic. We chose to develop the computational application on the MatLab platform for numerical computation, with a simple and easy-to-use program interface. We consider it important to model this neutron transport problem with a fixed source in the context of shielding calculations for radiation protection of the biosphere, which can be sensitive to ionizing radiation
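The grid-independence property claimed above can be demonstrated with the textbook one-group analog (a sketch with assumed parameters, not the paper's multigroup code): for a uniform fixed source in a slab with zero flux at the edges, the diffusion equation D*phi'' - Sigma_a*phi + Q = 0 has a closed-form solution, so any evaluation grid returns the same value at a shared point.

```python
import numpy as np

# Analytic (grid-independent) one-group diffusion flux in a slab [-a, a]
# with uniform source Q and phi(+/-a) = 0:
#   phi(x) = (Q/Sigma_a) * (1 - cosh(x/L)/cosh(a/L)),  L = sqrt(D/Sigma_a)
# Parameters below are illustrative.
D, Sigma_a, Q, a = 1.0, 0.1, 1.0, 10.0
L = np.sqrt(D / Sigma_a)

def phi(x):
    return (Q / Sigma_a) * (1.0 - np.cosh(x / L) / np.cosh(a / L))

# The value at any point is the same no matter what grid is used:
coarse = phi(np.linspace(-a, a, 11))    # midpoint is coarse[5]
fine = phi(np.linspace(-a, a, 101))     # midpoint is fine[50]
print(phi(0.0), coarse[5], fine[50])    # all three are identical
```

This is exactly the contrast with finite differences drawn in the abstract: an analytic method carries no spatial truncation error, only rounding error.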
Mohammad Asif
2016-01-01
Substituted quinoxalines are of considerable interest in chemistry, biology, and pharmacology. Quinoxaline derivatives exhibit a variety of biological activities, of which the most potent are anti-microbial, analgesic, and anti-inflammatory. This has prompted researchers to develop various methods for their synthesis and applications. This review presents different methods of synthesis, reactivity, and various biological act...
One-pot synthetic method to prepare highly N-doped nanoporous carbons for CO2 adsorption
International Nuclear Information System (INIS)
Meng, Long-Yue; Park, Soo-Jin
2014-01-01
A one-pot synthetic method was used for the preparation of nanoporous carbon containing nitrogen from polypyrrole (PPY) using NaOH as the activating agent. The activation process was carried out under set conditions (NaOH/PPY = 2 and NaOH/PPY = 4) at temperatures of 600–900 °C for 2 h. The effect of the activation conditions on the pore structure, surface functional groups, and CO2 adsorption capacities of the prepared N-doped activated carbons was examined. The carbons were analyzed by X-ray photoelectron spectroscopy (XPS), N2 full isotherms at 77 K, scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The CO2 adsorption capacity of the N-doped activated carbon was measured at 298 K and 1 bar. After removal of the activating agents by dissolution, the N-doped activated carbon exhibited high specific surface areas (755–2169 m2 g−1) and high pore volumes (0.394–1.591 cm3 g−1). In addition, the N-doped activated carbons retained a high N content at lower activation temperatures (7.05 wt.%). The N-doped activated carbons showed a very high CO2 adsorption capacity of 177 mg g−1 at 298 K and 1 bar. The CO2 adsorption capacity was found to depend on the microporosity and N content. - Highlights: • A one-pot synthetic method was used for the preparation of N-doped nanoporous carbons. • Polypyrrole (PPY) was activated with NaOH under set conditions (NaOH/PPY = 2 and 4). • The N-doped activated carbon exhibited high specific surface areas (up to 2169 m2 g−1). • The carbons showed a very high CO2 adsorption capacity of 177 mg g−1 at 298 K
Expanding applications of gene-based targeting biotechnology in functional genomics and the treatment of plants, animals, and microbes has synergized the need for new methods to measure binding efficiencies of these products to their genetic targets. The adaptation and innovative use of Cell–Penetra...
A Rapid Synthetic Method for the Preparation of Two Tris-Cobalt(III) Compounds.
Jackman, Donald C.; Rillema, D. Paul
1989-01-01
Reports a method of preparation for tris(ethylenediamine)cobalt(III) and tris(2,2'-bipyridine)cobalt(III) that shortens the preparation time by approximately 3 hours. Notes that the time for synthesis and isolation of compound one was 1 hour (yield 38%), while compound two took 50 minutes (yield 71%). (MVL)
Cederkvist, Karin; Jensen, Marina B; Holm, Peter E
2017-08-01
Stormwater treatment facilities (STFs) are becoming increasingly widespread, but knowledge of their performance is limited. This is due to difficulties in obtaining representative samples during storm events and in documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of a tracer to correct removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements representative of road runoff, is suggested, along with relevant concentration ranges. The method was used for adding contaminants to three different STFs: a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants despite differing field conditions by having a flexible system, mixing different stock solutions on site, and using a bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer when correcting contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in forthcoming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.
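The tracer-correction logic can be made concrete with a small sketch. The formula below is a common way to divide tracer losses out of the apparent removal; it is an assumption for illustration, not quoted from the paper, and all concentrations are made up.

```python
# Tracer-corrected removal: losses seen by the conservative bromide tracer
# (dilution, storage, incomplete recovery) are divided out of the apparent
# contaminant removal. Illustrative formula and numbers.
def corrected_removal(c_in, c_out, br_in, br_out):
    tracer_recovery = br_out / br_in        # e.g. 0.12 to 0.97 in the study
    return 1.0 - (c_out / c_in) / tracer_recovery

raw = 1.0 - 20.0 / 100.0                        # apparent removal: 0.80
corr = corrected_removal(100.0, 20.0, 5.0, 4.0)  # tracer recovery 0.80
print(raw, corr)   # 0.8 vs 0.75: part of the apparent removal was loss
```

With only 12% tracer recovery, as in one permeable pavement, an uncorrected removal figure would be dominated by losses rather than treatment, which is the point the abstract stresses.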
International Nuclear Information System (INIS)
Ohtsuka, N.; Ishihama, S.; Kunifuda, T.; Hayasaka, N.; Miura, T.
2001-01-01
Various long-lived radionuclides, 3 H, 7 Be, 22 Na, 51 Cr, 54 Mn, 56 Co, 57 Co, 60 Co, 134 Cs, 152 Eu and 154 Eu, have been produced in the shielding concrete of a high-energy proton accelerator facility through both nuclear spallation reactions and thermal neutron capture reactions of concrete elements during machine operation. Tritium is the most important nuclide from the viewpoint of radiation protection. There have been, however, few measurements of the tritium concentration induced in shielding concrete. In this study, the conditions of a measurement method for the tritium concentration induced in shielding concrete were investigated using the activated shielding concrete of the 12 GeV proton beam-line tunnel at KEK and the standard rock (JG-1) irradiated with thermal neutrons in a reactor. The depth profiles of tritium induced in the shielding concrete of the slow extracted proton beam line at KEK were then determined using this method. (author)
International Nuclear Information System (INIS)
Scheithauer, M.; Schwedas, M.; Wiezorek, T.; Wendt, T.
2003-01-01
The present study focused on the reconstruction of the bremsstrahlung spectrum of a clinical linear accelerator from the measured transmission curve, with the aim of improving the accuracy of this method. The essence of the method is the analytic inverse Laplace transform of a parametric function fitted to the measured transmission curve. We tested known fitting functions; however, they resulted in considerable fitting inaccuracy, leading to inaccuracies in the bremsstrahlung spectrum. In order to minimise the fitting errors, we employed a linear combination of n equations with 2n-1 parameters. The fitting errors are now considerably smaller. The measurement of the transmission function requires that the energy-dependent detector response be taken into account. We analysed the underlying physical context and developed a function that corrects for the energy-dependent detector response. The factors of this function were experimentally determined or calculated from tabulated values. (orig.) [de
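The underlying idea is that the transmission curve T(x) = ∫ w(μ) exp(−μx) dμ is the Laplace transform of the spectrum expressed over attenuation coefficients, so fitting T(x) with a sum of exponentials inverts trivially to discrete spectral weights. The sketch below uses a synthetic noise-free curve and an assumed basis of attenuation coefficients, not the paper's measured data or its specific 2n−1-parameter fit.

```python
import numpy as np

# Fit a transmission curve T(x) = sum_i w_i * exp(-mu_i * x) as a linear
# combination of exponentials; the fitted weights w_i are the spectral
# weights at the energies where the attenuation coefficient equals mu_i.
x = np.linspace(0.0, 20.0, 60)                        # absorber thicknesses
T = 0.6 * np.exp(-0.2 * x) + 0.4 * np.exp(-0.5 * x)   # "measured" curve

mu_basis = np.array([0.2, 0.35, 0.5])                 # candidate mu values
A = np.exp(-np.outer(x, mu_basis))                    # exponential basis
w, *_ = np.linalg.lstsq(A, T, rcond=None)
print(w)   # recovers weights ~[0.6, 0.0, 0.4]
```

In practice the fit is ill-conditioned under measurement noise, which is exactly why the abstract emphasizes minimizing fitting errors and correcting the detector response before inversion.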
Acute Effect of Different Combined Stretching Methods on Acceleration and Speed in Soccer Players
Directory of Open Access Journals (Sweden)
Amiri-Khorasani Mohammadtaghi
2016-04-01
Full Text Available The purpose of this study was to investigate the acute effect of different stretching methods, during a warm-up, on the acceleration and speed of soccer players. The acceleration performance of 20 collegiate soccer players (body height: 177.25 ± 5.31 cm; body mass: 65.10 ± 5.62 kg; age: 16.85 ± 0.87 years; BMI: 20.70 ± 5.54; experience: 8.46 ± 1.49 years) was evaluated after different warm-up procedures, using 10 and 20 m tests. Subjects performed five types of warm-up: static, dynamic, combined static + dynamic, combined dynamic + static, and no-stretching. Subjects were divided into five groups. Each group performed five different warm-up protocols on five non-consecutive days. The warm-up protocol used for each group was randomly assigned. The protocols consisted of 4 min of jogging, a 1 min stretching program (except for the no-stretching protocol), and 2 min rest periods, followed by the 10 and 20 m sprint tests on the same day. The current findings showed significant differences in the 10 and 20 m tests after dynamic stretching compared with the static, combined, and no-stretching protocols. There were also significant differences between the combined stretching protocols compared with the static and no-stretching protocols. We concluded that soccer players performed better with respect to acceleration and speed after dynamic and combined stretching, as they were able to produce more force for a faster execution.
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods
Directory of Open Access Journals (Sweden)
Bakos Jason D
2010-04-01
Full Text Available Abstract Background Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it offers high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resulting co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. Results We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10× speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Conclusions Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference, as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption compared to many-core processors and Graphics Processing Units (GPUs).
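The PLF kernel that the co-processor accelerates is Felsenstein's pruning recursion. A minimal software sketch for one alignment site under the Jukes-Cantor model (tree shape, branch lengths, and tip states below are illustrative, not from the paper) shows the structure: per-branch transition matrices, elementwise products of child conditional likelihoods, and a sum over root states.

```python
import numpy as np

# Felsenstein pruning for one site on the tree ((A:0.1, B:0.1):0.05, C:0.2)
# under the Jukes-Cantor substitution model.
def jc_matrix(t):
    """Jukes-Cantor 4x4 transition matrix for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((4, 4), diff) + np.eye(4) * (same - diff)

def leaf(state):
    """One-hot conditional likelihood vector for an observed tip state."""
    v = np.zeros(4)
    v["ACGT".index(state)] = 1.0
    return v

def site_likelihood(sA, sB, sC):
    # conditional likelihoods, computed from the tips toward the root
    internal = (jc_matrix(0.1) @ leaf(sA)) * (jc_matrix(0.1) @ leaf(sB))
    root = (jc_matrix(0.05) @ internal) * (jc_matrix(0.2) @ leaf(sC))
    return float(np.sum(0.25 * root))    # uniform base frequencies

print(site_likelihood("A", "A", "A"), site_likelihood("A", "A", "C"))
```

The loop over alignment sites applies this same computation independently to every column, which is the dependency-free structure the abstract identifies as ideal for deep pipelining on an FPGA.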
GPU-accelerated 3D neutron diffusion code based on finite difference method
Energy Technology Data Exchange (ETDEWEB)
Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)
2012-07-01
The finite difference method, as a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application has been hindered by the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was itself sped up by the SOR method and Chebyshev extrapolation technique. (authors)
GPU-accelerated 3D neutron diffusion code based on finite difference method
International Nuclear Information System (INIS)
Xu, Q.; Yu, G.; Wang, K.
2012-01-01
The finite difference method, as a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application has been hindered by the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was itself sped up by the SOR method and Chebyshev extrapolation technique. (authors)
Automated Method to Develop a Clark Synthetic Unit Hydrograph within ArcGIS
2015-08-01
the totaltime.asc raster to determine how long the rain in each cell takes to get to the outlet. This second method takes a Lagrangian approach to...occurred between 15 August and 15 September 2011 when multiple large rain events occurred over the study area. This time period will serve to show how...storm sewers. ASCE (American Society of Civil Engineers) Manual on Engineering Practice No. 37 and WPCF (Water Pollution Control Federation) Manual
Amlashi, Nadiya Ekbatani; Hadjmohammadi, Mohammad Reza; Nazari, Seyed Saman Seyed Jafar
2014-09-26
For the first time, a novel water-contained surfactant-based vortex-assisted microextraction method (WSVAME) was developed for the extraction of two synthetic antioxidants (t-butyl hydroquinone (TBHQ) and butylated hydroxyanisole (BHA)) from edible oil samples. The novel microextraction method is based on the injection of an aqueous solution of the non-ionic surfactant Brij-35 into the oil sample in a conical-bottom glass tube to form a cloudy solution. Vortex mixing was applied to accelerate the dispersion process. After extraction and phase separation by centrifugation, the lower sediment phase was directly analyzed by HPLC. The effects of four experimental parameters, including the volume and concentration of the extraction solvent (aqueous solution of Brij-35), the percentage of acetic acid added to the oil sample, and the vortex time, on the extraction efficiency were studied with a full factorial design. The central composite design and multiple linear regression method were applied for the construction of the best polynomial model based on experimental recoveries. The proposed method showed good linearity within the range of 0.200-200 μg mL(-1), a squared correlation coefficient higher than 0.999, and appropriate limits of detection (0.026 and 0.020 μg mL(-1) for TBHQ and BHA, respectively), while the intra-day precision was ≤ 3.0 (n=5) and the inter-day precision was ≤ 3.80 (n=5). Under the optimal conditions (30 μL of 0.10 mol L(-1) Brij-35 solution as extraction solvent and a vortex time of 1 min), the method was successfully applied for determination of TBHQ and BHA in different commercial edible oil samples. The recoveries in all cases were above 95%, with relative standard deviations below 5%. This approach is considered a simple, sensitive, and environmentally friendly method because of the biodegradability of the extraction phase and the absence of organic solvent in the extraction procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
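The central composite design plus multiple linear regression workflow mentioned above can be sketched in a few lines. The design below is a standard two-factor central composite layout and the "true" response is synthetic; neither is taken from the paper.

```python
import numpy as np

# Central composite design in two coded factors, with a quadratic
# response-surface model fitted by ordinary least squares.
f = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],    # factorial runs
                   [-f, 0], [f, 0], [0, -f], [0, f],      # axial runs
                   [0, 0]], dtype=float)                  # center run
x1, x2 = design[:, 0], design[:, 1]

# synthetic "recovery" response with known coefficients
y = 5.0 + 1.2 * x1 - 0.7 * x2 + 0.4 * x1**2 + 0.2 * x2**2 + 0.3 * x1 * x2

# model matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones(len(y)), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # recovers the coefficients of the synthetic response
```

With noise-free data the nine-run design identifies all six quadratic coefficients exactly; with real recoveries the same fit gives the polynomial model the abstract uses to locate the optimal extraction conditions.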
Directory of Open Access Journals (Sweden)
Hao Shi
2018-02-01
Full Text Available With the rapid development of remote sensing technologies, SAR satellites like China's Gaofen-3 satellite have more imaging modes and higher resolution. With the availability of high-resolution SAR images, automatic ship target detection has become an important topic in maritime research. In this paper, a novel ship detection method based on gradient and integral features is proposed. This method is mainly composed of three steps. First, in the preprocessing step, a filter is employed to smooth the clutter, and the smoothing effect can be adaptively adjusted according to the statistical information of the sub-window. Thus, it can retain details while achieving noise suppression. Second, in the candidate area extraction, a sea-land segmentation method based on gradient enhancement is presented. The integral image method is employed to accelerate computation. Finally, in the ship target identification step, a feature extraction strategy based on Haar-like gradient information and a Radon transform is proposed. This strategy decreases the number of templates found in traditional Haar-like methods. Experiments were performed using Gaofen-3 single-polarization SAR images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. In addition, this method has the potential for on-board processing.
Quality control methods for linear accelerator radiation and mechanical axes alignment.
Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A
2018-06-01
The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time-consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis
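A core geometric step in this kind of automated analysis is fitting measured beam-center positions at several rotation angles to a circle: the fitted center estimates the rotation axis and the radius quantifies the wobble. The sketch below uses the Kåsa algebraic circle fit on synthetic points; the data and the specific fit choice are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Kasa algebraic circle fit: (x-a)^2 + (y-b)^2 = r^2 rearranges to the
# linear system 2a*x + 2b*y + (r^2 - a^2 - b^2) = x^2 + y^2.
angles = np.deg2rad(np.arange(0, 360, 45))
cx, cy, r = 0.15, -0.10, 0.50            # "true" axis offset and wobble (mm)
px = cx + r * np.cos(angles)             # synthetic beam-center positions
py = cy + r * np.sin(angles)

A = np.column_stack([2 * px, 2 * py, np.ones_like(px)])
a, b, c = np.linalg.lstsq(A, px**2 + py**2, rcond=None)[0]
radius = np.sqrt(c + a**2 + b**2)
print(a, b, radius)   # recovers center (0.15, -0.10) and radius 0.50
```

Sub-millimeter sensitivity then comes down to how precisely the beam centers can be localized in the megavoltage images, since the fit itself resolves offsets far below the ±1 mm tolerance.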
Liu, Zhaohui
2017-06-16
Hierarchically structured zeolites combine the merits of microporous zeolites and mesoporous materials to offer enhanced molecular diffusion and mass transfer without compromising the inherent catalytic activities and selectivity of zeolites. This short review gives an introduction to the synthesis strategies for hierarchically structured zeolites with emphasis on the latest progress in the route of ‘direct synthesis’ using various templates. Several characterization methods that allow us to evaluate the ‘quality’ of complex porous structures are also introduced. At the end of this review, an outlook is given to discuss some critical issues and challenges regarding the development of novel hierarchically structured zeolites as well as their applications.
Synthetic Method for Oligonucleotide Block by Using Alkyl-Chain-Soluble Support.
Matsuno, Yuki; Shoji, Takao; Kim, Shokaku; Chiba, Kazuhiro
2016-02-19
A straightforward method for the synthesis of oligonucleotide blocks using a Cbz-type alkyl-chain-soluble support (Z-ACSS) attached to the 3'-OH group of 3'-terminal nucleosides was developed. The Z-ACSS allowed for the preparation of fully protected deoxyribo- and ribo-oligonucleotides without chromatographic purification and released dimer- to tetramer-size oligonucleotide blocks via hydrogenation using a Pd/C catalyst without significant loss or migration of protective groups such as 5'-end 4,4'-dimethoxytrityl, 2-cyanoethyl on internucleotide bonds, or 2'-TBS.
Voter, Arthur
Many important materials processes take place on time scales that far exceed the roughly one microsecond accessible to molecular dynamics simulation. Typically, this long-time evolution is characterized by a succession of thermally activated infrequent events involving defects in the material. In the accelerated molecular dynamics (AMD) methodology, known characteristics of infrequent-event systems are exploited to make reactive events take place more frequently, in a dynamically correct way. For certain processes, this approach has been remarkably successful, offering a view of complex dynamical evolution on time scales of microseconds, milliseconds, and sometimes beyond. We have recently made advances in all three of the basic AMD methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics (TAD)), exploiting both algorithmic advances and novel parallelization approaches. I will describe these advances, present some examples of our latest results, and discuss what should be possible when exascale computing arrives in roughly five years. Funded by the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, and by the Los Alamos Laboratory Directed Research and Development program.
In situ baking method for degassing of a kicker magnet in accelerator beam line
International Nuclear Information System (INIS)
Kamiya, Junichiro; Ogiwara, Norio; Yanagibashi, Toru; Kinsho, Michikazu; Yasuda, Yuichi
2016-01-01
In this study, the authors propose a new in situ degassing method by which only kicker magnets in the accelerator beam line are baked out without raising the temperature of the vacuum chamber to prevent unwanted thermal expansion of the chamber. By simply installing the heater and thermal radiation shield plates between the kicker magnet and the chamber wall, most of the heat flux from the heater directs toward the kicker magnet. The result of the verification test showed that each part of the kicker magnet was heated to above the target temperature with a small rise in the vacuum chamber temperature. A graphite heater was selected in this application to bake-out the kicker magnet in the beam line to ensure reliability and easy maintainability of the heater. The vacuum characteristics of graphite were suitable for heater operation in the beam line. A preliminary heat-up test conducted in the accelerator beam line also showed that each part of the kicker magnet was successfully heated and that thermal expansion of the chamber was negligibly small
In situ baking method for degassing of a kicker magnet in accelerator beam line
Energy Technology Data Exchange (ETDEWEB)
Kamiya, Junichiro, E-mail: kamiya.junichiro@jaea.go.jp; Ogiwara, Norio; Yanagibashi, Toru; Kinsho, Michikazu [Japan Atomic Energy Agency, J-PARC Center, Ooaza Shirakata 2-4, Tokai, Naka, Ibaraki 319-1195 (Japan); Yasuda, Yuichi [SAKAGUCHI E.H VOC CORP., Sakura Dai-san Kogyodanchi 1-8-6, Osaku, Sakura, Chiba 285-0802 (Japan)
2016-03-15
In this study, the authors propose a new in situ degassing method by which only kicker magnets in the accelerator beam line are baked out without raising the temperature of the vacuum chamber to prevent unwanted thermal expansion of the chamber. By simply installing the heater and thermal radiation shield plates between the kicker magnet and the chamber wall, most of the heat flux from the heater directs toward the kicker magnet. The result of the verification test showed that each part of the kicker magnet was heated to above the target temperature with a small rise in the vacuum chamber temperature. A graphite heater was selected in this application to bake-out the kicker magnet in the beam line to ensure reliability and easy maintainability of the heater. The vacuum characteristics of graphite were suitable for heater operation in the beam line. A preliminary heat-up test conducted in the accelerator beam line also showed that each part of the kicker magnet was successfully heated and that thermal expansion of the chamber was negligibly small.
International Nuclear Information System (INIS)
Roidl, B.; Meinke, M.; Schröder, W.
2013-01-01
Highlights: • A synthetic turbulence generation method (STGM) is presented. • STGM is applied to subsonic and supersonic flows at low and moderate Reynolds numbers. • STGM shows a convincing quality in zonal RANS–LES for flat-plate boundary layers (BLs). • A good agreement with the pure LES and reference DNS findings is obtained. • RANS-to-LES transition length is reduced to less than four boundary-layer thicknesses. -- Abstract: A synthetic turbulence generation (STG) method for subsonic and supersonic flows at low and moderate Reynolds numbers to provide inflow distributions of zonal Reynolds-averaged Navier–Stokes (RANS) – large-eddy simulation (LES) methods is presented. The STG method splits the LES inflow region into three planes where a local velocity signal is decomposed from the turbulent flow properties of the upstream RANS solution. Based on the wall-normal position and the local flow Reynolds number, specific length and velocity scales with different vorticity content are imposed at the inlet plane of the boundary layer. The quality of the STG method for incompressible and compressible zero-pressure gradient boundary layers is shown by comparing the zonal RANS–LES data with pure LES, pure RANS, and direct numerical simulation (DNS) solutions. The distributions of the time- and spanwise-averaged wall-shear stress, the Reynolds stress distributions, and the two-point correlations of the zonal RANS–LES simulations are smooth in the transition region and in good agreement with the pure LES and reference DNS findings. The STG approach reduces the RANS-to-LES transition length to less than four boundary-layer thicknesses
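One common ingredient of synthetic turbulence generation can be sketched compactly (this is a generic Lund-type Cholesky scaling of correlated noise, an illustrative stand-in for, not a reproduction of, the paper's STG method): temporally correlated random signals with unit variance are transformed so that their covariance matches a target Reynolds-stress tensor taken from the upstream RANS solution.

```python
import numpy as np

# Give temporally correlated random signals a target Reynolds-stress
# tensor R via a Cholesky (Lund-type) transform: u = L @ w, L*L^T = R.
rng = np.random.default_rng(1)
R = np.array([[1.00, 0.30, 0.00],
              [0.30, 0.50, 0.10],
              [0.00, 0.10, 0.25]])     # target Reynolds stresses (synthetic)
L = np.linalg.cholesky(R)

# AR(1) noise: unit variance, correlation coefficient a between samples
a, nstep = 0.8, 100000
w = np.empty((3, nstep))
w[:, 0] = rng.standard_normal(3)
for nn in range(1, nstep):
    w[:, nn] = a * w[:, nn - 1] + np.sqrt(1 - a**2) * rng.standard_normal(3)

u = L @ w                              # velocity fluctuations u', v', w'
print(np.cov(u))                       # sample covariance approximates R
```

Real STG methods additionally impose physically motivated length and vorticity scales as a function of wall distance, which is what keeps the RANS-to-LES transition length short in the paper's results.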
One-carbon 13C-labeled synthetic intermediates. Comparison and evaluation of preparative methods
International Nuclear Information System (INIS)
Ott, D.G.
1978-01-01
Frequently the biggest stumbling block to the synthesis of a structurally complex labeled compound is obtaining the required low molecular weight, structurally simple, isotopic intermediates. Selection of a particular scheme from various alternatives depends on the available capabilities and quantity of product desired, as well as on anticipated future requirements and need for related compounds. Many of the newer reagents for organic synthesis can be applied effectively to isotopic preparations with improvements of yields and simplification of procedures compared to established classical methods. New routes developed for higher molecular weight compounds are sometimes not directly adaptable to the one-carbon analogs, either because of isolation difficulties occasioned by physical properties or by chemical reactivities peculiar to their being first members of homologous series. Various routes for preparation of carbon-13 labeled methanol, formaldehyde, and cyanide are compared
A Vector Flow Imaging Method for Portable Ultrasound Using Synthetic Aperture Sequential Beamforming
DEFF Research Database (Denmark)
di Ianni, Tommaso; Villagómez Hoyos, Carlos Armando; Ewertsen, Caroline
2017-01-01
for the velocity estimation along the lateral and axial directions using a phase-shift estimator. The performance of the method was investigated with constant flow measurements in a flow rig system using the SARUS scanner and a 4.1-MHz linear array. A sequence was designed with interleaved B-mode and flow......, and the standard deviation (SD) was between 6% and 9.6%. The axial bias was lower than 1% with an SD around 2%. The mean estimated angles were 66.70° ± 2.86°, 72.65° ± 2.48°, and 89.13° ± 0.79° for the three cases. A proof-of-concept demonstration of the real-time processing and wireless transmission was tested...
Improvements in or relating to method of preparing porous material/synthetic polymer composites
International Nuclear Information System (INIS)
Hills, P.R.; McGahan, D.J.
1976-01-01
A method for preparing a composite material is described, comprising polymerising a monoethylenically unsaturated monomer or a mixture of copolymerisable monoethylenically unsaturated monomers in a porous material, excluding porous natural cellulosic fibre materials, the polymerisable liquid being admixed in the porous material with a saturated aliphatic hydrocarbon or a halogen derivative thereof. It is preferable that the polymerisable liquid and the hydrocarbon or halogen derivative are present together in the porous material. Impregnation may be carried out by a vacuum technique or by simple immersion. The monomers that may be used are listed, but a mixture of styrene and acrylonitrile in the proportions 60:40 by volume is preferred. Polymerisation may be effected by irradiation, preferably with 60Co γ-radiation. Suitable porous materials include concrete, stone, and fibreboard. If concrete is used, the composite material may be used for pressure pipes and other articles normally made of steel. Examples of the application of the process are given. (U.K.)
International Nuclear Information System (INIS)
Enchevich, I.B.; Dinev, D.H.
1988-01-01
The invention ensures a reduced size of the supplementary electrode, which leads to savings in material and a more effective use of the accelerator space, where the elements of an axial injection system for the cyclotron particles can be situated. The amplitude homogeneity of the supplementary accelerating field is also improved. In addition to the main high-frequency field, which covers the whole range of acceleration radii, an additional accelerating high-frequency field is introduced covering part of the range of acceleration radii. The frequency of this additional accelerating high-frequency field is the third harmonic of the main field frequency. The device consists of a supplementary accelerating electrode connected to an additional resonator and an additional exciting high-frequency generator. 2 cls., 7 figs
Directory of Open Access Journals (Sweden)
Aslihan Okan Ibiloglu
2017-09-01
Synthetic cannabinoids, a subgroup of cannabinoids, are commonly used as recreational drugs throughout the world. Although both marijuana and synthetic cannabinoids stimulate the same receptors, cannabinoid receptor 1 (CB1) and cannabinoid receptor 2 (CB2), studies have shown that synthetic cannabinoids are much more potent than marijuana. Prolonged use of synthetic cannabinoids can cause severe physical and psychological symptoms that might even result in death, as with many known illicit drugs. The main treatment options mostly involve symptom management and supportive care. The aim of this article is to discuss the clinical and pharmacological properties of the increasingly used synthetic cannabinoids. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2017; 9(3): 317-328]
PtPb nanoparticle electrocatalysts: control of activity through synthetic methods
International Nuclear Information System (INIS)
Ghosh, Tanushree; Matsumoto, Futoshi; McInnis, Jennifer; Weiss, Marilyn; Abruna, Hector D.; DiSalvo, Francis J.
2009-01-01
Solution-phase synthesis of intermetallic nanoparticles without surfactants (for catalytic applications), with subsequent control of the size distribution, remains a challenge that is of growing interest but not yet widely explored. To address open questions in the synthesis of Pt-containing intermetallic nanoparticles (as electrocatalysts for direct fuel cells) using sodium naphthalide as the reducing agent, the effects of the organic ligands of the Pt precursors were investigated, with PtPb syntheses studied as the model case. In particular, methods that lead to nanoparticles that are independent single crystals are desirable. Platinum acetylacetonate, which is soluble in many organic solvents, has ligands that may interfere less with nanoparticle growth and ordering. Interesting trends, contrary to expectations, were observed when precursors were injected into a reducing-agent solution at high temperatures. The presence of acetylacetonate from the precursor on the nanoparticles was confirmed by ATR, while SEM imaging showed evidence of morphological changes in the nanoparticles with increasing reaction temperature. A definite relationship could be established between domain size and the extent of residue (organic material and sodium) observed on the particles. By varying post-reaction solvent removal techniques, room-temperature crystallization of PtPb nanoparticles was also achieved. The electrochemical activity of the nanoparticles was also much higher than that of nanoparticles synthesized by previous reaction schemes using sodium naphthalide as the reducing agent. Along with the above-mentioned techniques, BET, TEM, CBED, SAED, and XRD were used to characterize the prepared nanoparticles.
Research on acceleration method of reactor physics based on FPGA platforms
International Nuclear Information System (INIS)
Li, C.; Yu, G.; Wang, K.
2013-01-01
The physical designs of new-concept reactors, which have complex structures, various materials, and a broad neutron energy spectrum, have greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of its natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. A neutron diffusion module designed on the CPU-FPGA architecture achieves an 11.2× speedup, demonstrating that applying this kind of heterogeneous platform to reactor physics is feasible. (authors)
Nurhayati, R.; Rahayu NH, E.; Susanto, A.; Khasanah, Y.
2017-04-01
Gudeg is a traditional food from Yogyakarta consisting of jackfruit, chicken, egg and coconut milk. Gudeg generally has a short shelf life. Canning, or commercial sterilization, is one way to extend the shelf life of gudeg. The aim of this research is to predict the shelf life of Andrawinaloka canned gudeg with the Accelerated Shelf Life Test method using the Arrhenius model. Canned gudeg was stored at three different temperatures, 37, 50 and 60 °C, for two months. The thiobarbituric acid (TBA) number, as the critical attribute, was measured every 7 days. The Arrhenius model was applied with both zero-order and first-order kinetics. The analysis showed that the zero-order equation can be used to estimate the shelf life of canned gudeg. The shelf life of Andrawinaloka canned gudeg is predicted to be 21 months when stored at 30 °C and 24 months at 25 °C.
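The zero-order Arrhenius extrapolation described in this abstract can be sketched as follows. All numbers below (TBA growth rates at the accelerated temperatures, initial and critical TBA values) are invented for illustration, not taken from the study:

```python
import math

# Hypothetical zero-order TBA growth rates (TBA units/day) measured at the
# three accelerated storage temperatures; values are illustrative only.
rates = {37 + 273.15: 0.020, 50 + 273.15: 0.055, 60 + 273.15: 0.110}

# Arrhenius: ln k = ln k0 - Ea/R * (1/T)  ->  linear fit of ln k vs 1/T.
xs = [1.0 / T for T in rates]
ys = [math.log(k) for k in rates.values()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

def rate_at(temp_c: float) -> float:
    """Extrapolated zero-order rate constant at a storage temperature (°C)."""
    return math.exp(intercept + slope / (temp_c + 273.15))

# For zero-order kinetics, shelf life = (critical TBA - initial TBA) / k.
tba0, tba_crit = 0.5, 3.0          # hypothetical initial / critical values
shelf_days_30 = (tba_crit - tba0) / rate_at(30.0)
shelf_days_25 = (tba_crit - tba0) / rate_at(25.0)
```

As in the study, the cooler storage temperature yields the longer predicted shelf life, because the fitted Arrhenius slope is negative.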
Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.
2014-11-01
Existing research on patient-specific computational hemodynamics (PSCH) relies heavily on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done with commercial software because of the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing, and the parallel acceleration on GPUs achieves excellent performance in PSCH computation. An application study is presented that segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.
International Nuclear Information System (INIS)
Ito, Masayuki; Oka, Toshitaka; Hama, Yosimasa
2009-01-01
A 'generalized modulus-ultimate elongation profile' was derived from the relationship between the modulus and the ultimate elongation of an elastomer to which known amounts of crosslinking and scission were introduced. This profile can be used to evaluate time-accelerated irradiation methods for ethylene-propylene-diene elastomer. Irradiation at a low dose rate (0.33 kGy/h) at room temperature was the reference condition. The short-time irradiation conditions were 4.2 kGy/h in 0.5 MPa oxygen at room temperature and 5.0 kGy/h in air at 70 °C. The former tended to produce a higher ratio of scission than the reference condition; the latter tended to produce a higher ratio of crosslinking.
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data is outlined, as are the methodologies for determining specific aging mechanisms.
Accelerated gradient methods for the x-ray imaging of solar flares
Bonettini, S.; Prato, M.
2014-05-01
In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares from data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager. The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced either through early stopping of the iterative procedure or through a Tikhonov term added to the discrepancy function, by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
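As an illustration of the accelerated-gradient family discussed in this abstract, here is a minimal FISTA-style projected-gradient sketch for a nonnegativity-constrained least-squares problem. The actual flare reconstruction uses a Poisson-based discrepancy and the regularization described above, so this is only a structural analogue with an invented toy system:

```python
def fista_nonneg(A, b, steps=300):
    """Accelerated projected gradient (FISTA-style) for min ||Ax - b||^2, x >= 0."""
    m, n = len(A), len(A[0])
    # conservative step size: 0.5 / ||A||_F^2 <= 1 / L, L = 2*lambda_max(A^T A)
    lr = 0.5 / sum(a * a for row in A for a in row)
    x = [0.0] * n
    y = x[:]
    t = 1.0
    for _ in range(steps):
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = [max(0.0, y[j] - lr * grad[j]) for j in range(n)]  # projection
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0           # momentum
        y = [x_new[j] + (t - 1.0) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# consistent overdetermined system whose nonnegative solution is x = (1, 2)
sol = fista_nonneg([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0])
```

The projection step (the `max(0.0, ...)`) enforces the constraint at every iterate, while the momentum sequence gives the accelerated O(1/k²) convergence rate of this method class.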
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full-wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement over state-of-the-art full-wave scattering models based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Oldfield, Lauren M; Grzesik, Peter; Voorhies, Alexander A; Alperovich, Nina; MacMath, Derek; Najera, Claudia D; Chandra, Diya Sabrina; Prasad, Sanjana; Noskov, Vladimir N; Montague, Michael G; Friedman, Robert M; Desai, Prashant J; Vashee, Sanjay
2017-10-17
Here, we present a transformational approach to genome engineering of herpes simplex virus type 1 (HSV-1), which has a large DNA genome, using synthetic genomics tools. We believe this method will enable more rapid and complex modifications of HSV-1 and other large DNA viruses than previous technologies, facilitating many useful applications. Yeast transformation-associated recombination was used to clone 11 fragments comprising the 152-kb genome of HSV-1 strain KOS. Using overlapping sequences between the adjacent pieces, we assembled the fragments into a complete virus genome in yeast, transferred it into an Escherichia coli host, and reconstituted infectious virus following transfection into mammalian cells. The virus derived from this yeast-assembled genome, KOS-YA, replicated with kinetics similar to wild-type virus. We demonstrated the utility of this modular assembly technology by making numerous modifications to a single gene, making changes to two genes at the same time and, finally, generating individual and combinatorial deletions to a set of five conserved genes that encode virion structural proteins. While the ability to perform genome-wide editing through assembly methods in large DNA virus genomes raises dual-use concerns, we believe the incremental risks are outweighed by potential benefits. These include enhanced functional studies, generation of oncolytic virus vectors, development of delivery platforms of genes for vaccines or therapy, as well as more rapid development of countermeasures against potential biothreats.
Thin Foil Acceleration Method for Measuring the Unloading Isentropes of Shock-Compressed Matter
International Nuclear Information System (INIS)
Asay, J.R.; Chhabildas, L.C.; Fortov, V.E.; Kanel, G.I.; Khishchenko, K.V.; Lomonosov, I.V.; Mehlhorn, T.; Razorenov, S.V.; Utkin, A.V.
1999-01-01
This work has been performed as part of the search for possible ways to utilize the capabilities of laser and particle beam techniques in shock wave and equation of state physics. The peculiarity of these techniques is that we have to deal with micron-thick targets and incident shock wave parameters that are not well reproducible, so all measurements must provide high resolution and be completed in a single shot. Besides the Hugoniots, the experimental basis for creating equations of state includes isentropes corresponding to the unloading of shock-compressed matter, and experimental isentrope data are most important in the region of vaporization. With guns or explosive facilities, the unloading isentrope is recovered from a series of experiments in which the shock wave parameters in plates of standard low-impedance materials placed behind the sample are measured [1,2]. The specific internal energy and specific volume are calculated from the measured p(u) release curve, which corresponds to the Riemann integral. This approach is not well suited to experiments with beam techniques, where the incident shock waves are not well reproducible. The thick foil method [3] provides a few experimental points on the isentrope in one shot. When a higher shock impedance foil is placed on the surface of the material studied, the release phase occurs in steps, whose durations correspond to the time for the shock wave to travel back and forth in the foil. The velocity during the different steps, combined with knowledge of the Hugoniot of the foil, allows us to determine a few points on the isentropic unloading curve. However, the method becomes insensitive when the low-pressure range of vaporization is reached in the course of the unloading. The isentrope in this region can be measured by recording the smooth acceleration of a thin witness plate foil. With the mass of the foil known, measurements of the foil acceleration give the vapor pressure.
A method for the energy calibration of a heavy ion accelerator
International Nuclear Information System (INIS)
Martin, B.; Michaelsen, R.; Sethi, R.C.; Ziegler, K.
1985-01-01
A method for the absolute energy calibration of a heavy-ion accelerator was developed at VICKSI. The method is based on the use of a suitably selected heavy-ion beam to calibrate an analysing magnet. In front of the entrance slit of the analysing system the beam is stripped with a thin carbon foil. The charge states of the resulting ions cover the whole range from the charge state of the injected ions to that of the fully stripped ions. The ion species and beam energy are selected so that the rigidities corresponding to the different charge states cover the full rigidity range of the analysing magnet. The field of the analysing magnet is varied and the NMR frequency corresponding to each transmitted charge state is obtained. For the absolute calibration a standard α-source is used. The functional dependence of rigidity on NMR frequency can then be used to compute the energy of any beam. At present this method gives an absolute accuracy of ±0.15%. The various sources of error are described. (orig.)
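The last step of such a calibration, computing a beam energy from the fitted rigidity-versus-frequency relation, can be sketched as below. The calibration slope, NMR frequency, and ion are invented for illustration; only the rigidity-to-energy kinematics is standard:

```python
import math

AMU_MEV = 931.494           # atomic mass unit in MeV/c^2

def kinetic_energy_mev(rigidity_tm: float, charge: int, mass_amu: float) -> float:
    """Relativistic kinetic energy (MeV) of an ion from its magnetic rigidity.

    Uses pc [MeV] = 299.792458 * q * (B*rho) [T m].
    """
    pc = 299.792458 * charge * rigidity_tm        # momentum * c, in MeV
    mc2 = mass_amu * AMU_MEV                      # rest energy, in MeV
    return math.sqrt(pc * pc + mc2 * mc2) - mc2

# Hypothetical linear calibration of the analysing magnet, rigidity as a
# function of NMR frequency: B*rho = a * f_NMR. The slope would come from
# the multi-charge-state fit described in the abstract; numbers are invented.
a = 0.030       # (T m)/MHz, assumed calibration slope
f_nmr = 45.0    # MHz, measured for the transmitted beam
ek = kinetic_energy_mev(a * f_nmr, charge=6, mass_amu=12.0)   # a 12C(6+) beam
```

For these invented numbers the result is a kinetic energy of a few hundred MeV, i.e. roughly 20 MeV per nucleon, a plausible scale for such a facility.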
Kernel based methods for accelerated failure time model with ultra-high dimensional data
Directory of Open Access Journals (Sweden)
Jiang Feng
2010-12-01
Background: Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied for survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high-dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results: The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions: Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high-dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data; in these limited computational studies, the proposed method performed superbly.
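The dual trick underlying this abstract, working with an n × n kernel matrix instead of the m-dimensional feature space, can be illustrated with a tiny Gaussian-kernel ridge regression. The data, kernel width, and regularization below are invented; this is the generic dual form, not the authors' adaptive AFT algorithm:

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(M, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Dual kernel ridge: alpha = (K + lam*I)^{-1} y,  f(x) = sum_i alpha_i k(x_i, x).
# The linear system is n x n (n = number of samples), never m x m.
xs, ys, lam = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1e-3
K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, ys)

def predict(x):
    return sum(al * rbf(xi, x) for al, xi in zip(alpha, xs))
```

With a small regularization the fit nearly interpolates the training points, and the cost of the solve depends only on the number of samples n.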
Zhan, W.; Sun, Y.
2015-12-01
High-frequency strong-motion data, especially near-field acceleration data, have been recorded widely by various observation station systems around the world. Due to tilting and many other causes, recordings from these seismometers usually suffer baseline drift when a big earthquake happens, so it is hard to obtain a reasonable and precise co-seismic displacement through simple double integration. Here we present a combined method using the wavelet transform and several simple linear procedures. Owing to the lack of dense high-rate GNSS data in most regions of the world, we do not include GNSS data in the method itself but use it instead to evaluate our results. This semi-automatic method unpacks a raw signal into two portions, a summation of high ranks and a summation of low ranks, using a cubic B-spline wavelet decomposition procedure. Independent linear treatments are applied to these two summations, which are then recombined to recover a usable and reasonable result. We use data from the 2008 Wenchuan earthquake and choose stations with a nearby GPS recording to validate this method; nearly all of them yield compatible co-seismic displacements when compared with GPS stations or field surveys. Since seismometer stations and GNSS stations in China's observation systems are sometimes quite far from each other, we also test this method on other earthquakes (the 1999 Chi-Chi earthquake and the 2011 Tohoku earthquake). For the 2011 Tohoku earthquake, we introduce GPS recordings into the combined method, given the dense GNSS network in Japan.
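A toy version of the split-correct-recombine idea, using a constant-offset estimate in place of the cubic B-spline wavelet decomposition described above, shows why removing the low-frequency drift before double integration matters. Everything below is an invented illustration:

```python
def double_integrate(acc, dt):
    """Cumulative double integration of an acceleration record (rectangle rule)."""
    vel, disp = [], []
    v = d = 0.0
    for a in acc:
        v += a * dt
        d += v * dt
        vel.append(v)
        disp.append(d)
    return vel, disp

dt, n = 0.01, 2000                     # 20 s record sampled at 100 Hz
drift = 0.05                           # constant baseline offset (m/s^2)
acc_raw = [drift] * n                  # pure drift, no true ground motion

_, disp_raw = double_integrate(acc_raw, dt)

# Crude "low-rank" estimate: the record mean stands in for the wavelet
# low-frequency summation; subtracting it is the linear correction step.
mean_a = sum(acc_raw) / n
acc_corr = [a - mean_a for a in acc_raw]
_, disp_corr = double_integrate(acc_corr, dt)
```

Uncorrected, the tiny constant offset integrates to a displacement of roughly `drift * t² / 2` (about 10 m after 20 s here), while the corrected record integrates to essentially zero, which is why baseline treatment dominates the quality of co-seismic displacement estimates.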
An ultrasonic-accelerated oxidation method for determining the oxidative stability of biodiesel.
Avila Orozco, Francisco D; Sousa, Antonio C; Domini, Claudia E; Ugulino Araujo, Mario Cesar; Fernández Band, Beatriz S
2013-05-01
Biodiesel is considered an alternative energy source because it is produced from fats and vegetable oils by means of transesterification. It consists of fatty acid alkyl esters (FAAE), which have a great influence on biodiesel fuel properties and on the storage lifetime of biodiesel itself. Biodiesel storage stability is directly related to the oxidative stability parameter (induction time, IT), which is determined by means of the Rancimat® method. This method uses conductimetric monitoring and induces the degradation of the FAAE by heating the sample at a constant temperature. The European Committee for Standardization established a standard (EN 14214) for the oxidative stability of biodiesel, which requires a minimum induction period of 6 h as tested by the Rancimat® method at 110 °C. In this research, we aimed at developing a fast and simple alternative method to determine the induction time (IT) based on ultrasonic-accelerated oxidation of the FAAE. The sonodegradation of biodiesel samples was induced by means of an ultrasonic homogenizer fitted with an immersible horn at 480 W of power and 20 duty cycles. UV-Vis spectrometry was used to monitor the FAAE sonodegradation by measuring the absorbance at 270 nm every 2. Biodiesel samples from different feedstocks were studied in this work. In all cases, IT was established as the inflection point of the absorbance-versus-time curve. The induction time values of all biodiesel samples determined using the proposed method were in accordance with those measured through the Rancimat® reference method, with R² = 0.998. Copyright © 2012 Elsevier B.V. All rights reserved.
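The inflection-point readout used above can be sketched numerically: take the induction time as the point of maximum slope of the absorbance-time curve. The logistic curve below is synthetic, not measured data, and the sampling interval is an assumption:

```python
import math

def induction_time(times, absorbance):
    """IT as the inflection point: the time of maximum slope (discrete
    central-difference first derivative) of the absorbance-time curve."""
    slopes = [(absorbance[i + 1] - absorbance[i - 1]) /
              (times[i + 1] - times[i - 1]) for i in range(1, len(times) - 1)]
    i = max(range(len(slopes)), key=lambda j: slopes[j])
    return times[i + 1]     # shift back to the original time index

# Synthetic sigmoidal oxidation curve with its inflection at t = 30
ts = [2.0 * k for k in range(31)]                       # 0..60, step 2
ab = [1.0 / (1.0 + math.exp(-(t - 30.0) / 5.0)) for t in ts]
it = induction_time(ts, ab)
```

On real, noisy absorbance data one would smooth the curve first; here the discrete estimator recovers the known inflection of the synthetic sigmoid exactly.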
Lin, Shengxuan; Zhou, Xuedong; Ge, Liya; Ng, Sum Huan; Zhou, Xiaodong; Chang, Victor Wei-Chung
2016-10-01
Heavy metals and some metalloids are the most significant inorganic contaminants specified in the toxicity characteristic leaching procedure (TCLP) for determining the safety of landfills or further utilization. Consequently, a great deal of effort has been made in the development of miniaturized analytical devices, such as microchip electrophoresis (ME) and μTAS, for on-site testing of heavy metals and metalloids, to prevent the spreading of those pollutants or to shorten the reutilization period of waste materials such as incineration bottom ash. However, the bottleneck lay in the long and tedious conventional TCLP, which requires 18 h of leaching; without accelerating the TCLP process, on-site testing of waste material leachates was impossible. In this study, therefore, a new accelerated leaching method (ALM) combining ultrasonic-assisted leaching with tumbling was developed to reduce the total leaching time from 18 h to 30 min. After leaching, the concentrations of heavy metals and metalloids were determined with ICP-MS or ICP-optical emission spectroscopy. No statistically significant difference between ALM and TCLP was observed for most heavy metals (i.e., cobalt, manganese, mercury, molybdenum, nickel, silver, strontium, and tin) and metalloids (i.e., arsenic and selenium). For the heavy metals with statistically significant differences, correlation factors derived between ALM and TCLP were 0.56, 0.20, 0.037, and 0.019 for barium, cadmium, chromium, and lead, respectively. Combined with appropriate analytical techniques (e.g., ME), the ALM can be applied to rapidly prepare incineration bottom ash samples, as well as other environmental samples, for on-site determination of heavy metals and metalloids. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hinterberger, F
2006-01-01
The principle of electrostatic accelerators is presented. We consider Cockcroft-Walton, Van de Graaff and tandem Van de Graaff accelerators. We review high-voltage generators such as cascade generators, Van de Graaff band generators, Pelletron generators, Laddertron generators and Dynamitron generators. The specific features of accelerating tubes, ion optics and methods of voltage stabilization are described. We discuss the characteristic beam properties and the variety of possible beams. We sketch possible applications and the progress in the development of electrostatic accelerators.
Hopkins, Suzanna R; McGregor, Grant A; Murray, Johanne M; Downs, Jessica A; Savic, Velibor
2016-10-01
In recent years, research into synthetic lethality and how it can be exploited in cancer treatments has emerged as a major focus in cancer research. However, the lack of a simple-to-use, sensitive and standardised assay to test for synthetic interactions has been slowing these efforts. Here we present a novel approach to synthetic lethality screening based on co-culturing two syngeneic cell lines carrying individual fluorescent tags. By associating shRNAs for a target gene or control with individual fluorescence labels, we can easily follow individual cell fates upon siRNA treatment and high-content imaging. We have demonstrated that the system can recapitulate the functional defects of target gene depletion and is capable of discovering novel synthetic interactors and phenotypes. In a trial screen, we show that TIP60 exhibits a synthetic lethality interaction with BAF180 and that, in the absence of TIP60, there is an increase in micronuclei dependent on the level of BAF180 loss, significantly above the levels seen with BAF180 present. Moreover, the severity of the interactions correlates with proxy measurements of BAF180 knockdown efficacy, which may expand the assay's usefulness to addressing synthetic interactions through titratable hypomorphic gene expression. Copyright © 2016. Published by Elsevier B.V.
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
A gravitational acceleration measurement has been developed using the simple-harmonic-motion pendulum method, digital technology and a photogate sensor. Digital technology is more practical and optimizes experiment time. The pendulum method calculates the acceleration of gravity using a solid ball connected by a rope to a stative pole. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the oscillation period of the pendulum, which is shown on the seven-segment display. Based on measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively. The system can therefore be used in physics experiments, especially for determining the acceleration of gravity.
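The computation behind this experiment is compact enough to sketch: for small-angle oscillation, T = 2π√(L/g), so g = 4π²L/T². The pendulum length and timing data below are invented, not the authors' measurements:

```python
import math

def gravity_from_pendulum(length_m: float, period_s: float) -> float:
    """Small-angle simple pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2."""
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

# Hypothetical photogate data: a 0.50 m pendulum timed over 20 oscillations.
# Averaging over many swings is what makes the period (and hence g) precise.
n_swings, total_time = 20, 28.40       # seconds, invented measurement
period = total_time / n_swings
g = gravity_from_pendulum(0.50, period)
```

For these invented numbers the result lands near the accepted 9.8 m/s², which is the kind of agreement the reported 98.76% accuracy corresponds to.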
Accelerated solvent extraction method with one-step clean-up for hydrocarbons in soil
International Nuclear Information System (INIS)
Nurul Huda Mamat Ghani; Norashikin Sain; Rozita Osman; Zuraidah Abdullah Munir
2007-01-01
The application of accelerated solvent extraction (ASE) using hexane, combined with neutral silica gel and sulfuric acid/silica gel (SA/SG) to remove impurities prior to analysis by gas chromatography with flame ionization detection (GC-FID), was studied. The efficiency of extraction was evaluated based on three hydrocarbons, dodecane, tetradecane and pentadecane, spiked into a soil sample. The effect of the ASE operating conditions (extraction temperature, extraction pressure, static time) was evaluated, and the optimized conditions obtained from the study were an extraction temperature of 160 °C and an extraction pressure of 2000 psi with a 5-minute static extraction time. The developed ASE method with one-step clean-up was applied to the extraction of hydrocarbons from spiked soil, and the amount extracted was comparable to ASE extraction without the clean-up step, with the advantage of a cleaner extract with reduced interferences. With the developed method, extraction and clean-up of hydrocarbons in soil can therefore be achieved rapidly and efficiently with reduced solvent usage. (author)
International Nuclear Information System (INIS)
Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira
2017-01-01
The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier, and this hydrogen must be produced in volumes not achievable with currently employed methods. In this work, two high-temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of its advantages in matters of transmutation and safety. For the conceptual design of the high-temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a large influence on the process efficiency. A detailed flowsheet of the high-temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using the chemical process simulation software Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water-splitting cycle was also developed; an energy balance gives an overall process efficiency of 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within values considered acceptable in the hydrogen economy concept and are also compatible with the TADSEA design parameters. (author)
Energy Technology Data Exchange (ETDEWEB)
Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Fernández, Carlos García, E-mail: danielgonro@gmail.com, E-mail: mmhamada@ipen.br [Instituto Superior de Tecnologías y Ciencias aplicadas (InSTEC), La Habana (Cuba)
2017-07-01
The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier. This hydrogen, rather than oil, must be produced in volumes that cannot be supplied by the currently employed methods. In this work, two high-temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of its advantages in terms of transmutation and safety. For the conceptual design of the high-temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a strong influence on the process efficiency. A detailed flowsheet of the high-temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using the chemical process simulation software Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water-splitting cycle was also developed. The overall efficiency of this process, calculated by performing an energy balance, was 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within the values considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)
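As a back-of-the-envelope check of the figures quoted above (illustrative only, not taken from the paper): assuming a lower heating value of about 120 MJ/kg for hydrogen, the quoted production rate and efficiency imply the thermal input the coupled reactor must supply.

```python
# Illustrative back-calculation (not from the paper): the thermal power a
# 34.51%-efficient process must draw to deliver 0.1627 kg/s of hydrogen,
# assuming hydrogen's lower heating value (LHV) of ~120 MJ/kg.

H2_LHV = 120.0e6          # J/kg, lower heating value of hydrogen (assumed)

def process_efficiency(h2_rate_kg_s: float, thermal_power_w: float) -> float:
    """Overall efficiency = chemical energy in product H2 / thermal input."""
    return h2_rate_kg_s * H2_LHV / thermal_power_w

def implied_thermal_power(h2_rate_kg_s: float, efficiency: float) -> float:
    """Invert the efficiency definition to estimate the required thermal input."""
    return h2_rate_kg_s * H2_LHV / efficiency

q_th = implied_thermal_power(0.1627, 0.3451)   # of the order of tens of MW(thermal)
print(f"implied thermal input: {q_th/1e6:.1f} MW")
```

The LHV figure and the efficiency definition are assumptions made for this sketch; the paper's own energy balance is more detailed.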
Standard Test Method for Measuring Dose for Use in Linear Accelerator Pulsed Radiation Effects Tests
American Society for Testing and Materials. Philadelphia
2011-01-01
1.1 This test method covers a calorimetric measurement of the total dose delivered in a single pulse of electrons from an electron linear accelerator or a flash X-ray machine (FXR, e-beam mode) used as an ionizing source in radiation-effects testing. The test method is designed for use with pulses of electrons in the energy range from 10 to 50 MeV and is only valid for cases in which both the calorimeter and the test specimen to be irradiated are “thin” compared to the range of these electrons in the materials of which they are constructed. 1.2 The procedure described can be used in those cases in which (1) the dose delivered in a single pulse is 5 Gy (matl) (500 rd (matl)) or greater, or (2) multiple pulses of a lower dose can be delivered in a short time compared to the thermal time constant of the calorimeter. Matl refers to the material of the calorimeter. The minimum dose per pulse that can be acceptably monitored depends on the variables of the particular test, including pulse rate, pulse uniformity...
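The calorimetric principle behind this test method is simple: in a thin calorimeter with negligible heat loss, the absorbed dose equals the specific heat of the calorimeter material times its radiation-induced temperature rise. A minimal sketch, using a typical handbook specific heat for graphite (the value is not taken from the standard itself):

```python
# Minimal sketch of the calorimetric dose relation D = c_p * dT used in
# thin-calorimeter pulsed-beam dosimetry. The specific heat below is a
# typical handbook figure for graphite, not taken from the standard.

def absorbed_dose_gray(specific_heat_j_per_kg_k: float, delta_t_kelvin: float) -> float:
    """Dose (Gy = J/kg) deposited in a thin calorimeter from its temperature
    rise, assuming negligible heat loss during and shortly after the pulse."""
    return specific_heat_j_per_kg_k * delta_t_kelvin

# A 10 mK rise in graphite (c_p ~ 710 J/(kg*K)) corresponds to about 7.1 Gy,
# comfortably above the 5 Gy-per-pulse minimum quoted in the test method.
print(absorbed_dose_gray(710.0, 0.010))
```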
International Nuclear Information System (INIS)
Manrique, John Peter O.; Costa, Alessandro M.
2016-01-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to the patient undergoing radiation therapy, treatment planning systems (TPS) are used; these employ convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform three-dimensional dose calculations, ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we have obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. To validate the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV energy, using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
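The inverse problem described above can be illustrated with a toy version: recover discrete spectral weights from transmission-versus-thickness data by annealing. The sketch below uses plain Metropolis simulated annealing, not the generalized (Tsallis) algorithm of the paper, and the attenuation coefficients and weights are invented.

```python
# Toy illustration of the abstract's inverse problem: recover discrete spectral
# weights w_i from transmission measurements T(t) = sum_i w_i * exp(-mu_i * t).
# Plain Metropolis simulated annealing is used here, NOT the generalized
# (Tsallis) version in the paper; mu values and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, 1.0, 2.0])          # 1/cm, hypothetical attenuation coefficients
w_true = np.array([0.2, 0.5, 0.3])      # hypothetical spectral weights (sum to 1)
t = np.linspace(0.0, 5.0, 20)           # absorber thicknesses, cm
T_meas = np.exp(-np.outer(t, mu)) @ w_true   # noiseless "measured" transmission

def residual(w):
    return np.sum((np.exp(-np.outer(t, mu)) @ w - T_meas) ** 2)

w = np.full(3, 1.0 / 3.0)               # uniform starting guess
best_w, best_res = w.copy(), residual(w)
init_res = best_res
for step in range(5000):
    temp = 1e-3 * (1.0 - step / 5000)   # linear cooling schedule
    cand = np.clip(w + rng.normal(scale=0.05, size=3), 0.0, None)
    cand /= cand.sum()                  # keep the weights normalized
    d = residual(cand) - residual(w)
    if d < 0 or rng.random() < np.exp(-d / max(temp, 1e-12)):
        w = cand
        if residual(w) < best_res:
            best_w, best_res = w.copy(), residual(w)

print(best_w, best_res)
```

A real reconstruction has many more energy bins, noisy data, and regularization, which is where the generalized annealing of the paper earns its keep.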
International Nuclear Information System (INIS)
Ceder, M.
2002-03-01
The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on measuring the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator-based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of the Feynman-alpha method needs to be extended to such non-stationary sources. There are two ways of performing and evaluating such pulsed-source experiments. One is to synchronise the start of the detector time gate with the beginning of an incoming pulse; the Feynman-alpha method has recently been elaborated for such a case. The other method can be called stochastic pulsing: there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution of the Feynman-alpha formula for this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods, one based entirely on the symbolic algebra code Mathematica and the other on complex function techniques. Closed-form solutions could be obtained by both methods
Energy Technology Data Exchange (ETDEWEB)
Ceder, M
2002-03-01
The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on measuring the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator-based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of the Feynman-alpha method needs to be extended to such non-stationary sources. There are two ways of performing and evaluating such pulsed-source experiments. One is to synchronise the start of the detector time gate with the beginning of an incoming pulse; the Feynman-alpha method has recently been elaborated for such a case. The other method can be called stochastic pulsing: there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution of the Feynman-alpha formula for this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods, one based entirely on the symbolic algebra code Mathematica and the other on complex function techniques. Closed-form solutions could be obtained by both methods
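The stationary-source quantities that the report generalizes can be sketched briefly. This is only the baseline method (variance-to-mean of gated counts and the classic fitting curve); the stochastic-pulsing formula derived in the report is more elaborate.

```python
# Sketch of the Feynman-alpha ("variance-to-mean") quantities for the simple
# stationary-source case. The stochastic-pulsing formula in the report is
# more elaborate; this only illustrates the baseline method.
import numpy as np

def feynman_y(counts):
    """Excess variance-to-mean ratio Y = Var/Mean - 1 of detector counts
    collected over many gates of equal length. Y = 0 for a pure Poisson process."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

def feynman_y_model(T, y_inf, alpha):
    """Classic stationary-source fitting curve
    Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T)),
    where alpha is the prompt-neutron decay constant."""
    aT = alpha * np.asarray(T, dtype=float)
    return y_inf * (1.0 - (1.0 - np.exp(-aT)) / aT)

rng = np.random.default_rng(1)
poisson_counts = rng.poisson(lam=10.0, size=100_000)   # uncorrelated source
print(feynman_y(poisson_counts))      # near 0: no correlated fission chains
print(feynman_y_model(1.0, y_inf=2.0, alpha=100.0))    # saturates toward Y_inf
```

Fitting measured Y(T) values to the model curve yields alpha, from which the reactivity is inferred.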
Earthquake acceleration amplification based on single microtremor test
Jaya Syahbana, Arifan; Kurniawan, Rahmat; Soebowo, Eko
2018-02-01
Understanding soil dynamics is needed to understand soil behaviour, including the parameters of earthquake acceleration amplification. Many researchers now conduct single microtremor tests to obtain the velocity amplification and natural period of the soil at test sites. However, these amplification parameters are rarely used, so a method is needed to convert the velocity amplification into acceleration amplification. This paper discusses the proposed process for converting the amplification value. The proposed method is to integrate the synthetic earthquake acceleration time histories of the soil surface, obtained from the deaggregation at that location, so that the earthquake velocity time histories are obtained. The next step is to fit a curve relating the amplification from the single microtremor test to the amplification of the synthetic earthquake velocity time histories. After the fitted velocity time histories are obtained, they are differentiated to obtain the fitted acceleration time histories. The final step is to compare the fitted acceleration time histories against the synthetic earthquake acceleration time histories at bedrock to obtain the single microtremor acceleration amplification factor.
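The workflow above hinges on converting between acceleration and velocity time histories by integration and differentiation. A minimal sketch of that round trip on a synthetic record (the signal and sampling rate are invented for illustration):

```python
# The proposed workflow converts between acceleration and velocity time
# histories by integration and differentiation. This sketch verifies the
# round trip on a synthetic record; the signal itself is illustrative.
import numpy as np

def integrate_accel_to_vel(acc, dt):
    """Cumulative trapezoidal integration of acceleration -> velocity,
    assuming zero initial velocity."""
    inc = 0.5 * (acc[1:] + acc[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(inc)))

def differentiate_vel_to_accel(vel, dt):
    """Central-difference differentiation of velocity -> acceleration."""
    return np.gradient(vel, dt)

dt = 0.001                                  # 1000 Hz sampling
t = np.arange(0.0, 10.0, dt)
acc = np.sin(2 * np.pi * 1.0 * t)           # toy 1 Hz "ground acceleration"
vel = integrate_accel_to_vel(acc, dt)
acc_back = differentiate_vel_to_accel(vel, dt)
# interior samples reproduce the original record to O(dt^2)
print(np.max(np.abs(acc_back[1:-1] - acc[1:-1])))
```

Real strong-motion processing adds baseline correction and filtering before integration, which this sketch omits.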
Socas-Rodríguez, Bárbara; Lanková, Darina; Urbancová, Kateřina; Krtková, Veronika; Hernández-Borges, Javier; Rodríguez-Delgado, Miguel Ángel; Pulkrabová, Jana; Hajšlová, Jana
2017-07-01
Within this study, a new method enabling monitoring of various estrogenic substances potentially occurring in milk and dairy products was proposed. Groups of compounds fairly differing in physico-chemical properties and biological activity were analyzed: four natural estrogens, four synthetic estrogens, five mycoestrogens, and nine phytoestrogens. Since they may pass into milk mainly in glucuronated and sulfated forms, an enzymatic hydrolysis was involved prior to the extraction based on the QuEChERS methodology. For the purification of the organic extract, a dispersive solid-phase extraction (d-SPE) with sorbent C18 was applied. The final analysis was performed by ultra-high-performance liquid chromatography (UHPLC) coupled with triple quadrupole tandem mass spectrometry (MS/MS). Method recovery ranged from 70 to 120% with a relative standard deviation (RSD) value lower than 20% and limits of quantification (LOQs) in the range of 0.02-0.60 μg/L (0.2-6.0 μg/kg dry weight) and 0.02-0.90 μg/kg (0.2-6.0 μg/kg dry weight) for milk and yogurt, respectively. The new procedure was applied for the investigation of estrogenic compounds in 11 milk samples and 13 yogurt samples from a Czech retail market. Mainly phytoestrogens were found in the studied samples. The most abundant compounds were equol and enterolactone representing 40-90% of all estrogens. The total content of phytoestrogens (free and bound) was in the range of 149-3870 μg/kg dry weight. This amount is approximately 20 times higher compared to non-bound estrogens.
A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.
Giese, Timothy J; York, Darrin M
2018-03-13
There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg²⁺ → Ca²⁺ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., a single call to particle-mesh Ewald) leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU-acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle mesh Ewald contribution to the TI derivative evaluation without needing to perform two reciprocal space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double-stranded RNA molecules and make comparison with experiments. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experimental results.
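The generic thermodynamic-integration quadrature underlying the abstract can be illustrated on a toy system with a known answer: a 1D harmonic oscillator whose spring constant is interpolated between end states. This shows only the TI estimator itself, not the paper's PI-TI machinery or AMBER.

```python
# Minimal thermodynamic-integration sketch on a toy system with an exact
# answer: a 1D harmonic oscillator with U(x; l) = 0.5*((1-l)*kA + l*kB)*x^2.
# Illustrates the generic TI quadrature only, not PI-TI or AMBER.
import numpy as np

rng = np.random.default_rng(2)
kT, kA, kB = 1.0, 1.0, 4.0
lams = np.linspace(0.0, 1.0, 11)             # lambda windows

means = []
for l in lams:
    k = (1.0 - l) * kA + l * kB
    x = rng.normal(scale=np.sqrt(kT / k), size=20_000)  # exact Boltzmann sampling
    dU_dlam = 0.5 * (kB - kA) * x**2                    # dU/dlambda at fixed x
    means.append(dU_dlam.mean())

means = np.array(means)
# TI: integrate <dU/dlambda> over lambda (trapezoidal rule over the windows)
dG_est = float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams)))
dG_exact = 0.5 * kT * np.log(kB / kA)        # analytic free-energy difference
print(dG_est, dG_exact)
```

In a real calculation the per-window averages come from MD trajectories rather than direct Boltzmann sampling, and window coupling (e.g., HREM) improves convergence.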
Directory of Open Access Journals (Sweden)
Mohammad-Reza Rashidi
2011-06-01
Introduction: 6-Mercaptopurine (6MP) is an important chemotherapeutic drug in the conventional treatment of childhood acute lymphoblastic leukemia (ALL). It is catabolized to 6-thiouric acid (6TUA) through 8-hydroxy-6-mercaptopurine (8OH6MP) or 6-thioxanthine (6TX) intermediates. Methods: High-performance liquid chromatography (HPLC) is usually used to determine the contents of therapeutic drugs, metabolites and other important biomedical analytes in biological samples. In the present study, the multivariate calibration methods partial least squares (PLS-1) and principal component regression (PCR) have been developed and validated for the simultaneous determination of 6MP and its oxidative metabolites (6TUA, 8OH6MP and 6TX) without analyte separation in spiked human plasma. Mixtures of 6MP, 8OH6MP, 6TX and 6TUA have been resolved by applying PLS-1 and PCR to their UV spectra. Results: Recoveries (%) obtained for 6MP, 8OH6MP, 6TX and 6TUA were 94.5-97.5, 96.6-103.3, 95.1-96.9 and 93.4-95.8, respectively, using PLS-1, and 96.7-101.3, 96.2-98.8, 95.8-103.3 and 94.3-106.1, respectively, using PCR. The net analyte signal (NAS) concept was used to calculate multivariate analytical figures of merit such as limit of detection (LOD), selectivity and sensitivity. The limits of detection for 6MP, 8OH6MP, 6TX and 6TUA were calculated to be 0.734, 0.439, 0.797 and 0.482 µmol L-1, respectively, using PLS, and 0.724, 0.418, 0.783 and 0.535 µmol L-1, respectively, using PCR. HPLC was also applied as a validation method for the simultaneous determination of these thiopurines in synthetic solutions and human plasma. Conclusion: The combination of spectroscopic techniques and chemometric methods (PLS and PCR) provides a simple but powerful method for the simultaneous analysis of multicomponent mixtures.
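One of the two calibration methods above, principal component regression, can be sketched compactly on synthetic spectra. The band positions, concentrations, and noise level below are invented for illustration, and PLS-1 is not shown.

```python
# Hedged sketch of principal component regression (PCR) on synthetic UV-like
# spectra of a three-component mixture; band positions, concentrations, and
# noise level are invented for illustration (PLS-1 is not shown).
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(0, 1, 50)
# three hypothetical pure-component "spectra" (Gaussian bands)
S = np.stack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.25, 0.5, 0.75)])

C_train = rng.uniform(0.1, 1.0, size=(20, 3))            # known concentrations
X_train = C_train @ S + rng.normal(scale=1e-3, size=(20, 50))

def pcr_fit(X, Y, n_comp):
    """Center the data, project onto the leading principal components, regress."""
    xm, ym = X.mean(0), Y.mean(0)
    U, s, Vt = np.linalg.svd(X - xm, full_matrices=False)
    Vk = Vt[:n_comp].T                                   # spectral loadings
    B, *_ = np.linalg.lstsq((X - xm) @ Vk, Y - ym, rcond=None)
    return xm, ym, Vk, B

def pcr_predict(model, X):
    xm, ym, Vk, B = model
    return (X - xm) @ Vk @ B + ym

model = pcr_fit(X_train, C_train, n_comp=3)
c_true = np.array([[0.3, 0.6, 0.9]])
x_new = c_true @ S                                       # noiseless test spectrum
print(pcr_predict(model, x_new))                         # close to c_true
```

Keeping only the leading components is what lets PCR (and similarly PLS) handle the heavily overlapping bands that defeat univariate calibration.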
Accelerators of atomic particles
International Nuclear Information System (INIS)
Sarancev, V.
1975-01-01
A brief survey is presented of accelerators and methods of accelerating elementary particles. The principle of collective acceleration of elementary particles is explained, and the problems of its realization are discussed. (B.S.)
Sulistiawan, H.; Supriyadi; Yulianti, I.
2017-02-01
Microseismic noise is a harmonic vibration of the ground that occurs continuously at low frequency. Its characteristics represent those of the soil layer through the value of its natural frequency. This paper presents an analysis of seismic hazard at Universitas Negeri Semarang using the microseismic method. Data acquisition was done at 20 points, with a distance between points of 300 m, using a three-component seismometer. The data were processed using the Horizontal to Vertical Spectral Ratio (HVSR) method to obtain the natural frequency and amplification values, which were then used to determine the earthquake vulnerability and peak ground acceleration (PGA). The results show that the earthquake vulnerability values range from 0.2 to 7.5, while the average peak ground acceleration (PGA) is in the range 10-24 gal. The average peak ground acceleration therefore corresponds to earthquake intensity IV on the MMI scale.
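The HVSR computation at the heart of this survey can be sketched on synthetic three-component noise: the horizontal-to-vertical spectral ratio peaks near the site's natural frequency. The signal parameters below are invented for illustration.

```python
# Sketch of the HVSR (Nakamura-type) computation on synthetic three-component
# noise: the horizontal-to-vertical spectral ratio peaks near the site's
# natural frequency. Signal parameters here are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
fs, dur, f0 = 100.0, 100.0, 2.0            # sampling rate, duration, site frequency
t = np.arange(0.0, dur, 1.0 / fs)
v = rng.normal(size=t.size)                # vertical component: plain noise
n = rng.normal(size=t.size) + 5.0 * np.sin(2 * np.pi * f0 * t)  # resonant horizontals
e = rng.normal(size=t.size) + 5.0 * np.sin(2 * np.pi * f0 * t)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
smooth = np.ones(11) / 11.0                # crude spectral smoothing window
Pn, Pe, Pv = (np.convolve(np.abs(np.fft.rfft(s)) ** 2, smooth, mode="same")
              for s in (n, e, v))
hvsr = np.sqrt((Pn + Pe) / 2.0 / Pv)       # quadratic-mean horizontal over vertical

band = (freqs > 0.5) & (freqs < 10.0)      # search a plausible microtremor band
f_peak = freqs[band][np.argmax(hvsr[band])]
print(f_peak)                               # near the 2 Hz resonance
```

Field practice uses windowed, averaged spectra and smoother spectral windows (e.g., Konno-Ohmachi), but the peak-picking logic is the same.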
GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA
Ren, Qinlong
The lattice Boltzmann method (LBM) has been developed over the past two decades as a powerful numerical approach for simulating complex fluid flow and heat transfer phenomena. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, the handling of complex geometries, and its highly parallel nature. The lattice Boltzmann method has been applied to various fluid flow and heat transfer processes, such as conjugate heat transfer, magnetic and electric fields, diffusion and mixing, chemical reactions, multiphase flow, phase change, non-isothermal flow in porous media, microfluidics, and fluid-structure interactions in biological systems. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while the complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been a popular topic for many decades as a way to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits their capability for high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as the most powerful high-performance workstations in recent years. Unlike CPUs, GPUs with thousands of cores are inexpensive. For example, the GPU (GeForce GTX TITAN) used in the current work has 2688 cores and the price is only 1
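The stream-and-collide structure that makes LBM so amenable to GPU parallelization can be shown in a deliberately tiny form: a two-speed (D1Q2) BGK model for pure diffusion on a periodic 1D lattice. This is only a toy analogue of the D2Q9 flow models in the work above, with all parameters in lattice units.

```python
# A deliberately tiny lattice Boltzmann sketch: a two-speed (D1Q2) BGK model
# for pure diffusion on a periodic 1D lattice, illustrating the
# stream-and-collide structure that GPU implementations parallelize.
import numpy as np

nx, tau, steps = 400, 1.0, 200
D = tau - 0.5               # diffusivity in lattice units (dx = dt = 1)

x = np.arange(nx)
rho = np.exp(-((x - nx / 2) ** 2) / (2 * 25.0))   # initial Gaussian pulse
f_plus = 0.5 * rho                                # populations moving right
f_minus = 0.5 * rho                               # populations moving left

def variance(rho):
    mean = np.sum(x * rho) / np.sum(rho)
    return np.sum((x - mean) ** 2 * rho) / np.sum(rho)

var0 = variance(rho)
for _ in range(steps):
    rho = f_plus + f_minus
    feq = 0.5 * rho                               # isotropic equilibrium
    f_plus += (feq - f_plus) / tau                # collide (BGK relaxation)
    f_minus += (feq - f_minus) / tau
    f_plus = np.roll(f_plus, 1)                   # stream right
    f_minus = np.roll(f_minus, -1)                # stream left

rho = f_plus + f_minus
# diffusion spreads the pulse: variance grows by ~2*D per time step
print(variance(rho) - var0, 2 * D * steps)
```

Each lattice site updates independently in the collide step and exchanges data only with neighbors in the stream step, which is exactly the locality a CUDA kernel exploits.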
Methods and problems in assessing the impacts of accelerated sea-level rise
Nicholls, Robert J.; Dennis, Karen C.; Volonte, Claudio R.; Leatherman, Stephen P.
1992-06-01
Accelerated sea-level rise is one of the more certain responses to global warming and presents a major challenge to mankind. However, it is important to note that sea-level rise only manifests over long timescales (decades to centuries). Coastal scientists are increasingly being called upon to assess the physical, economic and societal impacts of sea-level rise and hence to investigate appropriate response strategies. Such assessments are difficult in many developing countries due to a lack of physical, demographic and economic data. In particular, there is a lack of appropriate topographic information for the first (physical) phase of the analysis. To overcome these difficulties we have developed a new rapid and low-cost reconnaissance technique: "aerial videotape-assisted vulnerability analysis" (AVA). It involves: 1) videotaping the coastline from a small airplane; 2) limited ground-truth measurements; and 3) archive research. Combining the video record with the ground-truth information characterizes the coastal topography and, with an appropriate land loss model, estimates of the physical impact for different sea-level rise scenarios can be made. However, such land loss estimates raise other important questions, such as the appropriate seaward limit of the beach profile. Response options also raise questions, such as the long-term costs of seawalls. Therefore, realistic low and high estimates were developed. To illustrate the method, selected results from Senegal, Uruguay and Venezuela are presented.
Huang, Xuechen; Denprasert, Petcharat May; Zhou, Li; Vest, Adriana Nicholson; Kohan, Sam; Loeb, Gerald E
2017-09-01
We have developed and applied new methods to estimate the functional life of miniature, implantable, wireless electronic devices that rely on non-hermetic, adhesive encapsulants such as epoxy. A comb pattern board with a high density of interdigitated electrodes (IDE) could be used to detect incipient failure from water vapor condensation. Inductive coupling of an RF magnetic field was used to provide DC bias and to detect deterioration of an encapsulated comb pattern. Diodes in the implant converted part of the received energy into DC bias on the comb pattern. The capacitance of the comb pattern forms a resonant circuit with the inductor by which the implant receives power. Any moisture affects both the resonant frequency and the Q-factor of the resonance of the circuitry, which was detected wirelessly by its effects on the coupling between two orthogonal RF coils placed around the device. Various defects were introduced into the comb pattern devices to demonstrate sensitivity to failures and to correlate these signals with visual inspection of failures. Optimized encapsulation procedures were validated in accelerated life tests of both comb patterns and a functional neuromuscular stimulator under development. Strong adhesive bonding between epoxy and electronic circuitry proved to be necessary and sufficient to predict 1 year packaging reliability of 99.97% for the neuromuscular stimulator.
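The physical basis of the wireless moisture detection described above is that condensed water, with its high permittivity, raises the comb pattern's capacitance and so lowers the LC resonant frequency f = 1/(2π√(LC)). A minimal sketch with invented component values:

```python
# Sketch of why condensed moisture is detectable: water's high permittivity
# raises the comb pattern's capacitance, which lowers the LC resonant
# frequency f = 1 / (2*pi*sqrt(L*C)). All component values are invented.
import math

def resonant_freq_hz(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

L = 10e-6                 # 10 uH receive coil (hypothetical)
C_dry = 100e-12           # 100 pF dry comb pattern (hypothetical)
C_wet = 120e-12           # condensation raises the effective capacitance

f_dry = resonant_freq_hz(L, C_dry)
f_wet = resonant_freq_hz(L, C_wet)
print(f"{f_dry/1e6:.3f} MHz -> {f_wet/1e6:.3f} MHz")  # downward shift signals moisture
```

In the actual device the moisture also loads the resonance resistively (lower Q), giving a second, independent failure signature.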
Sorensen, Asta V; Bernard, Shulamit L
2012-02-01
Learning (quality improvement) collaboratives are effective vehicles for driving coordinated organizational improvements. A central element of a learning collaborative is the change package: a catalogue of strategies, change concepts, and action steps that guide participants in their improvement efforts. Despite a vast literature describing learning collaboratives, little to no information is available on how the guiding strategies, change concepts, and action items are identified and developed to a replicable and actionable format that can be used to make measurable improvements within participating organizations. The process for developing the change package for the Health Resources and Services Administration's (HRSA) Patient Safety and Clinical Pharmacy Services Collaborative entailed an environmental scan and identification of leading practices, case studies, interim debriefing meetings, data synthesis, and a technical expert panel meeting. Data synthesis involved end-of-day debriefings, systematic qualitative analyses, and the use of grounded theory and inductive data analysis techniques. This approach allowed systematic identification of innovative patient safety and clinical pharmacy practices that could be adopted in diverse environments. A case study approach enabled the research team to study practices in their natural environments. Use of grounded theory and inductive data analysis techniques enabled identification of strategies, change concepts, and actionable items that might not have been captured using different approaches. Use of systematic processes and qualitative methods in identification and translation of innovative practices can greatly accelerate the diffusion of innovations and practice improvements. This approach is effective whether or not an individual organization is part of a learning collaborative.
International Nuclear Information System (INIS)
Deken, J.
2009-01-01
Advocating for the good of the SLAC Archives and History Office (AHO) has not been a one-time affair, nor has it been a one-method procedure. It has required taking time to ascertain the current, and perhaps predict the future, climate of the Laboratory, and it has required developing and implementing a portfolio of approaches to the goal of building a stronger archive program by strengthening and appropriately expanding its resources. Among the successful tools in the AHO advocacy portfolio, the Archives Program Review Committee has been the most visible. The Committee and the role it serves, as well as other formal and informal advocacy efforts, are the focus of this case study. My remarks today will begin with a brief introduction to advocacy and outreach as I understand them, and with a description of the Archives and History Office's efforts to understand and work within the corporate culture of the SLAC National Accelerator Laboratory. I will then share with you some of the tools we have employed to advocate for the Archives and History Office programs and activities; and finally, I will talk about how well - or badly - those tools have served us over the past decade.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
International Nuclear Information System (INIS)
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-01-01
Graphics Processing Units (GPUs), originally developed for real-time, high-definition 3D graphics in computer games, now provide great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
Graphics Processing Units (GPUs), originally developed for real-time, high-definition 3D graphics in computer games, now provide great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
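The sweep/source-iteration structure that Sweep3D parallelizes in 3D can be shown in a minimal 1D slab analogue: diamond-difference spatial differencing, Gauss-Legendre angles, isotropic scattering, and vacuum boundaries. The problem data are invented for illustration.

```python
# A minimal 1D slab analogue of the sweep/source-iteration structure that
# Sweep3D implements in 3D: diamond-difference differencing, Gauss-Legendre
# angles, isotropic scattering, vacuum boundaries. Problem data are invented.
import numpy as np

nx, n_ang = 50, 8
h = 0.2                                   # cell width, cm
sigma_t, sigma_s, q = 1.0, 0.5, 1.0       # cross sections and uniform source
mu, w = np.polynomial.legendre.leggauss(n_ang)   # angles/weights, sum(w) = 2

phi = np.zeros(nx)                        # scalar flux
for it in range(500):                     # source iteration
    src = 0.5 * (sigma_s * phi + q)       # isotropic emission density
    phi_new = np.zeros(nx)
    for m in range(n_ang):                # the "transport sweep" per angle
        a = abs(mu[m]) / h
        psi_in = 0.0                      # vacuum boundary: no incoming flux
        cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
        for i in cells:
            # diamond difference: |mu|(out - in)/h + sigma_t*(in + out)/2 = src
            psi_out = (src[i] + (a - 0.5 * sigma_t) * psi_in) / (a + 0.5 * sigma_t)
            phi_new[i] += w[m] * 0.5 * (psi_in + psi_out)   # cell-average flux
            psi_in = psi_out
    change = np.max(np.abs(phi_new - phi))
    phi = phi_new
    if change < 1e-8:
        break

print(it, phi[nx // 2])                   # iterations used; midplane scalar flux
```

In 3D the same recursion runs along wavefronts of cells, which is the dependency pattern the GPU implementation above has to schedule carefully.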
Lukes, George E.; Cain, Joel M.
1996-02-01
The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well-served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model application programmer's interface (API) to ensure that a common controlled-imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources and reconnaissance imagery. Finally, it provides an overview and status of the joint initiative for a sensor model API.
Synthetic biology for microbial heavy metal biosensors.
Kim, Hyun Ju; Jeong, Haeyoung; Lee, Sang Jun
2018-02-01
Using recombinant DNA technology, various whole-cell biosensors have been developed for detection of environmental pollutants, including heavy metal ions. Whole-cell biosensors have several advantages: easy and inexpensive cultivation, multiple assays, and no requirement of any special techniques for analysis. In the era of synthetic biology, cutting-edge DNA sequencing and gene synthesis technologies have accelerated the development of cell-based biosensors. Here, we summarize current technological advances in whole-cell heavy metal biosensors, including the synthetic biological components (bioparts), sensing and reporter modules, genetic circuits, and chassis cells. We discuss several opportunities for improvement of synthetic cell-based biosensors. First, new functional modules must be discovered in genome databases, and this knowledge must be used to upgrade specific bioparts through molecular engineering. Second, modules must be assembled into functional biosystems in chassis cells. Third, heterogeneity of individual cells in the microbial population must be eliminated. In the outlook, the development of whole-cell biosensors is also discussed in terms of cultivation methods and synthetic cells.
International Nuclear Information System (INIS)
Soriano Carrillo, J.; Blanco Fernandez, M.; Garcia Calleja, M. A.; Leiro Lopez, A.; Mateo Sanz, B.; Aguilar Gonzalez, E.; Rubin de Celix, M.
2014-01-01
Microscopic techniques have been widely used for years in the study of inorganic materials; however, their use in organic materials and, specifically, in synthetic geomembranes is very limited. In this study, this innovative technology has been applied to the different geosynthetic polymeric barriers with which this research team is experienced: plasticized polyvinyl chloride, polyethylenes, rubbers such as ethylene-propylene-diene monomer (EPDM) terpolymer and butyl, polyolefins, ethylene-vinyl acetate copolymer, chlorosulfonated polyethylene and polypropylene. The influence of the extraction area and the time since application is tested. (Author)
Unconditionally stable diffusion-acceleration of the transport equation
International Nuclear Information System (INIS)
Larsen, E.W.
1982-01-01
The standard iterative procedure for solving fixed-source discrete-ordinates problems converges very slowly for problems in optically thick regions with scattering ratios c near unity. The diffusion-synthetic acceleration method has been proposed to make use of the fact that for this class of problems, the diffusion equation is often an accurate approximation to the transport equation. However, stability difficulties have historically hampered the implementation of this method for general transport differencing schemes. In this article we discuss a recently developed procedure for obtaining unconditionally stable diffusion-synthetic acceleration methods for various transport differencing schemes. We motivate the analysis by first discussing the exact transport equation; then we illustrate the procedure by deriving a new stable acceleration method for the linear discontinuous transport differencing scheme. We also provide some numerical results
Pepper-pot diagnostic method to determine emittance and Twiss parameters on low-energy accelerators
Dolinska, M E
2002-01-01
A new mathematical algorithm is described for determining the transverse beam emittance and Twiss parameters from intensity distributions measured with a pepper-pot diagnostic device on RF low-energy accelerators.
International Nuclear Information System (INIS)
Dumenigo Gonzalez, Cruz; Vilaragut Llanes, Juan J.; Morales Lopez, Jorge L.
2009-01-01
Accidents in radiotherapy around the world demonstrate the need to deepen safety assessments. This study evaluates the safety of teletherapy treatment with a linear accelerator (LINAC) at a hospital in Cuba by applying the Risk Matrix method. This method, used for many years in conventional industry, is simple, easy to apply, and based on the general risk equation R = f * P * C (where f is the frequency of occurrence of the initiating event, P the probability of failure of all barriers, and C the magnitude of the expected consequences). We evaluated 140 accident sequences identified during the analysis of the treatment process. Of these, 5 sequences are associated with very low risk, 96 with low risk, 39 with high risk, and none with very high risk. All accident sequences associated with high risk (considered unacceptable) have an impact on patients and none on workers or the public, which reaffirms that the major safety problems are related to the radiation protection of patients. Of the high-risk accident sequences, 34 are associated with human errors and only 5 with equipment failures (LINAC, TPS, CT, etc.), demonstrating the importance of human error. Of the 39 high-risk accident sequences, 35 lead to serious or very serious consequences for patients, which could mean the death of one or more patients; specific recommendations are made to reduce risk in these cases. The findings of this work allow regulators and users to refine their quality assurance and inspection programs, and suggest that hospital management prioritize material resources according to risk-management criteria. (author)
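The risk-matrix evaluation described above can be sketched in a few lines. The ordinal scoring scales and the thresholds separating the four risk levels below are illustrative assumptions, not the values used in the cited study:

```python
def risk_level(f, P, C):
    """Classify an accident sequence from ordinal scores (1 = lowest, 4 = highest)
    for initiating-event frequency f, barrier-failure probability P, and
    consequence magnitude C, using the general risk equation R = f * P * C.
    The bin boundaries below are hypothetical."""
    R = f * P * C
    if R <= 4:
        return "very low"
    elif R <= 12:
        return "low"
    elif R <= 32:
        return "high"
    return "very high"
```

Sequences classified as "high" or "very high" would be the ones flagged as unacceptable and targeted by risk-reduction recommendations.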
Energy Technology Data Exchange (ETDEWEB)
Zmijarevic, I; Tomashevic, Dj [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)
1988-07-01
This paper presents Chebyshev acceleration of the outer iterations of a high-accuracy nodal diffusion code. Extrapolation parameters, common to all moments, are calculated using the node-integrated distribution of the fission source. Sample calculations are presented indicating the efficiency of the method. (author)
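A minimal sketch of the kind of Chebyshev extrapolation involved, applied here to a generic stationary iteration x ← Gx + b rather than to the nodal outer iterations of the cited code; the Hageman-Young recurrence for the extrapolation parameters is assumed:

```python
import numpy as np

def chebyshev_iteration(G, b, rho, n_iter=40):
    """Chebyshev acceleration of the stationary iteration x <- G x + b.
    rho is an estimate of the spectral radius of G (the analogue of the
    dominance ratio used to set extrapolation parameters in diffusion codes)."""
    x_prev = np.zeros_like(b)
    x = G @ x_prev + b                  # first step: omega_1 = 1
    omega = 2.0 / (2.0 - rho**2)        # omega_2
    for _ in range(n_iter - 1):
        # extrapolated step, then update omega_{n+1} = 1/(1 - rho^2 * omega_n / 4)
        x, x_prev = omega * (G @ x + b - x_prev) + x_prev, x
        omega = 1.0 / (1.0 - 0.25 * rho**2 * omega)
    return x
```

The accelerated error decays like (rho / (1 + sqrt(1 - rho^2)))^n instead of rho^n for the unaccelerated iteration, which is the source of the efficiency gain the abstract reports.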
Method for controlling an accelerator-type neutron source, and a pulsed neutron source
International Nuclear Information System (INIS)
Givens, W.W.
1991-01-01
The patent deals with an accelerator-type neutron source which employs a target, an ionization section and a replenisher for supplying accelerator gas. A positive voltage pulse is applied to the ionization section to produce a burst of neutrons. A negative voltage pulse is applied to the ionization section upon the termination of the positive voltage pulse to effect a sharp cut-off to the burst of neutrons. 4 figs
Energy Technology Data Exchange (ETDEWEB)
Nishiuchi, M., E-mail: sergei@jaea.go.jp; Sakaki, H.; Esirkepov, T. Zh. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Nishio, K. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Pikuz, T. A.; Faenov, A. Ya. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Skobelev, I. Yu. [Russian Academy of Sciences, Joint Institute for High Temperature (Russian Federation); Orlandi, R. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Koura, H. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Kando, M. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Yamauchi, T. [Graduate School of Maritime Sciences (Japan); Watanabe, Y. [Kyushu University, Interdisciplinary Graduate School of Engineering Sciences (Japan); Bulanov, S. V., E-mail: svbulanov@gmail.com; Kondo, K. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); and others
2016-04-15
A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction–acceleration method proposed in [M. Nishiuchi et al., Phys. Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target, and accelerates to a few GeV, highly charged short-lived heavy exotic nuclei created in the target via nuclear reactions.
Chemical Synthesis Accelerated by Paper Spray: The Haloform Reaction
Bain, Ryan M.; Pulliam, Christopher J.; Raab, Shannon A.; Cooks, R. Graham
2016-01-01
In this laboratory, students perform a synthetic reaction in two ways: (i) by traditional bulk-phase reaction and (ii) in the course of reactive paper spray ionization. Mass spectrometry (MS) is used both as an analytical method and a means of accelerating organic syntheses. The main focus of this laboratory exercise is that the same ionization…
Optimization of accelerator parameters using normal form methods on high-order transfer maps
Energy Technology Data Exchange (ETDEWEB)
Snopok, Pavel [Michigan State Univ., East Lansing, MI (United States)
2007-05-01
Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of systematic skew quadrupole errors in dipoles; (b) calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher-order correctors; (b) design of a 750 x 750 GeV Muon Collider storage ring lattice matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (the general restrictions for this are not much stronger than the typical restrictions imposed on the behavior of particles in an accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and the coordinates are very convenient for visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. The algorithms used to solve the problems are specific to collider rings and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems, as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented
Energy Technology Data Exchange (ETDEWEB)
Grassi, G. [Commissariat a l' Energie Atomique, CEA de Saclay, DM2S/SERMA/LENR, 91191, Gif-sur-Yvette (France)
2006-07-01
We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). With the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which makes the acceleration non-linear. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problems. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests on a benchmark have been performed and the results are discussed. (authors)
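The flux-volume homogenisation step can be written compactly. This is a generic sketch of the standard formula Σ_coarse = Σᵢ ΣᵢφᵢVᵢ / Σᵢ φᵢVᵢ, not the cited implementation; the dependence on the current flux iterate φ is what makes the acceleration non-linear:

```python
import numpy as np

def homogenize_xs(sigma_fine, flux_fine, vol_fine):
    """Flux-volume homogenization of a cross section onto one coarse cell:
    weight each fine-region cross section by its flux times volume.
    Because the weights depend on the current flux iterate, the coarse
    operator changes from one acceleration step to the next."""
    w = flux_fine * vol_fine
    return np.sum(sigma_fine * w) / np.sum(w)
```

With a flat flux the result reduces to a volume average; a flux tilt shifts the homogenized value toward the cross section of the better-illuminated regions.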
International Nuclear Information System (INIS)
Ma Xiushan; Zou Dehua; Lin Bo; Liu Xiying; Xu Yingjie; Zhang Guoxian; Wang Limin
2007-01-01
Objective: To summarize an improved method of photographing the axial view of the patella with the knee flexed at a 25° angle, and to comprehensively evaluate various measures. Methods: A special projection frame was made for the photography in this study. Thirty normal people were enrolled as controls and 154 patients with anterior knee pain were included in the patient group. The patients were found to have abnormal patellar alignment on axial-view photographs. Measures included the sulcus angle (SA), congruence angle (CA), lateral shift (LS), lateral patellofemoral angle (LPFA), patellofemoral index (PFI), and lateral patellar displacement (LPD). Results: The average values were, for the control group, SA 137.38°, CA -10.73°, LS 10.49%, LPFA 13.70°, PFI 0.48, and LPD 0.45 mm, and for the patient group, SA 142.38°, CA -0.71°, LS 19.68%, LPFA 12.12°, PFI 1.13, and LPD 0.42 mm. There was a significant difference between the control group and the patients for most measures except LPD (P<0.01). The measures PFI, CA, LS, LPFA, and LPD were sensitive to patellar malalignment, with sensitivities of 71.66%, 63.33%, 56.66%, 21.60%, and 16.66%, respectively. Conclusions: The method of photographing the axial view of the patella with the knee flexed at a 25° angle is effective. It increases the distinctness of the photographs. Furthermore, the method makes line-drawing easier and measurement more accurate. (authors)
International Nuclear Information System (INIS)
Burastero, J.
1975-01-01
This work describes a laboratory-scale investigation of the conditions for the production of synthetic rutile from ilmenite from the Aguas Dulces deposit. The iron in the mineral is selectively chlorinated and volatilized, leaving a residue enriched in titanium dioxide that can be used as a substitute for rutile mineral
New method for laser driven ion acceleration with isolated, mass-limited targets
International Nuclear Information System (INIS)
Paasch-Colberg, T.; Sokollik, T.; Gorling, K.; Eichmann, U.; Steinke, S.; Schnuerer, M.; Nickles, P.V.; Andreev, A.; Sandner, W.
2011-01-01
A new technique to investigate laser-driven ion acceleration with fully isolated, mass-limited glass spheres with diameters down to 8 μm is presented. A Paul trap was used to prepare a levitating glass sphere for the interaction with a laser pulse of relativistic intensity. Narrow-bandwidth energy spectra of protons and oxygen ions have been observed and were attributed to specific acceleration-field dynamics in the case of the spherical target geometry. A general limiting mechanism has been found that explains the experimentally observed ion energies for the mass-limited target.
Griko, Yuri; Regan, Matthew D.
2018-02-01
Animal research aboard the Space Shuttle and International Space Station has provided vital information on the physiological, cellular, and molecular effects of spaceflight. The relevance of this information to human spaceflight is enhanced when it is coupled with information gleaned from human-based research. As NASA and other space agencies initiate plans for human exploration missions beyond low Earth orbit (LEO), incorporating animal research into these missions is vitally important to understanding the biological impacts of deep space. However, new technologies will be required to integrate experimental animals into spacecraft design and transport them beyond LEO in a safe and practical way. In this communication, we propose the use of metabolic control technologies to reversibly depress the metabolic rates of experimental animals while in transit aboard the spacecraft. Compared to holding experimental animals in active metabolic states, the advantages of artificially inducing regulated, depressed metabolic states (called synthetic torpor) include significantly reduced mass, volume, and power requirements within the spacecraft owing to reduced life support requirements, and mitigated radiation- and microgravity-induced negative health effects on the animals owing to intrinsic physiological properties of torpor. In addition to directly benefitting animal research, synthetic torpor-inducing systems will also serve as test beds for systems that may eventually hold human crewmembers in similar metabolic states on long-duration missions. The technologies for inducing synthetic torpor, which we discuss, are at relatively early stages of development, but there is ample evidence to show that this is a viable idea and one with very real benefits to spaceflight programs. The increasingly ambitious goals of the world's many spaceflight programs will be most quickly and safely achieved with the help of animal research systems transported beyond LEO; synthetic torpor may
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative ordered-subset expectation maximization (OSEM) algorithm and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through the standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from 18F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF provide the baseline for the proposed simulation method: convolution with the PSF as kernel and noise addition from the NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but modifies its magnitude. Finally, synthetic images of two phantoms, one of them an anatomical brain, are quantitatively compared with experimental images, showing good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM PET images can be described by the NPS and PSF. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Published by Elsevier España, S.L.U. All rights reserved.
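The simulation recipe above (PSF convolution plus NPS-shaped noise) can be sketched generically. The `psf` and `nps` arrays here are illustrative stand-ins for the experimentally characterized functions of the paper:

```python
import numpy as np

def synthesize_pet_image(activity, psf, nps, rng):
    """Generate a synthetic 2D PET image: blur the true activity map with the
    system PSF, then add noise whose spatial correlation follows the measured
    noise power spectrum (NPS).  psf (centered in its array) and nps are 2D
    arrays the same shape as activity."""
    # PSF blur via the convolution theorem
    blurred = np.real(np.fft.ifft2(np.fft.fft2(activity) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    # Correlated noise: shape white Gaussian noise by sqrt(NPS) in Fourier space
    white_ft = np.fft.fft2(rng.standard_normal(activity.shape))
    noise = np.real(np.fft.ifft2(white_ft * np.sqrt(np.maximum(nps, 0.0))))
    return blurred + noise
```

With this normalization the noise variance per pixel equals the mean of the NPS array, so a flat NPS reproduces uncorrelated noise of the prescribed magnitude.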
Programming languages for synthetic biology.
Umesh, P; Naveen, F; Rao, Chanchala Uma Maheswara; Nair, Achuthsankar S
2010-12-01
Against the backdrop of accelerated efforts to create synthetic organisms, the nature and scope of an ideal programming language for scripting synthetic organisms in silico has been receiving increasing attention. A few programming languages for synthetic biology capable of defining, constructing, networking, editing and delivering genome-scale models of cellular processes have recently been attempted. All of these represent important points in a spectrum of possibilities. This paper introduces Kera, a state-of-the-art programming language for synthetic biology which is arguably ahead of similar languages or tools such as GEC, Antimony and GenoCAD. Kera is a full-fledged object-oriented programming language which is tempered by a biopart rule library named Samhita that captures knowledge regarding the interaction of genome components and catalytic molecules. Prominent features of the language are demonstrated through a toy example, and the road map for the future development of Kera is also presented.
Electron Acceleration in a Turbulent Current Sheet - Comparison of GCA and HARHA Methods
Czech Academy of Sciences Publication Activity Database
Kramoliš, D.; Varady, Michal; Bárta, Miroslav
2016-01-01
Vol. 40, No. 1 (2016), p. 69-77. ISSN 1845-8319. [Hvar Astrophysical Colloquium /14./. Hvar, 26.09.2016-30.09.2016] R&D Projects: GA ČR(CZ) GA16-18495S Institutional support: RVO:67985815 Keywords: magnetic reconnection * current sheet * electron acceleration Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Walker, James D S; Grosvenor, Andrew P
2013-08-05
Magnetoelectric materials couple both magnetic and electronic properties, making them attractive for use in multifunctional devices. The magnetoelectric AFeO3 compounds (Pna2(1); A = Al, Ga) have received attention as the properties of the system depend on composition as well as on the synthetic method used. Al(1-x)Ga(x)FeO3 (0 ≤ x ≤ 1) was synthesized by the sol-gel and coprecipitation methods and studied by X-ray absorption near-edge spectroscopy (XANES). Al L2,3-, Ga K-, and Fe K-edge XANES spectra were collected to examine how the average metal coordination number (CN) changes with the synthetic method. Al and Fe were found to prefer octahedral sites, while Ga prefers the tetrahedral site. It was found that composition played a larger role in determining site occupancies than synthetic method. Samples made by the sol-gel or ceramic methods (reported previously; Walker, J. D. S.; Grosvenor, A. P. J. Solid State Chem. 2013, 197, 147-153) showed smaller spectral changes than samples made via the coprecipitation method. This is attributed to greater ion mobility in samples synthesized via coprecipitation, as the reactants do not have a long-range polymeric or oxide network during synthesis like samples synthesized via the sol-gel or ceramic methods. Increasing the annealing temperature increases the average coordination number of Al, and to a lesser extent Ga, while the average coordination number of Fe decreases. This study indicates that greater disorder is observed when the Al(1-x)Ga(x)FeO3 compounds have high Al content and when annealed at higher temperatures.
International Nuclear Information System (INIS)
Rashid, Nur Shahidah Abdul; Sarmani, Sukiman; Majid, Amran Ab.; Mohamed, Faizal; Siong, Khoo Kok
2015-01-01
238U radionuclide is a naturally occurring radioactive material that can be found in soil. In this study, the solubility of 238U radionuclide obtained from various types of soil in synthetic gastrointestinal fluids was analysed by the "USP in vitro" digestion method. The synthetic gastrointestinal fluids were added to the samples in sequence, mixed thoroughly, and incubated in accordance with the physiology of the human digestive system. The concentration of 238U radionuclide in the solutions extracted from the soil was measured using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). The concentrations of 238U radionuclide from the soil samples in synthetic gastrointestinal fluids showed different values owing to the different homogeneity of the soil types and the chemical reactions of 238U radionuclide. In general, the solubility of 238U radionuclide in gastric fluid was higher (0.050 – 0.209 ppm) than in gastrointestinal fluids (0.024 – 0.050 ppm). It can be concluded that the USP in vitro digestion method is practical for estimating the solubility of 238U radionuclide from soil materials and could be useful for monitoring and risk assessment purposes applied to environmental, health and contaminated soil samples
Chen, Xiaohong; Li, Xiaoping; Zhao, Yonggang; Pan, Shengdong; Jin, Micong
2015-07-01
A method based on ultrafast liquid chromatography-tandem mass spectrometry (UFLC-MS/MS) has been developed for the simultaneous determination of seven synthetic pigments in cooked meat products. After the cooked meat products were extracted with a mixed extraction agent and purified on a WAX column, UFLC separation was performed on a Shim-pack XR-ODS II column (75 mm x 2.0 mm, 2.2 µm) with a linear gradient elution program of acetonitrile and ammonium acetate (AmAc, 5 mmol/L) as the mobile phase. Electrospray ionization was applied and operated in negative ion mode. The limits of quantitation (LOQs) for the seven synthetic pigments were in the range of 0.7-5.0 µg/kg. The calibration curves showed good linearity for the seven analytes over their detection ranges, with correlation coefficients (r) greater than 0.999. The recoveries were between 88.2% and 106.5%, with RSDs in the range of 1.2%-5.0%. The method is sensitive, reproducible and quick, and is suitable for the simultaneous determination of the seven synthetic pigments in cooked meat products.
Qu, Lin; Sun, Peng; Wu, Ying; Zhang, Ke; Liu, Zhengping
2017-08-01
An efficient metal-free homodifunctional bimolecular ring-closure method is developed for the formation of cyclic polymers by combining reversible addition-fragmentation chain transfer (RAFT) polymerization and a self-accelerating click reaction. In this approach, α,ω-homodifunctional linear polymers with azide terminals are prepared by RAFT polymerization and postmodification of the polymer chain end groups. Using sym-dibenzo-1,5-cyclooctadiene-3,7-diyne (DBA) as a small linker, well-defined cyclic polymers are then prepared using the self-accelerating double strain-promoted azide-alkyne click (DSPAAC) reaction to ring-close the azide end-functionalized homodifunctional linear polymer precursors. Owing to the self-accelerating nature of the DSPAAC ring-closing reaction, this method eliminates the requirement of equimolar amounts of telechelic polymers and small linkers found in traditional bimolecular ring-closure methods, enabling it to efficiently and conveniently produce a variety of pure cyclic polymers by employing an excess molar amount of DBA small linkers. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Choi, Won Chang; Byun, Dongjin; Lee, Joong Kee; Cho, Byung won
2004-01-01
Four kinds of synthetic graphite coated with silver and nickel for the anodes of lithium secondary batteries were prepared by a gas-suspension spray-coating method. The electrode coated with silver showed higher charge-discharge capacities due to Ag-Li alloy formation, but its rate capability decreased at higher charge-discharge rates. This can be explained by the formation of a silver oxide film with higher impedance, which lowered the rate capability at high charge-discharge rates owing to its low electrical conductivity. Rate capability is improved, however, by coating nickel and silver together on the surface of the synthetic graphite. The nickel, which is inactive in the oxidation reaction, plays an important role as a conducting agent that enhances the conductivity of the electrode
Directory of Open Access Journals (Sweden)
Yuan Chen
2011-09-01
This paper proposes a piecewise acceleration-optimal and smooth-jerk trajectory planning method for a robot manipulator. The objective function is given by the weighted sum of two terms with opposite effects: the maximal acceleration and the minimal jerk. Computing techniques are proposed to determine the optimal solution; they take both the time intervals between interpolation points and the control points of the B-spline function as optimization variables, redefine the kinematic constraints as constraints on these variables, and reformulate the objective function in matrix form. The feasibility of the method is illustrated by simulation and experimental results with the pan mechanism of a cooking robot.
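A simplified stand-in for the weighted objective: the cited method optimizes B-spline control points and per-segment time intervals, whereas this sketch just evaluates a peak-acceleration plus mean-squared-jerk cost from a uniformly sampled trajectory by finite differences (weights are illustrative):

```python
import numpy as np

def trajectory_cost(q, dt, w_acc=1.0, w_jerk=0.1):
    """Weighted sum of the two opposing terms described above: peak
    acceleration magnitude and mean squared jerk, estimated by finite
    differences from joint positions q sampled at uniform time step dt."""
    vel = np.gradient(q, dt)       # first derivative
    acc = np.gradient(vel, dt)     # second derivative
    jerk = np.gradient(acc, dt)    # third derivative
    return w_acc * np.max(np.abs(acc)) + w_jerk * np.mean(jerk ** 2)
```

Minimizing such a cost trades peak actuator effort (the acceleration term) against trajectory smoothness (the jerk term), which is the tension the weighted formulation in the paper balances.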
International Nuclear Information System (INIS)
Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu
2003-01-01
Two types of variance-to-mean methods for a subcritical system driven by a periodic, pulsed neutron source were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use in a subcriticality monitor for a future accelerator-driven system operated in pulsed mode. (author)
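The statistic at the heart of such methods can be sketched in its classic steady-source (Feynman-Y) form, shown here rather than the pulsed-source variants developed in the paper:

```python
import numpy as np

def feynman_y(gate_counts):
    """Variance-to-mean statistic Y = Var/Mean - 1 for neutron counts collected
    in equal-width time gates.  Y is zero for uncorrelated (Poisson) counts and
    positive when fission-chain correlations are present."""
    c = np.asarray(gate_counts, dtype=float)
    return c.var(ddof=1) / c.mean() - 1.0

def y_curve(T, alpha, y_inf):
    """Point-kinetics form of Y versus gate width T; fitting measured Y(T) to
    this curve yields the prompt neutron decay constant alpha."""
    return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))
```

In practice Y is measured for a range of gate widths and the curve fit extracts alpha, from which the subcriticality can be inferred.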
International Nuclear Information System (INIS)
Palmer, R.
1994-06-01
Electromagnetic fields can be separated into near and far components. Near fields are extensions of static fields. They do not radiate, and they fall off more rapidly with distance from a source than far fields. Near fields can accelerate particles, but the ratio of acceleration to source fields at a distance R is always less than R/λ or 1, whichever is smaller. Far fields can be represented as sums of plane-parallel, transversely polarized waves that travel at the velocity of light. A single such wave in a vacuum cannot give continuous acceleration, and it is shown that no sum of such waves can give net first-order acceleration. This theorem is proven in three different ways, each method showing a different aspect of the situation
Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu
2017-03-01
In this work, we construct a multi-frequency accelerating strategy for the contrast source inversion (CSI) method using pulse data in the time domain. CSI is a frequency-domain inversion method for ultrasound waveform tomography that does not require a forward solver during reconstruction. Several prior studies show that the CSI method converges well and is accurate in the low-center-frequency regime. In contrast, using high-center-frequency data leads to high-resolution reconstruction but slow convergence on grids with large numbers of points. Our objective is to take full advantage of all the low-frequency components of the pulse data together with the high-center-frequency data measured by the diagnostic device. First, we process the raw data in the frequency domain. The multi-frequency accelerating strategy then restarts CSI at the current frequency using the last iterate obtained from the lower frequency component. The merit of the multi-frequency accelerating strategy is that the computational burden decreases during the first few iterations, because the low-frequency components of the dataset are computed on a coarse grid, assuming a fixed number of points per wavelength. In the numerical tests, the pulse data were generated by the k-Wave simulator and processed to suit the CSI computation. We investigate the performance of the multi-frequency and single-frequency reconstructions and conclude that the multi-frequency accelerating strategy significantly enhances the quality of the reconstructed image while simultaneously reducing the average computational time per iteration step.
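The frequency-continuation skeleton of such a strategy can be sketched generically, with the single-frequency CSI solve abstracted away as a user-supplied callable (a hypothetical placeholder; the actual CSI update is beyond a sketch):

```python
def multifrequency_invert(freqs, data_by_freq, invert_at_freq, x0):
    """Coarse-to-fine continuation: run a single-frequency inversion at each
    frequency component in ascending order, warm-starting each run from the
    result obtained at the previous (lower) frequency.  invert_at_freq(f,
    data, x_init) stands in for the per-frequency CSI solve."""
    x = x0
    for f in sorted(freqs):
        x = invert_at_freq(f, data_by_freq[f], x)
    return x
```

The low-frequency solves are cheap (coarse grid at fixed points per wavelength) and supply a good starting iterate for the expensive high-frequency solve, which is where the reported acceleration comes from.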
Kolyer, J. M.; Mann, N. R.
1977-01-01
Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.
International Nuclear Information System (INIS)
Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes
2008-01-01
Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
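The thresholding-and-histogram pattern that the authors accelerate with FastBit can be illustrated on a toy particle table in plain NumPy (the variable names, distributions and cuts are invented for illustration; FastBit evaluates the same kind of predicate from bitmap indexes without scanning the full table):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
# toy particle table standing in for one time step of a simulation dump
px = rng.normal(0.0, 1.0e9, n)          # longitudinal momentum
x  = rng.uniform(0.0, 100.0, n)         # longitudinal position
y  = rng.normal(0.0, 5.0, n)            # transverse position

# multi-dimensional thresholding: an AND of per-variable range conditions
# selects the high-momentum beam particles near the axis
mask = (px > 2.0e9) & (np.abs(y) < 10.0)
beam_x = x[mask]

# conditional 2-D histogram computed only over the selected subset,
# the quantity visualized in histogram-based parallel coordinates
hist, xedges, yedges = np.histogram2d(beam_x, y[mask], bins=32)
```

The index/query layer changes how fast `mask` is produced, not what it means, so the analysis pipeline downstream is unchanged.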
International Nuclear Information System (INIS)
Hariz, M.I.; Laitinen, L.V.; Henriksson, R.; Saeterborg, N.-E.; Loefroth, P.-O.
1990-01-01
A new technique for fractionated stereotactic irradiation of intracranial lesions is described. The treatment is based on a versatile, non-invasive interface for stereotactic localization of the brain target imaged by computed tomography (CT), angiography or magnetic resonance tomography (MRT), and subsequent repetitive stereotactic irradiation of the target using a linear accelerator. The fractionation of the stereotactic irradiation was intended to meet the requirements of the basic principles of radiobiology. The radiophysical evaluation using phantoms, and the clinical results in a small number of patients, demonstrated a good reproducibility between repeated positionings of the target in the isocenter of the accelerator, and a high degree of accuracy in the treatment of brain lesions. (authors). 28 refs.; 11 figs.; 1 tab
Directory of Open Access Journals (Sweden)
A Nikkhah
2016-04-01
Full Text Available Introduction: Many people work in the agricultural sector, so attention to safety and health at work in agriculture is important. This issue matters even more in industrializing countries, where ergonomic working conditions lag behind those of developed countries. Attention to the ergonomic conditions of agricultural machinery drivers is one of the goals of agricultural mechanization. In this study, the ergonomic conditions of the brake and accelerator mechanisms for drivers of MF285 and MF399 tractors were therefore investigated using a new method. Materials and Methods: 25 people were selected for the experiment. The electrical activity of the medial gastrocnemius, lateral gastrocnemius, vastus medialis, vastus lateralis, quadratus lumborum and trapezius muscles of the drivers was recorded before and during pressing of the pedal and after a rest period using a Biovision device. Measurements were performed for each person on each muscle for 30 seconds before pressing the pedal, for 60 seconds while pressing the pedal, and after 60 seconds of rest. For all drivers, the muscles on the right side (the brake and accelerator side) were selected and tested. The measurements were performed with appropriate time intervals between them. Results and Discussion: Ergonomic assessment of the brake pedal: the results showed that the RMS electrical activity ratios of the vastus medialis and medial gastrocnemius during 60 seconds of braking were 2.47 and 1.97, so these muscles experienced the highest stress while pressing the MF399 tractor's brake pedal. Moreover, the medial gastrocnemius and lateral gastrocnemius, with RMS electrical activity ratios of 2.47 and 1.74, had the highest RMS electrical activity ratios during 60 seconds of braking relative to before braking on the MF285 tractor. The comparison of results showed that the vastus medialis and trapezius had the higher stress
Measurement of acceleration while walking as an automated method for gait assessment in dairy cattle
DEFF Research Database (Denmark)
Chapinal, N.; de Passillé, A.M.; Pastell, M.
2011-01-01
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3...... to be a promising tool for lameness detection on farm and to study walking surfaces, especially when attached to a leg....
Electron accelerator with a laser ignition for investigation of beam plasma by optical methods
International Nuclear Information System (INIS)
Kabanov, S.N.; Korolev, A.A.; Kul'beda, V.E.; Razumovskij, A.I.; Trukhin, V.A.
1990-01-01
A facility for investigating dense gas beam plasma is described. The facility comprises an electron accelerator (200-300 keV, 5 kA, 20 ns), an OGM-40 ruby ignition laser, an LZhI-501 diagnostic laser (with wavelength tunable over 0.55-0.66 μm), a Michelson interferometer and diagnostic equipment for optical measurements. Laser ignition of the spark gap is introduced for tight synchronization (±10 ns) of the diagnostic laser's radiation pulse with the beam current pulse
International Nuclear Information System (INIS)
Hartung, W.H.; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R.
2015-01-01
The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described
Energy Technology Data Exchange (ETDEWEB)
Hartung, W.H., E-mail: wh29@cornell.edu; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R.
2015-05-21
The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described.
International Nuclear Information System (INIS)
Steck, M.
1986-01-01
A superconducting quarter-wave resonator at 325 MHz was studied for implementation at the Heidelberg post-accelerator. Using the computer programs SUPERFISH and URMEL, the first design, derived from analytical approaches, was optimized for superconducting operation. Measurements on the model showed good agreement with the calculations. By modification of the standard techniques, the fabrication of the resonator body and the preparation of the superconducting surface could be simplified. On the superconducting resonator, 1 μm thick superconducting surfaces of pure lead as well as of a lead/tin alloy were tested. With lead, a resonator quality factor Q₀ = 8.5·10⁷ and a maximal electric accelerating field in continuous operation of E_acc = 2.16 MV/m at Q = 1·10⁷ were reached. The measurements with a lead/tin surface yielded Q₀ = 1.4·10⁸ and a maximal accelerating field of E_acc = 1.93 MV/m at Q = 1·10⁷. A further increase of the maximal electric field by conditioning of the resonator can be expected from the test results. The excellent mechanical stability, not reachable with other resonator types, manifests itself in a static frequency shift of 4 Hz/(MV/m)² and rapid frequency oscillations [de]
Directory of Open Access Journals (Sweden)
N. D. Tiannikova
2014-01-01
Full Text Available G.D. Kartashov has developed a technique for determining the functions that scale rapid-testing results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests (researches) is as follows: n groups of products with m elements each start testing in the normal mode and, after a failure of one of the products in a group, the remaining products are tested in the accelerated mode. In addition to tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of product, identical for any lot, is determined from such testing results for products from the same lot. A drawback of this technique is that tests must be conducted in the alternating mode until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered. It allows us to determine scaling functions using right-censored data, thus giving the opportunity to stop testing before all products fail. In this work, statistical modeling of the acceleration factor estimation based on Renyi statistic minimization is implemented by the Monte Carlo method. Results of the modeling show that the acceleration factor estimate obtained through Renyi statistic minimization is acceptable for rather large n. For small sample volumes, however, a systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions (exponential and Weibull). Therefore the paper also presents calculated correction factors for the cases of the exponential and Weibull distributions.
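A minimal Monte Carlo sketch of estimating an acceleration factor from right-censored exponential lifetimes is shown below. The distribution, censoring times, sample sizes and true factor are assumptions for illustration, and the closed-form censored MLE stands in for the paper's Renyi-statistic minimization:

```python
import numpy as np

rng = np.random.default_rng(1)

def censored_mean(samples, censor_time):
    """MLE of the exponential mean from right-censored data:
    total time on test divided by the number of observed failures."""
    observed = np.minimum(samples, censor_time)
    n_fail = np.sum(samples <= censor_time)
    return observed.sum() / max(n_fail, 1)

def estimate_af(n, theta=1000.0, af_true=5.0):
    """One Monte Carlo replicate: normal-mode and accelerated-mode tests,
    both stopped early (right censoring), then the ratio of fitted means."""
    normal = rng.exponential(theta, n)
    accel = rng.exponential(theta / af_true, n)
    m_n = censored_mean(normal, 1.5 * theta)
    m_a = censored_mean(accel, 1.5 * theta / af_true)
    return m_n / m_a

estimates = [estimate_af(200) for _ in range(200)]
```

Repeating this for decreasing n reproduces the qualitative finding above: the ratio estimator acquires a small-sample bias that shrinks as n grows.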
Ha, Sanghyun; Park, Junshin; You, Donghyun
2018-01-01
Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA), which are critical to the bandwidth-bound nature of the present method, are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
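Each ADI sweep reduces to tridiagonal solves of the kind sketched below. This serial Thomas-algorithm version (plain NumPy, not the paper's CUDA implementation) makes the sequential recurrence visible; it is exactly this data dependence that GPU tridiagonal libraries must break:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back substitution.
    a, c are the sub/super-diagonals (length n-1), b the diagonal (length n),
    d the right-hand side."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward sweep (inherently serial)
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# a diagonally dominant 1-D diffusion-like operator, the kind of system
# each directional sweep of an ADI step inverts
n = 64
a = -np.ones(n - 1)
c = -np.ones(n - 1)
b = 2.5 * np.ones(n)
d = np.ones(n)
x = thomas(a, b, c, d)
```

In the ADI setting one such system arises per grid line per direction, so batching many independent solves is what makes the method map well onto a GPU.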
Directory of Open Access Journals (Sweden)
Pushpendra Rana
2018-02-01
Full Text Available Certification by the Forest Stewardship Council (FSC remains rare among forest management units (FMUs in natural tropical forests, presenting a challenge for impact evaluation. We demonstrate application of the synthetic control method (SCM to evaluate the impact of FSC certification on a single FMU in each of three tropical forest landscapes. Specifically, we estimate causal effects on tree cover change from the year of certification to 2012 using SCM and open-access, pan-tropical datasets. We demonstrate that it is possible to construct synthetic controls, or weighted combinations of non-certified FMUs, that followed the same path of tree cover change as the certified FMUs before certification. By using these synthetic controls to measure counterfactual tree cover change after certification, we find that certification reduced tree cover loss in the most recent year (2012 in all three landscapes. However, placebo tests show that in one case, this effect was not significant, and in another case, it followed several years in which certification had the opposite effect (increasing tree cover loss. We conclude that SCM has promise for identifying temporally varying impacts of small-N interventions on land use and land cover change.
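A synthetic control is a convex combination of donor units chosen to reproduce the treated unit's pre-treatment trajectory. The sketch below uses simulated tree-cover data and SciPy's constrained optimizer; real SCM applications typically also match on covariates and validate the fit, so this is only the core weight-finding step:

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(treated_pre, donors_pre):
    """Donor weights: nonnegative, summing to one, minimizing the
    pre-treatment discrepancy with the treated unit."""
    k = donors_pre.shape[1]
    loss = lambda w: np.sum((treated_pre - donors_pre @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(loss, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k, constraints=cons)
    return res.x

# toy example: the certified FMU's pre-certification tree-cover loss is an
# exact mixture of donors 0 and 1 (simulated data, not the paper's datasets)
rng = np.random.default_rng(7)
donors = rng.uniform(0.0, 1.0, size=(10, 5))   # 10 pre-years x 5 donor FMUs
treated = 0.7 * donors[:, 0] + 0.3 * donors[:, 1]
w = scm_weights(treated, donors)
```

The post-treatment gap between the treated unit and `donors @ w` is then the estimated effect, and the placebo tests mentioned above repeat the whole procedure with each donor treated as if it had been certified.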
Tissue Harmonic Synthetic Aperture Imaging
DEFF Research Database (Denmark)
Rasmussen, Joachim
The main purpose of this PhD project is to develop an ultrasonic method for tissue harmonic synthetic aperture imaging. The motivation is to advance the field of synthetic aperture imaging in ultrasound, which has shown great potentials in the clinic. Suggestions for synthetic aperture tissue...... system complexity compared to conventional synthetic aperture techniques. In this project, SASB is sought combined with a pulse inversion technique for 2nd harmonic tissue harmonic imaging. The advantages in tissue harmonic imaging (THI) are expected to further improve the image quality of SASB...
Doran, Kara S.; Howd, Peter A.; Sallenger,, Asbury H.
2016-01-04
This report documents the development of statistical tools used to quantify the hazard presented by the response of sea-level elevation to natural or anthropogenic changes in climate and ocean circulation. A hazard is a physical process (or processes) that, when combined with vulnerability (or susceptibility to the hazard), results in risk. This study presents the development and comparison of new and existing sea-level analysis methods, exploration of the strengths and weaknesses of the methods using synthetic time series, and when appropriate, synthesis of the application of the method to observed sea-level time series. These reports are intended to enhance material presented in peer-reviewed journal articles where it is not always possible to provide the level of detail that might be necessary to fully support or recreate published results.
DEFF Research Database (Denmark)
topics, lists of the necessary materials and reagents, step-by-step, readily reproducible laboratory protocols, and tips on troubleshooting and avoiding known pitfalls. Authoritative and practical, Synthetic Metabolic Pathways: Methods and Protocols aims to ensure successful results in the further study...
International Nuclear Information System (INIS)
Samtaney, Ravi
2009-01-01
We present a numerical method to solve the linear stability of impulsively accelerated density interfaces in two dimensions such as those arising in the Richtmyer-Meshkov instability. The method uses an Eulerian approach, and is based on an upwind method to compute the temporally evolving base state and a flux vector splitting method for the perturbations. The method is applicable to either gas dynamics or magnetohydrodynamics. Numerical examples are presented for cases in which a hydrodynamic shock interacts with a single or double density interface, and a doubly shocked single density interface. Convergence tests show that the method is spatially second order accurate for smooth flows, and between first and second order accurate for flows with shocks
Johnson, G. M.
1976-01-01
The application of high temperature accelerated test techniques was shown to be an effective method of microcircuit defect screening. Comprehensive microcircuit evaluations and a series of high temperature (473 K to 573 K) life tests demonstrated that a freak or early failure population of surface contaminated devices could be completely screened in thirty two hours of test at an ambient temperature of 523 K. Equivalent screening at 398 K, as prescribed by current Military and NASA specifications, would have required in excess of 1,500 hours of test. All testing was accomplished with a Texas Instruments 54L10, a low-power triple 3-input NAND gate manufactured with a titanium-tungsten (Ti-W), gold (Au) metallization system. A number of design and/or manufacturing anomalies were also noted with the Ti-W, Au metallization system. Further study of the exact nature and cause(s) of these anomalies is recommended prior to the use of microcircuits with Ti-W, Au metallization in long life/high reliability applications. Photomicrographs of tested circuits are included.
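The screening-time equivalence quoted above (32 h at 523 K versus more than 1,500 h at 398 K) is consistent with an Arrhenius acceleration model. The activation energy below is an assumed value chosen to roughly reproduce that ratio, not a figure taken from the report:

```python
import math

def arrhenius_af(ea_ev, t_use_k, t_stress_k):
    """Arrhenius acceleration factor between a use temperature and a
    higher stress temperature (both in kelvin)."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    return math.exp(ea_ev / k_b * (1.0 / t_use_k - 1.0 / t_stress_k))

# an activation energy near 0.55 eV (assumed) makes 32 h at 523 K roughly
# equivalent to ~1,500 h at 398 K
af = arrhenius_af(0.55, 398.0, 523.0)
equivalent_hours = 32.0 * af
```

Surface-contamination failure mechanisms have their own characteristic activation energies, so the report's screening equivalence ultimately rests on the measured mechanism, not on this illustrative number.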
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyung-Kyu; Lee, Young-Ho; Lee, Hyun-Seung; Lee, Kang-Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2017-05-15
This paper reports the results of an acceleration test to predict the contact-induced failure that could occur at the cylinder-to-hole joint of the fuel rod of a sodium-cooled fast reactor (SFR). To cover the fuel life of the SFR currently under development at KAERI (around 35,000 h), the acceleration test method of reliability engineering was adopted in this work. A finite element method was used to evaluate the flow-induced vibration frequency and amplitude for the test parameter values. Five specimens were tested. The failure criterion over the life of the SFR fuel was applied. The S-N curve of HT-9, the material of concern, was used to obtain the acceleration factor. As a result, a test time of 16.5 h was obtained for each specimen. It was concluded that the B₀.₀₀₄ life would be guaranteed for the SFR fuel rods with 99% confidence if no failure was observed at any of the contact surfaces of the five specimens.
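The arithmetic connecting the target life, the accelerated test time and the S-N curve can be sketched with Basquin's relation N = C·σ^(-m), under which raising the stress amplitude by a factor r multiplies the failure rate by r^m. The exponent m = 8 below is a hypothetical value for illustration; the abstract does not give the HT-9 S-N parameters actually used:

```python
def sn_acceleration_factor(stress_use, stress_accel, m):
    """Basquin relation N = C * sigma**(-m): cycles to failure shrink by
    (stress_accel / stress_use)**m when the stress amplitude is raised."""
    return (stress_accel / stress_use) ** m

life_h = 35_000.0            # SFR fuel life quoted in the abstract
test_h = 16.5                # accelerated test time per specimen
af_needed = life_h / test_h  # acceleration factor implied by those two numbers

# with an assumed S-N exponent m = 8, the stress amplitude would need to be
# raised by roughly this ratio to realize that acceleration factor
stress_ratio = af_needed ** (1.0 / 8.0)
```

The zero-failure demonstration logic (B₀.₀₀₄ life at 99% confidence from five unfailed specimens) is a separate binomial/Weibull argument layered on top of this time compression.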
Natural - synthetic - artificial!
DEFF Research Database (Denmark)
Nielsen, Peter E
2010-01-01
The terms "natural," "synthetic" and "artificial" are discussed in relation to synthetic and artificial chromosomes and genomes, synthetic and artificial cells and artificial life.
Mills, Brooke; Yepes, Andres; Nugent, Kenneth
2015-07-01
Synthetic cannabinoids (SCBs), also known under the brand names of "Spice," "K2," "herbal incense," "Cloud 9," "Mojo" and many others, are becoming a large public health concern due not only to their increasing use but also to their unpredictable toxicity and abuse potential. There are many types of SCBs, each having a unique binding affinity for cannabinoid receptors. Although both Δ9-tetrahydrocannabinol (THC) and SCBs stimulate the same receptors, cannabinoid receptor 1 (CB1) and cannabinoid receptor 2 (CB2), studies have shown that SCBs are associated with higher rates of toxicity and hospital admissions than is natural cannabis. This is likely because SCBs are direct agonists of the cannabinoid receptors, whereas THC is a partial agonist. Furthermore, the different chemical structures of SCBs found in Spice or K2 may interact in unpredictable ways to elicit previously unknown effects, and the commercial products may contain unknown contaminants. The largest group of users is men in their 20s who participate in polydrug use. The most common toxicities reported with SCB use, based on studies using Texas Poison Control records, are tachycardia, agitation and irritability, drowsiness, hallucinations, delusions, hypertension, nausea, confusion, dizziness, vertigo and chest pain. Acute kidney injury has also been strongly associated with SCB use. Treatment mostly involves symptom management and supportive care. More research is needed to identify which contaminants are typically found in synthetic marijuana and to understand the interactions between different SCBs to better predict adverse health outcomes.
Seebacher, David
2009-01-01
In many particle accelerators, including the LHC at the European Organization for Nuclear Research, CERN, NEG coatings are used to improve vacuum performance. In other particle accelerators there have been hints that those coatings could have a relevant impact on the beam coupling impedance; however, the available data are contradictory. To clarify the possible impact of NEG coatings, their electromagnetic properties have been measured. The measurements have been carried out by means of the cavity perturbation method. The second part of this thesis deals with the microwave waveguide reflectometer developed at CERN several years ago, which is used as part of the quality assurance test program for the LHC assembly. To ensure optimum operation and to avoid an expensive removal of any foreign object from inside the LHC beam-screen after its completion, microwave reflectometry is performed. Until now several objects have been found by means of reflectometry, but so far neither precise data about the reflections of different...
Influence of tungsten fiber’s slow drift on the measurement of G with angular acceleration method
Energy Technology Data Exchange (ETDEWEB)
Luo, Jie; Wu, Wei-Huang; Zhan, Wen-Ze [School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074 (China); Xue, Chao [MOE Key Laboratory of Fundamental Physical Quantities Measurement, School of Physics, Huazhong University of Science and Technology, Wuhan 430074 (China); School of Physics and Astronomy, Sun Yat-sen University, Guangzhou 510275 (China); Shao, Cheng-Gang, E-mail: cgshao@mail.hust.edu.cn; Wu, Jun-Fei [MOE Key Laboratory of Fundamental Physical Quantities Measurement, School of Physics, Huazhong University of Science and Technology, Wuhan 430074 (China); Milyukov, Vadim [Sternberg Astronomical Institute, Lomonosov Moscow State University, Moscow 119992 (Russian Federation)
2016-08-15
In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.
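The core estimation step above, extracting the amplitude of a known-frequency signal in the presence of a slow polynomial drift, can be illustrated by including the drift terms directly in a linear least-squares fit. All numbers below are invented for illustration and are not the experiment's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2000.0, 4000)       # time record, seconds (assumed)
omega = 2.0 * np.pi * 1.0e-2             # known signal frequency (assumed)
a_true = 5.0e-6                          # true amplitude of the useful signal

# angular-velocity record: useful sinusoid + quadratic slow drift + noise
y = (a_true * np.sin(omega * t)
     + 1e-9 * t**2 + 2e-7 * t            # drift from the fiber's slow motion
     + rng.normal(0.0, 1e-6, t.size))

# fit sinusoid and drift together so the drift does not bias the amplitude
design = np.column_stack([np.sin(omega * t), np.cos(omega * t),
                          np.ones_like(t), t, t**2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
amplitude = np.hypot(coef[0], coef[1])
```

Omitting the `t` and `t**2` columns from the design matrix leaks the drift into the sinusoid coefficients, which is the ppm-level bias mechanism the paper quantifies.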
Directory of Open Access Journals (Sweden)
Haochen Ni
2014-09-01
Full Text Available The chemical industry poses a potential security risk to factory personnel and neighboring residents. In order to mitigate prospective damage, a synthetic method must be developed for emergency response. With the development of environmental numeric simulation models, model integration methods, and modern information technology, many Decision Support Systems (DSSs) have been established. However, existing systems still have limitations in terms of synthetic simulation and network interoperation. In order to resolve these limitations, the matured simulation model for chemical accidents was integrated into the WEB Geographic Information System (WEBGIS) platform. The complete workflow of the emergency response, including raw data (meteorology information and accident information) management, numeric simulation of different kinds of accidents, environmental impact assessments, and representation of the simulation results, was achieved. This allowed comprehensive and real-time simulation of acute accidents in the chemical industry. The main contributions of this paper are an organizational mechanism for the model set, based on the accident type and pollutant substance; a scheduling mechanism for the parallel processing of multiple accident types, accident substances, and simulation models; and finally a presentation method for scalar and vector data in the web browser, integrated on a WEB Geographic Information System (WEBGIS) platform. The outcomes demonstrated that this method could provide effective support for deciding emergency responses to acute chemical accidents.
Ni, Haochen; Rui, Yikang; Wang, Jiechen; Cheng, Liang
2014-09-05
The chemical industry poses a potential security risk to factory personnel and neighboring residents. In order to mitigate prospective damage, a synthetic method must be developed for emergency response. With the development of environmental numeric simulation models, model integration methods, and modern information technology, many Decision Support Systems (DSSs) have been established. However, existing systems still have limitations in terms of synthetic simulation and network interoperation. In order to resolve these limitations, the matured simulation model for chemical accidents was integrated into the WEB Geographic Information System (WEBGIS) platform. The complete workflow of the emergency response, including raw data (meteorology information and accident information) management, numeric simulation of different kinds of accidents, environmental impact assessments, and representation of the simulation results, was achieved. This allowed comprehensive and real-time simulation of acute accidents in the chemical industry. The main contributions of this paper are an organizational mechanism for the model set, based on the accident type and pollutant substance; a scheduling mechanism for the parallel processing of multiple accident types, accident substances, and simulation models; and finally a presentation method for scalar and vector data in the web browser, integrated on a WEB Geographic Information System (WEBGIS) platform. The outcomes demonstrated that this method could provide effective support for deciding emergency responses to acute chemical accidents.
Milton, Kimball A
2006-01-01
This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.
Longoni, Gianluca
In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing time require the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
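The degradation of convergence as the scattering ratio approaches unity, the situation that acceleration schemes such as DSA, TSA and the EP-SSN-based algorithms target, can be seen on the classic infinite-medium model problem, where each source-iteration sweep reduces the error by exactly the scattering ratio c (a textbook illustration, not the PENSSn algorithm):

```python
def source_iteration(c, q=1.0, tol=1e-8, max_iter=100_000):
    """Infinite-medium model problem: phi_{k+1} = c * phi_k + q, whose fixed
    point is q / (1 - c). The error shrinks by a factor c per sweep, so the
    iteration count blows up as the scattering ratio c approaches 1."""
    phi, it = 0.0, 0
    exact = q / (1.0 - c)
    while abs(phi - exact) > tol * exact and it < max_iter:
        phi = c * phi + q
        it += 1
    return it

it_low = source_iteration(0.5)    # converges in a few dozen sweeps
it_high = source_iteration(0.99)  # needs roughly two thousand sweeps
```

Synthetic acceleration replaces most of those sweeps with a cheap low-order (diffusion or simplified transport) solve, which is why the methods in this collection report speedups that grow with the problem's diffusiveness.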
Directory of Open Access Journals (Sweden)
D. Yu. Apushkin
2017-01-01
Full Text Available The article presents a study of the metabolism of new synthetic cannabinoids. Data are given on the synthetic cannabinoid 3-(naphthalen-1-yloxomethyl)-1-(5-fluoropentyl)-1H-indazole (THJ-2201) and on the products of its metabolism in laboratory rats of the Wistar line: mass spectra and chromatograms of the native substance (THJ-2201) obtained by high-performance liquid chromatography with mass-selective detection (HPLC-MS) and by gas chromatography with mass-selective detection (GC-MS). The paper presents a complex technique for the qualitative determination of the cannabimimetic THJ-2201 and methods for obtaining a metabolic profile model of the test substance, which can be useful for the qualitative detection of new psychoactive substances in biological objects for the purposes of forensic analysis. The aim of this work was to develop methods for the determination of the test substance (THJ-2201) and its metabolites in the urine of laboratory animals, as well as to study the metabolic characteristics of synthetic cannabinoids as a whole. Materials and methods. The following equipment was used for the experiment: a Shimadzu LCMS-8050 liquid chromatograph in combination with a mass-selective detector. The detector is a triple quadrupole with a dual ionization source (atmospheric-pressure chemical ionization and electrospray). The separation of the substances took place in a stainless-steel chromatographic column (150 × 3.0 mm, Luna 3u C18(2), 100 Å) with a reversed-phase sorbent. The investigations were also carried out on an Agilent 7890A gas chromatograph with an Agilent 5975C mass spectrometer and a low-polarity HP-5ms column of 28 m × 0.25 mm. The animals were mature male white laboratory rats of the Wistar line, aged 4-6 months, weighing 190-230 grams. Results and discussion. As a result of the studies, a comprehensive methodology for
Analysis of Samples Treated by Resistance Test Method Exposed to Accelerated Aging
Directory of Open Access Journals (Sweden)
Irena Bates
2015-09-01
Full Text Available Global awareness that packaging has to be fully adequate and of high quality is gradually increasing. That is why printing inks and substrates which have no detrimental effect on packed products should be considered a compulsory precondition for food and tobacco packaging. Printing inks that have been developed in recent years, especially for food and tobacco packaging, are low-odour and show low migration into the printing substrate during the drying process. Their migration into the printing substrate is within the acceptable limits and has no detrimental effect in terms of food safety. Another extremely important element of prints in high-quality food and tobacco packaging is their stability, as they have to be resistant to the liquids and chemicals which are part of the packed product. The selection of an appropriate printing substrate is also extremely relevant, since the interaction of the substrate with printing inks should have zero effect on the packed product and should not change the physical appearance of the packaging. This paper presents the results of analysing the stability of laboratory samples printed with low-migration inks, observed immediately after printing (unaged) and after two treatments of accelerated aging. The accelerated aging of prints was conducted in order to simulate conditions in which food and tobacco packaging can be found due to prolonged indoor storage. The stability of prints was analysed on the basis of optical characteristics, by observing the prints' relative reflectance curves.
Synthetic biology and occupational risk.
Howard, John; Murashov, Vladimir; Schulte, Paul
2017-03-01
Synthetic biology is an emerging interdisciplinary field of biotechnology that involves applying the principles of engineering and chemical design to biological systems. Biosafety professionals have done an excellent job in addressing research laboratory safety as synthetic biology and gene editing have emerged from the larger field of biotechnology. Despite these efforts, risks posed by synthetic biology are of increasing concern as research procedures scale up to industrial processes in the larger bioeconomy. A greater number and variety of workers will be exposed to commercial synthetic biology risks in the future, including risks to a variety of workers from the use of lentiviral vectors as gene transfer devices. There is a need to review and enhance current protection measures in the field of synthetic biology, whether in experimental laboratories where new advances are being researched, in health care settings where treatments using viral vectors as gene delivery systems are increasingly being used, or in the industrial bioeconomy. Enhanced worker protection measures should include increased injury and illness surveillance of the synthetic biology workforce; proactive risk assessment and management of synthetic biology products; research on the relative effectiveness of extrinsic and intrinsic biocontainment methods; specific safety guidance for synthetic biology industrial processes; determination of appropriate medical mitigation measures for lentiviral vector exposure incidents; and greater awareness and involvement in synthetic biology safety by the general occupational safety and health community as well as by government occupational safety and health research and regulatory agencies.
Wan, Y.
2013-06-01
Brainbow is a genetic engineering technique that randomly colorizes cells. Biological samples processed with this technique and imaged with confocal microscopy have distinctive colors for individual cells. Complex cellular structures can then be easily visualized. However, the complexity of the Brainbow technique limits its applications. In practice, most confocal microscopy scans use different fluorescence staining with typically at most three distinct cellular structures. These structures are often packed and obscure each other in rendered images, making analysis difficult. In this paper, we leverage a process known as GPU framebuffer feedback loops to synthesize Brainbow-like images. In addition, we incorporate ID shuffling and Monte-Carlo sampling into our technique, so that it can be applied to single-channel confocal microscopy data. The synthesized Brainbow images are presented to domain experts with positive feedback. A user survey demonstrates that our synthetic Brainbow technique improves visualizations of volume data with complex structures for biologists.
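A minimal sketch of the ID-shuffling idea: each segment ID is mapped to a pseudo-random color, so touching structures that share a channel become visually distinct. The function name and the use of a seeded RNG (standing in for the GPU framebuffer feedback loop) are illustrative assumptions, not the paper's implementation.

```python
import random

def brainbow_palette(label_ids, seed=0):
    """Map each segment ID to a pseudo-random RGB color (Brainbow-like).

    Changing `seed` reshuffles the ID->color assignment, the analogue of
    the paper's ID shuffling; a seeded RNG stands in for the GPU pass.
    """
    rng = random.Random(seed)
    palette = {}
    for label in sorted(set(label_ids)):
        palette[label] = (rng.randrange(256), rng.randrange(256), rng.randrange(256))
    return palette

# three distinct segment IDs -> three distinct pseudo-random colors
palette = brainbow_palette([3, 1, 1, 2])
```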
Boehm, Christian R; Pollak, Bernardo; Purswani, Nuri; Patron, Nicola; Haseloff, Jim
2017-07-05
Plants are attractive platforms for synthetic biology and metabolic engineering. Plants' modular and plastic body plans, capacity for photosynthesis, extensive secondary metabolism, and agronomic systems for large-scale production make them ideal targets for genetic reprogramming. However, efforts in this area have been constrained by slow growth, long life cycles, the requirement for specialized facilities, a paucity of efficient tools for genetic manipulation, and the complexity of multicellularity. There is a need for better experimental and theoretical frameworks to understand the way genetic networks, cellular populations, and tissue-wide physical processes interact at different scales. We highlight new approaches to the DNA-based manipulation of plants and the use of advanced quantitative imaging techniques in simple plant models such as Marchantia polymorpha. These offer the prospects of improved understanding of plant dynamics and new approaches to rational engineering of plant traits. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.
Qian, Cheng; Fan, Jiajie; Fang, Jiayi; Yu, Chaohua; Ren, Yi; Fan, Xuejun; Zhang, Guoqi
2017-10-16
To address the problem of very long test times in reliability qualification for light-emitting diode (LED) products, the accelerated degradation test with thermal overstress in a proper range is regarded as a promising and effective approach. For a comprehensive survey of the application of the step-stress accelerated degradation test (SSADT) to LEDs, the thermal, photometric, and colorimetric properties of two types of LED chip-scale packages (CSPs), i.e., 4000 K and 5000 K samples, each of which was driven at two different current levels (120 mA and 350 mA, respectively), were investigated under temperatures increasing from 55 °C to 150 °C; a systematic study of the effect of driving current on the SSADT results is also reported in this paper. During SSADT, the junction temperatures of the test samples have a positive relationship with their driving currents. However, the temperature-voltage curve, which represents the thermal resistance of the test samples, does not show significant variation as long as the driving current is no more than the sample's rated current. But when the test sample is tested under an overdrive current, its temperature-voltage curve shifts markedly to the left compared to that before SSADT. A similar overdrive-current effect on the degradation is also found in the attenuation of the Spectral Power Distributions (SPDs) of the test samples. As used in reliability qualification, SSADT gives a clear picture of the color shift and correlated color temperature (CCT) depreciation of the test samples, but not of the lumen maintenance depreciation. It is also shown that the rates of the color-shift and CCT-depreciation failures can be effectively accelerated by increasing the driving current, for instance from 120 mA to 350 mA. For these reasons, SSADT is considered a suitable accelerated test method for qualifying these two failure modes of LED CSPs.
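Thermal acceleration of this kind is commonly summarized by an Arrhenius acceleration factor between the use and stress temperatures. The sketch below is a generic illustration, not taken from the paper; the activation energy value (0.7 eV) is an assumed placeholder.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_stress)] for a
    thermally activated degradation process (temperatures given in C)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# e.g. stressing at 150 C a device used at 55 C, assumed Ea = 0.7 eV
af = arrhenius_af(55.0, 150.0, 0.7)
```

A larger assumed activation energy yields a stronger acceleration, which is why the choice of Ea dominates any lifetime extrapolation from such a test.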
International Nuclear Information System (INIS)
Cruceru, I.; Sandu, M.; Cruceru, M.
1994-01-01
A method for measuring and evaluating dose and dose-equivalent rates in mixed gamma-neutron fields is discussed in this paper. The method is based on a double-detector system consisting of an ionization chamber with components made from a plastic scintillator, coupled to one photomultiplier. Generally, the radiation fields around accelerators are complex, often consisting of many different ionizing radiations extending over a broad range of energies. This method solves two major difficulties: determination of the response functions of radiation detectors, and interpretation of measurements and determination of accuracy. Gamma/fast-neutron discrimination is achieved directly, without a pulse-shape discrimination circuit. The method is applied to mixed fields in which particle energies lie below 20 MeV, with isotropic emission (Φ = 10^4-10^11 n·s^-1). The dose-equivalent rate range explored is 0.01 mSv-0.1 Sv
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
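The contrast between classical lambda iteration and ALI can be seen in a toy two-level-atom model, S = (1 - ε)ΛS + εB. The sketch below is a schematic illustration under assumed values (a synthetic row-stochastic Λ with a large diagonal, mimicking an optically thick medium), not the authors' hybrid CL/ALI code; the ALI step preconditions the correction with the diagonal approximate operator Λ*.

```python
import numpy as np

def make_lambda(n=80, width=3.0, local=0.9):
    """Synthetic Lambda operator: strongly local (large diagonal, as in
    an optically thick medium) plus a Gaussian smoothing part; each row
    sums to 1."""
    x = np.arange(n)
    G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * width ** 2))
    G /= G.sum(axis=1, keepdims=True)
    return local * np.eye(n) + (1.0 - local) * G

def solve(L, eps=1e-3, n_iter=200, accelerated=True):
    """Iterate S = (1-eps)*Lambda(S) + eps*B with B = 1; since the rows
    of Lambda sum to 1, the exact solution is S = 1.  'accelerated'
    applies the ALI update, using diag(Lambda) as the approximate
    operator Lambda*."""
    n = L.shape[0]
    B = np.ones(n)
    S = eps * B                      # poor starting guess
    lam_star = np.diag(L)
    for _ in range(n_iter):
        residual = (1.0 - eps) * (L @ S) + eps * B - S
        if accelerated:
            S = S + residual / (1.0 - (1.0 - eps) * lam_star)
        else:
            S = S + residual         # classical (slow) lambda iteration
    return S

L = make_lambda()
err_ali = np.max(np.abs(solve(L, accelerated=True) - 1.0))
err_li = np.max(np.abs(solve(L, accelerated=False) - 1.0))
```

After the same number of iterations, the ALI error is far below the classical lambda-iteration error, which is the whole point of the approximate-operator preconditioning.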
Neutron source, linear-accelerator fuel enricher and regenerator and associated methods
Steinberg, Meyer; Powell, James R.; Takahashi, Hiroshi; Grand, Pierre; Kouts, Herbert
1982-01-01
A device for producing fissile material inside fabricated nuclear elements so that they can be used to produce power in nuclear power reactors. Fuel elements, for example of a LWR, are placed in pressure tubes in a vessel surrounding a liquid lead-bismuth flowing columnar target. A linear-accelerator proton beam enters the side of the vessel and impinges on the dispersed liquid lead-bismuth columns and produces neutrons which radiate through the surrounding pressure tube assembly or blanket containing the nuclear fuel elements. These neutrons are absorbed by the natural fertile uranium-238 elements, which are transformed to fissile plutonium-239. The fertile fuel is thus enriched in fissile material to a concentration at which it can be used in power reactors. After use in the power reactors, spent depleted fuel elements can be reinserted into the pressure tubes surrounding the target and the nuclear fuel regenerated for further burning in the power reactor.
Li, Yongxing; Smith, Richard S.
2018-03-01
We present two examples of using the contrast source inversion (CSI) method to invert synthetic radio-imaging (RIM) data and field data. The synthetic model has two isolated conductors (one perfect conductor and one moderate conductor) embedded in a layered background. After inversion, we can identify the two conductors on the inverted image. The shape of the perfect conductor is better resolved than the shape of the moderate conductor. The inverted conductivity values of the two conductors are approximately the same, which demonstrates that the conductivity values cannot be correctly interpreted from the CSI results. The boundaries and the tilts of the upper and the lower conductive layers on the background can also be inferred from the results, but the centre parts of conductive layers in the inversion results are more conductive than the parts close to the boreholes. We used the straight-ray tomographic imaging method and the CSI method to invert the RIM field data collected using the FARA system between two boreholes in a mining area in Sudbury, Canada. The RIM data include the amplitude and the phase data collected using three frequencies: 312.5 kHz, 625 kHz and 1250 kHz. The data close to the ground surface have high amplitude values and complicated phase fluctuations, which are inferred to be contaminated by the reflected or refracted electromagnetic (EM) fields from the ground surface, and are removed for all frequencies. Higher-frequency EM waves attenuate more quickly in the subsurface environment, and the locations where the measurements are dominated by noise are also removed. When the data are interpreted with the straight-ray method, the images differ substantially for different frequencies. In addition, there are some unexpected features in the images, which are difficult to interpret. Compared with the straight-ray imaging results, the inversion results with the CSI method are more consistent for different frequencies. On the basis of what we learnt
Microfluidic Technologies for Synthetic Biology
Directory of Open Access Journals (Sweden)
Sung Kuk Lee
2011-06-01
Full Text Available Microfluidic technologies have shown powerful abilities for reducing cost, time, and labor, and at the same time, for increasing accuracy, throughput, and performance in the analysis of biological and biochemical samples compared with conventional, macroscale instruments. Synthetic biology is an emerging field of biology and has attracted much attention due to its potential to create novel, functional biological parts and systems for special purposes. Since it is believed that the development of synthetic biology can be accelerated through the use of microfluidic technology, in this review we focus our discussion on the latest microfluidic technologies that can provide unprecedented means in synthetic biology for dynamic profiling of gene expression/regulation with high resolution, highly sensitive on-chip and off-chip detection of metabolites, and whole-cell analysis.
Czarski, Tomasz; Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Simrock, Stefan
2004-07-01
The cavity control system for the TESLA (TeV-Energy Superconducting Linear Accelerator) project is introduced in this paper. FPGA (Field-Programmable Gate Array) technology has been employed for a digital controller stabilizing the cavity field gradient. The cavity SIMULINK model has been applied to test the hardware controller. A step-operation method has been developed for testing the FPGA device coupled to the SIMULINK model of the analog real plant. The FPGA signal processing has been verified against the required algorithm of the reference MATLAB controller. Experimental results are presented for different cavity operational conditions.
Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae
2016-01-01
We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
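The analog mean-delay principle can be sketched numerically: for a decay convolved with an instrument response function (IRF), the mean arrival time of the signal exceeds that of the IRF by exactly the lifetime, so no iterative curve fitting is needed. This is a schematic single-pixel illustration with assumed parameters, not the authors' GPU pipeline.

```python
import numpy as np

def mean_delay(t, y):
    """Intensity-weighted mean arrival time of a waveform."""
    return np.sum(t * y) / np.sum(y)

dt = 0.01                                     # ns per sample
t = np.arange(0.0, 50.0, dt)                  # time axis (ns)
tau_true = 2.0                                # fluorescence lifetime (ns)

irf = np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)   # Gaussian IRF centred at 5 ns
decay = np.exp(-t / tau_true)                 # ideal exponential decay
signal = np.convolve(irf, decay)[: t.size]    # measured = IRF (*) decay

# AMD estimate: lifetime = mean delay of signal minus mean delay of IRF
tau_est = mean_delay(t, signal) - mean_delay(t, irf)
```

Because the estimate is two weighted sums per pixel, it maps naturally onto GPU reductions, which is what makes real-time frame rates plausible.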
Kobayashi, Hirokazu; Mitsuka, Yuko; Kitagawa, Hiroshi
2016-08-01
Hybrid materials composed of metal nanoparticles and metal-organic frameworks (MOFs) have attracted much attention in many applications, such as enhanced gas storage and catalytic, magnetic, and optical properties, because of the synergetic effects between the metal nanoparticles and MOFs. In this Forum Article, we describe our recent progress on novel synthetic methods to produce metal nanoparticles covered with a MOF (metal@MOF). We first present Pd@copper(II) 1,3,5-benzenetricarboxylate (HKUST-1) as a novel hydrogen-storage material. The HKUST-1 coating on Pd nanocrystals results in a remarkably enhanced hydrogen-storage capacity and speed in the Pd nanocrystals, originating from charge transfer from Pd nanocrystals to HKUST-1. Another material, Pd-Au@Zn(MeIM)2 (ZIF-8, where HMeIM = 2-methylimidazole), exhibits much different catalytic activity for alcohol oxidation compared with Pd-Au nanoparticles, indicating a design guideline for the development of composite catalysts with high selectivity. A composite material composed of Cu nanoparticles and Cr3F(H2O)2O{C6H3(CO2)3}2 (MIL-100-Cr) demonstrates higher catalytic activity for CO2 reduction into methanol than Cu/γ-Al2O3. We also present novel one-pot synthetic methods to produce composite materials including Pd/ZIF-8 and Ni@Ni2(dhtp) (MOF-74, where H4dhtp = 2,5-dihydroxyterephthalic acid).
YEREVAN: Acceleration workshop
International Nuclear Information System (INIS)
Anon.
1989-01-01
Sponsored by the Yerevan Physics Institute in Armenia, a Workshop on New Methods of Charged Particle Acceleration in October near the Nor Amberd Cosmic Ray Station attracted participants from most major accelerator centres in the USSR and further afield
Kurz, S
1999-01-01
In this paper a new technique for the accurate calculation of magnetic fields in the end regions of superconducting accelerator magnets is presented. The method couples Boundary Elements (BEM), which discretize the surface of the iron yoke, with Finite Elements (FEM) for the modelling of the nonlinear interior of the yoke. The BEM-FEM method is therefore specially suited for the calculation of 3-dimensional effects in the magnets, as the coils and the air regions do not have to be represented in the finite-element mesh and discretization errors only influence the calculation of the magnetization (reduced field) of the yoke. The method has recently been implemented into the CERN-ROXIE program package for the design and optimization of the LHC magnets. The field shape and multipole errors in the two-in-one LHC dipoles, with their coil ends sticking out of the common iron yoke, are presented.
Synthetic Aperture Sequential Beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
2008-01-01
A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.
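At the heart of both beamforming stages is delay-and-sum: each channel is sampled at the propagation delay to a focal point and the samples are summed, so echoes from the focus add coherently. The sketch below is a bare receive-only illustration with assumed array and pulse parameters; it omits transmit focusing, apodization, and the second-stage processing that defines SASB.

```python
import numpy as np

C = 1540.0        # speed of sound (m/s)
FS = 50e6         # sampling rate (Hz)

def pulse(t):
    """Short 5 MHz pulse with a Gaussian envelope (t in seconds)."""
    return np.exp(-0.5 * (t / 0.2e-6) ** 2) * np.cos(2.0 * np.pi * 5e6 * t)

elements = np.linspace(-5e-3, 5e-3, 16)       # element x-positions (m)
scatterer = np.array([1e-3, 20e-3])           # point target (x, z)

t = np.arange(0.0, 40e-6, 1.0 / FS)
# receive-only model: the echo reaches element i after dist_i / C
rf = np.array([pulse(t - np.hypot(x - scatterer[0], scatterer[1]) / C)
               for x in elements])

def delay_and_sum(point):
    """Focus at a point: sample each channel at its delay and sum."""
    out = 0.0
    for x, channel in zip(elements, rf):
        delay = np.hypot(x - point[0], point[1]) / C
        out += np.interp(delay, t, channel)
    return abs(out)

on_focus = delay_and_sum(scatterer)
off_focus = delay_and_sum(scatterer + np.array([3e-3, 0.0]))
```

At the true scatterer position the 16 channels add coherently; a few millimetres away the delays mismatch and the sum largely cancels, which is the contrast a beamformed image exploits.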
By how much can Residual Minimization Accelerate the Convergence of Orthogonal Residual Methods?
Czech Academy of Sciences Publication Activity Database
Gutknecht, M. H.; Rozložník, Miroslav
2001-01-01
Roč. 27, - (2001), s. 189-213 ISSN 1017-1398 R&D Projects: GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : system of linear algebraic equations * iterative method * Krylov space method * conjugate gradient method * biconjugate gradient method * CG * CGNE * CGNR * CGS * FOM * GMRes * QMR * TFQMR * residual smoothing * MR smoothing * QMR smoothing Subject RIV: BA - General Mathematics Impact factor: 0.438, year: 2001
Rothschild, Lynn J.
2017-01-01
"Are we alone?" is one of the primary questions of astrobiology, and whose answer defines our significance in the universe. Unfortunately, this quest is hindered by the fact that we have only one confirmed example of life, that of earth. While this is enormously helpful in helping to define the minimum envelope for life, it strains credulity to imagine that life, if it arose multiple times, has not taken other routes. To help fill this gap, our lab has begun using synthetic biology - the design and construction of new biological parts and systems and the redesign of existing ones for useful purposes - as an enabling technology. One theme, the "Hell Cell" project, focuses on creating artificial extremophiles in order to push the limits for Earth life, and to understand how difficult it is for life to evolve into extreme niches. In another project, we are re-evolving biotic functions using only the most thermodynamically stable amino acids in order to understand potential capabilities of an early organism with a limited repertoire of amino acids.
Accelerated gradient methods for total-variation-based CT image reconstruction
DEFF Research Database (Denmark)
Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian
2011-01-01
The total-variation reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits slow convergence. One of the accelerated gradient methods considered incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step-size selection and nonmonotone line search; the other uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. Both methods are memory efficient and equipped with a stopping criterion.
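As a minimal illustration of the Barzilai-Borwein idea (on a small least-squares problem rather than a TV-regularized CT reconstruction), a gradient method can choose its step from the last two iterates only, keeping memory use at a few vectors. This sketch omits the nonmonotone line search and stopping criterion used in the paper.

```python
import numpy as np

def bb_gradient(A, b, x0, n_iter=200):
    """Gradient descent on f(x) = 0.5*||Ax - b||^2 with Barzilai-Borwein
    (BB1) step sizes computed from successive iterate/gradient changes."""
    x = np.asarray(x0, dtype=float)
    g = A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A.T @ A)   # safe first step (Frobenius >= spectral)
    for _ in range(n_iter):
        x_new = x - step * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        if s @ y <= 1e-30:                 # curvature vanished: converged
            break
        step = (s @ s) / (s @ y)           # BB1 step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
x_bb = bb_gradient(A, b, np.zeros(10))
```

The BB step only needs the two difference vectors s and y, which is why such methods scale to image-sized unknowns where storing a Hessian is out of the question.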
Quasi-Newton methods for the acceleration of multi-physics codes
CSIR Research Space (South Africa)
Haelterman, R
2017-08-01
Full Text Available
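A minimal sketch of the quasi-Newton idea applied to the kind of root-finding problem that arises when coupling two codes: Broyden's "good" method starts from an identity Jacobian guess and applies rank-one secant updates, so no analytic derivatives of either code are needed. The toy two-equation system below is hypothetical, chosen only for illustration.

```python
import numpy as np

def broyden(F, x0, n_iter=100, tol=1e-12):
    """Broyden's 'good' method: quasi-Newton root finding with rank-one
    secant updates of an approximate Jacobian."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(x.size)                   # initial Jacobian guess
    f = F(x)
    for _ in range(n_iter):
        dx = np.linalg.solve(J, -f)      # quasi-Newton step
        x = x + dx
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        # secant update: make J map dx onto the observed change in F
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
        f = f_new
    return x

# hypothetical mildly coupled pair of equations standing in for two codes
F = lambda x: np.array([x[0] + 0.5 * np.cos(x[1]) - 1.0,
                        x[1] + 0.5 * np.sin(x[0]) - 1.0])
root = broyden(F, np.zeros(2))
```

Each "evaluation" of F here stands in for one exchange between the coupled solvers, so cutting the iteration count directly cuts the number of expensive code-to-code passes.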
International Nuclear Information System (INIS)
Dubois, J.; Calvin, Ch.; Dubois, J.; Petiton, S.
2011-01-01
This paper presents a parallelized hybrid single-vector Arnoldi algorithm for computing approximations to eigenpairs of a nonsymmetric matrix. We are interested in the use of accelerators and multi-core units to speed up the Arnoldi process. The main goal is to propose a parallel version of the Arnoldi solver which can efficiently use multiple multi-core processors or multiple graphics processing units (GPUs) in a mixed coarse- and fine-grain fashion. In the proposed algorithms, this is achieved by auto-tuning of the matrix-vector product before starting the Arnoldi eigensolver, as well as by a reorganization of the data and global communications so that communication time is reduced. The execution time, performance, and scalability are assessed with well-known dense and sparse test matrices on multiple Nehalem processors, GT200 NVIDIA Tesla GPUs, and the next-generation Fermi Tesla. With one processor, we see a performance speedup of 2 to 3x when using all the physical cores, and a total speedup of 2 to 8x when adding a GPU to this multi-core unit, and hence a speedup of 4 to 24x compared to the sequential solver. (authors)
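The Arnoldi process at the core of such a solver is short; the dominant cost is the matrix-vector product, which is exactly the kernel worth auto-tuning for multi-core CPUs or GPUs. Below is a standard serial sketch using modified Gram-Schmidt, not the authors' parallel implementation.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi process: orthonormal Krylov basis V and (m+1) x m upper
    Hessenberg H with A @ V[:, :m] = V @ H.  The eigenvalues of
    H[:m, :m] (Ritz values) approximate extreme eigenvalues of A."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]                 # the mat-vec: the kernel to tune
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
A = A + A.T        # symmetric here only to make checking easy
V, H = arnoldi(A, rng.normal(size=200), 30)
ritz_max = np.max(np.linalg.eigvals(H[:30, :30]).real)
```

Thirty Krylov steps already place the largest Ritz value close to the true extreme eigenvalue, while the orthogonalization and small dense eigenproblem stay cheap relative to the 200-dimensional mat-vecs.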
Krishnamoorthy, Ganesan; Ramamurthy, Govindaswamy; Sadulla, Sayeed; Sastry, Thotapalli Parvathaleswara; Mandal, Asit Baran
2014-09-01
Click chemistry approaches are tailored to generate molecular building blocks quickly and reliably by joining small units together selectively, covalently, stably and irreversibly. Vegetable tannins such as hydrolyzable and condensed tannins can produce rather stable radicals or inhibit the progress of radicals; they are prone to oxidation, such as photo- and auto-oxidation, and their antioxidant nature is well known. A lot remains to be done to understand the extent of the variation of leather stability, color variation (the lightening and darkening reactions of leather), and poor resistance to water uptake for prolonged periods. In the present study, we report click chemistry approaches to accelerated vegetable tanning processes based on periodate-catalyzed formation of oxidized hydrolyzable and condensed tannins for high exhaustion with improved properties. The distribution of oxidized vegetable tannin, the thermal stability in terms of shrinkage temperature (Ts) and denaturation temperature (Td), the resistance to collagenolytic activities, and the organoleptic properties of the tanned leather, as well as evaluations of its eco-friendly characteristics, were investigated. Scanning electron microscopic analysis indicates the tightness of the leather cross-section. Differential scanning calorimetric analysis shows that the Td of the leather is greater than that of vegetable-tanned leather, or equal to that of aldehyde-tanned leather. The leathers exhibited fullness, softness, good color, and good general appearance when compared to non-oxidized vegetable tannin. The developed process benefits from a significant reduction in total solids and better biodegradability in the effluent, compared to non-oxidized vegetable tannins.
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-03
Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure-component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure-component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
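The coarse-to-fine strategy can be sketched with a generic nonnegative matrix factorization as a stand-in for the constrained pure-component factorization: factor a wavelength-coarsened copy of the data first, then use the upsampled factors as the warm start at full resolution. All sizes, the downsampling factor, and the use of plain multiplicative updates are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def nmf(V, r, n_iter, W=None, H=None, seed=0):
    """Plain multiplicative-update NMF: V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) if W is None else W
    H = rng.random((r, n)) if H is None else H
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-12)
    return W, H

# synthetic bilinear data: 100 spectra x 400 wavelengths, two components
rng = np.random.default_rng(0)
V = rng.random((100, 2)) @ rng.random((2, 400))

# coarse level: average every 4 wavelength channels, factor cheaply there
V_coarse = V.reshape(100, 100, 4).mean(axis=2)
W, H_coarse = nmf(V_coarse, 2, 300)

# fine level: upsample the coarse spectra as a warm start, then refine
H0 = np.repeat(H_coarse, 4, axis=1)
W, H = nmf(V, 2, 100, W=W, H=H0)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The coarse solve runs on a matrix a quarter the size, and the refined stage needs far fewer full-resolution iterations than a cold start would, which is the source of the acceleration.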
Directory of Open Access Journals (Sweden)
Dereje Homa
2017-01-01
Full Text Available The effect of Cr(VI) pollution on the corrosion rate of corrugated iron roof samples collected from tanning industry areas was investigated through simulated laboratory exposure and spectrophotometric detection of Cr(III) deposit as a product of the reaction. The total level of Cr detected in the samples ranged from 113.892 ± 0.17 ppm to 53.05 ± 0.243 ppm and showed an increasing trend as sampling sites get closer to the tannery and in the direction of the tannery effluent stream. The laboratory exposure of a newly manufactured material to a simulated condition showed a relatively faster corrosion rate in the presence of Cr(VI) with concomitant deposition of Cr(III) under pH control. A significant (P = 0.05) increase in the corrosion rate was also recorded when exposing scratched or stress-cracked samples. A coupled redox process, where Cr(VI) is reduced to a stable, immobile, and insoluble Cr(III) accompanying corrosion of the iron, is proposed as a possible mechanism leading to the elevated deposition of the latter on the materials. In conclusion, the increased deposits of Cr detected in the corrugated iron roof samples collected from tanning industry zones suggest possible atmospheric Cr pollution as a factor in the accelerated corrosion of the materials.
International Nuclear Information System (INIS)
Lyons, R.G.
1988-01-01
An important parameter in calculating the environmental dose rate for electron spin resonance (ESR) age estimates is the relative effectiveness of alpha and gamma radiation. A small research accelerator is used as a source of alpha particles of various pre-selected energies, corresponding to those found in the environment, to determine the effectiveness of alpha radiation of different energies. Preparation of sample targets is discussed, including the use of absolute ethanol, thorough etching and deposition by centrifuge. Preliminary results show that the alpha/gamma effectiveness ratio, k, depends on the energy of the incident alpha and must therefore be expressed in terms of a reference energy. The effectiveness of an alpha particle in causing ESR damage is found to vary linearly with its range or path length, not with its energy, a fact which must be considered when calculating effective dose-rates from environmental radionuclide concentrations. Failure to do so may lead to serious systematic errors in the effective alpha contribution to environmental dose-rates and consequently in age estimates. (author)
Homa, Dereje; Haile, Ermias; Washe, Alemayehu P
2017-01-01
The effect of Cr(VI) pollution on the corrosion rate of corrugated iron roof samples collected from tanning industry areas was investigated through simulated laboratory exposure and spectrophotometric detection of Cr(III) deposit as a product of the reaction. The total level of Cr detected in the samples ranged from 53.05 ± 0.243 ppm to 113.892 ± 0.17 ppm and showed an increasing trend as sampling sites got closer to the tannery and in the direction of the tannery effluent stream. The laboratory exposure of a newly manufactured material to a simulated condition showed a relatively faster corrosion rate in the presence of Cr(VI), with concomitant deposition of Cr(III) under pH control. A significant (P = 0.05) increase in the corrosion rate was also recorded when exposing scratched or stress-cracked samples. A coupled redox process, in which Cr(VI) is reduced to stable, immobile, and insoluble Cr(III) accompanying corrosion of the iron, is proposed as a possible mechanism leading to the elevated deposition of the latter on the materials. In conclusion, the increased deposits of Cr detected in the corrugated iron roof samples collected from tanning industry zones suggest possible atmospheric Cr pollution as a factor in the accelerated corrosion of the materials.
2014-07-01
Corrosion Screening of EV31A Magnesium and Other Magnesium Alloys Using Laboratory-Based Accelerated Corrosion and Electro-chemical Methods. Placzankis, Brian E.; Joseph P...
Su, Xiaoye; Liang, Ruiting; Stolee, Jessica A
2018-06-05
Oligonucleotides are being researched and developed as potential drug candidates for the treatment of a broad spectrum of diseases. The characterization of antisense oligonucleotide (ASO) impurities caused by base mutations (e.g. deamination) which are closely related to the target ASO is a significant analytical challenge. Herein, we describe a novel one-step method, utilizing a strategy that combines fluorescence-ON detection with competitive hybridization, to achieve single base mutation quantitation in extensively modified synthetic ASOs. Given that this method is highly specific and sensitive (LoQ = 4 nM), we envision that it will find utility for screening other impurities as well as sequencing modified oligonucleotides. Copyright © 2018 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Ryu, Kyung Ha; Lee, Tae Hyun; Kim, Ji Hak; Hwang, Il Soon; Lee, Na Young; Kim, Ji Hyun; Park, Jin Ho; Sohn, Chang Ho
2010-01-01
The flow accelerated corrosion (FAC) phenomenon persistently impacts plant reliability and personnel safety. We have shown that Equipotential Switching Direct Current Potential Drop (ES-DCPD) can be employed to detect piping wall loss induced by FAC. It has been demonstrated to have sufficient sensitivity to cover both long and short lengths of piping. Based on this, new FAC screening and inspection approaches have been developed. For example, resolution of ES-DCPD can be adjusted according to its monitoring purpose. The developed method shows good integrity during long test periods. It also shows good reproducibility. The Seoul National University FAC Accelerated Simulation Loop (SFASL) has been constructed for ES-DCPD demonstration purposes. During one demonstration, the piping wall was thinned by 23.7% through FAC for a 13,000 min test period. In addition to the ES-DCPD method, ultrasonic technique (UT) has been applied to SFASL for verification while water chemistry was continually monitored and controlled using electrochemical sensors. Developed electrochemical sensors showed accurate and stable water conditions in the SFASL during the test period. The ES-DCPD results were also theoretically predicted by the Sanchez-Caldera's model. The UT, however, failed to detect thinning because of its localized characteristics. Online UT that covers only local areas cannot assure the detection of wall loss.
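The record above rests on the fact that, at constant current, the DC potential drop across a conductor rises as its cross-section thins. A minimal sketch of that relation, assuming a uniform conductor of fixed length and width (an illustrative model, not the instrument calibration used in the study):

```python
# Sketch of the resistance-thickness relation behind DC potential drop (DCPD)
# wall-loss monitoring. For a uniform conductor of fixed length and width,
# resistance (and hence potential drop at constant current) scales as 1/thickness.

def wall_loss_fraction(v_initial: float, v_current: float) -> float:
    """Fractional wall loss inferred from the rise in potential drop."""
    # V = I * rho * L / (w * t)  =>  t_current / t_initial = V_initial / V_current
    return 1.0 - v_initial / v_current

def potential_ratio(loss_fraction: float) -> float:
    """Expected V/V0 for a given fractional wall loss."""
    return 1.0 / (1.0 - loss_fraction)
```

For the 23.7% thinning reported in the demonstration, this simple model predicts a potential-drop ratio of about 1.31.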
Synthetic LDL as targeted drug delivery vehicle
Forte, Trudy M [Berkeley, CA; Nikanjam, Mina [Richmond, CA
2012-08-28
The present invention provides a synthetic LDL nanoparticle comprising a lipid moiety and a synthetic chimeric peptide so as to be capable of binding the LDL receptor. The synthetic LDL nanoparticle of the present invention is capable of incorporating and targeting therapeutics to cells expressing the LDL receptor for diseases associated with the expression of the LDL receptor such as central nervous system diseases. The invention further provides methods of using such synthetic LDL nanoparticles.
Acceleration of the AFEN method by two-node nonlinear iteration
Energy Technology Data Exchange (ETDEWEB)
Moon, Kap Suk; Cho, Nam Zin; Noh, Jae Man; Hong, Ser Gi [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
1998-12-31
A nonlinear iterative scheme developed to reduce the computing time of the AFEN method was tested and applied to two benchmark problems. The new nonlinear method for the AFEN method is based on solving two-node problems and on the use of two nonlinear correction factors at every interface, instead of the one factor of the conventional scheme. The use of two correction factors provides higher-order accurate interface fluxes as well as currents, which are used as the boundary conditions of the two-node problem. The numerical results show that this new method gives exactly the same solution as the original AFEN method while reducing the computing time significantly. 7 refs., 1 fig., 1 tab. (Author)
Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU
Directory of Open Access Journals (Sweden)
Guangyuan Kan
2016-01-01
The famous SCE-UA global optimization method, which has been widely used in the field of environmental model parameter calibration, is an effective and robust method. However, the SCE-UA method has a high computational load, which prohibits its application to high-dimensional and complex problems. In recent years, computer hardware such as multi-core CPUs and many-core GPUs has improved significantly. This much more powerful hardware and its software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we propose two parallel SCE-UA methods and implement them on an Intel multi-core CPU and an NVIDIA many-core GPU using OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted to test and compare the performance of the serial and parallel SCE-UA methods. Based on the results of the comparison, advice is given on how to properly use the parallel SCE-UA methods.
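The Griewank benchmark used in the paper, together with a population-evaluation step parallelized with Python's multiprocessing as a stand-in for the paper's OpenMP/CUDA Fortran implementations (a sketch, not the authors' code):

```python
import math
from multiprocessing import Pool

def griewank(x):
    """Griewank benchmark: global minimum f(0, ..., 0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p

def evaluate_population(population, processes=4):
    """Parallel fitness evaluation of a whole population -- the embarrassingly
    parallel part of SCE-UA that the paper offloads to OpenMP threads or CUDA."""
    with Pool(processes) as pool:
        return pool.map(griewank, population)
```

The fitness evaluations inside each complex are independent, which is why this step parallelizes well on both shared-memory CPUs and GPUs.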
Energy Technology Data Exchange (ETDEWEB)
Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Benites R, J. L., E-mail: fermineutron@yahoo.com [Centro Estatal de Cancerologia de Nayarit, Servicio de Seguridad Radiologica, Calzada de la Cruz 118 Sur, 63000 Tepic, Nayarit (Mexico)
2014-08-15
A novel procedure to measure the neutron spectrum originating in a medical linear accelerator has been developed. The method uses a passive Bonner sphere spectrometer. Its main advantage is that it requires only a single shot of the accelerator. When a Bonner sphere spectrometer is used around a linear accelerator, the machine normally has to be operated under the same conditions as many times as there are spheres in the spectrometer, an activity that consumes considerable time. The developed procedure consists of positioning all the spheres of the spectrometer at the same time and taking the readings from a single shot. With this method the photoneutron spectrum produced by a Varian iX linear accelerator at 15 MV, at 100 cm from the isocenter, was determined; from the spectrum the total fluence and the ambient dose equivalent were obtained. (Author)
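Once the spectrum is unfolded from the Bonner-sphere readings, the total fluence and ambient dose equivalent follow by folding the group fluences with fluence-to-dose conversion coefficients. A sketch of that last step (the coefficient values would come from standard tabulations; none are assumed here):

```python
def ambient_dose_equivalent(fluence_per_group, h10_coefficients):
    """Fold a group-wise neutron fluence (cm^-2) with fluence-to-ambient-dose-
    equivalent conversion coefficients h*(10) (e.g. pSv cm^2) -- the final step
    once the spectrum has been unfolded from the Bonner-sphere readings.
    Returns (total fluence, ambient dose equivalent)."""
    if len(fluence_per_group) != len(h10_coefficients):
        raise ValueError("fluence and coefficient grids must match")
    total_fluence = sum(fluence_per_group)
    h10 = sum(f * h for f, h in zip(fluence_per_group, h10_coefficients))
    return total_fluence, h10
```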
International Nuclear Information System (INIS)
Tanaka, Takayuki; Otosaka, Shigeyoshi; Togawa, Orihiko; Amano, Hikaru
2009-01-01
We developed an extraction method for accurately and reproducibly determining dissolved organic radiocarbon in seawater by ultraviolet oxidation of dissolved organic carbon and subsequent accelerator mass spectrometry. We determined the irradiation time required for oxidation of the dissolved organic carbon. By modifying the experimental apparatus, we decreased contamination by dead carbon, which came mainly from petrochemical products in the apparatus and from the incursion of carbon dioxide from the atmosphere. The modifications decreased the analytical blank level to less than 1% of sample size, a percentage that had not previously been achieved. The recovery efficiency was high, 95±1%. To confirm both the accuracy and reproducibility of the method, we tested it by analyzing an oxalic acid radiocarbon reference material and by determining the dissolved organic carbon in surface seawater samples. (author)
Chen, Qi; Chen, Quan; Luo, Xiaobing
2014-09-01
In recent years, due to the fast development of high-power light-emitting diodes (LEDs), lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been applied commonly in the LED community. In this paper, an online testing method for LED life projection under accelerated reliability testing is proposed, and a prototype was built. Optical parametric data were collected. The systematic error and the measuring uncertainty were calculated to be within 0.2% and 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered. Thus, the projection fitting accuracy can be improved (r² = 0.954) and the testing duration can be shortened.
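Life projection of the kind quantified above (r² = 0.954) typically fits an exponential lumen-decay model and extrapolates to the 70% maintenance threshold. A sketch under that assumption (a TM-21-style log-linear fit, not necessarily the authors' exact procedure):

```python
import math

def fit_exponential_decay(times, lumen_maintenance):
    """Least-squares fit of phi(t) = B * exp(-alpha * t) via log-linear
    regression -- the standard TM-21-style model for lumen maintenance."""
    n = len(times)
    ys = [math.log(v) for v in lumen_maintenance]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) / \
            sum((t - tbar) ** 2 for t in times)
    alpha = -slope
    B = math.exp(ybar + alpha * tbar)  # intercept: ln(B) = ybar + alpha * tbar
    return B, alpha

def project_l70(B, alpha):
    """Projected time at which lumen maintenance falls to 70% of initial."""
    return math.log(B / 0.70) / alpha
```

The continuous data stream from the online method feeds directly into such a fit, which is why more samples improve the projection accuracy.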
On the equivalence of LIST and DIIS methods for convergence acceleration
Energy Technology Data Exchange (ETDEWEB)
Garza, Alejandro J. [Department of Chemistry, Rice University, Houston, Texas 77251-1892 (United States); Scuseria, Gustavo E. [Department of Chemistry and Department of Physics and Astronomy, Rice University, Houston, Texas 77251-1892, USA and Chemistry Department, Faculty of Science, King Abdulaziz University, Jeddah 21589 (Saudi Arabia)
2015-04-28
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We demonstrate here the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay's DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
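Pulay's DIIS, referenced above, extrapolates by minimizing the norm of a linear combination of stored error vectors subject to the coefficients summing to one. A minimal sketch of the coefficient solve:

```python
import numpy as np

def diis_coefficients(error_vectors):
    """Solve the DIIS linear system: minimize || sum_i c_i e_i || subject to
    sum_i c_i = 1, via the standard Lagrange-multiplier formulation
    (Pulay, Chem. Phys. Lett. 73, 393 (1980)). Illustrative sketch."""
    m = len(error_vectors)
    B = np.empty((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.dot(error_vectors[i], error_vectors[j])
    B[m, :m] = B[:m, m] = -1.0   # constraint border
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    return np.linalg.solve(B, rhs)[:m]
```

The LIST variants discussed in the paper amount to this same construction with particular choices of error vector, which is the substance of the equivalence result.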
Energy Technology Data Exchange (ETDEWEB)
Hirata, Makoto [Oita University, Oita (Japan)
1999-03-05
Recently, the taste and bioactivity of a large number of oligopeptides have become clear, and the development of efficient synthetic methods has become urgent. In the conventional production process, an enzyme reaction combined with crystallization, the yield remained at about 60% because of the low solubility of the product in water, the reaction solvent; reaction inhibition of the product by the crystals had also been indicated. For the enzymatic synthesis of aspartame, a representative oligopeptide, this work aimed at establishing a new synthesis method that can improve yield and reaction rate while the immobilized enzyme is utilized continuously. In this synthetic method, the supply of an organic solvent in which the substrate is dissolved, extraction of the substrate from the organic solvent into the aqueous phase, synthesis by the immobilized enzyme in the aqueous phase, and extraction of the product aspartame from the aqueous phase into the organic solvent all proceed continuously in a single completely mixed reactor. The process governing these rates and yields was quantitatively analyzed, and material balance equations considering the mass transfer of substrate, enzyme and product and the enzyme reaction rate were derived. Using these equations, the optimum operating conditions for improving the yield and productivity of the target product were examined; the optimum feed concentration of aspartic acid, the substrate, and the optimum agitation speed were determined, and operating conditions realizing a yield of over 90% of aspartame together with improved productivity were clarified. This research thus adapts liquid-liquid two-phase partitioning to the enzymatic synthesis of aspartame and verified that productivity can be greatly improved by chemical-engineering techniques.
International Nuclear Information System (INIS)
Lee, Jaejun; Cho, Namzin
2007-01-01
Most existing methods of nuclear design analysis for pebble bed reactors (PBRs) are based on old finite difference solvers or on statistical methods, which require very long computing times. There is therefore a strong desire to make available high-fidelity coarse-mesh nodal computer codes. Recently, we extended the analytic function expansion nodal (AFEN) method, developed quite extensively in Cartesian (x,y,z) geometry and in hexagonal-z geometry, to the treatment of full three-dimensional cylindrical (r,θ,z) geometry for PBRs. The AFEN methodology in this geometry, as in hexagonal geometry, is robust, due to the unique feature of the AFEN method that it does not use transverse integration. This paper presents an acceleration scheme based on the coarse-group rebalance (CGR) concept and provides test results verifying the method and its implementation in the TOPS code. We also implemented discontinuity factors in the TOPS code and tested them on benchmark problems. The TOPS results are in excellent agreement with those of the VENTURE code, obtained using significantly less computer time.
An accelerated hologram calculation using the wavefront recording plane method and wavelet transform
Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-06-01
Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Although WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method, a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each are complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
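The second step of the WRP method is a plane-to-plane diffraction calculation from the virtual plane to the hologram plane. A band-limited angular-spectrum sketch of such a propagation (illustrative; not the WASABI implementation):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Plane-to-plane diffraction: propagate a sampled complex field by
    distance z with the angular-spectrum method. Evanescent components are
    discarded. Sketch of the kind of step the WRP method performs."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    mask = arg > 0.0                       # propagating components only
    kz = np.zeros_like(arg)
    kz[mask] = 2.0 * np.pi / wavelength * np.sqrt(arg[mask])
    H = np.where(mask, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because this step is two FFTs regardless of the number of object points, moving the point-cloud superposition onto a nearby virtual plane is what makes the WRP approach fast.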
International Nuclear Information System (INIS)
Nieves, Leidy Johana Jaramillo; Baena, Oscar Jaime Restrepo
2012-01-01
The ceramic pigment with structure Zn1-xFexCr2O4 (x = 0, 0.5, 1) was synthesized by the non-conventional methods of ultrasound-assisted coprecipitation and high-energy milling. The pigment was characterized by XRD, XRF, SEM, UV-VIS spectrophotometry and CIELab colorimetry. The aim of this work was to study these two alternatives to the traditional synthesis method, evaluating pigment properties such as structure, composition, morphology and colorimetric coordinates while varying the stoichiometry. The results showed that it is possible to obtain the desired crystalline structure at temperatures below 1000 °C in both cases, and the expected hues are obtained for each stoichiometry, which shows the advantages of using non-conventional methods to produce these pigments, since the composition and stoichiometry are controlled more tightly and the product is obtained at lower temperatures than with the traditional ceramic method.
Yu, Chaohua; Fan, Xuejun; Zhang, Guoqi
2017-01-01
To address the problem of very long test times in reliability qualification of light-emitting diode (LED) products, accelerated degradation testing with thermal overstress in a proper range is regarded as a promising and effective approach. For a comprehensive survey of the application of step-stress accelerated degradation testing (SSADT) to LEDs, the thermal, photometric, and colorimetric properties of two types of LED chip scale packages (CSPs), i.e., 4000 K and 5000 K samples, each driven at two different current levels (120 mA and 350 mA, respectively), were investigated under temperatures increasing from 55 °C to 150 °C, and a systematic study of the driving-current effect on the SSADT results is also reported in this paper. During SSADT, the junction temperatures of the test samples have a positive relationship with their driving currents. However, the temperature-voltage curve, which represents the thermal resistance of the test samples, does not vary significantly as long as the driving current is no more than the sample's rated current. When a test sample is driven at an overdrive current, however, its temperature-voltage curve shifts visibly to the left compared to that before SSADT. A similar overdrive-current effect on the degradation is also found in the attenuation of the spectral power distributions (SPDs) of the test samples. As used in reliability qualification, SSADT provides explicit scenes of color shift and correlated color temperature (CCT) depreciation of the test samples, but not of lumen-maintenance depreciation. It is also shown that the rates of the color-shift and CCT-depreciation failures can be effectively accelerated by increasing the driving current, for instance from 120 mA to 350 mA. For these reasons, SSADT is considered a suitable accelerated test method for qualifying these two failure modes of LED CSPs. PMID:29035300
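Thermal acceleration in tests like SSADT is usually modeled with the Arrhenius relation. A sketch of the acceleration factor between a use and a stress temperature (the activation energy here is an assumed input, not a value from the paper):

```python
import math

BOLTZMANN_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_k, t_stress_k, ea_ev):
    """Arrhenius acceleration factor between a use temperature and a stress
    temperature (both in kelvin), the standard model underlying thermally
    accelerated degradation tests. ea_ev is an assumed activation energy."""
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))
```

In a step-stress test the stress temperature is raised in stages, so each stage carries its own acceleration factor relative to the use condition.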
Garion, C
2009-01-01
Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing a large amount of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. For this it is useful to apply an accurate three-dimensional method such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo is used, implemented in the Finite Element code CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in CAD software (CATIA in our case). The interface between the two programs for exporting the geometry from CATIA to CASTEM is given. The algorithm for particle tracking in collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron- and/or ion-stimulated desorption can all be generated as well as differ...
Directory of Open Access Journals (Sweden)
Sebastian Gim
2012-11-01
Continued device scaling into the nanometer region and operating frequencies well into the multi-GHz region have given rise to new effects that previously had negligible impact but now present greater challenges and unprecedented complexity in designing successful mixed-signal silicon. The Chameleon-RF project was conceived to address these challenges. Creative use of domain decomposition, multigrid techniques or reduced order modeling (ROM) techniques can be selectively applied at all levels of the process to efficiently prune degrees of freedom (DoFs). However, the simulation of complex systems within a reasonable amount of time remains a computational challenge. This paper presents work on the incorporation of GPGPU technology to accelerate Krylov-based algorithms used for compact modeling of on-chip passive integrated structures within the workflow of the Chameleon-RF project. Based upon the insight gained from this work, a novel GPGPU-accelerated algorithm was developed for the Krylov ROM (kROM) methods and is described here for the benefit of the wider community.
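The Krylov-based compact modeling mentioned above rests on the Arnoldi process, which builds an orthonormal basis of the Krylov subspace together with a small Hessenberg matrix onto which the large system is projected. A plain-NumPy sketch of that kernel (a CPU stand-in for the GPGPU version):

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: build an orthonormal Krylov basis Q (n x (k+1)) and
    Hessenberg matrix H ((k+1) x k) satisfying A Q[:, :k] = Q H -- the core
    kernel of Krylov reduced-order modeling. Assumes no breakdown occurs."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

The dominant cost is the matrix-vector products and inner products, which map naturally onto GPU BLAS kernels, the pattern exploited by the paper.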
International Nuclear Information System (INIS)
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-01-01
An electron-photon coupled Monte Carlo code, ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), is being developed at Rensselaer Polytechnic Institute as a software test-bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs (Graphics Processing Units). This paper presents the preliminary code development and testing on radiation dose related problems. In particular, the paper discusses electron transport simulations using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering and pair production were simulated. Voxelized geometry is supported. A serial CPU (Central Processing Unit) code was first written in C++ and then ported to the GPU using the CUDA C 5.0 standard. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. The code was tested on a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case run with the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy. (authors)
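At the core of any such Monte Carlo photon transport code is free-flight sampling: path lengths drawn from an exponential distribution governed by the total attenuation coefficient. A sketch of this elementary kernel (illustrative; ARCHER's GPU kernels add the full interaction physics on top of it):

```python
import math
import random

def sample_free_paths(mu, n, seed=1):
    """Sample photon free-flight distances s = -ln(xi) / mu from the
    exponential attenuation law, the elementary kernel inside every Monte
    Carlo photon transport loop. The mean free path is 1/mu."""
    rng = random.Random(seed)
    # use 1 - random() so the argument of log lies in (0, 1]
    return [-math.log(1.0 - rng.random()) / mu for _ in range(n)]
```

Because each history is independent, millions of such samples map trivially onto GPU threads, which is the source of the speedups quoted above.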
International Nuclear Information System (INIS)
Bannouf, S.
2013-01-01
The goal of this thesis was, initially, to evaluate phased-array methods for ultrasonic Non Destructive Testing (NDT) in order to propose optimizations or to develop new alternative methods. In particular, this work deals with the detection of defects in parts with complex geometries and/or materials. The TFM (Total Focusing Method) algorithm provides high-resolution images and several representations of the same defect thanks to different reconstruction modes. These properties have been exploited judiciously in order to propose an adaptive imaging method in an immersion configuration. We showed that TFM imaging can be used to characterize defects more precisely. However, this method presents two major drawbacks: the large amount of data to be processed and a low signal-to-noise ratio (SNR), especially in noisy materials. We developed solutions to these two problems. To overcome the limitation caused by the large number of signals to be processed, we propose an algorithm that defines the sparse array to activate. As for the low SNR, it can now be improved by the use of virtual sources and a new filtering method based on the DORT method (Decomposition of the Time Reversal Operator). (author) [fr
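The TFM algorithm named above is a delay-and-sum over all transmit-receive pairs of the full matrix capture (FMC). A minimal sketch, assuming elements on z = 0 and uniformly sampled A-scans (illustrative; real implementations add apodization, envelope detection and interpolation refinements):

```python
import numpy as np

def tfm_image(fmc, element_x, grid_x, grid_z, c, fs):
    """Total Focusing Method: for every image pixel, sum the FMC signal of
    every transmit-receive pair at the round-trip delay to that pixel.
    fmc[tx, rx, t] are A-scans sampled at rate fs; elements lie on z = 0."""
    n_el, _, n_t = fmc.shape
    t_axis = np.arange(n_t) / fs
    image = np.zeros((len(grid_x), len(grid_z)))
    for ix, x in enumerate(grid_x):
        for iz, z in enumerate(grid_z):
            # one-way travel times from each element to the pixel
            d = np.sqrt((element_x - x) ** 2 + z ** 2) / c
            acc = 0.0
            for tx in range(n_el):
                for rx in range(n_el):
                    acc += np.interp(d[tx] + d[rx], t_axis, fmc[tx, rx])
            image[ix, iz] = abs(acc)
    return image
```

The n_el² pairs per pixel are exactly the "large amount of data" the thesis addresses with sparse-array selection.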
Energy Technology Data Exchange (ETDEWEB)
Spellings, Matthew [Chemical Engineering, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Marson, Ryan L. [Materials Science & Engineering, University of Michigan, 2300 Hayward St., Ann Arbor, MI 48109 (United States); Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Anderson, Joshua A. [Chemical Engineering, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Chemical Engineering, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States); Materials Science & Engineering, University of Michigan, 2300 Hayward St., Ann Arbor, MI 48109 (United States); Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109 (United States)
2017-04-01
Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
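The conservative WCA interaction mentioned above is the Lennard-Jones potential cut at its minimum and shifted upward so that it is purely repulsive and continuous at the cutoff. A sketch of the scalar pair potential:

```python
def wca_potential(r, epsilon=1.0, sigma=1.0):
    """Weeks-Chandler-Andersen potential: Lennard-Jones truncated at its
    minimum r_c = 2**(1/6) * sigma and shifted up by epsilon, leaving a
    purely repulsive interaction that vanishes continuously at r_c."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6) + epsilon
```

In the faceted-particle method the same form acts between surface features of the polyhedra, giving hard-particle-like excluded volume with forces smooth enough for molecular-dynamics integration.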
An accelerated hybrid TLM-IE method for the investigation of shielding effectiveness
Directory of Open Access Journals (Sweden)
N. Fichtner
2010-09-01
A hybrid numerical technique combining time-domain integral equations (TD-IE) with the transmission line matrix (TLM) method is presented for the efficient modeling of transient wave phenomena. This hybrid method allows the full-wave modeling of circuits in the time domain as well as the electromagnetic coupling of remote TLM subdomains using integral equations (IE). Because integral equations are used, the space between the TLM subdomains is not discretized and consequently does not contribute to the computational effort. The cost of evaluating the time-domain integral equations is further reduced using a suitable plane-wave representation of the source terms. The hybrid TD-IE/TLM method is applied to the computation of the shielding effectiveness (SE) of metallic enclosures.
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt a three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 GPGPU boards. The performance of multiple GPUs is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and slightly (approximately 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs improves significantly as the number of GPUs increases.
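The FDTD update kernels being parallelized above are simple leapfrog stencils. A toy one-dimensional version in normalized units with the "magic time step" c·dt = dx, which is exactly dispersionless in 1D (illustrative only, far simpler than the 3D human-model code):

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=100):
    """Minimal 1D FDTD (Yee leapfrog) in normalized units with c*dt = dx.
    Launches a purely right-travelling Gaussian pulse and returns Ez after
    n_steps; with the magic time step the pulse advances one cell per step."""
    i = np.arange(n_cells)
    ez = np.exp(-((i - 50) / 8.0) ** 2)   # Gaussian pulse peaked at cell 50
    hy = np.zeros(n_cells)
    hy[:-1] = -ez[1:]                     # pair E and H for +x propagation
    for _ in range(n_steps):
        hy[:-1] += ez[1:] - ez[:-1]       # H update: dHy/dt = dEz/dx
        ez[1:] += hy[1:] - hy[:-1]        # E update: dEz/dt = dHy/dx
    return ez
```

Each cell update touches only nearest neighbors, which is why the 3D version decomposes cleanly across multiple GPUs with only thin boundary-slab exchanges.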
International Nuclear Information System (INIS)
Beer, H.-F.; Haeberli, M.; Ametamey, S.; Schubiger, P.A.
1995-01-01
The compound Ro 19-6327, N-(2-aminoethyl)-5-chloropyridine-2-carboxamide, is known to inhibit reversibly and site-specifically the enzyme monoamine oxidase B (MAO-B). The 123I-labelled iodo-analogue N-(2-aminoethyl)-5-iodopyridine-2-carboxamide (Ro 43-0463) was investigated successfully in human volunteers by means of SPET (Single Photon Emission Tomography). We therefore developed the synthesis and radiolabelling of the corresponding fluoro-analogue N-(2-aminoethyl)-5-fluoropyridine-2-carboxamide with 18F in order to carry out PET (Positron Emission Tomography) investigations of MAO-B related neuropsychiatric diseases. For this purpose two synthetic approaches, leading to the electrophilic and the nucleophilic methods of 18F radiolabelling, were undertaken. The nucleophilic approach appeared to be superior when factors such as precursor synthesis, beam time, specific activity and radiochemical purity of the product are considered. (author)
Application of accelerated simulation method on NPN bipolar transistors of different technology
International Nuclear Information System (INIS)
Fei Wuxiong; Zheng Yuzhan; Wang Yiyuan; Chen Rui; Li Maoshun; Lan Bo; Cui Jiangwei; Zhao Yun; Lu Wu; Ren Diyuan; Wang Zhikuan; Yang Yonghui
2010-01-01
The ionizing radiation response of NPN bipolar transistors from six different processes was investigated with different irradiation methods. The results show that enhanced low dose rate sensitivity clearly exists in NPN bipolar transistors of all six processes. According to the experiments, the damage from irradiation with step-decreasing temperature is clearly greater than that from irradiation at a high dose rate. This irradiation method can therefore simulate, and conservatively evaluate, low dose rate damage, which is of great significance for radiation effects research on bipolar devices. Finally, the mechanisms of the experimental phenomena were analyzed. (authors)
Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.
Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J
2005-01-01
A Monte Carlo study of the Feynman-α method has been done with a simple code simulating the multiplication chain, confined to the pertinent time-dependent phenomena. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) is discussed. It has been demonstrated that the method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
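The Feynman-α statistic studied above is the variance-to-mean ratio of gated detector counts minus one; it vanishes for an uncorrelated (Poisson) source and is positive when correlated fission chains are present in a multiplying system. A sketch of the estimator (not the paper's full chain simulation):

```python
import numpy as np

def feynman_y(counts):
    """Feynman variance-to-mean statistic Y = var/mean - 1 for detector
    counts binned in equal time gates. Y = 0 for a pure Poisson source;
    fission-chain correlations in a multiplying system give Y > 0."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0
```

Dead time removes closely spaced correlated counts, which is why the paper finds the estimate so sensitive to that detector parameter.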
2015-03-01
...change in concentration of the bridgehead hydrogens at 7.78 ppm over time was plotted and the resulting data shown in the table as well as the graph... While the chamber study indicates a zero-order reaction for the decomposition of 1 produced via the ARL method, the hydrolysis study conducted by the Navy... yielded a sigmoidal concentration curve. Part of the discrepancy may be due to the method used by the Navy for their humid-air hydrolysis, in which a...
Ha, Sanghyun; Park, Junshin; You, Donghyun
2017-11-01
Utility of the computational power of modern Graphics Processing Units (GPUs) is examined for solutions of the incompressible Navier-Stokes equations integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered a good candidate for evaluating the potential of GPUs for solving the Navier-Stokes equations with non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) method and the Fourier-transform-based direct solution method used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink, supported by the Pascal architecture. Performance of the present method is evaluated on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
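Each implicit sweep of an ADI scheme reduces to many independent tridiagonal solves, one per grid line, which is what makes the method serial and bandwidth-bound within a system yet easy to batch across lines. A minimal CPU sketch of the underlying Thomas algorithm (illustrative; not the paper's GPU implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), and right-hand side d,
    via O(n) forward elimination and back substitution."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):              # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The recurrence is inherently sequential along one line, but thousands of lines can be solved concurrently, which is the parallelism a GPU ADI implementation exploits.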
Verschoor, M.; Jalba, A.C.
2012-01-01
Elastically deformable models have found applications in various areas ranging from mechanical sciences and engineering to computer graphics. The method of Finite Elements has been the tool of choice for solving the underlying PDE, when accuracy and stability of the computations are more important
ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON
Directory of Open Access Journals (Sweden)
Liliana liliana
2007-01-01
Full Text Available In computer graphics applications, ray tracing is a method often used to produce realistic images. Ray tracing models not only local illumination but also global illumination. Local illumination accounts only for ambient, diffuse, and specular effects, i.e. light arriving directly from the lamp(s), whereas global illumination also accounts for mirroring and transparency, i.e. light arriving from other objects. The objects usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is that it yields varied, interesting, and realistic shapes. A mesh contains many primitive objects such as triangles or (rarely) squares. A problem with mesh-object modeling is the long rendering time, because every ray must be checked against a large number of the mesh's triangles; rays spawned from other objects increase the number of rays traced further, lengthening the rendering time. To solve this problem, this research develops new methods, angle comparison and distance comparison, that speed up the rendering of mesh objects by reducing the number of ray checks: rays predicted not to intersect the mesh are never tested for intersection against it. With angle comparison, using a small comparison angle makes the rendering process fast, but this has a disadvantage: if the triangles are large, some triangles are corrupted (wrongly culled). If the comparison angle is larger, mesh corruption can be avoided, but the rendering time becomes longer than without comparison. With distance comparison, the rendering time is lower than without comparison, and no triangles are corrupted.
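The angle-comparison idea can be sketched as a culling predicate: before any per-triangle intersection tests, compare the angle between the ray direction and the direction to the mesh's centroid, and skip the expensive tests when it exceeds a threshold. A hypothetical sketch (the function name and the use of a single per-mesh centroid are illustrative assumptions, not the paper's code):

```python
import numpy as np

def might_hit(ray_origin, ray_dir, centroid, max_angle_rad):
    """Angle-comparison culling: return False when the angle between the
    ray and the direction to the mesh centroid exceeds max_angle_rad, so
    the per-triangle intersection tests can be skipped entirely."""
    to_c = centroid - ray_origin
    norm = np.linalg.norm(to_c)
    if norm == 0.0:
        return True  # ray starts at the centroid; cannot cull
    cos_angle = np.dot(ray_dir, to_c) / (np.linalg.norm(ray_dir) * norm)
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= max_angle_rad
```

A smaller `max_angle_rad` culls more aggressively and renders faster, at the risk of wrongly culling large triangles that extend far from the centroid, the corruption the abstract describes.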
International Nuclear Information System (INIS)
Larcher, A.M.; Bonet Duran, S.M.
1998-01-01
Full text: Medical electron accelerators operating above 10 MeV produce radiation beams that are contaminated with neutrons. Therefore, shielding design for high-energy accelerator rooms must consider the neutron component of the radiation field. In this paper a semiempirical method is presented to calculate doses due to neutrons and capture gamma rays inside the room and the maze. The calculation method is based on knowledge of the neutron yield Q (neutrons/Gy of photons at isocenter) and the average energy Eo (MeV) of the primary neutron beam. The method constitutes an appropriate tool for the shielding evaluation of such facilities. The accuracy of the method has been checked against data obtained from the literature, and an excellent correlation between the calculations and the measured values was achieved. In addition, the method has been used in the verification of experimental data corresponding to a 15 MeV linear accelerator installed in the country, with similar results. (author) [es
International Nuclear Information System (INIS)
Turner, N.L.
1982-01-01
A particle beam accelerator is described which has several electrodes that are selectively short circuited together synchronously with changes in the magnitude of a DC voltage applied to the accelerator. By this method a substantially constant voltage gradient is maintained along the length of the unshortened electrodes despite variations in the energy applied to the beam by the accelerator. The invention has particular application to accelerating ion beams that are implanted into semiconductor wafers. (U.K.)
Can Accelerators Accelerate Learning?
International Nuclear Information System (INIS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-01-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools such as the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing the students close to modern laboratory techniques.
Accelerated convergence of the steepest-descent method for magnetohydrodynamic equilibria
International Nuclear Information System (INIS)
Handy, C.R.; Hirshman, S.P.
1984-06-01
Iterative schemes based on the method of steepest descent have recently been used to obtain magnetohydrodynamic (MHD) equilibria. Such schemes generate asymptotic geometric vector sequences whose convergence rate can be improved through the use of the epsilon-algorithm. The application of this nonlinear recursive technique to stiff systems is discussed. In principle, the epsilon-algorithm is capable of yielding quadratic convergence and therefore represents an attractive alternative to other quadratic convergence schemes requiring Jacobian matrix inversion. Because the damped MHD equations have eigenvalues with negative real parts (in the neighborhood of a stable equilibrium), the epsilon-algorithm will generally be stable. Concern for residual monotonic sequences leads to consideration of alternative methods for implementing the algorithm
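Wynn's epsilon-algorithm builds a table via the recursion eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)}), whose even-order columns are accelerated estimates of the sequence limit; on a geometric sequence the second column is already exact, which is the sense in which it can deliver quadratic convergence without Jacobian inversion. A hedged scalar sketch (illustrative; the MHD application would apply this componentwise to the iterate vectors):

```python
import numpy as np

def wynn_epsilon(partial_sums):
    """Wynn's epsilon-algorithm: build the epsilon table column by
    column from a sequence of iterates/partial sums.  Even-order columns
    are convergence-accelerated limit estimates; stop early if two
    neighbouring entries coincide (the estimate is then exact)."""
    prev = np.zeros(len(partial_sums))       # epsilon_{-1} column (all zero)
    curr = np.asarray(partial_sums, float)   # epsilon_0 column (the sequence)
    best, k = curr[-1], 0
    while len(curr) > 1:
        diff = curr[1:] - curr[:-1]
        if np.any(diff == 0.0):              # converged: avoid division by zero
            break
        prev, curr = curr, prev[1:len(curr)] + 1.0 / diff
        k += 1
        if k % 2 == 0:                       # even columns estimate the limit
            best = curr[-1]
    return best
```

For the partial sums of a geometric series the algorithm returns the limit exactly; for a slowly converging alternating series it gains many digits over the raw sequence.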
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin; Steier, Peter
2015-01-01
An automated analytical method implemented in a flow injection (FI) system was developed for rapid determination of 236U in 10 L seawater samples. 238U was used as a chemical yield tracer for the whole procedure, in which extraction chromatography (UTEVA) was exploited to purify uranium, after...... experimental parameters affecting the analytical effectiveness were investigated and optimized in order to achieve high chemical yields and simple and rapid analysis as well as low procedure background. Besides, the operational conditions for the target preparation prior to the AMS measurement were optimized......, on the basis of studying the coprecipitation behavior of uranium with iron hydroxide. The analytical results indicate that the developed method is simple and robust, providing satisfactory chemical yields (80−100%) and high analysis speed (4 h/sample), which could be an appealing alternative to conventional...
Wang, Yi
2017-09-12
Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in a dual-continuum porous media. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed considering the mass conservation of the whole matrix fracture system. Computation can be accelerated as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).
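The POD step underlying such reduced-order models can be sketched with an SVD of a snapshot matrix; a generic illustration (not the authors' dual-continuum formulation, and note that this naive projection is exactly what does not enforce mass conservation of the matrix-fracture system):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition of an (n_dof x n_snapshots)
    matrix: the first r left singular vectors form the energy-optimal
    rank-r basis.  Returns (basis, singular_values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

def reconstruct(snapshots, basis):
    """Galerkin-style projection onto the POD basis and back: the
    low-dimensional coefficients are basis.T @ snapshots."""
    coeffs = basis.T @ snapshots
    return basis @ coeffs
```

When the snapshot matrix is (numerically) low-rank, a small basis reproduces it almost exactly, which is the source of the large speedups quoted in the abstract.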
International Nuclear Information System (INIS)
Deprun, C.; Gauvin, H.; Le Beyec, Y.
1976-01-01
The He-jet transport systems for use with the heavy-ion accelerator ALICE at Orsay are described in detail. The dependence of the gas flow rate on various parameters (pressure, length and diameter of the capillary) was investigated. Off-line measurements were carried out with a ²⁵²Cf source. Effect on collection yield of UV radiation and additives to the helium was checked. The influence of the distance between the target and the capillary on the collection efficiency for short-lived isotopes of Yb was investigated. Some other useful details are also discussed (collector, volume of the reaction chamber, etc.). Various applications of the He-jet method are described: particle identification, angular distribution of reaction products, mass identification of radioactive nuclei. (Auth.)
Wang, Yi; Sun, Shuyu; Yu, Bo
2017-01-01
Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in a dual-continuum porous media. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed considering the mass conservation of the whole matrix fracture system. Computation can be accelerated as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).
Trost, Barry M; Masters, James T
2016-04-21
The metal-catalyzed coupling of alkynes is a powerful method for the preparation of 1,3-enynes, compounds that are of broad interest in organic synthesis. Numerous strategies have been developed for the homo- and cross coupling of alkynes to enynes via transition metal catalysis. In such reactions, a major issue is the control of regio-, stereo-, and, where applicable, chemoselectivity. Herein, we highlight prominent methods for the selective synthesis of these valuable compounds. Further, we illustrate the utility of these processes through specific examples of their application in carbocycle, heterocycle, and natural product syntheses.
Directory of Open Access Journals (Sweden)
Alejandro C Crespo
Full Text Available Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
Convergence acceleration of Navier-Stokes equation using adaptive wavelet method
International Nuclear Information System (INIS)
Kang, Hyung Min; Ghafoor, Imran; Lee, Do Hyung
2010-01-01
An efficient adaptive wavelet method is proposed for the enhancement of computational efficiency of the Navier-Stokes equations. The method is based on sparse point representation (SPR), which uses the wavelet decomposition and thresholding to obtain a sparsely distributed dataset. The threshold mechanism is modified in order to maintain the spatial accuracy of a conventional Navier-Stokes solver by adapting the threshold value to the order of spatial truncation error. The computational grid can be dynamically adapted to a transient solution to reflect local changes in the solution. The flux evaluation is then carried out only at the points of the adapted dataset, which reduces the computational effort and memory requirements. A stabilization technique is also implemented to avoid the additional numerical errors introduced by the threshold procedure. The numerical results of the adaptive wavelet method are compared with a conventional solver to validate the enhancement in computational efficiency of Navier-Stokes equations without the degeneration of the numerical accuracy of a conventional solver
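The sparse point representation idea, wavelet decomposition followed by hard thresholding of small detail coefficients, can be sketched with a single-level Haar transform (illustrative only; the paper's SPR uses a higher-order scheme and adapts the threshold to the solver's spatial truncation error):

```python
import numpy as np

def haar_forward(u):
    """One level of the Haar wavelet transform of an even-length signal:
    returns (averages, details), one entry per pair of neighbouring points."""
    u = np.asarray(u, float)
    avg = (u[0::2] + u[1::2]) / 2.0
    det = (u[0::2] - u[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward."""
    u = np.empty(2 * len(avg))
    u[0::2] = avg + det
    u[1::2] = avg - det
    return u

def sparse_points(u, eps):
    """Mask of 'active' coefficient pairs whose detail magnitude exceeds
    eps: flux evaluation would be restricted to these locations."""
    _, det = haar_forward(u)
    return np.abs(det) > eps
```

A smooth solution region produces only small details and is dropped from the adapted dataset, while a sharp feature keeps its neighbourhood active, which is how the grid follows a transient solution.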
Synthetic Biology and Personalized Medicine
Jain, K.K.
2013-01-01
Synthetic biology, the application of synthetic chemistry to biology, is a broad term covering the engineering of biological systems with structures and functions not found in nature, in order to process information, manipulate chemicals, produce energy, maintain the cell environment, and enhance human health. Synthetic biology devices not only improve our understanding of disease mechanisms, but also provide novel diagnostic tools. Methods based on synthetic biology enable the design of novel strategies for the treatment of cancer, immune diseases, metabolic disorders, and infectious diseases, as well as the production of cheap drugs. The potential of a synthetic genome, using an expanded genetic code designed for specific drug synthesis as well as delivery and activation of the drug in vivo by a pathological signal, was already pointed out during a lecture delivered at Kuwait University in 2005. Of the two approaches to synthetic biology, top-down and bottom-up, the latter is more relevant to the development of personalized medicines, as it provides more flexibility in constructing a partially synthetic cell from basic building blocks for a desired task. PMID:22907209
Methods and models for accelerating dynamic simulation of fluid power circuits
Energy Technology Data Exchange (ETDEWEB)
Aaman, R.
2011-07-01
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is a basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations, and efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers, and the numerical problems introduce noise into the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. Numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suited to stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach, aiming at practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. Two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas to which alternative methods for modelling and numerical simulation
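The first stiffness mechanism named above, the turbulent orifice law whose slope blows up as the pressure drop approaches zero, is commonly tamed by replacing the square-root law with a linear law below a small transition pressure. A hedged sketch of this generic regularization (parameter values and the linear blend are illustrative assumptions, not necessarily the dissertation's formulation):

```python
import math

def orifice_flow(dp, cd=0.6, area=1e-5, rho=850.0):
    """Turbulent orifice law Q = cd*A*sqrt(2*|dp|/rho)*sign(dp).
    Its slope dQ/d(dp) tends to infinity as dp -> 0, which makes the
    circuit ODEs stiff near zero pressure drop."""
    return math.copysign(cd * area * math.sqrt(2.0 * abs(dp) / rho), dp)

def orifice_flow_smooth(dp, dp_lin=1e4, cd=0.6, area=1e-5, rho=850.0):
    """Same law, but linear below the transition pressure dp_lin (Pa),
    so the slope stays bounded near dp = 0 and the stiffness is reduced."""
    if abs(dp) < dp_lin:
        return orifice_flow(dp_lin, cd, area, rho) * dp / dp_lin
    return orifice_flow(dp, cd, area, rho)
```

The regularized law matches the turbulent law at the transition pressure and is continuous through zero, which is what allows explicit fixed-step integrators to take reasonable steps.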
Energy Technology Data Exchange (ETDEWEB)
Hassanzadeh, M. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Feghhi, S.A.H., E-mail: a_feghhi@sbu.ac.ir [Department of Radiation Application, Shahid Beheshti University, G.C., Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Khalafi, H. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of)
2013-09-15
Highlights: • All reactor kinetic parameters are importance-weighted quantities. • The MCNIC method has been developed for calculating neutron importance in ADSRs. • Mean generation time has been calculated in spallation-driven systems. -- Abstract: The difference between the non-weighted neutron generation time (Λ) and the weighted one (Λ†) can be quite significant depending on the type of the system. In the present work, we focus on developing the MCNIC method for calculation of the neutron importance (Φ†) and the importance-weighted neutron generation time (Λ†) in accelerator-driven systems (ADS). Two hypothetical spallation-source-driven systems, bare and graphite-reflected, have been considered as illustrative examples. The results of this method have been compared with those obtained by the MCNPX code. According to the results, the relative difference between Λ and Λ† is within 36% and 24,840% for the bare and reflected examples, respectively. The difference is quite significant in reflected systems and increases with reflector thickness. In conclusion, this method may give a better estimation of kinetic parameters than the MCNPX code because it uses the neutron importance function.
Li, Si-Wen; Li, Jia-Rong; Jin, Qi-Ping; Yang, Zhi; Zhang, Rong-Lan; Gao, Rui-Min; Zhao, Jian-She
2017-09-05
Two different synthetic methods, the direct method and the substitution method, were used to synthesize Cs-POM@MOF-199@MCM-41 (Cs-PMM), in which the heteropolyacid modified with a cesium salt is encapsulated into the pores of the MOF/MCM-41 mixture. The structural properties of the as-prepared catalysts were characterized using various analytical techniques: powder X-ray diffraction, FT-IR, SEM, TEM, XPS and BET, confirming that the Cs-POM active species retained its Keggin structure after immobilization. The Cs-PMM prepared by the substitution method exhibited superior catalytic performance for oxidative desulfurization of dibenzothiophene in the presence of oxygen. Under optimal conditions, the DBT conversion rate reached 99.6%, and the catalyst could be recycled 10 times without significant loss of catalytic activity, which is mainly attributed to the strong fixing effect of the mixed porous materials, which slows leaching of the active heteropolyacid species. Copyright © 2017. Published by Elsevier B.V.
Directory of Open Access Journals (Sweden)
Hashem Kamali
2017-03-01
Full Text Available Introduction: The codling moth, Cydia pomonella, is one of the key pests of apple in Khorasan Razavi province, annually causing severe fruit damage to the apple crop. Several methods are used worldwide to control this pest and prevent injury to apple products. The most successful and widespread use of pheromones has been in monitoring traps. Mating disruption with pheromones works by placing enough artificial pheromone sources in an area that males have little chance of locating females; mating, and the laying of viable eggs, are then reduced below the point at which economically significant damage occurs. Large-scale mating disruption implementation trials have yielded significant reductions in pesticide use while keeping crop damage levels acceptably low. Mating disruption works best when large areas are treated with pheromones. Currently, chemical control with insecticides is the most common method of controlling the pest. In this research, with the goal of eliminating the codling moth while minimizing the use of chemical compounds on apple fruits, the ability of artificial sex pheromones to control the codling moth by mating disruption was investigated and compared with chemical control in Ghochan County, Khorasan-e-Razavi Province, Iran, in 2013. Materials and Methods: The experiments were conducted in 20 replicates based on a CRB design. The treatments were mating disruption with pheromone dispensers, mating disruption + chemical control, and chemical control based on the local method. Adult moths were sampled using Delta traps with sticky inserts. 1000 pheromone dispensers (two-strand wire rods) per hectare were installed on the trees, before the first appearance of male moths. 20 to 25 days after each pest generation, 25 fruits were randomly selected from different directions and heights and recorded as healthy or infested. Results and Discussion: The mating disruption
Synchronization method of digital pulse power supply for heavy ions accelerator in Lanzhou
International Nuclear Information System (INIS)
Wang Rongkun; Zhao Jiang; Wu Fengjun; Zhang Huajian; Chen Youxin; Huang Yuzhen; Gao Daqing; Zhou Zhongzu; Yan Huaihai; Yan Hongbin
2013-01-01
The performance of a synchrotron depends on its synchronization. A synchronization method for the digital pulse power supplies of the Heavy Ion Research Facility in Lanzhou-Cooler Storage Ring (HIRFL-CSR) is presented in detail; it is a system-on-a-programmable-chip (SOPC) design based on optical fiber and a custom optical component. The digital power supply was tested and the current waveforms of the pulse mode are given. The results show that all targets meet the design requirements. (authors)
Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.
2018-05-01
Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline
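The SIMD strategy of marching many independent reactors through one instruction stream has a simple NumPy analogue: carry a batch dimension through every Runge-Kutta stage. A toy sketch with a batch of linear decay ODEs (illustrative only; the paper's solvers are a Rosenbrock and a Runge-Kutta method written in OpenCL):

```python
import numpy as np

def rk4_batch(f, y0, t0, t1, nsteps):
    """Classic fixed-step RK4 applied to a whole batch of independent ODE
    systems at once: y0 has shape (batch, ndof) and every stage is
    vectorized over the batch, the NumPy analogue of one SIMD lane per ODE."""
    h = (t1 - t0) / nsteps
    y = np.array(y0, dtype=float)
    t = t0
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

In the real SIMD setting, divergence between lanes (e.g. different step-size control per reactor) erodes this efficiency, which is why the benchmarks above compare SIMD and SIMT layouts separately.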
Noise method for monitoring the sub-criticality in accelerator driven systems
International Nuclear Information System (INIS)
Rugama, Y.; Munoz-Cobo, J.L.; Valentine, T.E.; Mihalczo, J.T.; Perez, R.B.; Perez-Navarro, A.
2001-01-01
In this paper, an absolute measurement technique for sub-criticality determination is presented. The development of ADS requires methods to monitor and control the sub-criticality of such systems without interfering with their normal operation. The method is based on the stochastic neutron and photon transport theory developed by Munoz-Cobo et al., which can be implemented in presently available neutron transport codes. As a by-product of the methodology, a monitoring measurement technique has been developed and verified using two coupled Monte Carlo programs. The spallation collisions and the high-energy transport are simulated with LAHET. The transport of neutrons with energies below 20 MeV, and the estimation of the count statistics for neutron and/or gamma-ray counters in fissile systems, are simulated with MCNP-DSP. The kinetic parameters and the k_eff value of the sub-critical system can be obtained through analysis of the detector counts. (author)
Papaya Tree Detection with UAV Images Using a GPU-Accelerated Scale-Space Filtering Method
Directory of Open Access Journals (Sweden)
Hao Jiang
2017-07-01
Full Text Available The use of unmanned aerial vehicles (UAVs) can allow individual tree detection for forest inventories in a cost-effective way. The scale-space filtering (SSF) algorithm is commonly used and has the capability of detecting trees of different crown sizes. In this study, we made two improvements with regard to the existing method and implementations. First, we incorporated SSF with a Lab color transformation to reduce over-detection problems associated with the original luminance image. Second, we ported four of the most time-consuming processes to the graphics processing unit (GPU) to improve computational efficiency. The proposed method was implemented using PyCUDA, which enabled access to NVIDIA's compute unified device architecture (CUDA) through high-level scripting of the Python language. Our experiments were conducted using two images captured by the DJI Phantom 3 Professional and a recent NVIDIA GTX 1080 GPU. The resulting accuracy was high, with an F-measure larger than 0.94. The speedup achieved by our parallel implementation was 44.77 and 28.54 for the first and second test image, respectively. For each 4000 × 3000 image, the total runtime was less than 1 s, which was sufficient for real-time performance and interactive application.
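The core of scale-space filtering, detecting blobs (tree crowns) as local maxima of scale-normalized Laplacian-of-Gaussian responses across several candidate crown sizes, can be sketched on the CPU (a generic SSF illustration; the paper's contribution is moving these filters to the GPU and working on Lab color channels rather than luminance):

```python
import numpy as np
from scipy import ndimage

def detect_blobs(img, sigmas, thresh):
    """Scale-normalized LoG response at each sigma; a pixel is a
    detection when it is a local maximum over space AND scale and its
    response exceeds thresh.  Returns rows of (scale_index, y, x)."""
    stack = np.stack([-s**2 * ndimage.gaussian_laplace(img, s) for s in sigmas])
    maxed = ndimage.maximum_filter(stack, size=(3, 3, 3))
    peaks = (stack == maxed) & (stack > thresh)
    return np.argwhere(peaks)
```

Because each scale's filtering is independent and pixel-parallel, this is exactly the kind of workload that maps well onto CUDA.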
International Nuclear Information System (INIS)
Chaudhri, M. Anwar
2006-01-01
Full text: Various nuclear analytical methods have been developed and applied to determine the elemental composition of calcified tissues (teeth and bones). Fluorine was determined by prompt gamma activation analysis through the ¹⁹F(p,αγ)¹⁶O reaction. Carbon was measured by activation analysis with He-3 ions, and the technique of Proton-Induced X-ray Emission (PIXE) was applied to simultaneously determine Ca, P, and trace elements in well-documented teeth. Dental hard tissues: enamel, dentine, cementum, and their junctions, as well as different parts of the same tissue, were examined separately. Furthermore, using a Proton Microprobe, we measured the surface distribution of F and other elements on and around carious lesions on the enamel. The depth profiles of F, and other elements, were also measured right up to the amelodentin junction. (author)
Ghasemi, F.; Abbasi Davani, F.
2015-06-01
Due to Iran's growing need for accelerators in various applications, the IPM electron linac project has been defined. This accelerator is a 15 MeV S-band traveling-wave accelerator being designed and constructed around the klystron that has been built in Iran. Based on the design, the operating mode is π/2 and the accelerating chamber consists of two 60 cm long constant-impedance tubes and a 30 cm long buncher. Amongst the available construction methods, the shrinking method was selected for construction of the IPM electron linac tube because its procedure is simple and it requires no large vacuum or hydrogen furnaces. In this paper, different aspects of this method are investigated. According to the calculations, the linear ratio of frequency change to radius change is 787.8 MHz/cm, and the maximum deformation at the tube wall where the disks and the tube make contact is 2.7 μm. Applying the shrinking method to the construction of 8- and 24-cavity tubes results in satisfactory frequency and quality factor. The average deviations of the cavity frequencies from the design values for the 8- and 24-cavity tubes are 0.68 MHz and 1.8 MHz before tuning, and 0.2 MHz and 0.4 MHz after tuning, respectively. The accelerating tubes, buncher, and high-power couplers of the IPM electron linac were constructed using the shrinking method.
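The quoted sensitivity makes the tuning arithmetic concrete: with the linear ratio df/dr of about 787.8 MHz/cm from the abstract, a measured cavity-frequency deviation maps directly to the size of the radius correction required. A small worked sketch (the helper name is illustrative; the ratio and the pre-tune deviations are the abstract's own numbers):

```python
# Linear sensitivity of cavity frequency to radius change (from the abstract).
DF_DR_MHZ_PER_CM = 787.8

def radius_correction_um(freq_error_mhz):
    """Magnitude of the radius change (micrometres) corresponding to a
    cavity-frequency deviation, under the linear sensitivity above."""
    return abs(freq_error_mhz) / DF_DR_MHZ_PER_CM * 1e4  # cm -> micrometres

# Pre-tune average deviations quoted for the 8- and 24-cavity tubes:
corr_8 = radius_correction_um(0.68)   # about 8.6 micrometres
corr_24 = radius_correction_um(1.8)   # about 22.8 micrometres
```

Corrections of roughly ten micrometres explain why the 2.7 μm wall deformation introduced by shrink-fitting is tolerable: it sits well inside the range the tuners can compensate.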
Directory of Open Access Journals (Sweden)
Xin Chen
2015-09-01
Full Text Available High-speed and precision positioning are fundamental requirements for the high-acceleration, low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamic-response equations of high-acceleration mechanisms, which reveal that stiffness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural-optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism based on the optimal spatial and temporal distribution of inertial energy. For structural optimization, we first reviewed the commonly used equivalent static loads method (ESLM) for flexible multibody dynamic optimization, and then selected a modified ESLM for the optimal spatial distribution of inertial energy; hence, not only the stiffness but also the inertia and frequency of the real modal shapes are considered. For velocity planning, we developed a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verified on a high-acceleration die bonder. The amplitude of residual vibration was decreased by more than 20% via structural optimization, and the positioning time was reduced by more than 40% via asymmetric variable velocity planning. This method provides effective theoretical support for the precision positioning of high-acceleration, low-load mechanisms.