The adjoint variational nodal method
International Nuclear Information System (INIS)
Laurin-Kovitz, K.; Lewis, E.E.
1993-01-01
The widespread use of nodal methods for reactor core calculations in both diffusion and transport approximations has created a demand for the corresponding adjoint solutions as a prerequisite for performing perturbation calculations. With some computational methods, however, the solution of the adjoint problem presents a difficulty: the physical adjoint obtained by discretizing the adjoint equation is not the same as the mathematical adjoint obtained by taking the transpose of the coefficient matrix that results from the discretization of the forward equation. This difficulty arises, in particular, when interface current nodal methods based on quasi-one-dimensional solutions of the diffusion or transport equation are employed. The mathematical adjoint is needed to perform perturbation calculations; the utilization of existing nodal computational algorithms, however, requires the physical adjoint. As a result, similarity transforms or related techniques must be utilized to relate physical and mathematical adjoints. Thus far, such techniques have been developed only for diffusion theory.
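The distinction between the mathematical adjoint (the transpose of the forward coefficient matrix) and a physical adjoint related to it by a similarity transform can be sketched with a small linear-algebra example. The matrices and the diagonal transform S below are arbitrary stand-ins for illustration, not the actual nodal operators of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # forward coefficient matrix (stand-in)
S = np.diag(rng.uniform(1.0, 2.0, size=n))    # hypothetical similarity transform

b = rng.normal(size=n)        # forward source
c = rng.normal(size=n)        # detector response functional

u = np.linalg.solve(A, b)         # forward solution
lam = np.linalg.solve(A.T, c)     # mathematical adjoint: transpose of A

# A "physical" adjoint operator related to A.T by a similarity transform
B = np.linalg.inv(S) @ A.T @ S
mu = np.linalg.solve(B, np.linalg.inv(S) @ c)

# The mathematical adjoint is recovered from the physical one via S
lam_rec = S @ mu
assert np.allclose(lam, lam_rec)

# Both yield the same value of the response functional: c.u == lam.b
assert np.isclose(c @ u, lam @ b)
```

The point of the transform is exactly the duality check at the end: perturbation calculations need the mathematical adjoint, but an existing solver may only produce the similarity-transformed version.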
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
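The economy claimed here — the full parameter gradient from at most one extra simulation — can be illustrated on a generic discretized linear system rather than an actual FDTD model. All matrices below are arbitrary stand-ins; the structure (one forward solve, one adjoint solve, then one cheap inner product per parameter) is what carries over:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4                       # state size, number of parameters
A0 = rng.normal(size=(n, n)) + n * np.eye(n)
E = rng.normal(size=(m, n, n))    # dA/dp_i for each parameter (stand-ins)
b = rng.normal(size=n)
c = rng.normal(size=n)            # response J = c.u
p = 0.1 * rng.normal(size=m)

def response(p):
    A = A0 + np.tensordot(p, E, axes=1)
    return c @ np.linalg.solve(A, b)

# One forward + one adjoint solve gives the whole gradient:
# dJ/dp_i = -lam . (dA/dp_i) u  with  A.T lam = c
A = A0 + np.tensordot(p, E, axes=1)
u = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, c)             # the single extra "simulation"
grad_adj = np.array([-lam @ (E[i] @ u) for i in range(m)])

# Central finite differences need 2*m forward solves instead
h = 1e-6
grad_fd = np.array([
    (response(p + h * np.eye(m)[i]) - response(p - h * np.eye(m)[i])) / (2 * h)
    for i in range(m)
])
assert np.allclose(grad_adj, grad_fd, atol=1e-5)
```

The cost contrast in the abstract is visible directly: the adjoint path uses two solves total, the central-difference path uses two per parameter.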
Multigroup adjoint transport solution using the method of cyclic characteristics
International Nuclear Information System (INIS)
Assawaroongruengchot, M.; Marleau, G.
2005-01-01
An adjoint transport solution algorithm based on the method of cyclic characteristics (MOCC) is developed for heterogeneous two-dimensional geometries. The adjoint characteristics equation associated with a cyclic tracking line is formulated, and a closed form for the adjoint angular flux can then be determined. Acceleration is implemented using group-reduction and group-splitting techniques. To demonstrate the efficacy of the algorithm, calculations are performed on the 17 × 17 PWR and Watanabe-Maynard benchmark problems. Comparisons of the adjoint flux and k-eff results obtained by the MOCC and collision probability (CP) methods are performed. The mathematical relationship between the pseudo-adjoint flux obtained by the CP method and the adjoint flux by the MOCC method is presented. It appears that the pseudo-adjoint flux from the CP method is equivalent to the adjoint flux from the MOCC method, and that the MOCC method requires less computing time than the CP method for a single adjoint flux calculation.
The discrete adjoint method for parameter identification in multibody system dynamics.
Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin
2018-01-01
The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
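The defining property of the discrete adjoint — differentiating the time-stepping scheme itself, so the gradient of the discretized cost is exact to machine precision — can be shown on a toy scalar model. The ODE, parameter values, and cost below are invented for illustration, not taken from the paper:

```python
import numpy as np

theta, h, N, x0, y = 0.7, 0.01, 200, 1.0, 0.3

def forward(theta):
    """Explicit Euler for x' = -theta * x; returns the whole trajectory."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] - h * theta * x[k]
    return x

def cost(theta):
    return (forward(theta)[-1] - y) ** 2

# Discrete adjoint: step the adjoint of x_{k+1} = (1 - h*theta) x_k backwards.
x = forward(theta)
lam = 2.0 * (x[-1] - y)            # terminal condition dJ/dx_N
grad = 0.0
for k in reversed(range(N)):
    grad += lam * (-h * x[k])      # dJ/dtheta contribution of step k
    lam = lam * (1.0 - h * theta)  # adjoint recursion lam_k = (1 - h*theta) lam_{k+1}

# The gradient matches a finite difference of the *discretized* cost
eps = 1e-7
fd = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
assert abs(grad - fd) < 1e-6
```

Because the adjoint recursion is built from the integrator, no separate adjoint ODE needs to be discretized, which is exactly the accuracy argument made in the abstract.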
Use of adjoint methods in the probabilistic finite element approach to fracture mechanics
Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted
1988-01-01
The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the derivatives of the objective functions with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element that has the near-crack-tip singular strain field embedded. Since only two objective functions (i.e., the mode I and mode II stress intensity factors) are needed for PFM, the adjoint method is well suited.
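The cost crossover described here — one solve per objective for the adjoint method versus one solve per random variable for the direct method — can be counted explicitly on a generic discretized system with two objectives and many random variables. The matrices are arbitrary stand-ins, not a fracture-mechanics model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_rand, n_dof = 20, 8                 # many random variables, small model
K0 = rng.normal(size=(n_dof, n_dof)) + n_dof * np.eye(n_dof)
dK = 0.1 * rng.normal(size=(n_rand, n_dof, n_dof))   # dK/db_i (stand-ins)
f = rng.normal(size=n_dof)
C = rng.normal(size=(2, n_dof))       # two objective extractors (e.g. K_I, K_II)

u = np.linalg.solve(K0, f)

# Adjoint method: one solve per objective -> 2 solves total
Lam = np.linalg.solve(K0.T, C.T).T                   # rows are adjoint vectors
grad_adj = -np.einsum('oj,ijk,k->oi', Lam, dK, u)    # dJ_o/db_i

# Direct method: one solve per random variable -> 20 solves total
grad_dir = np.empty((2, n_rand))
for i in range(n_rand):
    du = np.linalg.solve(K0, -dK[i] @ u)
    grad_dir[:, i] = C @ du

assert np.allclose(grad_adj, grad_dir)
```

With 2 objectives and 20 random variables the adjoint route costs 2 extra solves against 20, which is the "well suited" claim in concrete terms.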
Four-Dimensional Data Assimilation Using the Adjoint Method
Bao, Jian-Wen
The calculus of variations is used to confirm that variational four-dimensional data assimilation (FDDA) using the adjoint method can be implemented when the numerical model equations have a finite number of first-order discontinuous points. These points represent the on/off switches associated with physical processes, for which the Jacobian matrix of the model equation does not exist. Numerical evidence suggests that, in some situations when the adjoint method is used for FDDA, the temperature field retrieved using horizontal wind data is numerically not unique. A physical interpretation of this type of non-uniqueness of the retrieval is proposed in terms of energetics. The adjoint equations of a numerical model can also be used for model-parameter estimation. A general computational procedure is developed to determine the size and distribution of any internal model parameter. The procedure is then applied to a one-dimensional shallow-fluid model in the context of analysis-nudging FDDA: the weighting coefficients used by the Newtonian nudging technique are determined. The sensitivity of these nudging coefficients to the optimal objectives and constraints is investigated. Experiments of FDDA using the adjoint method are conducted using the dry version of the hydrostatic Penn State/NCAR mesoscale model (MM4) and its adjoint. The minimization procedure converges and the initialization experiment is successful. Temperature-retrieval experiments involving an assimilation of the horizontal wind are also carried out using the adjoint of MM4.
Construction of adjoint operators for coupled equations depending on different variables
International Nuclear Information System (INIS)
Hoogenboom, J.E.
1986-01-01
A procedure is described for the construction of the adjoint operator matrix in the case of coupled equations defining quantities that depend on different sets of variables. This case is not properly treated in the literature. From this procedure a simple rule can be deduced for the construction of such adjoint operator matrices.
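The structure of the rule can be sketched in finite dimensions: when the two coupled quantities live in different spaces with different inner products (here, weighted dot products standing in for integrals over different variable sets), the adjoint of the block operator matrix is the transposed matrix of the adjoints of the individual entries. All matrices and weights below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 4, 6                         # the two quantities live in different spaces
A = rng.normal(size=(n1, n1)); B = rng.normal(size=(n1, n2))
C = rng.normal(size=(n2, n1)); D = rng.normal(size=(n2, n2))

# Weighted inner products standing in for integrals over different variable sets
W1 = np.diag(rng.uniform(0.5, 2.0, n1))
W2 = np.diag(rng.uniform(0.5, 2.0, n2))

def inner(x1, x2, y1, y2):
    return x1 @ W1 @ y1 + x2 @ W2 @ y2

# Adjoints of the individual entries with respect to the weighted inner products
A_adj = np.linalg.inv(W1) @ A.T @ W1
B_adj = np.linalg.inv(W2) @ B.T @ W1
C_adj = np.linalg.inv(W1) @ C.T @ W2
D_adj = np.linalg.inv(W2) @ D.T @ W2

u, p = rng.normal(size=n1), rng.normal(size=n2)
v, q = rng.normal(size=n1), rng.normal(size=n2)

Lx  = (A @ u + B @ p, C @ u + D @ p)
# Rule: place the entry-wise adjoints in *transposed* positions
Lty = (A_adj @ v + C_adj @ q, B_adj @ v + D_adj @ q)

# Duality check: <L x, y> == <x, L† y> in the combined inner product
assert np.isclose(inner(*Lx, v, q), inner(u, p, *Lty))
```

The off-diagonal blocks are the subtle part: the adjoint of a map from space 1 into space 2 is a map back from space 2 into space 1, which is why the entries swap positions.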
The Adjoint Method for the Inverse Problem of Option Pricing
Directory of Open Access Journals (Sweden)
Shou-Lei Wang
2014-01-01
The estimation of implied volatility is a typical PDE inverse problem. In this paper, we propose the TV-L1 model for identifying the implied volatility. The optimal volatility function is found by minimizing a cost functional measuring the discrepancy. The gradient is computed via the adjoint method, which provides an exact value of the gradient needed for the minimization procedure. We use the limited-memory quasi-Newton algorithm (L-BFGS) to find the optimal volatility function, and numerical examples show the effectiveness of the presented method.
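The overall loop — adjoint-computed exact gradient fed to L-BFGS — can be sketched on a much simpler inverse problem than the TV-L1 volatility model: identify a few parameters of a linear system from synthetic data. The model, parameter count, and data are invented for illustration; only the gradient-plus-L-BFGS pattern mirrors the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, m = 10, 3                           # state dimension, parameters to identify
A0 = n * np.eye(n)
E = 0.2 * rng.normal(size=(m, n, n))   # parameter-to-operator sensitivities
b = rng.normal(size=n)
p_true = rng.normal(size=m)
d = np.linalg.solve(A0 + np.tensordot(p_true, E, axes=1), b)  # synthetic data

def cost_and_grad(p):
    """Misfit J = 0.5 ||u(p) - d||^2 and its exact adjoint gradient."""
    A = A0 + np.tensordot(p, E, axes=1)
    u = np.linalg.solve(A, b)
    r = u - d
    lam = np.linalg.solve(A.T, r)                  # adjoint solve
    grad = np.array([-lam @ (E[i] @ u) for i in range(m)])
    return 0.5 * r @ r, grad

res = minimize(cost_and_grad, np.zeros(m), jac=True, method='L-BFGS-B',
               options={'gtol': 1e-12, 'ftol': 1e-15})
assert res.fun < 1e-6
assert np.allclose(res.x, p_true, atol=1e-2)
```

Passing `jac=True` tells SciPy the objective returns both value and gradient, so each L-BFGS iteration costs one forward and one adjoint solve regardless of the number of parameters.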
Solving the multigroup adjoint transport equations using the method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M.; Marleau, G. [Ecole Polytechnique de Montreal, Inst. de genie nucleaire, Montreal, Quebec (Canada)]. E-mail: monchai.assawar@polymtl.ca
2005-07-01
An adjoint transport solution algorithm based on the method of cyclic characteristics (MOCC) is developed for heterogeneous 2D geometries. The adjoint characteristics equation associated with a cyclic tracking line is formulated, and a closed form for the adjoint angular flux can then be determined. Acceleration is implemented using group-reduction and group-splitting techniques. To demonstrate the efficacy of the algorithm, calculations are performed on the 37-pin CANDU cell and on the Watanabe-Maynard benchmark problem. Comparisons of the adjoint flux and k-eff results obtained by the MOCC and collision probability (CP) methods are performed. The mathematical relationship between the pseudo-adjoint flux obtained by the CP method and the adjoint flux by the MOCC method is presented. (author)
Passive control of thermoacoustic oscillations with adjoint methods
Aguilar, Jose; Juniper, Matthew
2017-11-01
Strict pollutant regulations are driving gas turbine manufacturers to develop devices that operate under lean premixed conditions, which produce less NOx but encourage thermoacoustic oscillations. These are a form of unstable combustion that arise due to the coupling between the acoustic field and the fluctuating heat release in a combustion chamber. In such devices, in which safety is paramount, thermoacoustic oscillations must be eliminated passively, rather than through feedback control. The ideal way to eliminate thermoacoustic oscillations is by subtly changing the shape of the device. To achieve this, one must calculate the sensitivity of each unstable thermoacoustic mode to every geometric parameter. This is prohibitively expensive with standard methods, but is relatively cheap with adjoint methods. In this study we first present low-order network models as a tool to model and study the thermoacoustic behaviour of combustion chambers. Then we compute the continuous adjoint equations and the sensitivities to relevant parameters. With this, we run an optimization routine that modifies the parameters in order to stabilize all the resonant modes of a laboratory combustor rig.
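The cheap sensitivity computation claimed here rests on a standard result: the first-order sensitivity of an eigenvalue to an operator perturbation is a single inner product of the direct (right) and adjoint (left) eigenvectors, with no extra eigensolves per parameter. A generic-matrix sketch, with the operator and its parameter derivative as random stand-ins rather than an actual thermoacoustic network model:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(5)
n = 6
A = rng.normal(size=(n, n))           # stand-in for a linearized operator
dA = rng.normal(size=(n, n))          # dA/dp for one hypothetical parameter

w, vl, vr = eig(A, left=True, right=True)
k = np.argmax(w.real)                 # "most unstable" mode
x = vr[:, k]                          # direct (right) eigenvector
y = vl[:, k]                          # adjoint (left) eigenvector

# First-order eigenvalue sensitivity: one inner product per parameter
dlam = (y.conj() @ dA @ x) / (y.conj() @ x)

# Verify against a finite difference on the eigenvalue itself
h = 1e-7
w2 = np.linalg.eigvals(A + h * dA)
lam_fd = w2[np.argmin(np.abs(w2 - w[k]))]
assert np.isclose(dlam, (lam_fd - w[k]) / h, atol=1e-4)
```

With the adjoint eigenvector in hand, the sensitivity to every geometric parameter is one inner product each, which is what makes shape optimization of all resonant modes affordable.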
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.
Sensitivity kernels for viscoelastic loading based on adjoint methods
Al-Attar, David; Tromp, Jeroen
2014-01-01
Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,\mathrm{d}V + \int_{t_0}^{t_1}\!\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,\mathrm{d}S\,\mathrm{d}t$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The `viscosity
Memory-efficient calculations of adjoint-weighted tallies by the Monte Carlo Wielandt method
International Nuclear Information System (INIS)
Choi, Sung Hoon; Shim, Hyung Jin
2016-01-01
Highlights: • The MC Wielandt method is applied to reduce memory for the adjoint estimation. • The adjoint-weighted kinetics parameters are estimated in the MC Wielandt calculations. • The MC S/U analyses are conducted in the MC Wielandt calculations. - Abstract: The current Monte Carlo (MC) adjoint-weighted tally techniques based on the iterated fission probability (IFP) concept require an amount of memory proportional to the number of adjoint-weighted tallies and the number of histories per cycle, in order to store history-wise tally estimates during the convergence of the adjoint flux. In particular, the conventional MC adjoint-weighted perturbation (AWP) calculations for nuclear data sensitivity and uncertainty (S/U) analysis suffer from huge memory consumption in realizing the IFP concept. In order to reduce the memory requirement drastically, we present a new adjoint estimation method in which the memory usage is independent of the number of histories per cycle, by applying the IFP concept within MC Wielandt calculations. The new algorithms for the adjoint-weighted kinetics parameter estimation and the AWP calculations in the MC Wielandt method are implemented in the Seoul National University MC code McCARD, and their validity is demonstrated on critical facility problems. From the comparison of the nuclear data S/U analyses, it is demonstrated that the memory required to store the sensitivity estimates in the proposed method becomes negligibly small.
Numerical solution of the multigroup, two-dimensional adjoint equation with the finite element method
International Nuclear Information System (INIS)
Poursalehi, N.; Khalafi, H.; Shahriari, M.; Minoochehr
2008-01-01
The adjoint equation is used for perturbation theory in nuclear reactor design. For the numerical solution of the adjoint equation, two methods are usually applied: finite element and finite difference procedures. The finite element procedure is usually chosen for solving the adjoint equation because it is more adaptable to a variety of geometries. In this article, the Galerkin finite element method is discussed. This method is applied to the numerical solution of the multigroup, multiregion, two-dimensional (X, Y) adjoint equation. A typical reactor geometry is partitioned with triangular meshes, and the boundary condition for the adjoint flux is taken to be zero. Finally, for a case with defined parameters, the finite element code was applied and the results were compared with the CITATION code.
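The essential mechanics — assemble a Galerkin system, impose zero adjoint flux on the boundary, and use the adjoint flux as an importance function — can be sketched in 1D with linear elements rather than the paper's 2D triangular meshes. The one-group cross sections and source/detector regions below are invented for illustration:

```python
import numpy as np

# 1D, one-group diffusion sketch: -D phi'' + siga phi = S on (0, L),
# phi(0) = phi(L) = 0, linear (P1) Galerkin elements.
L, n_el = 10.0, 50
D, siga = 1.2, 0.05
h = L / n_el
n = n_el - 1                                   # interior nodes only (zero BC)

# Assemble stiffness (D grad-grad) + reaction (siga mass) element-by-element
K = np.zeros((n, n))
for e in range(n_el):
    ke = D / h * np.array([[1, -1], [-1, 1]]) \
         + siga * h / 6 * np.array([[2, 1], [1, 2]])
    for a_ in range(2):
        for b_ in range(2):
            i, j = e - 1 + a_, e - 1 + b_
            if 0 <= i < n and 0 <= j < n:      # skip boundary nodes
                K[i, j] += ke[a_, b_]

x = np.linspace(h, L - h, n)
S = np.where((x > 2) & (x < 4), 1.0, 0.0)      # forward source region
Sd = np.where((x > 6) & (x < 8), 1.0, 0.0)     # detector cross section

M = h * np.eye(n)                              # lumped mass for load vectors
phi = np.linalg.solve(K, M @ S)                # forward flux
phi_adj = np.linalg.solve(K.T, M @ Sd)         # adjoint flux (source = detector)

# Duality: detector reading of the forward flux == <phi_adj, S>
assert np.isclose(Sd @ M @ phi, phi_adj @ M @ S)
```

In one-group diffusion the assembled matrix is symmetric, so forward and adjoint solves use the same operator; the duality identity at the end is what perturbation theory builds on.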
Introduction to Adjoint Models
Errico, Ronald M.
2015-01-01
In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
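The basic derivation chain described in this lecture — nonlinear model, tangent linear model (TLM) as its Jacobian, adjoint as the TLM transpose — is conventionally verified with the dot-product test. A minimal sketch with an invented three-variable nonlinear map (not any particular forecast model):

```python
import numpy as np

def model(x):
    """A toy nonlinear model step (invented for illustration)."""
    return np.array([x[0] + 0.1 * (x[1] - x[0]),
                     x[1] + 0.1 * (x[0] * x[2] - x[1]),
                     x[2] + 0.1 * (x[0] * x[1] - x[2])])

def jacobian(x):
    return np.array([[0.9,        0.1,        0.0],
                     [0.1 * x[2], 0.9,        0.1 * x[0]],
                     [0.1 * x[1], 0.1 * x[0], 0.9]])

def tlm(x, dx):
    return jacobian(x) @ dx        # tangent linear model: M' dx

def adjoint(x, y):
    return jacobian(x).T @ y       # adjoint model: M'^T y

rng = np.random.default_rng(6)
x, dx, y = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# TLM correctness: compare against a small finite perturbation
eps = 1e-6
assert np.allclose((model(x + eps * dx) - model(x)) / eps, tlm(x, dx), atol=1e-4)

# Dot-product test: <M'dx, y> == <dx, M'^T y> to machine precision
assert np.isclose(tlm(x, dx) @ y, dx @ adjoint(x, y))
```

The second assertion is the identity that defines the adjoint; running it at many random points is the standard acceptance test for hand-coded or automatically differentiated adjoint models.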
International Nuclear Information System (INIS)
Cacuci, D.G.
1984-07-01
This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single-phase and two-phase flow occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations.
International Nuclear Information System (INIS)
Martin, William G.K.; Hasekamp, Otto P.
2018-01-01
Highlights: • We demonstrate adjoint methods for atmospheric remote sensing in a two-dimensional setting. • Searchlight functions are used to handle the singularity of measurement response functions. • Adjoint methods require two radiative transfer calculations to evaluate the measurement misfit function and its derivatives with respect to all unknown parameters. • Synthetic retrieval studies show the scalability of adjoint methods to problems with thousands of measurements and unknown parameters. • Adjoint methods and the searchlight function technique are generalizable to 3D remote sensing. - Abstract: In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions.
A practical discrete-adjoint method for high-fidelity compressible turbulence simulations
International Nuclear Information System (INIS)
Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.
2015-01-01
Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that
Energy Technology Data Exchange (ETDEWEB)
Mansur, Ralph S.; Moura, Carlos A., E-mail: ralph@ime.uerj.br, E-mail: demoura@ime.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Departamento de Engenharia Mecanica; Barros, Ricardo C., E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Departamento de Modelagem Computacional
2017-07-01
Presented here is an application of the Response Matrix (RM) method for adjoint discrete ordinates (S_N) problems in slab geometry applied to energy-dependent source-detector problems. The adjoint RM method is free from spatial truncation errors, as it generates numerical results for the adjoint angular fluxes in multilayer slabs that agree with the numerical values obtained from the analytical solution of the energy multigroup adjoint S_N equations. Numerical results are given for two typical source-detector problems to illustrate the accuracy and the efficiency of the RM computer code. (author)
Martin, William G. K.; Hasekamp, Otto P.
2018-01-01
In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote sensing.
Subramanian, Ramanathan Vishnampet Ganapathi
Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme
Directory of Open Access Journals (Sweden)
Yonghan Choi
2014-01-01
An adjoint sensitivity-based data assimilation (ASDA) method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS) related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS) type for the earlier period, and back-building (BB) type for the later period. In the ASDA method, an adjoint model is run backwards with the forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var) method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
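The "optimal scaling factor" step can be made concrete in the special case of a linear observation operator, where minimising the observational cost along a fixed sensitivity direction has a closed form. All quantities below are random stand-ins (the sensitivity vector s is just an arbitrary direction, not an actual adjoint output):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 12, 5                       # state size, number of observations
H = rng.normal(size=(m, n))        # linear observation operator (stand-in)
xb = rng.normal(size=n)            # first guess (background)
y = rng.normal(size=m)             # observations
s = rng.normal(size=n)             # adjoint-derived sensitivity direction (stand-in)

def cost(alpha):
    """Observational cost of the scaled update xb + alpha * s."""
    r = H @ (xb + alpha * s) - y
    return 0.5 * r @ r

# Minimising 0.5 ||H(xb + alpha s) - y||^2 over alpha gives
#   alpha* = (Hs) . (y - H xb) / ||Hs||^2
Hs = H @ s
alpha_opt = Hs @ (y - H @ xb) / (Hs @ Hs)

# The closed-form alpha beats nearby scalings
assert cost(alpha_opt) <= cost(alpha_opt + 0.01)
assert cost(alpha_opt) <= cost(alpha_opt - 0.01)
```

This one-dimensional minimisation is far cheaper than a full 4D-Var inner loop, which is the cost advantage the abstract reports.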
DEFF Research Database (Denmark)
Pingen, Georg; Evgrafov, Anton; Maute, Kurt
2009-01-01
We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, reducing the residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of the residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
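The Lyapunov-equation shortcut mentioned above can be sketched for a linear system x' = Ax: the integrated quadratic index J = ∫₀^∞ xᵀQx dt equals x₀ᵀPx₀, where P solves AᵀP + PA + Q = 0. A minimal illustration (toy 2x2 system, not the paper's structural model), cross-checked by direct time integration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -1.0]])  # Hurwitz: the free response x' = A x decays
Q = np.eye(2)                             # weight matrix of the quadratic index
x0 = np.array([1.0, -0.5])                # initial excitation

# J = int_0^inf x^T Q x dt = x0^T P x0, where A^T P + P A + Q = 0
P = solve_continuous_lyapunov(A.T, -Q)
J_lyap = float(x0 @ P @ x0)

# cross-check by integrating the response and the index directly
sol = solve_ivp(lambda t, x: A @ x, (0.0, 60.0), x0, dense_output=True,
                rtol=1e-10, atol=1e-12)
ts = np.linspace(0.0, 60.0, 20001)
xs = sol.sol(ts)
integrand = np.einsum('it,ij,jt->t', xs, Q, xs)     # x(t)^T Q x(t) at each sample
dt = ts[1] - ts[0]
J_num = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid rule
```

Solving one Lyapunov equation replaces the time integration entirely, which is what makes the index cheap enough for repeated use inside an optimization loop.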
Bucco, D.; Weiss, M.
2007-01-01
The COVariance and ADjoint Analysis Tool (COVAD) is a specially designed software tool, written for the Matlab/Simulink environment, which gives the user the capability to carry out system analysis and simulation using the adjoint, covariance or Monte Carlo methods. This paper describes phase one
International Nuclear Information System (INIS)
Christie, S.A.; Lathouwers, D.; Kloosterman, J.L.
2013-01-01
Highlights: ► The double adjoint method is described. ► System reloading is determined so the multiplication factor behaviour is repeated. ► Both fast and thermal systems behave as desired. ► Allowance must be made for indirect effects in thermal systems. ► An alternative definition of breeding ratio is derived. -- Abstract: The double adjoint method uses the adjoint reactivity and transmutation problems to describe how the system composition is related to the system reactivity at different points in time. Values of the contribution to the reactivity are determined using the adjoint reactivity problem, and these are then used as the source function for the adjoint transmutation problem. The method is applied to the problem of determining the contribution of the beginning of cycle composition to the end of cycle reactivity. It is tested in both fast and thermal systems by comparing the behaviour of the multiplication factor at the end of cycle in calculations with perturbed initial compositions to that predicted by the double adjoint method. The results from the fast system are good, while those from the thermal system are less favourable. This is believed to be due to the method neglecting the coupling between the composition and the flux, which plays a more significant role in thermal systems than fast ones. The importance of correcting for the effects of the fuel compound is also established. The values found are used in calculations to determine the appropriate fuel reloading of the systems tested, with the aim of duplicating the behaviour of the multiplication factor of the original system. Again the fast system gives good results, while the thermal system is less accurate. The double adjoint method is also used for a definition of breeding ratio, and some of the features of this definition are illustrated by examining the effects of different feed materials and reprocessing schemes. The method is shown to be a useful tool for the comparison of the
Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis
International Nuclear Information System (INIS)
Son, Seongmin; Lee, Jeong Ik
2016-01-01
In this research, a method for optimizing mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (adjoint method). The optimized result will be obtained by applying this meshing technique to the existing code input deck and will be compared to the results produced from the uniform meshing method. Numerical solutions are calculated from an in-house 1D Finite Difference Method code while neglecting the axial conduction. The fuel radial nodes were first optimized to best match the Fuel Centerline Temperature (FCT). This was followed by optimizing the axial nodes to best match the Peak Cladding Temperature (PCT). After obtaining the optimized radial and axial nodes, the nodalization is implemented into the system analysis code and transient analyses were performed to observe the performance of the optimum nodalization. The developed adjoint-based mesh optimization method is applied to MARS-KS, a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that the mesh optimized for the steady state can also give better numerical results even during a transient analysis
Aerodynamic Optimization Based on Continuous Adjoint Method for a Flexible Wing
Directory of Open Access Journals (Sweden)
Zhaoke Xu
2016-01-01
Full Text Available Aerodynamic optimization based on the continuous adjoint method for a flexible wing is developed using FORTRAN 90 in the present work. Aerostructural analysis is performed on the basis of high-fidelity models with Euler equations on the aerodynamic side and a linear quadrilateral shell element model on the structural side. This shell element can deal with both thin and thick shell problems with intersections, making it suitable for the wing structural model, which consists of two spars, 20 ribs, and skin. The continuous adjoint formulations based on Euler equations and unstructured meshes are derived and used in this work. A sequential quadratic programming method is adopted to search for the optimal solution using the gradients from the continuous adjoint method. The flow charts of rigid and flexible optimization are presented and compared. The objective is to minimize the drag coefficient while maintaining the lift coefficient for a rigid and a flexible wing. A comparison between the results from aerostructural analysis of rigid and flexible optimization is shown to demonstrate that it is necessary to include the effect of aeroelasticity in the optimization design of a wing.
First-arrival traveltime tomography for anisotropic media using the adjoint-state method
Waheed, Umair bin; Flagg, Garret; Yarman, Can Evren
2016-05-27
Traveltime tomography using transmission data has been widely used for static corrections and for obtaining near-surface models for seismic depth imaging. More recently, it is also being used to build initial models for full-waveform inversion. The classic traveltime tomography approach based on ray tracing has difficulties in handling large data sets arising from current seismic acquisition surveys. Some of these difficulties can be addressed using the adjoint-state method, due to its low memory requirement and numerical efficiency. By coupling the gradient computation to nonlinear optimization, it avoids the need for explicit computation of the Fréchet derivative matrix. Furthermore, its cost is equivalent to twice the solution of the forward-modeling problem, irrespective of the size of the input data. The presence of anisotropy in the subsurface has been well established during the past few decades. The improved seismic images obtained by incorporating anisotropy into the seismic processing workflow justify the effort. However, previous literature on the adjoint-state method has only addressed the isotropic approximation of the subsurface. We have extended the adjoint-state technique for first-arrival traveltime tomography to vertical transversely isotropic (VTI) media. Because δ is weakly resolvable from surface seismic alone, we have developed the mathematical framework and procedure to invert for v_NMO and η. Our numerical tests on the VTI SEAM model demonstrate the ability of the algorithm to invert for near-surface model parameters and reveal the accuracy achievable by the algorithm.
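The key cost property of the adjoint-state method, one forward and one adjoint solve yielding the full gradient regardless of the number of model parameters, can be sketched on a generic discretized inverse problem. The operator (K + diag(m)) below is an illustrative stand-in for the tomography forward solver, not the eikonal equation of the paper; the gradient formula follows from differentiating the state equation.

```python
import numpy as np

def misfit_and_gradient(m, K, b, d):
    """J(m) = 0.5*||u(m) - d||^2 and its gradient by the adjoint-state method.

    State equation: (K + diag(m)) u = b. One forward solve and one adjoint
    solve, independent of len(m): A^T lam = u - d, and dJ/dm_k = -lam_k * u_k
    because dA/dm_k = e_k e_k^T.
    """
    A = K + np.diag(m)
    u = np.linalg.solve(A, b)           # forward (state) solve
    lam = np.linalg.solve(A.T, u - d)   # adjoint solve, driven by the data residual
    return 0.5 * np.dot(u - d, u - d), -lam * u
```

A Fréchet-derivative approach would instead require one extra solve per model parameter; the adjoint gradient is verified against finite differences below.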
An optimal control method for fluid structure interaction systems via adjoint boundary pressure
Chirco, L.; Da Vià, R.; Manservisi, S.
2017-11-01
In recent years, in spite of their computational complexity, fluid-structure interaction (FSI) problems have been widely studied due to their applicability in science and engineering. Fluid-structure interaction systems consist of one or more solid structures that deform by interacting with a surrounding fluid flow. FSI simulations evaluate the tensional state of the mechanical component and take into account the effects of the solid deformations on the motion of the interior fluids. The inverse FSI problem can be described as the achievement of a certain objective by changing some design parameters such as forces, boundary conditions and geometrical domain shapes. In this paper we study the inverse FSI problem by using an optimal control approach. In particular we propose a pressure boundary optimal control method based on Lagrangian multipliers and adjoint variables. The objective is the minimization of a solid domain displacement matching functional, obtained by finding the optimal pressure on the inlet boundary. The optimality system is derived from the first-order necessary conditions by taking the Fréchet derivatives of the Lagrangian with respect to all the variables involved. The optimal solution is then obtained through a standard steepest descent algorithm applied to the optimality system. The approach presented in this work is general and could be used to assess other objective functionals and controls. In order to support the proposed approach we perform a few numerical tests where the fluid pressure on the domain inlet controls the displacement that occurs in a well-defined region of the solid domain.
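The optimize-then-solve loop described above (state solve, adjoint solve, gradient step) can be sketched in a linear-quadratic setting. All operators below (A, B, the Tikhonov weight beta) are illustrative stand-ins, not the paper's FSI discretization; the structure is the same: an adjoint equation driven by the displacement mismatch feeds a steepest descent update of the boundary control, with simple backtracking on the step size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # state operator (well conditioned)
B = rng.standard_normal((n, m))                    # maps boundary-pressure controls to loads
u_target = rng.standard_normal(n)                  # desired displacement
beta = 1e-3                                        # Tikhonov weight on the control

def J_and_grad(p):
    """Objective and gradient from the first-order optimality system."""
    u = np.linalg.solve(A, B @ p)       # state equation
    r = u - u_target                    # displacement mismatch
    lam = np.linalg.solve(A.T, r)       # adjoint equation
    return 0.5 * r @ r + 0.5 * beta * p @ p, B.T @ lam + beta * p

p = np.zeros(m)
step = 0.5
J, g = J_and_grad(p)
for _ in range(1000):                   # steepest descent with backtracking
    J_new, g_new = J_and_grad(p - step * g)
    if J_new < J:
        p, J, g = p - step * g, J_new, g_new
    else:
        step *= 0.5
```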
Adjoint Optimization of a Wing Using the CSRT Method
Straathof, M.H.; Van Tooren, M.J.L.
2011-01-01
This paper will demonstrate the potential of the Class-Shape-Refinement-Transformation (CSRT) method for aerodynamically optimizing three-dimensional surfaces. The CSRT method was coupled to an in-house Euler solver and this combination was used in an optimization framework to optimize the ONERA M6
Energy Technology Data Exchange (ETDEWEB)
Curbelo, Jesus P.; Silva, Odair P. da; Barros, Ricardo C. [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-graduacao em Modelagem Computacional; Garcia, Carlos R., E-mail: cgh@instec.cu [Departamento de Ingenieria Nuclear, Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)
2017-07-01
Presented here is the application of the adjoint technique for solving source-detector discrete ordinates (SN) transport problems by using a spectral nodal method. For the slab-geometry adjoint SN model, the adjoint spectral Green's function method (SGF†) is extended to multigroup problems considering arbitrary Lth-order scattering anisotropy and the possibility of non-zero prescribed boundary conditions for the forward SN transport problems. The SGF† method generates numerical solutions that are completely free from spatial truncation errors. In order to generate numerical solutions of the SGF† equations, we use the partial adjoint one-node block inversion (NBI) iterative scheme. The partial adjoint NBI scheme uses the most recent estimates of the node-edge adjoint angular fluxes in the outgoing directions of a given discretization node to solve the resulting adjoint SN problem in that node for all the adjoint angular fluxes in the incoming directions, which constitute the outgoing adjoint angular fluxes for the adjacent node in the sweeping directions. Numerical results are given to illustrate the features of the present spectral nodal method and some advantages of using the adjoint technique in source-detector problems. (author)
A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport
International Nuclear Information System (INIS)
Serov, I.V.; John, T.M.; Hoogenboom, J.E.
1999-01-01
The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport
Iteration of adjoint equations
International Nuclear Information System (INIS)
Lewins, J.D.
1994-01-01
Adjoint functions are the basis of variational methods and are now widely used for perturbation theory and its extension to higher-order theory as used, for example, in modelling fuel burnup and optimization. In such models, the adjoint equation is to be solved in a critical system with an adjoint source distribution that is not zero but has special properties related to ratios of interest in critical systems. Consequently, the methods of solving equations by iteration and accumulation are reviewed to show how conventional methods may be utilized in these circumstances with adequate accuracy. (author). 3 refs., 6 figs., 3 tabs
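The relation between forward and mathematical-adjoint solutions can be illustrated with power iteration on a discretized operator: the adjoint fundamental mode is obtained by iterating with the transposed matrix, and both problems share the same fundamental eigenvalue. A minimal sketch (a random positive matrix standing in for a discretized critical system):

```python
import numpy as np

def power_iteration(M, iters=500):
    """Fundamental eigenpair of M by straightforward power iteration."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return float(v @ M @ v), v            # Rayleigh quotient; v has unit norm

rng = np.random.default_rng(1)
M = rng.random((5, 5)) + 5.0 * np.eye(5)  # positive matrix: dominant Perron mode

k_fwd, phi = power_iteration(M)           # forward fundamental mode
k_adj, phi_star = power_iteration(M.T)    # mathematical adjoint: transposed operator
```

phi_star then serves as the importance weighting in first-order perturbation estimates of the form ⟨φ*, δM φ⟩ / ⟨φ*, φ⟩.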
Cagnetti, Filippo; Gomes, Diogo A.; Tran, Hung Vinh
2013-11-01
We consider a numerical scheme for the one-dimensional time-dependent Hamilton-Jacobi equation in the periodic setting. This scheme consists of a semi-discretization using monotone approximations of the Hamiltonian in the spatial variable. From classical viscosity solution theory, these schemes are known to converge. In this paper we present a new approach to the study of the rate of convergence of the approximations based on the nonlinear adjoint method recently introduced by L.C. Evans. We estimate the rate of convergence for convex Hamiltonians and recover the O(√h) convergence rate in terms of the L∞ norm and O(h) in terms of the L1 norm, where h is the size of the spatial grid. We also discuss possible generalizations to higher-dimensional problems and present several additional estimates. The special case of quadratic Hamiltonians is considered in detail at the end of the paper. © 2013 IMACS.
International Nuclear Information System (INIS)
Andreucci, N.
1985-04-01
Deep penetration transport problems in complex systems, combined with heterogeneous source (Q) sampling, give rise to difficulties in evaluating leakage and fluxes at a detector point. To overcome these difficulties we have solved both the adjoint Boltzmann flux (φ*) equation and the following scalar-dual equation: ∫ Q φ* dP − ∫ Q* φ dP = ∫ φ φ* Ω·n dΣ dΩ dE dt + ∫ [φ φ*]₀ᵀ / v dr dΩ dE, where D denotes the phase space over which the integrals are taken. With a suitable choice of the domain D, of Q* and of the boundary conditions, an adjoint flux calculation allows us to obtain simultaneously the Q-source contribution and the detection (or leakage) spectrum. Compared to direct methods with importance sampling, the adjoint methods give very low-cost and faithful results
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect
Multi-objective Optimization Strategies Using Adjoint Method and Game Theory in Aerodynamics
Tang, Zhili
2006-08-01
There are currently three different game strategies originating in economics: (1) cooperative games (Pareto front), (2) competitive games (Nash game) and (3) hierarchical games (Stackelberg game). Each game achieves a different equilibrium with different performance, and the players play different roles in each game. Here, we introduced game concepts into aerodynamic design and combined them with the adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction of the equilibria of these three game strategies was investigated by numerical experiments. We computed the Pareto front and the Nash and Stackelberg equilibria of the same optimization problem with two conflicting and hierarchical targets under different parameterizations by using the deterministic optimization method. The numerical results show clearly that all the equilibrium solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained; however, the CPU cost to capture a set of solutions makes the Pareto front an expensive tool for the designer.
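The central numerical finding, that Nash equilibria are generally dominated by Pareto-front points, can be reproduced on a toy two-player quadratic game (an illustrative objective pair, not the aerodynamic problem): player 1 controls x, player 2 controls y, and a coupling term puts the two targets in conflict.

```python
import numpy as np

a = 2.0  # coupling strength that puts the two targets in conflict
f1 = lambda x, y: (x - 1.0) ** 2 + a * (x - y) ** 2  # player 1's objective (controls x)
f2 = lambda x, y: (y + 1.0) ** 2 + a * (x - y) ** 2  # player 2's objective (controls y)

# Nash equilibrium: iterate the two best responses to a fixed point
x, y = 0.0, 0.0
for _ in range(200):
    x = (1.0 + a * y) / (1.0 + a)    # argmin_x f1(x, y)
    y = (-1.0 + a * x) / (1.0 + a)   # argmin_y f2(x, y)

# one Pareto point: joint minimizer of f1 + f2 (equal weights)
H = np.array([[2.0 + 4.0 * a, -4.0 * a],
              [-4.0 * a, 2.0 + 4.0 * a]])   # Hessian of f1 + f2
xp, yp = np.linalg.solve(H, np.array([2.0, -2.0]))
```

With a = 2 the best responses converge to the Nash point (0.2, -0.2), where f1 = f2 = 0.96, while the cooperative point (1/9, -1/9) gives both players 72/81 ≈ 0.89: both objectives strictly improve, so the Nash equilibrium is Pareto-dominated.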
The method of rigged spaces in singular perturbation theory of self-adjoint operators
Koshmanenko, Volodymyr; Koshmanenko, Nataliia
2016-01-01
This monograph presents the newly developed method of rigged Hilbert spaces as a modern approach in singular perturbation theory. A key notion of this approach is the Lax-Berezansky triple of Hilbert spaces embedded one into another, which specifies the well-known Gelfand topological triple. All kinds of singular interactions described by potentials supported on small sets (like the Dirac δ-potentials, fractals, singular measures, high degree super-singular expressions) admit a rigorous treatment only in terms of the equipped spaces and their scales. The main idea of the method is to use singular perturbations to change inner products in the starting rigged space, and the construction of the perturbed operator by the Berezansky canonical isomorphism (which connects the positive and negative spaces from a new rigged triplet). The approach combines three powerful tools of functional analysis based on the Birman-Krein-Vishik theory of self-adjoint extensions of symmetric operators, the theory of singular quadra...
Energy Technology Data Exchange (ETDEWEB)
Curbelo, Jesus P.; Alves Filho, Hermes; Barros, Ricardo C., E-mail: jperez@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-Graduacao em Modelagem Computacional; Hernandez, Carlos R.G., E-mail: cgh@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)
2015-07-01
The spectral Green's function (SGF) method is a numerical method that is free of spatial truncation errors for slab-geometry fixed-source discrete ordinates (SN) adjoint problems. The method is based on the standard spatially discretized adjoint SN balance equations and a nonstandard adjoint auxiliary equation expressing the node-average adjoint angular flux, in each discretization node, as a weighted combination of the node-edge outgoing adjoint fluxes. The auxiliary equation contains parameters which act as Green's functions for the cell-average adjoint angular flux. These parameters are determined by means of a spectral analysis which yields the local general solution of the SN equations within each node of the discretization grid. In this work a number of advances in the SGF adjoint method are presented: the method is extended to adjoint SN problems considering linearly anisotropic scattering and non-zero prescribed boundary conditions for the forward source-detector problem. Numerical results for typical model problems are presented to illustrate the efficiency and accuracy of the offered method. (author)
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media like biological tissues. The light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to efficiently compute the gradient of the objective function with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in retrieving simultaneously the absorption μa and scattering μs coefficients for both weakly and highly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty in this work is the reconstruction of g, which might open the possibility of imaging this parameter in tissues as an additional contrast agent in optical tomography.
Directory of Open Access Journals (Sweden)
V.S. Kochergin
2017-02-01
Full Text Available The passive admixture transport model for the Azov Sea is considered. The problem of identifying a local impulse source at the sea surface is solved on the basis of the adjoint method by integrating an independent series of adjoint problems, an approach that also permits simultaneous solution in parallel mode. A test example demonstrates the efficiency of the algorithm in finding the optimal source power consistent with the measurement data. The measurement-data assimilation algorithm in the passive admixture transfer model is implemented by applying variational filtration methods to retrieve an optimal estimate. The retrieval is carried out by means of the method of adjoint equations and the solution of linear systems. On the basis of the variational filtration method of data assimilation, an algorithm is constructed to retrieve an optimal estimate of the pollution source power. In applying the algorithm, the main, linked and adjoint problems are integrated, and the integration problems are solved using TVD approximations. For the application of the procedure, the Azov Sea current fields and turbulent diffusion coefficients are obtained using the sigma-coordinate ocean model (POM) under the eastern wind stress conditions dominant during the observed time period. Furthermore, the results can be used to perform numerical assimilation of suspended matter data.
International Nuclear Information System (INIS)
Kowalok, M.; Mackie, T.R.
2001-01-01
A relatively new technique for achieving the right dose to the right tissue is intensity modulated radiation therapy (IMRT). In this technique, a megavoltage x-ray beam is rotated around a patient, and the intensity and shape of the beam are modulated as a function of source position and patient anatomy. The relationship between beamlet intensity and patient dose can be expressed in matrix form, where the matrix element D_ij represents the dose delivered to voxel i by beamlet j per unit fluence. The D_ij influence matrix is the key element that enables this approach. In this regard, sensitivity theory lends itself in a natural way to the process of computing beam weights for treatment planning. The solution of the adjoint form of the Boltzmann equation is an adjoint function that describes the importance of particles throughout the system in contributing to the detector response. In this case, adjoint methods can provide the sensitivity of the dose at a single point in the patient with respect to all points in the source field. The purpose of this study is to investigate the feasibility of using the adjoint method and Monte Carlo transport for radiation therapy treatment planning.
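Once the D_ij influence matrix is available (whether from adjoint or forward transport runs), the planning step reduces to choosing nonnegative beamlet weights that reproduce a prescribed dose. A minimal sketch with a random influence matrix (illustrative only; real planning uses constrained clinical objectives rather than plain least squares):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_voxels, n_beamlets = 40, 12
# D[i, j]: dose to voxel i per unit fluence of beamlet j (random stand-in)
D = rng.random((n_voxels, n_beamlets))
w_true = rng.random(n_beamlets)
d_prescribed = D @ w_true            # a prescription known to be achievable

# nonnegative beamlet weights matching the prescription in the least-squares sense
w, rnorm = nnls(D, d_prescribed)
```

Nonnegativity matters physically: beamlet fluences cannot be negative, which is why plain least squares is replaced by NNLS here.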
Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang
2018-06-01
In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)), and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential for adjoint-based adaptation in simulating compressible flows.
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing, waveform modeling techniques, and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, at a performance of 1.2 PFLOPS. Using this optimized SPECFEM3D_GLOBE code, we take one chunk of the global mesh around the Japanese Islands and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that the waveform misfit between observed and theoretical seismograms decreases as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation, to obtain seismic structure at the basin scale, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
The adjoint sensitivity method, a contribution to the code uncertainty evaluation
International Nuclear Information System (INIS)
Ounsy, A.; Brun, B.; De Crecy, F.
1994-01-01
This paper deals with the application of the adjoint sensitivity method (ASM) to thermal-hydraulic codes. The advantage of the method is its small central-processing-unit time requirement compared with the usual approach, which requires one complete code run per sensitivity determination. In the first part the mathematical aspects of the problem are treated, and the applicability of the method to the functional-type response of a thermal-hydraulic model is demonstrated. The problem has been analysed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM. The equivalence of the two methods is demonstrated; nevertheless, only the discrete ASM constitutes a practical solution for thermal-hydraulic codes. The application of the discrete ASM to the thermal-hydraulic safety code CATHARE is then presented for two examples, which demonstrate that the discrete ASM constitutes an efficient tool for the analysis of code sensitivity. ((orig.))
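The discrete-ASM idea, differentiating the discretized equations and trading one code run per parameter for a single adjoint solve, can be sketched on a toy discretized model. All matrices below are illustrative; this is not the CATHARE formalism itself.

```python
import numpy as np

# Toy discretized model A(p) u = b with scalar response J = c^T u.
def A(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0,     2.0]])

dA_dp = np.array([[1.0, 0.0],   # derivative of A w.r.t. the parameter p
                  [0.0, 0.0]])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])

p = 0.5
u = np.linalg.solve(A(p), b)          # one forward solve

# Discrete adjoint: solve A^T lam = c once, then for *any* parameter
#   dJ/dp = -lam^T (dA/dp) u
lam = np.linalg.solve(A(p).T, c)
dJ_dp = -lam @ (dA_dp @ u)

# "Brute force" check: central finite difference, one extra solve per side.
eps = 1e-6
J = lambda q: c @ np.linalg.solve(A(q), b)
fd = (J(p + eps) - J(p - eps)) / (2 * eps)
print(dJ_dp, fd)   # the two agree closely
```

For this example J(p) = 1/(3 + 2p), so both estimates equal -2/16 = -0.125; with many parameters the adjoint route reuses the same lam for each dA/dp, which is the CPU-time advantage the abstract emphasizes.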
The adjoint sensitivity method, a contribution to the code uncertainty evaluation
International Nuclear Information System (INIS)
Ounsy, A.; Crecy, F. de; Brun, B.
1993-01-01
The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to the functional-type response of a thermohydraulic model is demonstrated. The problem has been analyzed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM (DASM). The equivalence of the two methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermohydraulic codes. The application of the DASM to the thermohydraulic safety code CATHARE is then presented for two examples, which demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.
The adjoint sensitivity method. A contribution to the code uncertainty evaluation
International Nuclear Information System (INIS)
Ounsy, A.; Brun, B.
1993-01-01
The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to the functional-type response of a thermohydraulic model is demonstrated. The problem has been analyzed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM (DASM). The equivalence of the two methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermohydraulic codes. The application of the DASM to the thermohydraulic safety code CATHARE is then presented for two examples, which demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.
The adjoint sensitivity method. A contribution to the code uncertainty evaluation
Energy Technology Data Exchange (ETDEWEB)
Ounsy, A; Brun, B
1994-12-31
The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to the functional-type response of a thermohydraulic model is demonstrated. The problem has been analyzed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM (DASM). The equivalence of the two methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermohydraulic codes. The application of the DASM to the thermohydraulic safety code CATHARE is then presented for two examples, which demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.
The adjoint sensitivity method, a contribution to the code uncertainty evaluation
Energy Technology Data Exchange (ETDEWEB)
Ounsy, A; Crecy, F de; Brun, B
1994-12-31
The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to the functional-type response of a thermohydraulic model is demonstrated. The problem has been analyzed on a simple example of a non-linear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature treating this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM (DASM). The equivalence of the two methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermohydraulic codes. The application of the DASM to the thermohydraulic safety code CATHARE is then presented for two examples, which demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.
The adjoint method for general EEG and MEG sensor-based lead field equations
International Nuclear Information System (INIS)
Vallaghe, Sylvain; Papadopoulo, Theodore; Clerc, Maureen
2009-01-01
Most of the methods for the inverse source problem in electroencephalography (EEG) and magnetoencephalography (MEG) use a lead field as an input. The lead field is the function which relates any source in the brain to its measurements at the sensors. For complex geometries, there is no analytical formula for the lead field. The common approach is to numerically compute the value of the lead field for a finite number of point sources (dipoles). There are several drawbacks: the model of the source space is fixed (a set of dipoles), and the computation can be expensive for as many as 10 000 dipoles. The common idea to bypass these problems is to compute the lead field from a sensor point of view. In this paper, we use the adjoint method to derive general EEG and MEG sensor-based lead field equations. Within a simple framework, we provide a complete review of the explicit lead field equations, and we are able to extend these equations to non-pointlike sensors.
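The sensor-based trick can be illustrated on a generic discretized linear model: instead of one forward solve per source, one transposed solve per sensor gives the sensor's view of *every* source. The matrix A, sensor vector m, and source vectors below are made-up stand-ins, not an actual head model.

```python
import numpy as np

# Stand-in for a discretized head-model operator (diagonally dominant so
# it is safely invertible); all values are illustrative.
rng = np.random.default_rng(0)
n = 6
A = np.eye(n) * 3.0 + rng.standard_normal((n, n)) * 0.1
m = np.zeros(n); m[0] = 1.0                            # one sensor
sources = [rng.standard_normal(n) for _ in range(4)]   # many candidate sources

# Source-based approach: one forward solve *per source*.
lf_source_based = [m @ np.linalg.solve(A, s) for s in sources]

# Sensor-based (adjoint) approach: solve A^T g = m once for the sensor,
# then each lead-field value is a cheap dot product g . s.
g = np.linalg.solve(A.T, m)
lf_adjoint = [g @ s for s in sources]

print(np.allclose(lf_source_based, lf_adjoint))  # True
```

The identity m^T A^{-1} s = (A^{-T} m)^T s is all that is at work; with far more sources than sensors, the right-hand column of solves wins.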
Full Seismic Waveform Tomography of the Japan region using Adjoint Methods
Steptoe, Hamish; Fichtner, Andreas; Rickers, Florian; Trampert, Jeannot
2013-04-01
We present a full-waveform tomographic model of the Japan region based on spectral-element wave propagation, adjoint techniques and seismic data from dense station networks. This model is intended to further our understanding of both the complex regional tectonics and the finite rupture processes of large earthquakes. The shallow Earth structure of the Japan region has been the subject of considerable tomographic investigation. The islands of Japan exist in an area of significant plate complexity: subduction related to the Pacific and Philippine Sea plates is responsible for the majority of the seismicity and volcanism of Japan, whilst smaller micro-plates in the region, including the Okhotsk, Okinawa and Amur (parts of the larger North America and Eurasia plates respectively), contribute significant local intricacy. In response to the need to monitor and understand the motion of these plates and their associated faults, numerous seismograph networks have been established, including the 768-station high-sensitivity Hi-net network, the 84-station broadband F-net, and the strong-motion seismograph networks K-net and KiK-net in Japan. We also include the 55-station BATS network of Taiwan. We use this exceptional coverage to construct a high-resolution model of the Japan region from the full-waveform inversion of over 15,000 individual component seismograms from 53 events that occurred between 1997 and 2012. We model these data using spectral-element simulations of seismic wave propagation at a regional scale over an area from 120°-150°E and 20°-50°N to a depth of around 500 km. We quantify differences between observed and synthetic waveforms using time-frequency misfits, allowing us to separate phase and amplitude measurements whilst exploiting the complete waveform at periods of 15-60 seconds. Fréchet kernels for these misfits are calculated via the adjoint method and subsequently used in an iterative non-linear conjugate-gradient optimization. Finally, we employ
The sensitivity analysis by adjoint method for the uncertainty evaluation of the CATHARE-2 code
Energy Technology Data Exchange (ETDEWEB)
Barre, F.; de Crecy, A.; Perret, C. [French Atomic Energy Commission (CEA), Grenoble (France)
1995-09-01
This paper presents the application of the DASM (Discrete Adjoint Sensitivity Method) to the CATHARE 2 thermal-hydraulics code. In the first part, the basis of the method is presented. The mathematical model of the CATHARE 2 code is based on the two-fluid six-equation model. It is discretized using implicit time discretization, and it is relatively easy to implement the method in the code. The DASM is the ASM applied directly to the algebraic system of the discretized code equations, which has been demonstrated to be the only solution of the mathematical model. The DASM is an integral part of the new version 1.4 of CATHARE, where it acts as a post-processing module; it has been qualified by comparison with the "brute force" technique. In the second part, an application of the DASM in CATHARE 2 is presented. It deals with the determination of the uncertainties of the constitutive relationships, which is a compulsory step for calculating the final uncertainty of a given response. First, the general principles of the method are explained: the constitutive relationships are represented by several parameters, and the aim is to calculate the variance-covariance matrix of these parameters. The experimental results of the separate-effect tests used to establish the correlations are considered, and the variance of the corresponding results calculated by CATHARE is estimated by comparing experiment and calculation. A DASM calculation is carried out to provide the derivatives of the responses. The final covariance matrix is obtained by combining the variances of the responses with those derivatives. Then, the application of this method to a simple case, the blowdown Canon experiment, is presented. This application has been performed successfully.
The sensitivity analysis by adjoint method for the uncertainty evaluation of the CATHARE-2 code
International Nuclear Information System (INIS)
Barre, F.; de Crecy, A.; Perret, C.
1995-01-01
This paper presents the application of the DASM (Discrete Adjoint Sensitivity Method) to the CATHARE 2 thermal-hydraulics code. In the first part, the basis of the method is presented. The mathematical model of the CATHARE 2 code is based on the two-fluid six-equation model. It is discretized using implicit time discretization, and it is relatively easy to implement the method in the code. The DASM is the ASM applied directly to the algebraic system of the discretized code equations, which has been demonstrated to be the only solution of the mathematical model. The DASM is an integral part of the new version 1.4 of CATHARE, where it acts as a post-processing module; it has been qualified by comparison with the "brute force" technique. In the second part, an application of the DASM in CATHARE 2 is presented. It deals with the determination of the uncertainties of the constitutive relationships, which is a compulsory step for calculating the final uncertainty of a given response. First, the general principles of the method are explained: the constitutive relationships are represented by several parameters, and the aim is to calculate the variance-covariance matrix of these parameters. The experimental results of the separate-effect tests used to establish the correlations are considered, and the variance of the corresponding results calculated by CATHARE is estimated by comparing experiment and calculation. A DASM calculation is carried out to provide the derivatives of the responses. The final covariance matrix is obtained by combining the variances of the responses with those derivatives. Then, the application of this method to a simple case, the blowdown Canon experiment, is presented. This application has been performed successfully.
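The step of combining response variances with adjoint-provided derivatives into a parameter variance-covariance matrix can be sketched with a standard least-squares estimate. The numbers below are invented, and the actual CATHARE-2 procedure is more elaborate; this is only the generic shape of the combination.

```python
import numpy as np

# G[k, j] = d(response_k)/d(parameter_j), as an adjoint (DASM) run would
# deliver for each separate-effect test response; values are illustrative.
G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.8, 0.3]])
var_y = np.array([0.1, 0.2, 0.1])     # variances of the three responses
C_y_inv = np.diag(1.0 / var_y)

# Generalized least-squares parameter covariance:
#   C_p = (G^T C_y^{-1} G)^{-1}
C_p = np.linalg.inv(G.T @ C_y_inv @ G)
print(C_p)   # symmetric positive-definite 2x2 matrix
```

The off-diagonal terms of C_p capture how uncertainty in one constitutive-relation parameter is correlated with another, which is what the final uncertainty propagation needs.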
Approximation for the adjoint neutron spectrum
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
The aim of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations, which were obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; these approximations, for the case of the narrow resonances, were then substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared with those generated by the reference method and demonstrate good, precise values of the adjoint neutron flux in the narrow resonances. (author)
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution, described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of the measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using a quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
An Adjoint-based Numerical Method for a class of nonlinear Fokker-Planck Equations
Festa, Adriano; Gomes, Diogo A.; Machado Velho, Roberto
2017-01-01
Here, we introduce a numerical approach for a class of Fokker-Planck (FP) equations. These equations are the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. Using this structure, we show how to transfer the properties of schemes for HJ equations to the FP equations. Hence, we get numerical schemes with desirable features such as positivity and mass-preservation. We illustrate this approach in examples that include mean-field games and a crowd motion model.
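The positivity and mass-preservation properties highlighted above can be illustrated with a minimal one-dimensional conservative scheme for a Fokker-Planck equation rho_t + (b rho)_x = nu rho_xx on a periodic grid. The drift, diffusion coefficient, and grid below are arbitrary illustrative choices, not the schemes of the paper; the point is that a flux form conserves mass by construction and, under a CFL restriction, keeps the density non-negative.

```python
# Upwind advection + centered diffusion, written in flux form on a
# periodic grid (all parameters are illustrative and CFL-safe).
N, L, nu, dt = 50, 1.0, 0.05, 0.0005
dx = L / N
b = [0.5] * N                     # constant drift
rho = [1.0 / L] * N               # uniform density...
rho[N // 2] += 5.0                # ...plus a bump

def step(rho):
    flux = []
    for i in range(N):
        # upwind advective flux and centered diffusive flux at face i+1/2
        adv = b[i] * rho[i] if b[i] > 0 else b[i] * rho[(i + 1) % N]
        dif = -nu * (rho[(i + 1) % N] - rho[i]) / dx
        flux.append(adv + dif)
    # conservative update: whatever leaves one cell enters its neighbor
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(N)]

mass0 = sum(rho) * dx
for _ in range(200):
    rho = step(rho)

print(abs(sum(rho) * dx - mass0) < 1e-10, min(rho) >= 0.0)  # True True
```

Because the update only moves flux between neighboring cells, the total mass telescopes to a constant up to roundoff, mirroring the mass-preservation the abstract claims for its HJ-derived schemes.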
An Adjoint-based Numerical Method for a class of nonlinear Fokker-Planck Equations
Festa, Adriano
2017-03-22
Here, we introduce a numerical approach for a class of Fokker-Planck (FP) equations. These equations are the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. Using this structure, we show how to transfer the properties of schemes for HJ equations to the FP equations. Hence, we get numerical schemes with desirable features such as positivity and mass-preservation. We illustrate this approach in examples that include mean-field games and a crowd motion model.
Cooper, M.; Martin, R.; Henze, D. K.
2016-12-01
Nitrogen oxide (NOx ≡ NO + NO2) emission inventories can be improved through top-down constraints provided by inverse modeling of observed nitrogen dioxide (NO2) columns. Here we compare two methods of inverse modeling for emissions of NOx from synthetic NO2 columns generated from known emissions using the GEOS-Chem chemical transport model and its adjoint. We treat the adjoint-based 4D-VAR approach for estimating top-down emissions as a benchmark against which to evaluate variations on the mass balance method. We find that the standard mass balance algorithm can be improved by using an iterative process and using finite difference to calculate the local sensitivity of a change in NO2 columns to a change in emissions, resulting in a factor of two reduction in inversion error. In a simplified case study to recover local emission perturbations, horizontal smearing effects due to NOx transport were better resolved by the adjoint-based approach than by mass balance. For more complex emission changes that reflect real world scenarios, the iterative finite difference mass balance and adjoint methods produce similar top-down inventories when inverting hourly synthetic observations, both reducing the a priori error by factors of 3-4. Inversions of data sets that simulate satellite observations from low Earth and geostationary orbits also indicate that both the mass balance and adjoint inversions produce similar results, reducing a priori error by a factor of 3. As the iterative finite difference mass balance method provides similar accuracy as the adjoint-based 4D-VAR method, it offers the ability to efficiently estimate top-down emissions using models that do not have an adjoint.
FOCUS: a non-multigroup adjoint Monte Carlo code with improved variance reduction
International Nuclear Information System (INIS)
Hoogenboom, J.E.
1974-01-01
A description is given of the selection mechanism in the adjoint Monte Carlo code FOCUS, in which the energy is treated as a continuous variable. The method of Kalos, who introduced the idea of adjoint cross sections, is followed to derive a sampling scheme for the adjoint equation solved in FOCUS, which is in most respects analogous to the normal Monte Carlo game. The disadvantages of using these adjoint cross sections are removed to some extent by introducing a new definition of the adjoint cross sections, resulting in appreciable variance reduction. At the cost of introducing a weight factor slightly different from unity, the direction and energy are selected in a simple way without the need for two-dimensional probability tables. Finally, the handling of geometry and cross sections in FOCUS is briefly discussed. 6 references. (U.S.)
Adjoint method provides phase response functions for delay-induced oscillations.
Kotani, Kiyoshi; Yamaguchi, Ikuhiro; Ogawa, Yutaro; Jimbo, Yasuhiko; Nakao, Hiroya; Ermentrout, G Bard
2012-07-27
Limit-cycle oscillations induced by time delay are widely observed in various systems, but a systematic phase-reduction theory for them has yet to be developed. Here we present a practical theoretical framework to calculate the phase response function Z(θ), a fundamental quantity for the theory, of delay-induced limit cycles with infinite-dimensional phase space. We show that Z(θ) can be obtained as a zero eigenfunction of the adjoint equation associated with an appropriate bilinear form for the delay differential equations. We confirm the validity of the proposed framework for two biological oscillators and demonstrate that the derived phase equation predicts intriguing multimodal locking behavior.
Adjoint Method and Predictive Control for 1-D Flow in NASA Ames 11-Foot Transonic Wind Tunnel
Nguyen, Nhan; Ardema, Mark
2006-01-01
This paper describes a modeling method and a new optimal control approach to investigate a Mach number control problem for the NASA Ames 11-Foot Transonic Wind Tunnel. The flow in the wind tunnel is modeled by the 1-D unsteady Euler equations whose boundary conditions prescribe a controlling action by a compressor. The boundary control inputs to the compressor are in turn controlled by a drive motor system and an inlet guide vane system whose dynamics are modeled by ordinary differential equations. The resulting Euler equations are thus coupled to the ordinary differential equations via the boundary conditions. Optimality conditions are established by an adjoint method and are used to develop a model predictive linear-quadratic optimal control for regulating the Mach number due to a test model disturbance during a continuous pitch
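The linear-quadratic regulation step can be illustrated on a generic discrete-time linear plant. The matrices below are illustrative stand-ins for linearized tunnel/compressor dynamics, not the paper's model; the backward Riccati recursion produces the state-feedback gain that drives a disturbance (e.g. a Mach-number error) back to zero.

```python
import numpy as np

# Toy plant x_{k+1} = A x_k + B u_k (values illustrative).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state penalty
R = np.array([[0.1]])   # control penalty

# Iterate the discrete Riccati recursion to its steady state.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop x_{k+1} = (A - B K) x_k regulates a disturbance to ~0.
x = np.array([[1.0], [1.0]])
for _ in range(300):
    x = (A - B @ K) @ x
print(float(np.abs(x).max()))   # a very small residual
```

Model-predictive variants re-solve a finite-horizon version of this problem at each step against the current (Euler-equation) state, but the quadratic-cost/linear-dynamics core is the same.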
International Nuclear Information System (INIS)
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.
2013-01-01
The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects with a Monte Carlo code (<10 pcm). It has been decided to implement the exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the 'direct' estimation of the perturbation. Once again the IFP-based method shows good agreement, with a calculation time far shorter than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high precision. It also offers the possibility of splitting reactivity contributions by isotope and by reaction. Other applications of this perturbation method are presented and tested, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters.
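The physical idea behind the IFP definition of the adjoint flux, importance as the expected long-term progeny of a neutron, can be illustrated with a toy two-state generation model. The "fission matrix" below is invented; the point is that the progeny count after many generations is proportional to the left (adjoint) eigenvector of the generation operator.

```python
import numpy as np

# Toy generation-to-generation model n_{g+1} = F n_g with two "states"
# standing in for the phase space (F is made up).
F = np.array([[1.1, 0.3],
              [0.2, 0.7]])

def progeny(start, generations=40):
    """Total descendants after many generations of one neutron in `start`."""
    n = np.zeros(2); n[start] = 1.0
    for _ in range(generations):
        n = F @ n
    return n.sum()

ifp = np.array([progeny(0), progeny(1)])
ifp /= ifp.sum()

# Reference: normalized dominant left eigenvector of F (eigenvector of F^T).
w, V = np.linalg.eig(F.T)
adj = np.abs(V[:, np.argmax(w.real)].real)
adj /= adj.sum()
print(np.allclose(ifp, adj, atol=1e-6))  # True
```

This is why a purely forward simulation suffices: following descendants for enough latent generations converges to the adjoint importance, without modifying the transport kernel.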
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Hossen, M. Jakir; Gusman, Aditya; Satake, Kenji; Cummins, Phil R.
2018-01-01
We have previously developed a tsunami source inversion method based on "Time Reverse Imaging" and demonstrated that it is computationally very efficient and can reproduce the tsunami source model with good accuracy using tsunami data of the 2011 Tohoku earthquake. In this paper, we apply this approach to the 2009 Samoa earthquake tsunami, which was triggered by a doublet earthquake involving both normal and thrust faulting. Our results show that the method is quite capable of recovering the source model associated with normal and thrust faulting. We found that the inversion result is highly sensitive to certain stations, which must be removed from the inversion; we therefore applied an adjoint sensitivity method to find the optimal set of stations for estimating a realistic source model, and found that the inversion result improves significantly once the optimal set of stations is used. In addition, from the reconstructed source model we estimated the slip distribution of the fault, from which we successfully determined the dipping orientation of the fault plane for the normal-fault earthquake. Our results suggest that the fault plane dips toward the northeast.
International Nuclear Information System (INIS)
Henderson, D.L.; Yoo, S.; Kowalok, M.; Mackie, T.R.; Thomadsen, B.R.
2001-01-01
The goal of this project is to investigate the use of the adjoint method, commonly used in the reactor physics community, for the optimization of radiation therapy patient treatment plans. Two different types of radiation therapy are being examined: interstitial brachytherapy and external beam radiotherapy. In brachytherapy, radioactive sources are surgically implanted within the diseased organ, such as the prostate, to treat the cancerous tissue. In external beam radiotherapy, the x-ray source is usually located at a distance of about 1 m from the patient and focused on the treatment area. For brachytherapy, the optimization phase of the treatment plan consists of determining the optimal placement of the radioactive sources, which delivers the prescribed dose to the diseased tissue while simultaneously sparing (reducing) the dose to sensitive tissue and organs. For external beam radiation therapy, the optimization phase consists of determining the optimal direction and intensity of the beams, which provides complete coverage of the tumor region with the prescribed dose while simultaneously avoiding sensitive tissue areas. For both therapy methods, the optimal treatment plan is one in which the diseased tissue has been treated with the prescribed dose and the dose to sensitive tissue and organs has been kept to a minimum.
AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION
Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...
Hoteit, Ibrahim; Cornuelle, B.; Heimbach, P.
2010-01-01
An eddy-permitting adjoint-based assimilation system has been implemented to estimate the state of the tropical Pacific Ocean. The system uses the Massachusetts Institute of Technology's general circulation model and its adjoint. The adjoint method
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Nakshatrala, Praveen B.; Tortorelli, Daniel A.
2014-01-01
Gradient-based topology optimization typically involves thousands or millions of design variables. This makes efficient sensitivity analysis essential and for this the adjoint variable method (AVM) is indispensable. For transient problems it has been observed that the traditional AVM, based on a ...
Spatial discretizations for self-adjoint forms of the radiative transfer equations
International Nuclear Information System (INIS)
Morel, Jim E.; Adams, B. Todd; Noh, Taewan; McGhee, John M.; Evans, Thomas M.; Urbatsch, Todd J.
2006-01-01
There are three commonly recognized second-order self-adjoint forms of the neutron transport equation: the even-parity equations, the odd-parity equations, and the self-adjoint angular flux equations. Because all of these equations contain second-order spatial derivatives and are self-adjoint for the mono-energetic case, standard continuous finite-element discretization techniques have proved quite effective when applied to the spatial variables. We first derive analogs of these equations for the case of time-dependent radiative transfer. The primary unknowns for these equations are functions of the angular intensity rather than the angular flux, hence the analog of the self-adjoint angular flux equation is referred to as the self-adjoint angular intensity equation. Then we describe a general, arbitrary-order, continuous spatial finite-element approach that is applied to each of the three equations in conjunction with backward-Euler differencing in time. We refer to it as the 'standard' technique. We also introduce an alternative spatial discretization scheme for the self-adjoint angular intensity equation that requires far fewer unknowns than the standard method, but appears to give comparable accuracy. Computational results are given that demonstrate the validity of both of these discretization schemes
International Nuclear Information System (INIS)
Khericha, Soli T.
2000-01-01
A one-energy-group, two-dimensional computer code was developed to calculate the response of a detector to a vibrating absorber in a reactor core. A concept of local/global components, based on the frequency-dependent detector adjoint function, and a nodalization technique were utilized. The frequency-dependent detector adjoint functions, represented by complex equations, were expanded into real and imaginary parts. In the nodalization technique, the flux is expanded into polynomials about the center point of each node. The phase angle and the magnitude of the one-energy-group detector adjoint function were calculated for a detector located at the center of a 200x200 cm reactor using the two-dimensional nodalization technique, the computer code EXTERMINATOR, and the analytical solution. The purpose of this research was to investigate the applicability of a polynomial nodal model technique to the calculation of the real and imaginary parts of the detector adjoint function. From the results, it is concluded that the nodal model technique can be used to calculate the detector adjoint function and the phase angle. Using the computer code developed for the nodal model technique, the magnitude and phase angle of the one-energy-group frequency-dependent detector adjoint function were calculated for a detector located at the center of a 200x200 cm homogeneous reactor. The real part of the detector adjoint function was compared with the results obtained from the EXTERMINATOR computer code as well as with the analytical solution, based on a double sine series expansion using the classical Green's function approach. The values were found to be less than 1% greater at 20 cm from the source region and about 3% greater closer to the source, compared with the values obtained from the analytical solution and the EXTERMINATOR code. The currents at the node interfaces matched to within 1% of the average.
2006-01-30
detail next. 3.2 Fast Sweeping Method for Equation (1). The fast sweeping method originated in Boue and Dupuis [5]; its first PDE formulation was in ... Geophysics, 50:903-923, 1985. [5] M. Boue and P. Dupuis. Markov chain approximations for deterministic control problems with affine dynamics and
Adjoint entropy vs topological entropy
Giordano Bruno, Anna
2012-01-01
Recently the adjoint algebraic entropy of endomorphisms of abelian groups was introduced and studied. We generalize the notion of adjoint entropy to continuous endomorphisms of topological abelian groups. Indeed, the adjoint algebraic entropy is defined using the family of all finite-index subgroups, while we take only the subfamily of all open finite-index subgroups to define the topological adjoint entropy. This allows us to compare the (topological) adjoint entropy with the known topologic...
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
Directory of Open Access Journals (Sweden)
Andre Lamert
2018-03-01
We present and compare two flexible and effective methodologies to predict disturbance zones ahead of underground tunnels by using elastic full-waveform inversion. One methodology uses a linearized, iterative approach based on misfit gradients computed with the adjoint method while the other uses iterative, gradient-free unscented Kalman filtering in conjunction with a level-set representation. Whereas the former does not involve a priori assumptions on the distribution of elastic properties ahead of the tunnel, the latter introduces a massive reduction in the number of explicit model parameters to be inverted for by focusing on the geometric form of potential disturbances and their average elastic properties. Both imaging methodologies are validated through successful reconstructions of simple disturbances. As an application, we consider an elastic multiple disturbance scenario. By using identical synthetic time-domain seismograms as test data, we obtain satisfactory, albeit different, reconstruction results from the two inversion methodologies. The computational costs of both approaches are of the same order of magnitude, with the gradient-based approach showing a slight advantage. The model parameter space reduction approach compensates for this by additionally providing a posteriori estimates of model parameter uncertainty. Keywords: Tunnel seismics, Full waveform inversion, Seismic waves, Level-set method, Adjoint method, Kalman filter
On the non-uniqueness of the nodal mathematical adjoint
International Nuclear Information System (INIS)
Müller, Erwin
2014-01-01
Highlights: • We evaluate three CMFD schemes for computing the nodal mathematical adjoint. • The nodal mathematical adjoint is not unique and can be non-positive (nonphysical). • Adjoint and forward eigenmodes are compatible if produced by the same CMFD method. • In nodal applications the excited eigenmodes are purely mathematical entities. - Abstract: Computation of the neutron adjoint flux within the framework of modern nodal diffusion methods is often facilitated by reducing the nodal equation system for the forward flux into a simpler coarse-mesh finite-difference form and then transposing the resultant matrix equations. The solution to the transposed problem is known as the nodal mathematical adjoint. Since the coarse-mesh finite-difference reduction of a given nodal formulation can be obtained in a number of ways, different nodal mathematical adjoint solutions can be computed. This non-uniqueness of the nodal mathematical adjoint challenges the credibility of the reduction strategy and demands a verdict as to its suitability in practical applications. This is the matter under consideration in this paper. A selected number of coarse-mesh finite-difference reduction schemes are described and compared. Numerical calculations are utilised to illustrate the differences in the adjoint solutions as well as to appraise the impact on such common applications as the computation of core point kinetics parameters. Recommendations are made for the proper application of the coarse-mesh finite-difference reduction approach to the nodal mathematical adjoint problem
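The transpose construction at the heart of the nodal mathematical adjoint can be illustrated on a small nonsymmetric system (a hypothetical 3x3 matrix standing in for a reduced CMFD operator; a sketch, not one of the paper's schemes):

```python
import numpy as np

# Hypothetical 3x3 nonsymmetric matrix standing in for a reduced
# coarse-mesh finite-difference (CMFD) operator.
A = np.array([[2.0, -0.5,  0.0],
              [-0.3, 2.5, -0.4],
              [0.0, -0.6,  3.0]])

# Forward eigenproblem A x = k x; mathematical adjoint A^T y = k y.
kf, Xf = np.linalg.eig(A)
ka, Xa = np.linalg.eig(A.T)

# Transposition preserves the spectrum exactly.
assert np.allclose(np.sort(kf), np.sort(ka))

# Sort both mode sets consistently by eigenvalue.
Xf = Xf[:, np.argsort(kf)]
Xa = Xa[:, np.argsort(ka)]

# Biorthogonality: adjoint mode i is orthogonal to forward mode j (i != j)
# when both sets are produced from the same matrix.
G = Xa.T @ Xf
off = G - np.diag(np.diag(G))
assert np.max(np.abs(off)) < 1e-8
```

Because the spectra of A and its transpose coincide while the eigenvectors differ, forward and adjoint modes are biorthogonal only when both come from the same matrix, which mirrors the compatibility point made in the highlights.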
Adjoint spectrum calculation in fuel heterogeneous cells
International Nuclear Information System (INIS)
Suster, Luis Carlos
1998-01-01
In most cell calculation codes, the multigroup cross sections are generated so as to conserve the reaction rates in the forward spectrum. However, for certain uses of perturbation theory it is necessary to average the parameters for energy macrogroups over both the forward and the adjoint spectra. In this thesis the adjoint spectrum was calculated from the adjoint neutron balance equations, which were obtained for a heterogeneous unit cell using the collision probabilities method. To optimize the computational run time, Gaussian quadrature was used in the calculation of the forward and adjoint neutron balance equations. This integration method was also used to calculate the Doppler broadening functions needed to obtain the energy-dependent cross sections. To calculate the reaction rates and the average cross sections using both the forward and the adjoint neutron spectra, the most important resonances of U-238 were considered. The results obtained with the method show significant differences between the different cross-section weighting schemes. (author)
Extraction Methods, Variability Encountered in
Bodelier, P.L.E.; Nelson, K.E.
2014-01-01
Synonyms Bias in DNA extractions methods; Variation in DNA extraction methods Definition The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition
A reduced adjoint approach to variational data assimilation
Altaf, Muhammad; El Gharamti, Mohamad; Heemink, Arnold W.; Hoteit, Ibrahim
2013-01-01
The adjoint method has been used very often for variational data assimilation. The computational cost to run the adjoint model often exceeds several original model runs and the method needs significant programming efforts to implement the adjoint model code. The work proposed here is variational data assimilation based on proper orthogonal decomposition (POD) which avoids the implementation of the adjoint of the tangent linear approximation of the original nonlinear model. An ensemble of the forward model simulations is used to determine the approximation of the covariance matrix and only the dominant eigenvectors of this matrix are used to define a model subspace. The adjoint of the tangent linear model is replaced by the reduced adjoint based on this reduced space. Thus the adjoint model is run in reduced space with negligible computational cost. Once the gradient is obtained in reduced space it is projected back in full space and the minimization process is carried in full space. In the paper the reduced adjoint approach to variational data assimilation is introduced. The characteristics and performance of the method are illustrated with a number of data assimilation experiments in a ground water subsurface contaminant model. © 2012 Elsevier B.V.
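The POD construction described above can be sketched in a few lines of NumPy (a toy illustration with synthetic snapshots; the dimensions and rank are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 30 snapshots of a 100-dimensional state whose
# variability is (by construction) confined to a 5-dimensional subspace.
n, m, r = 100, 30, 5
snapshots = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

# POD basis: dominant left singular vectors of the centered snapshot matrix
# (equivalently, leading eigenvectors of the sample covariance).
X = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :r]                                   # reduced model subspace

# Fraction of ensemble variance captured by the retained modes.
energy = (s[:r] ** 2).sum() / (s ** 2).sum()

# A gradient obtained from the reduced adjoint lives in the r-dimensional
# subspace; it is projected back to full space before minimization.
grad_reduced = rng.standard_normal(r)
grad_full = Ur @ grad_reduced
```

The reduced adjoint then operates on r-dimensional coefficients rather than the full state, which is why its cost is negligible compared with the full adjoint model.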
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V
2014-09-27
Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.
Sensitivity analysis of predictive models with an automated adjoint generator
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.
1987-01-01
The adjoint method is a well-established sensitivity analysis methodology that is particularly efficient for large-scale modeling problems. The sensitivity coefficients of a given response with respect to every parameter involved in the modeling code can be calculated from a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement to calculate very large numbers of partial derivatives when setting up the adjoint equations of the model. ADGEN is a software system designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system is described and its use for improving performance assessments and predictive simulations is discussed. 8 refs., 1 fig.
ADGEN: ADjoint GENerator for computer models
Energy Technology Data Exchange (ETDEWEB)
Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.
1989-05-01
This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but can also calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The derivatives and sensitivities are calculated by the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct-access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the low-level waste community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of the responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times that of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest, compared with the roughly 3000 model runs otherwise needed to determine these derivatives by parameter perturbations. The automation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming, and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
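ADGEN's strategy, recording local partial derivatives during the forward calculation and then solving backward for total derivatives, is the reverse-mode idea now standard in automatic differentiation. A minimal Python sketch of such a tape (an illustration of the principle, not ADGEN's FORTRAN implementation):

```python
import math

# Minimal reverse-mode "tape": record local partial derivatives during the
# forward calculation, then sweep backward to accumulate total derivatives.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents        # pairs of (parent Var, local partial)
        self.adjoint = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    # Topological order guarantees each node's adjoint is complete
    # before it is propagated to its parents.
    topo, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            topo.append(v)
    visit(out)
    out.adjoint = 1.0
    for v in reversed(topo):
        for p, partial in v.parents:
            p.adjoint += partial * v.adjoint

x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)                    # f(x, y) = x*y + sin(x)
backward(f)
assert abs(x.adjoint - (3.0 + math.cos(2.0))) < 1e-12   # df/dx = y + cos(x)
assert abs(y.adjoint - 2.0) < 1e-12                     # df/dy = x
```

A single backward sweep yields the derivative of the output with respect to every input, which is exactly the cost advantage the abstract reports over parameter perturbations.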
Approximation for the adjoint neutron spectrum
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2002-07-01
The purpose of this work is to determine an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations, obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; these approximations, specialized to the case of the narrow resonances, were substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method, demonstrating good and precise results for the adjoint neutron flux in the narrow resonances. (author)
International Nuclear Information System (INIS)
Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho
2015-01-01
This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, the adjoint transport equation has to be solved to determine deterministic importance functions. In using the CADIS method, it has been noted that bias in the adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the results calculated by the deterministic method, and inaccurate multigroup cross-section libraries. In this paper, the influence of biased adjoint functions on Monte Carlo computational efficiency is analyzed. A method to estimate the calculation efficiency is proposed for applying biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and figures of merit (FOMs) obtained with the SCALE code system were evaluated as the adjoint fluxes were applied. The results show that biased adjoint fluxes significantly affect the calculation efficiencies.
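The CADIS idea, biasing the source by the adjoint (importance) function while adjusting statistical weights to preserve the mean, can be demonstrated on a toy discrete source (all numbers hypothetical, and the transport physics collapsed to a fixed per-bin detector contribution; with an exact adjoint the estimator becomes essentially zero-variance):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy CADIS-style source biasing. q is the true source pdf over four source
# bins; contrib is the expected detector contribution of a particle born in
# each bin; imp is the adjoint-based importance used for biasing (here exact).
q = np.array([0.70, 0.20, 0.08, 0.02])
contrib = np.array([0.001, 0.01, 0.1, 1.0])   # detector mostly sees bin 3
imp = contrib.copy()

# Biased source qb ~ imp * q, with statistical weights preserving the mean.
qb = imp * q / (imp * q).sum()
w = q / qb

def estimate(pdf, weights, n=200_000):
    bins = rng.choice(len(pdf), size=n, p=pdf)
    scores = weights[bins] * contrib[bins]
    return scores.mean(), scores.std()

R_analog, s_analog = estimate(q, np.ones_like(q))
R_cadis, s_cadis = estimate(qb, w)
R_exact = (q * contrib).sum()

assert abs(R_analog - R_exact) / R_exact < 0.1    # analog: correct but noisy
assert abs(R_cadis - R_exact) / R_exact < 1e-9    # exact adjoint: ~zero variance
assert s_cadis < s_analog
```

Replacing `imp` with a biased importance estimate keeps the estimator unbiased (the weights still compensate) but raises its variance, which is the efficiency effect the abstract studies.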
International Nuclear Information System (INIS)
Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.
1994-06-01
NESTLE is a FORTRAN77 code that solves the few-group neutron diffusion equation utilizing the Nodal Expansion Method (NEM). NESTLE can solve eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source- or eigenvalue-initiated transient problems. The code name NESTLE originates from this multi-problem solution capability, abbreviating Nodal Eigenvalue, Steady-state, Transient, Le core Evaluator. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups treated as thermal groups (i.e., with upscatter) if desired. Core geometries modelled include Cartesian and hexagonal. Three-, two- and one-dimensional models can be utilized with various symmetries. The non-linear iterative strategy associated with the NEM method is employed. An advantage of the non-linear iterative strategy is that NESTLE can be utilized to solve either the nodal or the finite difference method representation of the few-group neutron diffusion equation.
Double-Difference Global Adjoint Tomography
Orsvuran, R.; Bozdag, E.; Lei, W.; Tromp, J.
2017-12-01
The adjoint method allows us to incorporate full waveform simulations in inverse problems. Misfit functions play an important role in extracting the relevant information from seismic waveforms. In this study, our goal is to apply the Double-Difference (DD) methodology proposed by Yuan et al. (2016) to global adjoint tomography. Dense seismic networks, such as USArray, lead to higher-resolution seismic images underneath continents. However, the imbalanced distribution of stations and sources poses challenges in global ray coverage. We adapt double-difference multitaper measurements to global adjoint tomography. We normalize each DD measurement by its number of pairs, and if a measurement has no pair, as may frequently happen for data recorded at oceanic stations, classical multitaper measurements are used. As a result, the differential measurements and pair-wise weighting strategy help balance uneven global kernel coverage. Our initial experiments with minor- and major-arc surface waves show promising results, revealing more pronounced structure near dense networks while reducing the prominence of paths towards clusters of stations. We have started using this new measurement in global adjoint inversions, addressing azimuthal anisotropy in the upper mantle. Meanwhile, we are working on combining the double-difference approach with instantaneous phase measurements to emphasize contributions of scattered waves in global inversions and extending it to body waves. We will present our results and discuss challenges and future directions in the context of global tomographic inversions.
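The pairing and fallback logic can be sketched as follows (a simplified traveltime-difference misfit; the pair weighting shown is illustrative and does not reproduce the study's multitaper scheme):

```python
import numpy as np

def dd_misfit(dt_obs, dt_syn, pairs):
    """Double-difference misfit over station pairs, normalized per pair;
    stations without a pair fall back to a classical absolute measurement.
    The specific pair weighting here is an assumption for illustration."""
    n_pairs = np.zeros(len(dt_obs), dtype=int)
    for i, j in pairs:
        n_pairs[i] += 1
        n_pairs[j] += 1
    total = 0.0
    for i, j in pairs:
        # differential measurement: (obs_i - obs_j) vs (syn_i - syn_j)
        dd = (dt_obs[i] - dt_obs[j]) - (dt_syn[i] - dt_syn[j])
        total += 0.5 * dd**2 / (n_pairs[i] * n_pairs[j]) ** 0.5
    for i in np.flatnonzero(n_pairs == 0):  # e.g. isolated oceanic stations
        total += 0.5 * (dt_obs[i] - dt_syn[i]) ** 2
    return total

# Stations 0 and 1 form a pair; station 2 is isolated and scored classically.
dt_obs = np.array([1.0, 1.2, 3.0])
dt_syn = np.array([0.9, 1.3, 2.5])
m = dd_misfit(dt_obs, dt_syn, pairs=[(0, 1)])
```

Down-weighting stations that belong to many pairs is what keeps dense clusters such as USArray from dominating the global kernel coverage.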
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
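The scaling argument, sensitivities with respect to all parameters at the cost of one extra (adjoint) solve, can be made concrete on a small linear model (a generic sketch, not one of the NASA Langley codes):

```python
import numpy as np

# Discrete adjoint sketch: the state u solves A(p) u = b and the output is
# J = c^T u. Then dJ/dp_k = -lambda^T (dA/dp_k) u, where A^T lambda = c is
# a SINGLE extra solve, independent of the number of parameters.
rng = np.random.default_rng(2)
n, n_params = 8, 5
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned base
dA = [rng.standard_normal((n, n)) for _ in range(n_params)]
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def A(p):
    return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

p = 0.01 * rng.standard_normal(n_params)
u = np.linalg.solve(A(p), b)
lam = np.linalg.solve(A(p).T, c)                   # the one adjoint solve
grad_adj = np.array([-lam @ dAk @ u for dAk in dA])

# Check against central finite differences (2 * n_params extra solves).
eps = 1e-6
grad_fd = np.empty(n_params)
for k in range(n_params):
    dp = np.zeros(n_params)
    dp[k] = eps
    Jp = c @ np.linalg.solve(A(p + dp), b)
    Jm = c @ np.linalg.solve(A(p - dp), b)
    grad_fd[k] = (Jp - Jm) / (2 * eps)
assert np.allclose(grad_adj, grad_fd, rtol=1e-5, atol=1e-8)
```

The finite-difference check costs two solves per parameter; the adjoint route costs one extra solve total, which is why the approach remains tractable when each solve takes millions of compute hours.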
Adjoint current-based approaches to prostate brachytherapy optimization
International Nuclear Information System (INIS)
Roberts, J. A.; Henderson, D. L.
2009-01-01
This paper builds on previous work done at the University of Wisconsin-Madison to employ the adjoint concept of nuclear reactor physics in the so-called greedy heuristic of brachytherapy optimization. Whereas that previous work focused on the adjoint flux, i.e., the importance, this work includes use of the adjoint current to increase the amount of information available for optimization. Two current-based approaches were developed for 2-D problems, and each was compared to the most recent form of the flux-based methodology. The first method takes a treatment plan from the flux-based greedy heuristic and adjusts it via application of the current displacement, a vector displacement based on a combination of tissue (adjoint) and seed (forward) currents acting as forces on a seed. This method showed promise in improving key urethral and rectal dosimetric quantities. The second method uses the normed current displacement as the greedy criterion, such that seeds are placed in regions of least force. This method, coupled with the dose-update scheme, generated treatment plans with better target irradiation and sparing of the urethra and normal tissues than the flux-based approach. Tables of these parameters are given for both approaches. In summary, these preliminary results indicate that adjoint current methods are useful in optimization, and further work in 3-D should be performed. (authors)
Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators
Sykes, J. F.; Wilson, J. L.; Andrews, R. W.
1985-03-01
Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
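The head-sensitivity calculation described above can be reduced to a 1-D sketch (finite differences standing in for the paper's Galerkin finite elements; the conductivity, recharge and boundary heads are made-up numbers, not Paradox Basin data):

```python
import numpy as np

# 1-D steady confined flow: K h'' = -q on (0, 1), h(0)=h0, h(1)=h1.
# Performance measure J: the head at an observation node.
n = 51
dx = 1.0 / (n - 1)
K, q, h0, h1 = 2.0, 1.0, 10.0, 8.0

# Interior system A(K) h = b(K) for nodes 1..n-2 (A is linear in K).
A = K / dx**2 * (np.diag(-2.0 * np.ones(n - 2))
                 + np.diag(np.ones(n - 3), 1)
                 + np.diag(np.ones(n - 3), -1))
b = -q * np.ones(n - 2)
b[0] -= K / dx**2 * h0
b[-1] -= K / dx**2 * h1

h = np.linalg.solve(A, b)
obs = (n - 2) // 2                    # interior index of x = 0.5

# Adjoint solve: A^T lam = e_obs; then dJ/dK = lam^T (db/dK - (dA/dK) h).
e = np.zeros(n - 2)
e[obs] = 1.0
lam = np.linalg.solve(A.T, e)
dA = A / K                            # valid because A is proportional to K
db = np.zeros(n - 2)
db[0] = -h0 / dx**2
db[-1] = -h1 / dx**2
dJdK = lam @ (db - dA @ h)

# Check against a direct perturbation of K.
def head_at_obs(Kp):
    bp = -q * np.ones(n - 2)
    bp[0] -= Kp / dx**2 * h0
    bp[-1] -= Kp / dx**2 * h1
    return np.linalg.solve(dA * Kp, bp)[obs]

eps = 1e-6
fd = (head_at_obs(K + eps) - head_at_obs(K - eps)) / (2 * eps)
assert abs(dJdK - fd) < 1e-6
```

The same primary-plus-adjoint pattern extends to 2-D Galerkin discretizations and to other performance measures (travel times, boundary fluxes) by changing only the right-hand side of the adjoint equation.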
Probability density adjoint for sensitivity analysis of the Mean of Chaos
Energy Technology Data Exchange (ETDEWEB)
Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
2014-08-01
Sensitivity analysis, especially adjoint-based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
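The quantity targeted here, the mean with respect to the invariant measure, can be illustrated on a one-dimensional chaotic map (a logistic-map sketch with an assumed parameter value, not the paper's own examples): by ergodicity, the long-time average converges to the mean of the invariant density regardless of the initial condition, which is what makes it a well-defined but hard-to-differentiate objective.

```python
# Long-time average of a chaotic logistic map: by ergodicity it converges to the
# mean of the invariant density, independent of the initial condition.
r = 3.9  # chaotic regime (assumed value)

def long_time_mean(x0, n_burn=1000, n_avg=200000):
    x = x0
    for _ in range(n_burn):           # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_avg):
        x = r * x * (1.0 - x)
        total += x
    return total / n_avg

m1 = long_time_mean(0.2)
m2 = long_time_mean(0.7)
# the two averages agree to statistical accuracy despite different seeds
```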
Weak self-adjoint differential equations
International Nuclear Information System (INIS)
Gandarias, M L
2011-01-01
The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006 J. Math. Anal. Appl. 318 742-57; 2007 Arch. ALGA 4 55-60). In Ibragimov (2007 J. Math. Anal. Appl. 333 311-28), a general theorem on conservation laws was proved. In this paper, we generalize the concepts of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. We find a class of weak self-adjoint quasi-linear parabolic equations. The property of being weak self-adjoint is important for constructing conservation laws associated with symmetries of the differential equation. (fast track communication)
Variation estimation of the averaged cross sections in the direct and adjoint fluxes
International Nuclear Information System (INIS)
Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
1995-01-01
There are several applications of perturbation theory to specific problems of reactor physics, such as nonuniform fuel burnup, nonuniform poison accumulation and the evaluation of Doppler effects on reactivity. The neutron fluxes obtained from the solutions of the direct and adjoint diffusion equations are used in these applications. The adjoint diffusion equation has customarily used group constants averaged over the energy-dependent direct neutron flux, which is not theoretically consistent. In this paper, a method is presented to calculate the energy-dependent adjoint neutron flux and thus obtain the averaged group constants to be used in the adjoint diffusion equation. The method is based on the solution of the adjoint neutron balance equations, which were derived for a two-region cell. (author). 5 refs, 2 figs, 1 tab
Solution of the mathematical adjoint equations for an interface current nodal formulation
International Nuclear Information System (INIS)
Yang, W.S.; Taiwo, T.A.; Khalil, H.
1994-01-01
Two techniques for solving the mathematical adjoint equations of an interface current nodal method are described. These techniques are the ''similarity transformation'' procedure and a direct solution scheme. A theoretical basis is provided for the similarity transformation procedure originally proposed by Lawrence. It is shown that the matrices associated with the mathematical and physical adjoint equations are similar to each other for the flat transverse leakage approximation but not for the quadratic leakage approximation. It is also shown that a good approximate solution of the mathematical adjoint for the quadratic transverse leakage approximation is obtained by applying the similarity transformation for the flat transverse leakage approximation to the physical adjoint solution. The direct solution scheme, which was developed as an alternative to the similarity transformation procedure, yields the correct mathematical adjoint solution for both flat and quadratic transverse leakage approximations. In this scheme, adjoint nodal equations are cast in a form very similar to that of the forward equations by employing a linear transformation of the adjoint partial currents. This enables the use of the forward solution algorithm with only minor modifications for solving the mathematical adjoint equations. By using the direct solution scheme as a reference method, it is shown that while the results computed with the similarity transformation procedure are approximate, they are sufficiently accurate for calculations of global and local reactivity changes resulting from coolant voiding in a liquid-metal reactor
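The central facts exploited above, that the mathematical adjoint of a discretized forward operator is its transpose, shares its spectrum, and supplies the weighting needed for first-order reactivity estimates, can be sketched on a small stand-in matrix (a random positive matrix, not an actual nodal operator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n)) + 0.1   # positive entries: dominant eigenvalue is real (Perron)

# forward problem: A phi = k phi
w, V = np.linalg.eig(A)
i = int(np.argmax(w.real))
k, phi = w[i].real, V[:, i].real

# mathematical adjoint: A.T phi_adj = k phi_adj, same spectrum as A
wa, Va = np.linalg.eig(A.T)
j = int(np.argmax(wa.real))
k_adj, phi_adj = wa[j].real, Va[:, j].real

# first-order perturbation estimate, the reason the mathematical adjoint is needed:
#   dk ~ (phi_adj @ dA @ phi) / (phi_adj @ phi)
dA = 1e-6 * rng.random((n, n))
dk_pred = phi_adj @ dA @ phi / (phi_adj @ phi)
k_new = np.max(np.linalg.eigvals(A + dA).real)
```

The prediction dk_pred matches the recomputed eigenvalue shift to second order in dA, which is why reactivity-change calculations need the transpose system rather than the physical adjoint.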
Self-adjoint extensions and spectral analysis in the Calogero problem
International Nuclear Information System (INIS)
Gitman, D M; Tyutin, I V; Voronov, B L
2010-01-01
In this paper, we present a mathematically rigorous quantum-mechanical treatment of a one-dimensional motion of a particle in the Calogero potential αx⁻². Although the problem is quite old and well studied, we believe that our consideration based on a uniform approach to constructing a correct quantum-mechanical description for systems with singular potentials and/or boundaries, proposed in our previous works, adds some new points to its solution. To demonstrate that a consideration of the Calogero problem requires mathematical accuracy, we discuss some 'paradoxes' inherent in the 'naive' quantum-mechanical treatment. Using a self-adjoint extension method, we construct and study all possible self-adjoint operators (self-adjoint Hamiltonians) associated with a formal differential expression for the Calogero Hamiltonian. In particular, we discuss a spontaneous scale-symmetry breaking associated with self-adjoint extensions. A complete spectral analysis of all self-adjoint Hamiltonians is presented.
Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame
Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.
2010-01-01
Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.
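The complex-variable consistency check mentioned above works as follows: for a real-analytic function, a tiny imaginary perturbation yields a derivative free of subtractive cancellation, so it can validate adjoint sensitivities to near machine precision. A generic sketch (an arbitrary smooth test function standing in for the discretized solver):

```python
import numpy as np

def f(x):
    # stand-in smooth objective (in practice, the discretized flow solver output)
    return np.exp(x) * np.sin(x) / (1.0 + x * x)

x0 = 0.7

# complex-step derivative: no subtraction of nearly equal numbers, so h can be tiny
h = 1e-30
d_cs = np.imag(f(x0 + 1j * h)) / h

# central finite difference for comparison (step-size limited by cancellation)
eps = 1e-6
d_fd = (f(x0 + eps) - f(x0 - eps)) / (2.0 * eps)
```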
New Monte Carlo approach to the adjoint Boltzmann equation
International Nuclear Information System (INIS)
De Matteis, A.; Simonini, R.
1978-01-01
A class of stochastic models for the Monte Carlo integration of the adjoint neutron transport equation is described. Some current general methods are brought within this class, thus preparing the ground for subsequent comparisons. Monte Carlo integration of the adjoint Boltzmann equation can be seen as a simulation of the transport of mathematical particles with reaction kernels not normalized to unity. This last feature is a source of difficulty: It can influence the variance of the result negatively and also often leads to preparation of special ''libraries'' consisting of tables of normalization factors as functions of energy, presently used by several methods. These are the two main points that are discussed and that are taken into account to devise a nonmultigroup method of solution for a certain class of problems. Reactions considered in detail are radiative capture, elastic scattering, discrete levels and continuum inelastic scattering, for which the need for tables has been almost completely eliminated. The basic policy pursued to avoid a source of statistical fluctuations is to try to make the statistical weight of the traveling particle dependent only on its starting and current energies, at least in simple cases. The effectiveness of the sampling schemes proposed is supported by numerical comparison with other more general adjoint Monte Carlo methods. Computation of neutron flux at a point by means of an adjoint formulation is the problem taken as a test for numerical experiments. Very good results have been obtained in the difficult case of resonant cross sections
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Effect of lattice-level adjoint-weighting on the kinetics parameters of CANDU reactors
International Nuclear Information System (INIS)
Nichita, Eleodor
2009-01-01
Space-time kinetics calculations for CANDU reactors are routinely performed using the Improved Quasistatic (IQS) method. The IQS method calculates kinetics parameters such as the effective delayed-neutron fraction and generation time using adjoint weighting. In the current implementation of IQS, the direct flux, as well as the adjoint, is calculated using a two-group cell-homogenized reactor model which is inadequate for capturing the effect of the softer energy spectrum of the delayed neutrons. Additionally, there may also be fine spatial effects that are lost because the intra-cell adjoint shape is ignored. The purpose of this work is to compare the kinetics parameters calculated using the two-group cell-homogenized model with those calculated using lattice-level fine-group heterogeneous adjoint weighting and to assess whether the differences are large enough to justify further work on incorporating lattice-level adjoint weighting into the IQS method. A second goal is to evaluate whether the use of a fine-group cell-homogenized lattice-level adjoint, such as is the current practice for Light Water Reactors (LWRs), is sufficient to capture the lattice effects in question. It is found that, for CANDU lattices, the generation time is almost unaffected by the type of adjoint used to calculate it, but that the effective delayed-neutron fraction is affected by the type of adjoint used. The effective delayed-neutron fraction calculated using the two-group cell-homogenized adjoint is 5.2% higher than the 'best' effective delayed-neutron fraction value obtained using the detailed lattice-level fine-group heterogeneous adjoint. The effective delayed-neutron fraction calculated using the fine-group cell-homogenized adjoint is only 1.7% higher than the 'best' effective delayed-neutron fraction value but is still not equal to it. This situation is different from that encountered in LWRs where weighting by a fine-group cell-homogenized adjoint is sufficient to calculate the
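The adjoint-weighting effect on the effective delayed-neutron fraction can be sketched in two groups with illustrative (assumed, not CANDU-specific) data: because the delayed spectrum is softer than the prompt one, weighting by an adjoint that values thermal neutrons more shifts β_eff away from the flat-weighted value, which simply recovers β.

```python
import numpy as np

# two-group illustrative data (assumed values, not CANDU lattice constants)
nu_sigf = np.array([0.008, 0.150])    # nu*Sigma_f per group (fast, thermal)
chi_p   = np.array([1.000, 0.000])    # prompt fission spectrum
chi_d   = np.array([0.964, 0.036])    # delayed fission spectrum: softer
beta    = 0.0065                      # physical delayed-neutron fraction
chi     = (1.0 - beta) * chi_p + beta * chi_d
phi     = np.array([1.00, 0.45])      # forward flux (assumed)
phi_adj = np.array([1.00, 1.60])      # adjoint flux: higher thermal importance (assumed)

F   = np.outer(chi, nu_sigf)          # total production operator
F_d = beta * np.outer(chi_d, nu_sigf) # delayed production operator

# adjoint-weighted effective fraction versus the flat (unweighted) ratio
beta_eff  = (phi_adj @ F_d @ phi) / (phi_adj @ F @ phi)
beta_flat = (np.ones(2) @ F_d @ phi) / (np.ones(2) @ F @ phi)
```

With flat weighting the spectra integrate out and beta_flat equals beta exactly; the adjoint weighting is what produces the spectrum-dependent correction discussed above.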
Variable selection by lasso-type methods
Directory of Open Access Journals (Sweden)
Sohail Chand
2011-09-01
Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we provide an explanation of how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso is identical to that of the lasso.
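The weighting trick at the heart of the adaptive lasso, introduce weights from a pilot estimate and absorb them into the predictors so that an ordinary lasso solver can be reused, can be sketched with a plain coordinate-descent lasso (a minimal illustration on synthetic data; the pilot estimator, weight exponent and penalty level are assumptions):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Plain lasso via cyclic coordinate descent with soft thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                                 # residual for beta = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]               # remove coordinate j's contribution
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]               # restore with the updated value
    return beta

def adaptive_lasso(X, y, lam, gamma=1.0):
    b_pilot = np.linalg.lstsq(X, y, rcond=None)[0]   # pilot OLS estimate
    w = 1.0 / (np.abs(b_pilot) ** gamma + 1e-12)     # adaptive weights
    return lasso_cd(X / w, y, lam) / w               # rescale columns, map back

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0, 1.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
b = adaptive_lasso(X, y, lam=5.0)
# coefficients that are truly zero are knocked out exactly,
# while the signals survive nearly unshrunk: the oracle behaviour
```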
Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.
2015-12-01
Understanding CH4 emissions from wetlands and lakes is critical for estimating the Arctic carbon balance under rapidly warming climatic conditions. To date, our knowledge of these two CH4 sources is built almost solely on upscaling discontinuous measurements in limited areas to the whole region. Many studies have indicated that the controls on CH4 emissions from wetlands and lakes, including soil moisture, lake morphology and substrate content and quality, are notoriously heterogeneous, so the accuracy of such simple estimates is questionable. Here we apply a high spatial resolution atmospheric inverse model (nested-grid GEOS-Chem Adjoint) over the Arctic, integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments using different wetland CH4 emission schemes and satellite and surface measurements show that the total amount of CH4 emitted from the Arctic wetlands is well constrained, but the spatial distribution of CH4 emissions is sensitive to priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of measurements need to be improved to achieve more accurate high-resolution estimates.
Toward regional-scale adjoint tomography in the deep earth
Masson, Y.; Romanowicz, B. A.
2013-12-01
Thanks to the development of efficient numerical computation methods, such as the Spectral Element Method (SEM), and to the increasing power of computer clusters, it is now possible to obtain regional-scale images of the Earth's interior using adjoint tomography (e.g. Tape, C., et al., 2009). For now, these tomographic models are limited to the upper layers of the Earth, i.e., they provide us with high-resolution images of the crust and the upper part of the mantle. Given the gigantic amount of computation it represents, obtaining similar models at the global scale (i.e. images of the entire Earth) seems out of reach at the moment. Furthermore, it is likely that the first generation of such global adjoint tomographic models will have a resolution significantly lower than the current regional models. In order to image regions of interest in the deep Earth, such as plumes, slabs or large low shear velocity provinces (LLSVPs), while keeping the computation tractable, we are developing new tools that will allow us to perform regional-scale adjoint tomography at arbitrary depths. In a recent study (Masson et al., 2013), we showed that a numerical equivalent of the time reversal mirrors used in experimental acoustics makes it possible to confine the wave propagation computations (i.e. using SEM simulations) inside the region to be imaged. With this ability to limit wave propagation modeling to a region of interest, obtaining the adjoint sensitivity kernels needed for tomographic imaging is only two steps further. First, the local wavefield modeling needs to be coupled with field extrapolation techniques in order to obtain synthetic seismograms at the surface of the Earth. These seismograms will account for the 3D structure inside the region of interest in a quasi-exact manner. We will present preliminary results where the field extrapolation is performed using Green's functions computed in a 1D Earth model with the Direct Solution Method (DSM). Once synthetic seismograms
Gait variability: methods, modeling and meaning
Directory of Open Access Journals (Sweden)
Hausdorff Jeffrey M
2005-07-01
The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
The Laplace transformation of adjoint transport equations
International Nuclear Information System (INIS)
Hoogenboom, J.E.
1985-01-01
A clarification is given of the difference between the equation adjoint to the Laplace-transformed time-dependent transport equation and the Laplace-transformed time-dependent adjoint transport equation. Proper procedures are derived to obtain the Laplace transform of the instantaneous detector response. (author)
The adjoint space in heat transport theory
International Nuclear Information System (INIS)
Dam, H. van; Hoogenboom, J.E.
1980-01-01
The mathematical concept of adjoint operators is applied to the heat transport equation and an adjoint equation is defined with a detector function as source term. The physical meaning of the solutions for the latter equation is outlined together with an application in the field of perturbation analysis. (author)
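The detector-function construction has a direct discrete analogue: with conduction operator A, heat source q and detector weights sigma, the response sigmaᵀT equals qᵀT*, where AᵀT* = sigma. A small sketch on a hypothetical 1-D conduction grid (all values assumed):

```python
import numpy as np

n = 40
# 1-D steady conduction operator with fixed-temperature boundaries
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.zeros(n);     q[5] = 1.0       # localized heat source
sigma = np.zeros(n); sigma[30] = 1.0  # "detector": temperature at node 30

T     = np.linalg.solve(A, q)         # forward problem
T_adj = np.linalg.solve(A.T, sigma)   # adjoint problem: detector acts as source

R_fwd = sigma @ T                     # detector response via the forward field
R_adj = q @ T_adj                     # the same response via the adjoint field
```

This duality is what makes the adjoint field useful for perturbation analysis: once T_adj is known, the response for any modified source follows from a single dot product, with no new forward solve.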
Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids
Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.
2009-01-01
An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
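For the backward-difference family, the discrete adjoint amounts to a reverse sweep through the stored time history. A scalar sketch (BDF1 on u' = -p u with an end-time objective; all values are assumed, and the real solver is of course far more elaborate):

```python
# forward march: backward Euler for u' = -p*u, objective J = u_N^2
p, dt, nsteps = 0.8, 0.1, 50
u = [1.0]
for _ in range(nsteps):
    u.append(u[-1] / (1.0 + p * dt))
J = u[-1] ** 2

# reverse (adjoint) sweep over the stored history
lam = 2.0 * u[-1]                     # dJ/du_N seeds the sweep
dJdp = 0.0
for n in range(nsteps - 1, -1, -1):
    # step map: u_{n+1} = u_n / (1 + p*dt)
    # accumulate d/dp at fixed u_n, then propagate lam back to u_n
    dJdp += lam * (-u[n] * dt / (1.0 + p * dt) ** 2)
    lam /= (1.0 + p * dt)

# finite-difference verification of the adjoint gradient
def run(pp):
    v = 1.0
    for _ in range(nsteps):
        v /= (1.0 + pp * dt)
    return v ** 2

eps = 1e-6
dJdp_fd = (run(p + eps) - run(p - eps)) / (2.0 * eps)
```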
Estimation of Adjoint-Weighted Kinetics Parameters in Monte Carlo Wielandt Calculations
International Nuclear Information System (INIS)
Choi, Sung Hoon; Shim, Hyung Jin
2013-01-01
The effective delayed neutron fraction, β_eff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics parameter estimation methods using the self-consistent adjoint flux calculated in the MC forward simulations have been developed and successfully applied for research reactor analyses. However, these adjoint estimation methods, based on the cycle-by-cycle genealogical table, require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation in which the pedigree of a single history is utilized by applying the MC Wielandt method. The effectiveness of the new method is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and the Godiva critical facility.
Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system
Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory
2016-11-01
Unsteady fluid flow simulations like large eddy simulation are crucial in capturing key physics in turbomachinery applications like separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed. It enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics which causes conventional adjoint solutions to diverge exponentially, resulting in the corruption of the sensitivities obtained from the adjoint solutions for long-time simulations. In this presentation, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method which aims to stabilize the adjoint solution and maintain accurate sensitivities. Preliminary results obtained from the supercomputer Mira will be shown in the presentation.
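The exponential divergence that corrupts conventional adjoints in chaotic flows is easy to reproduce on a one-dimensional chaotic map (a logistic-map sketch with an assumed parameter, not the turbine solver): the tangent solution, and by transposition its adjoint counterpart, grows roughly like e^(lambda*n), with lambda the Lyapunov exponent.

```python
# tangent (and, by transposition, adjoint) solutions of a chaotic map diverge
r = 3.9                   # chaotic regime of the logistic map (assumed value)
x, v = 0.3, 1.0           # v carries the tangent sensitivity d(x_n)/d(x_0)
history = []
for n in range(200):
    v *= r * (1.0 - 2.0 * x)      # linearized (tangent) map
    x = r * x * (1.0 - x)
    history.append(abs(v))
# |v| grows roughly like exp(lambda * n); for long-time objectives the raw
# sensitivities become useless, motivating stabilization such as the
# localized adjoint viscosity injection described above
```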
Self-consistent adjoint analysis for topology optimization of electromagnetic waves
Deng, Yongbo; Korvink, Jan G.
2018-05-01
In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator applied to the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on the self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting the split variables into the wave equations, yielding coupled equations equivalent to the original wave equations, with the infinite free space truncated by perfectly matched layers. The topology optimization problems for electromagnetic waves are then posed on real functional spaces instead of complex ones; the adjoint analysis is carried out on real functional spaces, which removes the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.
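The real/imaginary splitting at the heart of the self-consistent formulation has a simple linear-algebra analogue: a complex system (A_r + iA_i)(x_r + ix_i) = b_r + ib_i is equivalent to a coupled real block system, on which a purely real-valued adjoint analysis applies. A sketch with a random stand-in matrix (not an actual wave-equation discretization):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
Ar = rng.standard_normal((n, n)) + 4.0 * np.eye(n)  # real part, kept well conditioned
Ai = rng.standard_normal((n, n))                    # imaginary part
br = rng.standard_normal(n)
bi = rng.standard_normal(n)

# solve in complex arithmetic
x = np.linalg.solve(Ar + 1j * Ai, br + 1j * bi)

# equivalent coupled real system: [[Ar, -Ai], [Ai, Ar]] [xr; xi] = [br; bi]
K = np.block([[Ar, -Ai], [Ai, Ar]])
z = np.linalg.solve(K, np.concatenate([br, bi]))
```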
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated on hydrothermodynamics and atmospheric chemistry models [1,2]. Developing the existing methods for constructing numerical schemes that possess the property of total approximation for the operators of multiscale process models, we have devised a new variational technique which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations with the initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied within the variational principle and the schemes of decomposition and splitting over the set of physical processes, for each coordinate direction successively at each time step. For each direction within a finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed; these solutions of the adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each
GPU-accelerated adjoint algorithmic differentiation
Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe
2016-03-01
Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mitigated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
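The taping and backpropagation being parallelized here can be sketched in a few lines of scalar reverse-mode AD (a toy explicit tape, nothing like the vectorized GPU implementation described): the forward pass records local partial derivatives, and the reverse sweep over the tape accumulates adjoints.

```python
import math

class Var:
    """A scalar value participating in reverse-mode AD."""
    def __init__(self, value):
        self.value = value
        self.adjoint = 0.0

tape = []  # each entry: (output Var, [(input Var, local partial), ...])

def _record(out, partials):
    tape.append((out, partials))

def add(a, b):
    out = Var(a.value + b.value)
    _record(out, [(a, 1.0), (b, 1.0)])
    return out

def mul(a, b):
    out = Var(a.value * b.value)
    _record(out, [(a, b.value), (b, a.value)])
    return out

def exp(a):
    out = Var(math.exp(a.value))
    _record(out, [(a, out.value)])
    return out

def backprop(result):
    """Reverse sweep: propagate adjoints from the result back through the tape."""
    result.adjoint = 1.0
    for out, partials in reversed(tape):
        for inp, d in partials:
            inp.adjoint += d * out.adjoint

x = Var(1.5); y = Var(0.5)
f = add(mul(x, y), exp(x))   # f = x*y + e^x
backprop(f)
# x.adjoint now holds df/dx = y + e^x, y.adjoint holds df/dy = x
```

The memory pressure discussed in the abstract is visible even here: the tape grows with every recorded operation, which is exactly what vector-level intrinsics shrink.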
Global Linear Representations of Nonlinear Systems and the Adjoint Map
Banks, S.P.
1988-01-01
In this paper we shall study the global linearization of nonlinear systems on a manifold by two methods. The first consists of an expansion of the vector field in the space of square integrable vector fields. In the second method we use the adjoint representation of the Lie algebra vector fields to obtain an infinite-dimensional matrix representation of the system. A connection between the two approaches will be developed.
Characterization and uniqueness of distinguished self-adjoint extensions of dirac operators
International Nuclear Information System (INIS)
Klaus, M.; Wuest, R.; Princeton Univ., NJ
1979-01-01
Distinguished self-adjoint extensions of Dirac operators are characterized by Nenciu and constructed by means of cut-off potentials by Wuest. In this paper it is shown that the existence and a more explicit characterization of Nenciu's self-adjoint extensions can be obtained as a consequence of results of the cut-off method, that these extensions are the same as the extensions constructed with cut-off potentials, and that they are unique in some sense. (orig.)
International Nuclear Information System (INIS)
Stripling, H.F.; Anitescu, M.; Adams, M.L.
2013-01-01
Highlights: ► We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. ► We validate use of the adjoint for computing both sensitivity to uncertain inputs and for estimating global time discretization error. ► Flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. ► Such flexibility is crucial for high performance computing applications. -- Abstract: We develop a general framework for computing the adjoint variable to nuclear engineering problems governed by a set of differential–algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint
Development and validation of continuous energy adjoint-weighted calculations
International Nuclear Information System (INIS)
Truchet, Guillaume
2015-01-01
A key issue in present-day reactor physics is to propagate input data uncertainties (e.g. nuclear data, manufacturing tolerances, etc.) to the final results of nuclear codes (e.g. k(eff), reaction rates, etc.). In order to propagate uncertainties, one typically assumes small variations around a reference and first evaluates sensitivity profiles. The problem is that nuclear Monte Carlo codes are not - or were not until very recently - able to straightforwardly process such sensitivity profiles, even though they are considered reference codes. The first goal of this PhD thesis is to implement a method to calculate k(eff)-sensitivity profiles to nuclear data or any perturbations in TRIPOLI-4, the CEA Monte Carlo neutron transport code. To achieve this goal, a method has first been developed to calculate the adjoint flux using the Iterated Fission Probability (IFP) principle, which states that the adjoint flux at a given phase space point is proportional to the neutron importance in a just critical core after several power iterations. Thanks to our developments, it has been made possible, for the first time, to calculate the continuous adjoint flux for an actual and complete reactor core configuration. From that new feature, we have elaborated a new method able to directly apply the exact perturbation theory in Monte Carlo codes. Exact perturbation theory does not rely on small variations, which makes it possible to calculate very complex experiments. Finally, after a deep analysis of the IFP method, this PhD thesis also reproduces and improves an already used method to calculate adjoint-weighted kinetic parameters as well as reference migration areas. (author)
The adjoint string at finite temperature
International Nuclear Information System (INIS)
Damgaard, P.H.
1986-10-01
Expectations for the behavior of the adjoint string at finite temperature are presented. In the Migdal-Kadanoff approximation, a real-space renormalization group study of the effective Polyakov-like action predicts a deconfinement-like crossover for adjoint sources at a temperature slightly below the deconfinement temperature of fundamental sources. This prediction is compared with a Monte Carlo simulation of SU(2) lattice gauge theory on an 8^3 x 2 lattice. (orig.)
Constrained variable projection method for blind deconvolution
International Nuclear Information System (INIS)
Cornelio, A; Piccolomini, E Loli; Nagy, J G
2012-01-01
This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, both in recovering the blurring operator and the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
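The separable structure the abstract relies on can be illustrated with a toy one-dimensional model. The model y = a*exp(-b*x), the data, and the grid search below are all hypothetical stand-ins for the image-deblurring problem: for each trial value of the nonlinear variable the linear variable is eliminated in closed form, which is the essence of the variable projection idea.

```python
import math

# Synthetic noiseless data from the assumed model y = a*exp(-b*x), a=2, b=0.7.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]

def projected_residual(b):
    """For fixed nonlinear b, solve the linear a in closed form and
    return (residual sum of squares, optimal a)."""
    phi = [math.exp(-b * x) for x in xs]
    a = sum(p * y for p, y in zip(phi, ys)) / sum(p * p for p in phi)
    return sum((a * p - y) ** 2 for p, y in zip(phi, ys)), a

# Crude grid search over the single remaining nonlinear variable.
best_b = min((k / 1000.0 for k in range(1, 2001)),
             key=lambda b: projected_residual(b)[0])
best_a = projected_residual(best_b)[1]
```

A real variable projection solver would use Gauss-Newton on the projected functional instead of a grid search, but the reduction from two coupled variables to one is the same.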
International Nuclear Information System (INIS)
Muldoon, Frank H.; Kuhlmann, Hendrik C.
2015-01-01
Highlights: • Suppression of oscillations in a thermocapillary flow is addressed by optimization. • The gradient of the objective function is obtained by solving the adjoint equations. • The issue of choosing an objective function is investigated. - Abstract: The problem of suppressing flow oscillations in a thermocapillary flow is addressed using a gradient-based control strategy. The physical problem addressed is the “open boat” process of crystal growth, the flow in which is driven by thermocapillary and buoyancy effects. The problem is modeled by the two-dimensional unsteady incompressible Navier–Stokes and energy equations under the Boussinesq approximation. The goal of the control is to suppress flow oscillations which arise when the driving forces are such that the flow becomes unsteady. The control is a spatially and temporally varying temperature gradient boundary condition at the free surface. The control which minimizes the flow oscillations is found using a conjugate gradient method, where the gradient of the objective function with respect to the control variables is obtained from solving a set of adjoint equations. The issue of choosing an objective function that can be optimized in a computationally efficient manner, and whose optimization provides a control that damps the flow oscillations, is investigated. Almost complete suppression of the flow oscillations is obtained for certain choices of the objective function.
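The gradient computation described above (one adjoint solve per gradient evaluation, however many control variables there are) can be sketched on a hypothetical 2x2 stand-in for the discretized flow equations; the matrix, control vector, and target below are illustrative assumptions, not the paper's model.

```python
# State equation A x = b + u*c with scalar control u; objective
# J(u) = 0.5*||x - x_t||^2. The adjoint solve A^T lam = (x - x_t)
# gives the gradient dJ/du = lam . c with one extra linear solve.
A = [[3.0, 1.0], [0.5, 2.0]]   # deliberately non-symmetric
b = [1.0, 0.0]
c = [0.5, 1.0]
x_t = [1.0, 1.0]

def solve2(M, r):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def state(u):
    return solve2(A, [b[0] + u * c[0], b[1] + u * c[1]])

def objective(u):
    x = state(u)
    return 0.5 * ((x[0] - x_t[0]) ** 2 + (x[1] - x_t[1]) ** 2)

def adjoint_gradient(u):
    x = state(u)
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # transpose of A
    lam = solve2(At, [x[0] - x_t[0], x[1] - x_t[1]])
    return lam[0] * c[0] + lam[1] * c[1]

grad = adjoint_gradient(0.3)
fd = (objective(0.3 + 1e-6) - objective(0.3 - 1e-6)) / 2e-6   # finite-difference check
```

The finite-difference value is only a verification device; in the paper's setting the adjoint gradient feeds a conjugate gradient update of the boundary control.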
Morrow, Rosemary; de Mey, Pierre
1995-12-01
The flow characteristics in the region of the Azores Current are investigated by assimilating TOPEX/POSEIDON and ERS 1 altimeter data into the multilevel Harvard quasigeostrophic (QG) model with open boundaries (Miller et al., 1983) using an adjoint variational scheme (Moore, 1991). The study site lies in the path of the Azores Current, where a branch retroflects to the south in the vicinity of the Madeira Rise. The region was the site of an intensive field program in 1993, SEMAPHORE. We had two main aims in this adjoint assimilation project. The first was to see whether the adjoint method could be applied locally to optimize an initial guess field derived from the continuous assimilation of altimetry data using optimal interpolation (OI). The second aim was to assimilate a variety of different data sets and evaluate their importance in constraining our QG model. The adjoint assimilation of surface data was effective in optimizing the initial conditions from OI. After 20 iterations the cost function was generally reduced by 50-80%, depending on the chosen data constraints. The primary adjustment process was via the barotropic mode. Altimetry proved to be a good constraint on the variable flow field, in particular for constraining the barotropic field. The excellent data quality of the TOPEX/POSEIDON (T/P) altimeter data provided smooth and reliable forcing, but for our mesoscale study in a region of long decorrelation times O(30 days), the spatial coverage from the combined T/P and ERS 1 data sets was more important for constraining the solution and providing stable flow at all levels. Surface drifters provided an excellent constraint on both the barotropic and baroclinic model fields. More importantly, the drifters provided a reliable measure of the mean field. Hydrographic data were also applied as a constraint; in general, hydrography provided a weak but effective constraint on the vertical Rossby modes in the model. Finally, forecasts run over a 2-month period
Risk assessment of groundwater level variability using variable Kriging methods
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are implemented in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability that the groundwater level falls below a predefined minimum value that could cause significant problems in groundwater resources availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal; in combination with the Kriging methodologies it provides the most accurate cross-validation estimates. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the risk that certain locations exhibit with respect to a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
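A minimal Ordinary Kriging estimator, the first of the three methodologies, can be sketched as follows. The 1-D sample points, the exponential covariance model (rather than the Matérn model of the study), and its sill/range values are all illustrative assumptions; the sketch only shows the structure of the Kriging system, with a Lagrange multiplier enforcing unbiasedness.

```python
import math

# Hypothetical 1-D head measurements at three locations.
pts = [0.0, 1.0, 3.0]
vals = [10.0, 12.0, 11.0]

def cov(h, sill=1.0, rng=2.0):
    """Assumed exponential covariance model."""
    return sill * math.exp(-abs(h) / rng)

def solve(M, r):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [r[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        for k in range(col + 1, n):
            f = A[k][col] / A[col][col]
            for j in range(col, n + 1):
                A[k][j] -= f * A[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def ordinary_kriging(x0):
    n = len(pts)
    # Kriging system: covariances plus a Lagrange row forcing weights to sum to 1.
    M = [[cov(pts[i] - pts[j]) for j in range(n)] + [1.0] for i in range(n)]
    M.append([1.0] * n + [0.0])
    rhs = [cov(x0 - p) for p in pts] + [1.0]
    w = solve(M, rhs)[:n]
    return sum(wi * v for wi, v in zip(w, vals)), w

est, weights = ordinary_kriging(2.0)
est_at_data = ordinary_kriging(1.0)[0]   # Kriging is exact at data points (no nugget)
```

The Residual and Indicator Kriging variants of the study reuse the same linear system, applied to detrended residuals or to indicator-transformed data respectively.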
Global adjoint tomography: first-generation model
Bozdağ, Ebru
2016-09-23
We present the first-generation global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3-D anelastic crust & mantle structure, topography & bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3-D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 named 'Titan', a computer with 18 688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3-D mantle model S362ANI with 3-D crustal model Crust2.0. We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used 'crustal corrections'. We used data from 253 earthquakes in the magnitude range 5.8 ≤ M ≤ 7.0. We started inversions by combining ~30 s body-wave data with ~60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ~17 s body waves with ~45 s surface waves. We started using 180 min long seismograms after the 12th iteration and assimilated minor- and major-arc body and surface waves. The 15th iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction east of the Scotia Plate, which does not exist in the starting model. Point-spread function tests demonstrate that we are approaching the
Vich, M.; Romero, R.; Richard, E.; Arbogast, P.; Maynard, K.
2010-09-01
Heavy precipitation events occur regularly in the western Mediterranean region. These events often have a high impact on society due to economic and personal losses. Improving the mesoscale numerical forecasts of these events can help prevent or minimize their impact on society. In previous studies, two ensemble prediction systems (EPSs) based on perturbing the model initial and boundary conditions were developed and tested for a collection of high-impact MEDEX cyclonic episodes. These EPSs perturb the initial and boundary potential vorticity (PV) field through a PV inversion algorithm. This technique ensures modifications of all the meteorological fields without compromising the mass-wind balance. One EPS introduces the perturbations along the zones of the three-dimensional PV structure presenting the locally most intense values and gradients of the field (a semi-objective choice, PV-gradient), while the other perturbs the PV field over the sensitivity zones calculated with the MM5 adjoint model (an objective method, PV-adjoint). The PV perturbations are set from a PV error climatology (PVEC) that characterizes typical PV errors in the ECMWF forecasts, both in intensity and displacement. The intensity and displacement perturbation of the PV field is chosen randomly, while its location is given by the perturbation zones defined in each ensemble generation method. Encouraged by the good results obtained by these two EPSs that perturb the PV field, a new approach based on a manual perturbation of the PV field has been tested and compared with the previous results. This technique uses satellite water vapor (WV) observations to guide the correction of initial PV structures. The correction of the PV field intends to improve the match between the PV distribution and the WV image, taking advantage of the relation between dark and bright features of WV images and PV anomalies, under some assumptions. Afterwards, the PV inversion algorithm is applied to run
The Hausdorff measure of chaotic sets of adjoint shift maps
Energy Technology Data Exchange (ETDEWEB)
Wang Huoyun [Department of Mathematics of Guangzhou University, Guangzhou 510006 (China)]. E-mail: wanghuoyun@sina.com; Song Wangan [Department of Computer, Huaibei Coal Industry Teacher College, Huaibei 235000 (China)
2006-11-15
In this paper, the size of chaotic sets of adjoint shift maps is estimated by Hausdorff measure. We prove that for any adjoint shift map there exists a finitely chaotic set with full Hausdorff measure.
Global Seismic Imaging Based on Adjoint Tomography
Bozdag, E.; Lefebvre, M.; Lei, W.; Peter, D. B.; Smith, J. A.; Zhu, H.; Komatitsch, D.; Tromp, J.
2013-12-01
Our aim is to perform adjoint tomography on a global scale to image the entire planet. We have started elastic inversions with a global data set of 253 CMT earthquakes with moment magnitudes in the range 5.8 ≤ Mw ≤ 7 and used GSN stations as well as some local networks such as USArray, European stations, etc. Using an iterative pre-conditioned conjugate gradient scheme, we initially aim to obtain a global crustal and mantle model with transverse isotropy confined to the upper mantle. Global adjoint tomography has so far remained a challenge mainly due to computational limitations. Recent improvements in our 3D solvers (e.g., a GPU version) and access to high-performance computational centers (e.g., ORNL's Cray XK7 "Titan" system) now enable us to perform iterations with higher-resolution (T > 9 s) and longer-duration (200 min) simulations to accommodate high-frequency body waves and major-arc surface waves, respectively, which help improve data coverage. The remaining challenge is the heavy I/O traffic caused by the numerous files generated during the forward/adjoint simulations and the pre- and post-processing stages of our workflow. We improve the global adjoint tomography workflow by adopting the ADIOS file format for our seismic data as well as models, kernels, etc., to improve efficiency on high-performance clusters. Our ultimate aim is to use data from all available networks and earthquakes within the magnitude range of our interest (5.5 ≤ Mw ≤ 7), which requires a solid framework to manage big data in our global adjoint tomography workflow. We discuss the current status and future of global adjoint tomography based on our initial results as well as practical issues such as handling big data in inversions and on high-performance computing systems.
System of adjoint P1 equations for neutron moderation
International Nuclear Information System (INIS)
Martinez, Aquilino Senra; Silva, Fernando Carvalho da; Cardoso, Carlos Eduardo Santos
2000-01-01
In some applications of perturbation theory, it is necessary to know the adjoint neutron flux, which is obtained from the solution of the adjoint neutron diffusion equation. However, the multigroup constants used for this are weighted only by the direct neutron flux, obtained from the solution of the direct P1 equations. In this work, this procedure is questioned and the adjoint P1 equations are derived from the neutron transport equation, the reversion operator rules and analogies between direct and adjoint parameters. (author)
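The relation between direct and adjoint parameters invoked above can be illustrated at the algebraic level: once the multigroup equations are discretized, the mathematical adjoint problem uses the transposed operators and must reproduce the same multiplication factor. The two-group constants below are illustrative assumptions, not values from the paper.

```python
# Two-group infinite-medium k-eigenvalue problem M phi = (1/k) F phi.
M = [[0.030, 0.000],    # removal; group 1 -> 2 downscatter below the diagonal
     [-0.020, 0.080]]
F = [[0.005, 0.140],    # chi_g * (nu Sigma_f)_g' fission production
     [0.000, 0.000]]

def solve2(A, r):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * r[0] - A[0][1] * r[1]) / det,
            (A[0][0] * r[1] - A[1][0] * r[0]) / det]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

def k_eig(Mop, Fop, iters=100):
    """Power iteration on M^-1 F; returns (k, normalized eigenvector)."""
    phi = [1.0, 1.0]
    k = 1.0
    for _ in range(iters):
        psi = solve2(Mop, matvec(Fop, phi))
        k = (psi[0] ** 2 + psi[1] ** 2) ** 0.5
        phi = [psi[0] / k, psi[1] / k]
    return k, phi

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

k_fwd, flux = k_eig(M, F)                              # direct problem
k_adj, importance = k_eig(transpose(M), transpose(F))  # mathematical adjoint
```

The direct and adjoint eigenvectors differ, but the eigenvalues agree, which is the consistency check any adjoint formulation of the multigroup equations must pass.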
Collective variables method in relativistic theory
International Nuclear Information System (INIS)
Shurgaya, A.V.
1983-01-01
The classical theory of an N-component field is considered. The method of collective variables, which accurately accounts for the conservation laws following from invariance under the homogeneous Lorentz group, is developed within the framework of generalized Hamiltonian dynamics. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group contain no interaction-dependent components and their action on the field function is purely geometrical. The interaction is completely contained in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field function and their canonically conjugate momenta. In this way the conservation laws following from the requirement of Lorentz invariance are accounted for exactly.
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)
2000-07-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows for a cutoff of particle histories when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
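The variance-reduction ideas mentioned in the lecture can be sketched on the simplest possible problem: transmission through a purely absorbing slab, for which the answer is exp(-Sigma_t * D). The cross section, thickness, and roulette weight cutoff below are illustrative assumptions. The analog game counts survivors; the non-analog game carries the survival probability as a statistical weight and plays Russian roulette on low-weight histories, which terminates them without biasing the mean.

```python
import math
import random

random.seed(1)

SIG_T, D = 1.0, 2.0      # total cross section and slab thickness (illustrative)
N = 100_000
exact = math.exp(-SIG_T * D)

def analog():
    # Analog game: a particle crosses the slab if its sampled free path exceeds D.
    hits = sum(1 for _ in range(N) if random.expovariate(SIG_T) > D)
    return hits / N

def weighted_with_roulette(w_cut=0.25):
    # Non-analog game: each history carries weight exp(-Sigma_t * D) instead of
    # being killed by absorption; Russian roulette keeps the expectation unbiased:
    # survive with probability w/w_cut at weight w_cut, else terminate.
    total = 0.0
    for _ in range(N):
        w = math.exp(-SIG_T * D)
        if w < w_cut:
            w = w_cut if random.random() < w / w_cut else 0.0
        total += w
    return total / N

est_analog = analog()
est_weighted = weighted_with_roulette()
```

Both estimators are unbiased; the weighted game typically has lower variance per history, which is the rationale for implicit capture in production codes.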
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
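Once the adjoint-based sensitivities are in hand, propagation to a response uncertainty follows the "sandwich rule" var(R) = S^T C S. The sketch below uses made-up relative sensitivities and a made-up relative covariance matrix, not the LIFE blanket data, purely to show the arithmetic.

```python
import math

# Hypothetical relative sensitivities dR/R per dx/x for three nuclear-data
# parameters, and a hypothetical relative covariance matrix for them.
S = [0.9, -0.3, 0.1]
C = [[4.0e-4, 1.0e-4, 0.0],
     [1.0e-4, 9.0e-4, 0.0],
     [0.0,    0.0,    1.0e-4]]

# Sandwich rule: relative variance of the response is S^T C S.
var = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
rel_unc = math.sqrt(var)   # relative standard deviation of the response
```

Note that the off-diagonal covariance terms can either inflate or cancel contributions, which is why full covariance matrices, not just variances, are needed.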
Adjoint Based A Posteriori Analysis of Multiscale Mortar Discretizations with Multinumerics
Tavener, Simon
2013-01-01
In this paper we derive a posteriori error estimates for linear functionals of the solution to an elliptic problem discretized using a multiscale nonoverlapping domain decomposition method. The error estimates are based on the solution of an appropriately defined adjoint problem. We present a general framework that allows us to consider both primal and mixed formulations of the forward and adjoint problems within each subdomain. The primal subdomains are discretized using either an interior penalty discontinuous Galerkin method or a continuous Galerkin method with weakly imposed Dirichlet conditions. The mixed subdomains are discretized using Raviart–Thomas mixed finite elements. The a posteriori error estimate also accounts for the errors due to adjoint-inconsistent subdomain discretizations. The coupling between the subdomain discretizations is achieved via a mortar space. We show that the numerical discretization error can be broken down into subdomain and mortar components which may be used to drive adaptive refinement. Copyright © by SIAM.
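The adjoint-weighted-residual identity at the heart of such estimates can be shown exactly in finite dimensions: for a linear system A u = f and a functional J(u) = g.u, the error of any approximation u_h satisfies J(u) - J(u_h) = lam . r, where r = f - A u_h is the residual and A^T lam = g defines the adjoint solution. The 2x2 system below is a hypothetical stand-in for the mortar discretization.

```python
A = [[2.0, -1.0], [-1.0, 2.0]]
f = [1.0, 0.0]
g = [0.0, 1.0]          # functional J(u) = g . u picks out the second unknown

def solve2(M, r):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

u = solve2(A, f)        # exact solution
u_h = [0.6, 0.3]        # some inexact approximation
r = [f[0] - (A[0][0] * u_h[0] + A[0][1] * u_h[1]),
     f[1] - (A[1][0] * u_h[0] + A[1][1] * u_h[1])]
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(At, g)     # adjoint solve
err_est = lam[0] * r[0] + lam[1] * r[1]
err_true = g[0] * (u[0] - u_h[0]) + g[1] * (u[1] - u_h[1])
```

In the infinite-dimensional setting the identity becomes an estimate whose localized terms drive adaptive refinement, as the abstract describes.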
DATA COLLECTION METHOD FOR PEDESTRIAN MOVEMENT VARIABLES
Directory of Open Access Journals (Sweden)
Hajime Inamura
2000-01-01
The need for tools for the design and evaluation of pedestrian areas, subway stations, entrance halls, shopping malls, escape routes, stadiums, etc. leads to the necessity of a pedestrian model. One such approach is the microscopic pedestrian simulation model. To develop and calibrate a microscopic pedestrian simulation model, a number of variables need to be considered. As the first step of model development, data were collected using video, and the coordinates of head paths were extracted through image processing. A number of variables can be gathered to describe the behavior of pedestrians from different points of view. This paper describes how to obtain, from video footage and simple image processing, variables that can represent the movement of pedestrians.
Dual of QCD with One Adjoint Fermion
DEFF Research Database (Denmark)
Mojaza, Matin; Nardecchia, Marco; Pica, Claudio
2011-01-01
We construct the magnetic dual of QCD with one adjoint Weyl fermion. The dual is a consistent solution of the 't Hooft anomaly matching conditions, allows for flavor decoupling and remarkably constitutes the first nonsupersymmetric dual valid for any number of colors. The dual allows one to bound...
Adjoint P1 equations solution for neutron slowing down
International Nuclear Information System (INIS)
Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
In some applications of perturbation theory, it is necessary to know the adjoint neutron flux, which is obtained from the solution of the adjoint neutron diffusion equation. However, the multigroup constants used for this are weighted only by the direct neutron flux, obtained from the solution of the direct P1 equations. In this work, the adjoint P1 equations are derived from the neutron transport equation, the reversion operator rules and analogies between direct and adjoint parameters. The direct and adjoint neutron fluxes resulting from the solution of the P1 equations were used in three different weighting processes to obtain the macrogroup macroscopic cross sections. Noticeable differences were found among them. (author)
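The effect of different weighting processes can be illustrated with a toy two-group collapse. The cross sections and spectra below are made-up numbers, chosen only to show that direct-flux, adjoint-flux, and bilinear (direct times adjoint) weighting generally give noticeably different one-group constants.

```python
# Hypothetical two-group data: absorption cross sections, a direct flux
# spectrum, and an adjoint flux (importance) spectrum.
sigma = [0.012, 0.100]
phi = [1.0, 0.25]
phi_adj = [1.5, 1.0]

def collapse(weights):
    """One-group constant as a weighted average of the group constants."""
    return sum(s * w for s, w in zip(sigma, weights)) / sum(weights)

sig_direct = collapse(phi)
sig_adjoint = collapse(phi_adj)
sig_bilinear = collapse([p * a for p, a in zip(phi, phi_adj)])
```

The three collapsed values bracket different physics: flux weighting preserves reaction rates, while bilinear weighting is the natural choice for perturbation-theory functionals.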
Self-adjoint extensions and spectral analysis in the generalized Kratzer problem
International Nuclear Information System (INIS)
Baldiotti, M C; Gitman, D M; Tyutin, I V; Voronov, B L
2011-01-01
We present a mathematically rigorous quantum-mechanical treatment of the one-dimensional non-relativistic motion of a particle in the potential field V(x) = g_1 x^(-1) + g_2 x^(-2), x ∈ R^+ = [0, ∞). For g_2 > 0 and g_1 < 0, the potential is known as the Kratzer potential V_K(x) and is usually used to describe molecular energy and structure, interactions between different molecules and interactions between non-bonded atoms. We construct all self-adjoint Schroedinger operators with the potential V(x) and represent rigorous solutions of the corresponding spectral problems. Solving the first part of the problem, we use a method of specifying self-adjoint extensions by (asymptotic) self-adjoint boundary conditions. Solving spectral problems, we follow Krein's method of guiding functionals. This work is a continuation of our previous works devoted to the Coulomb, Calogero and Aharonov-Bohm potentials.
Adjoint sensitivity studies of loop current and eddy shedding in the Gulf of Mexico
Gopalakrishnan, Ganesh; Cornuelle, Bruce D.; Hoteit, Ibrahim
2013-01-01
Adjoint model sensitivity analyses were applied for the loop current (LC) and its eddy shedding in the Gulf of Mexico (GoM) using the MIT general circulation model (MITgcm). The circulation in the GoM is mainly driven by the energetic LC and subsequent LC eddy separation. In order to understand which ocean regions and features control the evolution of the LC, including anticyclonic warm-core eddy shedding in the GoM, forward and adjoint sensitivities with respect to previous model state and atmospheric forcing were computed using the MITgcm and its adjoint. Since the validity of the adjoint model sensitivities depends on the capability of the forward model to simulate the real LC system and the eddy shedding processes, a 5 year (2004–2008) forward model simulation was performed for the GoM using realistic atmospheric forcing, initial, and boundary conditions. This forward model simulation was compared to satellite measurements of sea-surface height (SSH) and sea-surface temperature (SST), and observed transport variability. Despite a realistic mean state, standard deviations, and LC eddy shedding period, the simulated LC extension shows less variability and more regularity than the observations. However, the model is suitable for studying the LC system and can be utilized for examining the ocean influences leading to a simple, and hopefully generic, LC eddy separation in the GoM. The adjoint sensitivities of the LC show influences from the Yucatan Channel (YC) flow and Loop Current Frontal Eddy (LCFE) on both LC extension and eddy separation, as suggested by earlier work. Some of the processes that control LC extension after eddy separation differ from those controlling eddy shedding, but include YC through-flow. The sensitivity remains stable for more than 30 days and moves generally upstream, entering the Caribbean Sea. The sensitivities of the LC to SST generally remain closer to the surface and move at speeds consistent with advection by the high-speed core of
Self-adjoint extensions and spectral analysis in the Calogero problem
Energy Technology Data Exchange (ETDEWEB)
Gitman, D M [Institute of Physics, University of Sao Paulo (Brazil); Tyutin, I V; Voronov, B L [Lebedev Physical Institute, Moscow (Russian Federation)], E-mail: gitman@dfn.if.usp.br, E-mail: tyutin@lpi.ru, E-mail: voronov@lpi.ru
2010-04-09
In this paper, we present a mathematically rigorous quantum-mechanical treatment of the one-dimensional motion of a particle in the Calogero potential αx^(-2). Although the problem is quite old and well studied, we believe that our consideration, based on a uniform approach to constructing a correct quantum-mechanical description for systems with singular potentials and/or boundaries proposed in our previous works, adds some new points to its solution. To demonstrate that a consideration of the Calogero problem requires mathematical accuracy, we discuss some 'paradoxes' inherent in the 'naive' quantum-mechanical treatment. Using a self-adjoint extension method, we construct and study all possible self-adjoint operators (self-adjoint Hamiltonians) associated with a formal differential expression for the Calogero Hamiltonian. In particular, we discuss a spontaneous scale-symmetry breaking associated with self-adjoint extensions. A complete spectral analysis of all self-adjoint Hamiltonians is presented.
Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models
Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.
2012-04-01
The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation
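The "sequence of primitive operations" abstraction that algorithmic differentiation builds on can be sketched in a few lines. The toy tape below is an illustrative assumption, not tied to any of the tools named above: each operation records its inputs and local derivatives, and a reverse sweep over the tape accumulates the adjoints (gradients).

```python
class Var:
    """A value participating in a recorded computation (toy reverse-mode AD)."""
    def __init__(self, value, tape=None):
        self.value, self.adj = value, 0.0
        self.tape = tape if tape is not None else []

    def _new(self, value, parents):
        # parents: list of (input Var, local partial derivative)
        out = Var(value, self.tape)
        self.tape.append((out, parents))
        return out

    def __add__(self, other):
        return self._new(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return self._new(self.value * other.value,
                         [(self, other.value), (other, self.value)])

def gradient(output):
    """Reverse sweep: propagate adjoints from the output back to the inputs."""
    output.adj = 1.0
    for node, parents in reversed(output.tape):
        for parent, d in parents:
            parent.adj += d * node.adj

x = Var(3.0)
y = Var(4.0, x.tape)       # share one tape
z = x * y + x * x          # z = x*y + x^2
gradient(z)                # now x.adj = dz/dx, y.adj = dz/dy
```

Applying this mechanism to an entire ocean model is what makes whole-code AD so invasive, which motivates the higher-level approach the abstract describes.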
A new approach for developing adjoint models
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it adopts the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, doing so requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
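The "sequence of linear solves" abstraction rests on a simple identity that can be sketched in a few lines of NumPy (a minimal illustration with random stand-in data, not code from either library): for a linear solve Au = b with functional J = g·u, the exact gradient dJ/db is obtained from a single solve with the transposed operator.

```python
import numpy as np

# Forward model: a single linear solve A u = b, with functional J(u) = g . u
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned stand-in operator
b = rng.standard_normal(5)
g = rng.standard_normal(5)

u = np.linalg.solve(A, b)
J = g @ u

# Adjoint model: one solve with the transposed operator gives dJ/db exactly
lam = np.linalg.solve(A.T, g)        # A^T lambda = dJ/du = g
grad_adjoint = lam                   # dJ/db = lambda

# Cross-check one component against a finite difference of the forward model
eps = 1e-6
b2 = b.copy(); b2[0] += eps
fd = (g @ np.linalg.solve(A, b2) - J) / eps
assert abs(grad_adjoint[0] - fd) < 1e-5
```

One adjoint solve yields the gradient with respect to every component of b at once, which is why the cost is independent of the number of parameters.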
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
1995-12-31
There are several applications of perturbation theory to specific problems of reactor physics, such as nonuniform fuel burnup, nonuniform poison accumulation and evaluation of Doppler effects on reactivity. The neutron fluxes obtained from the solutions of the direct and adjoint diffusion equations are used in these applications. The adjoint diffusion equation has traditionally used group constants averaged over the energy-dependent direct neutron flux, which is not theoretically consistent. This paper presents a method to calculate the energy-dependent adjoint neutron flux and thereby obtain the averaged group constants to be used in the adjoint diffusion equation. The method is based on the solution of the adjoint neutron balance equations, which were derived for a two-region cell. (author). 5 refs, 2 figs, 1 tab.
Formulation of coarse mesh finite difference to calculate mathematical adjoint flux
International Nuclear Information System (INIS)
Pereira, Valmir; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2002-01-01
The objective of this work is to obtain the mathematical adjoint flux, using the nodal expansion method (NEM) for coarse mesh problems as its basis. Since there are difficulties in evaluating this flux by using NEM directly, a coarse mesh finite difference program was developed to obtain it. The coarse mesh finite difference formulation (DFMG) adopted uses results of the direct calculation (node-averaged fluxes and node-face-averaged currents) obtained by NEM. These quantities (fluxes and currents) are used to obtain the correction factors which modify the classical finite difference formulation. Since the DFMG formulation is also capable of calculating the direct flux, it was tested for this purpose as well, and it was verified that it reproduces with good accuracy both the fluxes and the currents obtained via NEM. In this way, only a matrix transposition is needed to calculate the mathematical adjoint flux. (author)
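The matrix-transposition step can be illustrated with a generic sketch (the matrix below is a random stand-in, not an NEM or DFMG discretization): the mathematical adjoint flux solves the transposed system and satisfies the duality relation with the forward flux.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n)) + n * np.eye(n)    # stand-in for a discretized diffusion operator
s = rng.random(n)                          # forward source
s_adj = rng.random(n)                      # adjoint source (detector response weights)

phi = np.linalg.solve(A, s)                # forward flux
phi_adj = np.linalg.solve(A.T, s_adj)      # mathematical adjoint: transpose the matrix only

# Duality relation <s_adj, phi> = <phi_adj, s> holds to round-off
lhs = s_adj @ phi
rhs = phi_adj @ s
assert abs(lhs - rhs) < 1e-10 * abs(lhs)
```

This is the sense in which "only matrix transposition is needed": no new discretization of the adjoint equation is required once the forward coefficient matrix is available.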
An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion
Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo
2017-11-01
Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space of different wing kinematics and deformations. The adjoint-based approach largely reduces the computational cost to a feasible level by solving an inverse problem. Facing the complication of moving boundaries, non-cylindrical calculus provides an easy extension of the traditional adjoint-based approach to handle optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied to a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.
Advances in Global Adjoint Tomography -- Massive Data Assimilation
Ruan, Y.; Lei, W.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Krischer, L.; Tromp, J.
2015-12-01
Azimuthal anisotropy and anelasticity are key to understanding a myriad of processes in Earth's interior. Resolving these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models and an iterative inversion strategy. In the wake of successes in regional studies (e.g., Chen et al., 2007; Tape et al., 2009, 2010; Fichtner et al., 2009, 2010; Chen et al., 2010; Zhu et al., 2012, 2013; Chen et al., 2015), we are employing adjoint tomography based on a spectral-element method (Komatitsch & Tromp 1999, 2002) on a global scale using the supercomputer "Titan" at Oak Ridge National Laboratory. After 15 iterations, we have obtained a high-resolution transversely isotropic Earth model (M15) using traveltime data from 253 earthquakes. To obtain higher resolution images of the emerging new features and to prepare the inversion for azimuthal anisotropy and anelasticity, we expanded the original dataset with approximately 4,220 additional global earthquakes (Mw 5.5-7.0) occurring between 1995 and 2014, and downloaded 300-minute-long time series for all available data archived at the IRIS Data Management Center, ORFEUS, and F-net. Ocean Bottom Seismograph data from the last decade are also included to maximize data coverage. In order to handle the huge dataset and solve the I/O bottleneck in global adjoint tomography, we implemented a Python-based parallel data processing workflow based on the newly developed Adaptable Seismic Data Format (ASDF). With the help of the data selection tool MUSTANG developed by IRIS, we cleaned our dataset and assembled event-based ASDF files for parallel processing. We have started Centroid Moment Tensor (CMT) inversions for all 4,220 earthquakes with the latest model M15, and selected high-quality data for measurement. We will statistically investigate each channel using synthetic seismograms calculated in M15 for updated CMTs and identify problematic channels. In addition to data screening, we also modified
Haines, P. E.; Esler, J. G.; Carver, G. D.
2014-06-01
A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to treat carefully the "density inconsistency" problem inherent in offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.
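The "adjoint of finite difference" formulation amounts to running the transpose of each discrete advection step backwards in time. A minimal sketch with a one-dimensional periodic upwind step matrix (illustrative only, not Prather's second-order moments scheme):

```python
import numpy as np

# Periodic linear advection step as an explicit matrix M (first-order upwind, CFL = 0.5)
n, c = 32, 0.5
M = (1 - c) * np.eye(n) + c * np.roll(np.eye(n), 1, axis=0)

rng = np.random.default_rng(2)
x0 = rng.random(n)
nsteps = 10

# Forward: x_N = M^N x0 ; functional J = g . x_N
g = rng.random(n)
xN = x0.copy()
for _ in range(nsteps):
    xN = M @ xN

# "Adjoint of finite difference": apply the transposed step backwards in time
lam = g.copy()
for _ in range(nsteps):
    lam = M.T @ lam            # after the loop, lam = (M^T)^N g = dJ/dx0

# The model is linear, so the adjoint sensitivity matches the directional derivative exactly
dx = rng.random(n)
assert abs(lam @ dx - g @ (np.linalg.matrix_power(M, nsteps) @ dx)) < 1e-10
```

Because the adjoint is the exact transpose of the discrete forward step, the backward sensitivities are consistent with the forward discretization to machine precision, which is the property the abstract contrasts with a separately discretized continuous adjoint.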
Adjoint optimization of natural convection problems: differentially heated cavity
Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.
2017-12-01
Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been undertaken. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity at three Prandtl numbers (Pr = 0.15-7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizon and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. According to the authors' knowledge, this behavior is illustrated here for
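The optimal-initial-condition computation described above is, in essence, power iteration with alternating direct and adjoint propagations. A toy sketch with a 2x2 nonnormal operator standing in for the linearized equations (all values hypothetical, not from Nek5000):

```python
import numpy as np
from scipy.linalg import expm

# Toy nonnormal operator (stand-in for a linearized convection problem)
A = np.array([[-0.02, 1.0],
              [ 0.0, -0.05]])
T = 5.0                        # time horizon
P = expm(A * T)                # propagator over [0, T]

# The optimal initial condition maximizes |P q|^2 / |q|^2:
# power iteration on P^T P, i.e. one direct + one adjoint application per sweep
rng = np.random.default_rng(3)
q = rng.standard_normal(2); q /= np.linalg.norm(q)
for _ in range(200):
    q = P.T @ (P @ q)          # adjoint propagation after direct propagation
    q /= np.linalg.norm(q)
growth = np.linalg.norm(P @ q) ** 2

# Cross-check against the leading singular value of the propagator
assert abs(growth - np.linalg.svd(P, compute_uv=False)[0] ** 2) < 1e-10
```

Even though both eigenvalues of A are stable, the nonnormal coupling produces transient growth well above one, which is the mechanism the abstract's "maximum transient growth" refers to; when the leading singular values coincide (the degenerate case mentioned above), this iteration stalls.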
Solar wind reconstruction from magnetosheath data using an adjoint approach
International Nuclear Information System (INIS)
Nabert, C.; Othmer, C.
2015-01-01
We present a new method to reconstruct solar wind conditions from spacecraft data taken during magnetosheath passages, which can be used to support, e.g., magnetospheric models. The unknown parameters of the solar wind are used as boundary conditions of an MHD (magnetohydrodynamics) magnetosheath model. The boundary conditions are varied until the spacecraft data matches the model predictions. The matching process is performed using a gradient-based minimization of the misfit between data and model. To achieve this time-consuming procedure, we introduce the adjoint of the magnetosheath model, which allows efficient calculation of the gradients. An automatic differentiation tool is used to generate the adjoint source code of the model. The reconstruction method is applied to THEMIS (Time History of Events and Macroscale Interactions during Substorms) data to calculate the solar wind conditions during spacecraft magnetosheath transitions. The results are compared to actual solar wind data. This allows validation of our reconstruction method and indicates the limitations of the MHD magnetosheath model used.
Solar wind reconstruction from magnetosheath data using an adjoint approach
Energy Technology Data Exchange (ETDEWEB)
Nabert, C.; Othmer, C. [Technische Univ. Braunschweig (Germany). Inst. fuer Geophysik und extraterrestrische Physik; Glassmeier, K.H. [Technische Univ. Braunschweig (Germany). Inst. fuer Geophysik und extraterrestrische Physik; Max Planck Institute for Solar System Research, Goettingen (Germany)
2015-07-01
We present a new method to reconstruct solar wind conditions from spacecraft data taken during magnetosheath passages, which can be used to support, e.g., magnetospheric models. The unknown parameters of the solar wind are used as boundary conditions of an MHD (magnetohydrodynamics) magnetosheath model. The boundary conditions are varied until the spacecraft data matches the model predictions. The matching process is performed using a gradient-based minimization of the misfit between data and model. To achieve this time-consuming procedure, we introduce the adjoint of the magnetosheath model, which allows efficient calculation of the gradients. An automatic differentiation tool is used to generate the adjoint source code of the model. The reconstruction method is applied to THEMIS (Time History of Events and Macroscale Interactions during Substorms) data to calculate the solar wind conditions during spacecraft magnetosheath transitions. The results are compared to actual solar wind data. This allows validation of our reconstruction method and indicates the limitations of the MHD magnetosheath model used.
Emittance measurements by variable quadrupole method
International Nuclear Information System (INIS)
Toprek, D.
2005-01-01
The beam emittance is a measure of both the beam size and the beam divergence, so its value cannot be measured directly. If the beam size is measured at different locations or under different focusing conditions, such that different parts of the phase-space ellipse are probed by the beam size monitor, the beam emittance can be determined. An emittance measurement can be performed by different methods; here we consider the varying quadrupole setting method.
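The varying-quadrupole method exploits the fact that the measured beam size at a downstream screen is quadratic in the quadrupole strength. A thin-lens sketch (all beam parameters and distances hypothetical) recovering the emittance from a parabola fit:

```python
import numpy as np

# Assumed "true" beam matrix Sigma at the quadrupole (sigma11 = <x^2>, etc.)
eps_true = 2.0e-6                    # geometric emittance [m rad]
beta, alpha = 8.0, -1.5              # Twiss parameters at the quad
Sigma = eps_true * np.array([[beta, -alpha],
                             [-alpha, (1 + alpha**2) / beta]])

L = 2.0                              # drift length quad -> screen [m]
ks = np.linspace(-0.5, 0.5, 11)      # integrated quad strengths [1/m]

# "Measured" beam sizes at the screen for each quad setting (thin-lens model)
sig11 = []
for k in ks:
    M = np.array([[1.0, L], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-k, 1.0]])
    sig11.append((M @ Sigma @ M.T)[0, 0])

# sigma11(k) is quadratic in k: fit a parabola and invert for the beam matrix
A, B, C = np.polyfit(ks, sig11, 2)
s11 = A / L**2
s12 = -(B + 2 * L * s11) / (2 * L**2)
s22 = (C - s11 - 2 * L * s12) / L**2
eps_meas = np.sqrt(s11 * s22 - s12**2)
assert abs(eps_meas - eps_true) < 1e-9
```

With noisy measurements the same parabola fit applies in a least-squares sense; at least three quadrupole settings are needed to determine the three beam-matrix elements.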
Adjoint string breaking in the pseudoparticle approach
International Nuclear Information System (INIS)
Szasz, Christian; Wagner, Marc
2008-01-01
We apply the pseudoparticle approach to SU(2) Yang-Mills theory and perform a detailed study of the potential between two static charges for various representations. Whereas for charges in the fundamental representation we find a linearly rising confining potential, we clearly observe string breaking when considering charges in the adjoint representation. We also demonstrate Casimir scaling and compute gluelump masses for different spin and parity. Numerical results are in qualitative agreement with lattice results.
Ambient noise adjoint tomography for a linear array in North China
Zhang, C.; Yao, H.; Liu, Q.; Yuan, Y. O.; Zhang, P.; Feng, J.; Fang, L.
2017-12-01
Ambient noise tomography based on dispersion data and ray theory has been widely utilized for imaging crustal structures. In order to improve the inversion accuracy, ambient noise tomography based on the 3D adjoint approach or full waveform inversion has been developed recently; however, the computational cost is tremendous. In this study we present 2D ambient noise adjoint tomography for a linear array in north China, with significant computational savings compared to 3D ambient noise adjoint tomography. During preprocessing, we first convert the observed data in 3D media, i.e., surface-wave empirical Green's functions (EGFs) from ambient noise cross-correlation, to reconstructed EGFs in 2D media using a 3D/2D transformation scheme. Different from the conventional steps of measuring phase dispersion, the 2D adjoint tomography refines 2D shear wave speeds along the profile directly from the reconstructed Rayleigh wave EGFs in the period band 6-35 s. With the 2D initial model extracted from the 3D model from traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime misfits between the reconstructed EGFs and synthetic Green's functions (SGFs) in 2D media generated by the spectral-element method (SEM), with a preconditioned conjugate gradient method. The multitaper traveltime difference measurement is applied in four period bands during the inversion: 20-35 s, 15-30 s, 10-20 s and 6-15 s. The recovered model shows more detailed crustal structures, with a pronounced low velocity anomaly in the mid-lower crust beneath the junction of the Taihang Mountains and the Yin-Yan Mountains, compared with the initial model. This low velocity structure may imply intense crust-mantle interactions, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of the region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented in 2D media
International Nuclear Information System (INIS)
Nievaart, V. A.; Legrady, D.; Moss, R. L.; Kloosterman, J. L.; Hagen, T. H. J. J. van der; Dam, H. van
2007-01-01
This paper deals with the application of the adjoint transport theory in order to optimize Monte Carlo-based radiotherapy treatment planning. The technique is applied to Boron Neutron Capture Therapy, where most often mixed beams of neutrons and gammas are involved. In normal forward Monte Carlo simulations the particles start at a source and lose energy as they travel towards the region of interest, i.e., the designated point of detection. Conversely, with adjoint Monte Carlo simulations, the so-called adjoint particles start at the region of interest and gain energy as they travel towards the source where they are detected. In this respect, the particles travel backwards and the real source and real detector become the adjoint detector and adjoint source, respectively. At the adjoint detector, an adjoint function is obtained with which numerically the same result, e.g., dose or flux in the tumor, can be derived as with forward Monte Carlo. In many cases, the adjoint method is more efficient and thereby much quicker when, for example, the response in the tumor or organ at risk for many locations and orientations of the treatment beam around the patient is required. However, a problem occurs when the treatment beam is mono-directional, as the probability of detecting adjoint Monte Carlo particles traversing the beam exit (detector plane in adjoint mode) in the negative direction of the incident beam is zero. This problem is addressed here and solved first with the use of next event estimators and second with the application of a Legendre expansion technique of the angular adjoint function. In the first approach, adjoint particles are tracked deterministically through a tube to an (adjoint) point detector far away from the geometric model. The adjoint particles will traverse the disk-shaped entrance of this tube (the beam exit in the actual geometry) perpendicularly. This method is slow whenever many events are involved that are not contributing to the point
An introduction to the self-adjointness and spectral analysis of Schroedinger operators
International Nuclear Information System (INIS)
Simon, B.
1977-01-01
The author first explains the basic results about self-adjointness, from a point of view which emphasizes the connection with solvability of the Schroedinger equation. He then describes four methods that define self-adjoint Hamiltonians for most Schroedinger operators, discusses types of spectra, and closes by considering the essential spectrum in the two-body case. (P.D.)
Adjoint sensitivity analysis of the thermomechanical behavior of repositories
International Nuclear Information System (INIS)
Wilson, J.L.; Thompson, B.M.
1984-01-01
The adjoint sensitivity method is applied to thermomechanical models for the first time. The method provides an efficient and inexpensive answer to the question: how sensitive are thermomechanical predictions to assumed parameters? The answer is exact, in the sense that it yields exact derivatives of response measures with respect to parameters, and approximate, in the sense that projections of the response to other parameter assumptions are only first-order correct. The method is applied to linear finite element models of thermomechanical behavior. Extensions to more complicated models are straightforward but often laborious. An illustration of the method with a two-dimensional repository corridor model reveals that the chosen stress response measure was most sensitive to Poisson's ratio for the rock matrix.
Estimation of ex-core detector responses by adjoint Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2006-07-01
Ex-core detector responses can be efficiently calculated by combining an adjoint Monte Carlo calculation with the converged source distribution of a forward Monte Carlo calculation. As the fission source distribution from a Monte Carlo calculation is given only as a collection of discrete space positions, the coupling requires a point flux estimator for each collision in the adjoint calculation. To avoid the infinite variance problems of the point flux estimator, a next-event finite-variance point flux estimator has been applied, which is an energy-dependent form for heterogeneous media of a finite-variance estimator known from the literature. To test the effects of this combined adjoint-forward calculation, a simple geometry of a homogeneous core with a reflector was adopted, with a small detector in the reflector. To demonstrate the potential of the method, the continuous-energy adjoint Monte Carlo technique with anisotropic scattering was implemented with energy-dependent absorption and fission cross sections and a constant scattering cross section. A gain in efficiency over a completely forward calculation of the detector response was obtained, which is strongly dependent on the specific system and especially on the size and position of the ex-core detector and the energy range considered. Further improvements are possible. The method works without problems for small detectors, even for a point detector and a small or even zero energy range. (authors)
Probabilistic Power Flow Method Considering Continuous and Discrete Variables
Directory of Open Access Journals (Sweden)
Xuexia Zhang
2017-04-01
This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method, based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations, can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) shows better accuracy compared with the CM alone, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
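The cumulant method rests on the additivity of cumulants of independent variables. A minimal sketch (illustrative numbers, not from the paper) combining a normal load with a binomial fuel-cell term and checking the first two cumulants by Monte Carlo:

```python
import numpy as np

# Cumulants of independent variables add, which is the core of the cumulant method:
# total injection = normal load + binomial fuel-cell generation (illustrative units)
mu, sd = 50.0, 5.0                  # normal load: mean 50 MW, std 5 MW
n_fc, p_fc, unit = 10, 0.7, 2.0     # 10 fuel cells of 2 MW each, on with probability 0.7

# Analytic first two cumulants of the sum (mean and variance)
k1 = mu + unit * n_fc * p_fc
k2 = sd**2 + unit**2 * n_fc * p_fc * (1 - p_fc)

# Monte Carlo check of cumulant additivity
rng = np.random.default_rng(4)
total = rng.normal(mu, sd, 200_000) + unit * rng.binomial(n_fc, p_fc, 200_000)
assert abs(total.mean() - k1) < 0.1
assert abs(total.var() - k2) < 0.5
```

Higher-order cumulants add in the same way, which lets a CM-style method build the distribution of the sum from per-source cumulants without sampling; the paper's MDPF step handles the discrete states that a pure cumulant expansion represents poorly.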
Time reversal imaging, inverse problems and adjoint tomography
Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.
2010-12-01
With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately and new applications are developed, in particular time reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to understand the time-reversal method theoretically on a global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time-reversal techniques make possible an automatic location in time and space as well as the retrieval of the focal mechanism of earthquakes or unknown environmental sources. We present here some applications at the global scale of these techniques on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004), Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.
A Streamlined Artificial Variable Free Version of Simplex Method
Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad
2015-01-01
This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it could start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...
Development of CO2 inversion system based on the adjoint of the global coupled transport model
Belikov, Dmitry; Maksyutov, Shamil; Chevallier, Frederic; Kaminski, Thomas; Ganshin, Alexander; Blessing, Simon
2014-05-01
We present the development of an inverse modeling system employing an adjoint of the global coupled transport model consisting of the National Institute for Environmental Studies (NIES) Eulerian transport model (TM) and the Lagrangian plume diffusion model (LPDM) FLEXPART. NIES TM is a three-dimensional atmospheric transport model, which solves the continuity equation for a number of atmospheric tracers on a grid spanning the entire globe. Spatial discretization is based on a reduced latitude-longitude grid and a hybrid sigma-isentropic coordinate in the vertical. NIES TM uses a horizontal resolution of 2.5°×2.5°. However, to resolve synoptic-scale tracer distributions and to have the ability to optimize fluxes at resolutions of 0.5° and higher, we coupled NIES TM with the Lagrangian model FLEXPART. The Lagrangian component of the forward and adjoint models uses precalculated responses of the observed concentration to the surface fluxes and 3-D concentration fields simulated with the FLEXPART model. NIES TM and FLEXPART are driven by the JRA-25/JCDAS reanalysis dataset. Construction of the adjoint of the Lagrangian part is less complicated, as LPDMs calculate the sensitivity of measurements to the surrounding emissions field by tracking a large number of "particles" backwards in time. Development of the adjoint of the Eulerian part was performed with the automatic differentiation tool Transformation of Algorithms in Fortran (TAF) (http://www.FastOpt.com). This method leads to the discrete adjoint of NIES TM. The main advantage of the discrete adjoint is that the resulting gradients of the numerical cost function are exact, even for nonlinear algorithms. The overall advantages of our method are that: 1. No code modification of the Lagrangian model is required, making it applicable to a combination of global NIES TM and any Lagrangian model; 2. Once run, the Lagrangian output can be applied to any chemically neutral gas; 3. High-resolution results can be obtained over
Elementary operators on self-adjoint operators
Molnar, Lajos; Semrl, Peter
2007-03-01
Let H be a Hilbert space and let 𝒜 and ℬ be standard *-operator algebras on H. Denote by 𝒜ₛ and ℬₛ the sets of all self-adjoint operators in 𝒜 and ℬ, respectively. Assume that M : 𝒜ₛ → ℬₛ and M* : ℬₛ → 𝒜ₛ are surjective maps such that M(AM*(B)A) = M(A)BM(A) and M*(BM(A)B) = M*(B)AM*(B) for every pair A ∈ 𝒜ₛ, B ∈ ℬₛ. Then there exist an invertible bounded linear or conjugate-linear operator T : H → H and a constant c ∈ {-1, 1} such that M(A) = cTAT*, A ∈ 𝒜ₛ, and M*(B) = cT*BT, B ∈ ℬₛ.
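The conclusion of the theorem can be checked numerically in finite dimensions (a sketch with random real symmetric matrices; the choices of T and c below are arbitrary): maps of the stated form satisfy both functional equations identically.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
T = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible operator (real, so T* = T.T)
c = -1.0                                           # constant in {-1, 1}

def sym(X):
    """Random self-adjoint (real symmetric) operator."""
    return (X + X.T) / 2

M  = lambda A: c * T @ A @ T.T      # M(A)  = c T A T*
Ms = lambda B: c * T.T @ B @ T      # M*(B) = c T* B T

A, B = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))

# The two functional equations from the theorem hold identically
assert np.allclose(M(A @ Ms(B) @ A), M(A) @ B @ M(A))
assert np.allclose(Ms(B @ M(A) @ B), Ms(B) @ A @ Ms(B))
```

The c factors cancel in pairs (c² on both sides), which is why both c = 1 and c = -1 work; the hard direction of the theorem is the converse, that every surjective solution pair has this form.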
Fast parallel algorithms for the x-ray transform and its adjoint.
Gao, Hao
2012-11-01
Iterative reconstruction methods often offer better imaging quality and allow for reconstructions with lower imaging dose than classical methods in computed tomography. However, computational speed is a major concern for these iterative methods, for which the x-ray transform and its adjoint are the two most time-consuming components. The speed issue becomes even more notable for 3D imaging such as cone beam scans or helical scans, since the x-ray transform and its adjoint are frequently computed, as there is usually not enough computer memory to save the corresponding system matrix. The purpose of this paper is to optimize the algorithm for computing the x-ray transform and its adjoint, and their parallel computation. Fast and highly parallelizable algorithms for the x-ray transform and its adjoint are proposed for the infinitely narrow beam in both 2D and 3D. The extension of these fast algorithms to the finite-size beam is proposed in 2D and discussed in 3D. The CPU and GPU codes are available at https://sites.google.com/site/fastxraytransform. The proposed algorithm is faster than Siddon's algorithm for computing the x-ray transform. In particular, the improvement for the parallel computation can be an order of magnitude. The authors have proposed fast and highly parallelizable algorithms for the x-ray transform and its adjoint, which are extendable to the finite-size beam. The proposed algorithms are suitable for parallel computing in the sense that the computational cost per parallel thread is O(1).
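Whatever the discretization, an x-ray transform implementation and its adjoint must satisfy the inner-product identity ⟨Ax, y⟩ = ⟨x, Aᵀy⟩, which is what iterative solvers rely on. A deliberately tiny sketch with axis-aligned ray sums (not the paper's algorithm) makes the pairing explicit:

```python
import numpy as np

# A tiny discrete x-ray transform: parallel rays along columns and rows only
def xray(img):
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def xray_adjoint(sino, shape):
    ny, nx = shape
    col, row = sino[:nx], sino[nx:]
    # back-projection: smear each ray value back along its path
    return np.tile(col, (ny, 1)) + np.tile(row[:, None], (1, nx))

rng = np.random.default_rng(6)
img = rng.random((8, 8))
sino = rng.random(16)

# Adjoint identity <A x, y> = <x, A^T y>
lhs = xray(img) @ sino
rhs = (img * xray_adjoint(sino, img.shape)).sum()
assert abs(lhs - rhs) < 1e-10
```

For oblique rays the forward operator weights pixels by intersection lengths (as in Siddon's algorithm) and the adjoint distributes the ray value with the same weights; the identity above is the standard unit test for such a pair.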
International Nuclear Information System (INIS)
Royston, K.; Haghighat, A.; Yi, C.
2010-01-01
The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single voxel detector placed in front of the heart, with and without collimation. For the case without collimation, the TITAN response for the single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. The number of adjoint source directions within an acceptance angle was increased and showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)
Self-adjointness of the Gaffney Laplacian on Vector Bundles
Energy Technology Data Exchange (ETDEWEB)
Bandara, Lashi, E-mail: lashi.bandara@chalmers.se [Chalmers University of Technology and University of Gothenburg, Mathematical Sciences (Sweden); Milatovic, Ognjen, E-mail: omilatov@unf.edu [University of North Florida, Department of Mathematics and Statistics (United States)
2015-12-15
We study the Gaffney Laplacian on a vector bundle equipped with a compatible metric and connection over a Riemannian manifold that is possibly geodesically incomplete. Under the hypothesis that the Cauchy boundary is polar, we demonstrate the self-adjointness of this Laplacian. Furthermore, we show that negligible boundary is a necessary and sufficient condition for the self-adjointness of this operator.
The dynamic adjoint as a Green’s function
International Nuclear Information System (INIS)
Pázsit, I.; Dykin, V.
2015-01-01
Highlights: • The relationship between the direct Green's function and the dynamic adjoint function is discussed in two-group theory. • It is shown that the elements of the direct Green's function matrix are identical to those of the transpose of the adjoint Green's function matrix, with an interchange of arguments. • It is also remarked how the dynamic adjoint function of van Dam can be given in terms of the direct Green's function matrix. - Abstract: The concept of the dynamic adjoint was introduced by Hugo van Dam for calculating the in-core neutron noise in boiling water reactors in the mid-1970s. This successful approach has since found numerous applications in calculating the neutron noise in both PWRs and BWRs. Although the advantages and disadvantages of using the direct (forward) or the adjoint (backward) approach for the calculation of the neutron noise were analysed in a number of publications, the direct relationship between the forward Green's function and the dynamic adjoint has not been discussed. In particle transport theory, on the other hand, the relationship between the direct and adjoint Green's functions has been discussed in detail, a subject to which Mike Williams made many seminal contributions. In this note we analyse the relationship between the direct Green's function and the dynamic adjoint in the spirit of Mike's work in neutron transport and radiation damage theory. The paper closes with some personal remarks and reminiscences.
Two methods for studying the X-ray variability
Yan, Shu-Ping; Ji, Li; Méndez, Mariano; Wang, Na; Liu, Siming; Li, Xiang-Dong
2016-01-01
X-ray aperiodic variability and quasi-periodic oscillations (QPOs) are important tools for studying the structure of the accretion flow in X-ray binaries. However, the origin of the complex X-ray variability of X-ray binaries remains unsolved. We propose two methods for studying the X-ray
The functional variable method for solving the fractional Korteweg ...
Indian Academy of Sciences (India)
The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, notably control the- .... In §3, the functional variable method is applied for finding exact.
Extensions of von Neumann's method for generating random variables
International Nuclear Information System (INIS)
Monahan, J.F.
1979-01-01
Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions.
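Von Neumann's original comparison method for the exponential distribution, which this entry generalizes, can be sketched as follows. This is a hedged illustration of the classic algorithm only, not the power-series extension the paper develops:

```python
import random

def vn_exponential(rng=random.random):
    """Sample from Exp(1) by von Neumann's comparison method: draw a
    decreasing run of uniforms; the parity of the run length decides
    acceptance of the first uniform as the fractional part."""
    failures = 0
    while True:
        u1 = rng()
        last, k = u1, 1
        while True:
            u = rng()
            if u < last:        # the run keeps decreasing
                last, k = u, k + 1
            else:
                break
        if k % 2 == 1:          # odd run length: accept the first uniform
            return failures + u1
        failures += 1           # even run length: reject; integer part grows
```

Each rejected round contributes 1 to the integer part, and the accepted first uniform supplies the fractional part, giving an Exp(1) variate overall.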
An Adjoint-Based Adaptive Ensemble Kalman Filter
Song, Hajoon; Hoteit, Ibrahim; Cornuelle, Bruce D.; Luo, Xiaodong; Subramanian, Aneesh C.
2013-10-01
A new hybrid ensemble Kalman filter/four-dimensional variational data assimilation (EnKF/4D-VAR) approach is introduced to mitigate background covariance limitations in the EnKF. The work is based on the adaptive EnKF (AEnKF) method, which bears a strong resemblance to the hybrid EnKF/three-dimensional variational data assimilation (3D-VAR) method. In the AEnKF, the representativeness of the EnKF ensemble is regularly enhanced with new members generated after back projection of the EnKF analysis residuals to state space using a 3D-VAR [or optimal interpolation (OI)] scheme with a preselected background covariance matrix. The idea here is to reformulate the transformation of the residuals as a 4D-VAR problem, constraining the new member with model dynamics and the previous observations. This should provide more information for the estimation of the new member and reduce dependence of the AEnKF on the assumed stationary background covariance matrix. This is done by integrating the analysis residuals backward in time with the adjoint model. Numerical experiments are performed with the Lorenz-96 model under different scenarios to test the new approach and to evaluate its performance with respect to the EnKF and the hybrid EnKF/3D-VAR. The new method leads to the least root-mean-square estimation errors as long as the linear assumption guaranteeing the stability of the adjoint model holds. It is also found to be less sensitive to choices of the assimilation system inputs and parameters.
Variable identification in group method of data handling methodology
Energy Technology Data Exchange (ETDEWEB)
Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)
2011-07-01
The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series, or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies and applied to the IPEN research reactor IEA-R1. The GMDH was used to find the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variables. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system comprises 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the input variables are chosen automatically, and the actual input variables used in the Monitoring and Diagnosis System are not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)
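A minimal single-layer sketch of the pairwise polynomial fitting and pruning that GMDH performs may clarify the idea. The data, train/validation split, and function names are hypothetical; a full GMDH stacks such layers and feeds the surviving outputs into the next one:

```python
import itertools
import numpy as np

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=2):
    """One GMDH selection layer: fit a Kolmogorov-Gabor quadratic in every
    pair of input variables, rank candidates by validation error, and keep
    the best `keep` of them (with the variable pair each one used)."""
    def design(a, b):
        return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        err = np.mean((design(X_va[:, i], X_va[:, j]) @ coef - y_va) ** 2)
        candidates.append((err, (i, j), coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]  # each entry: (validation MSE, pair, coefficients)
```

On synthetic data where the target depends on only two of the inputs, the layer correctly identifies that pair, which is the variable-identification idea the abstract pursues.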
International Nuclear Information System (INIS)
Gilli, L.; Lathouwers, D.; Kloosterman, J.L.; Van der Hagen, T.H.J.J.
2011-01-01
In this paper a method to perform sensitivity analysis for a simplified multi-physics problem is presented. The method is based on the Adjoint Sensitivity Analysis Procedure, which is used to apply first-order perturbation theory to linear and nonlinear problems using adjoint techniques. The multi-physics problem considered includes a neutronic, a thermo-kinetics, and a thermal-hydraulics part, and is used to model the time-dependent behavior of a sodium-cooled fast reactor. The adjoint procedure is applied to calculate the sensitivity coefficients with respect to the kinetic parameters of the problem for two reference transients using two different model responses; the results obtained are then compared with the values given by direct sampling of the forward nonlinear problem. Our first results show that, thanks to modern numerical techniques, the procedure is relatively easy to implement and provides good estimates for most perturbations, making the method appealing for more detailed problems. (author)
Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.
Pizer, Steven D
2016-04-01
To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
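The falsification idea can be illustrated with simulated data: a valid instrument should predict treatment (and, through treatment, the outcome), but should show no association with outcomes in a cohort where it has no causal pathway. A toy sketch with entirely hypothetical data and effect sizes (the paper itself provides STATA code):

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(42)
n = 20000
# Hypothetical data: Z = local prescribing pattern (instrument),
# T = drug choice (treatment), Y = outcome.
Z = [random.random() for _ in range(n)]
T = [1 if random.random() < 0.2 + 0.6 * z else 0 for z in Z]  # Z shifts treatment
Y_main = [2.0 * t + random.gauss(0, 1) for t in T]            # Y depends on T only
# Falsification cohort: patients who cannot receive either drug, so the
# instrument has no causal pathway to their outcome.
Y_fals = [random.gauss(0, 1) for _ in range(n)]

print(corr(Z, Y_main))  # clearly nonzero: instrument is relevant
print(corr(Z, Y_fals))  # near zero: instrument passes the falsification test
```

A nonzero instrument-outcome association in the falsification cohort would cast doubt on the exclusion restriction, which is the test's point.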
The functional variable method for finding exact solutions of some ...
Indian Academy of Sciences (India)
Abstract. In this paper, we implemented the functional variable method and the modified. Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation, and the time-fractional Hirota–Satsuma coupled. KdV system. This method is extremely simple ...
The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning
Energy Technology Data Exchange (ETDEWEB)
Goldstein, M [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev
1996-12-01
In cancer therapy using collimated beams of photons, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor while minimizing the risk of damage to healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, performing a sequence of dose calculations with approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented. (authors)
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2003-01-01
The Variational Variance Reduction (VVR) method is an effective technique for increasing the efficiency of Monte Carlo simulations [Ann. Nucl. Energy 28 (2001) 457; Nucl. Sci. Eng., in press]. This method uses a variational functional, which employs first-order estimates of the forward and adjoint fluxes, to yield a second-order estimate of a desired system characteristic - which, in this paper, is the criticality eigenvalue k. If Monte Carlo estimates of the forward and adjoint fluxes are used, each having global 'first-order' errors of O(1/√N), where N is the number of histories used in the Monte Carlo simulation, then the statistical error in the VVR estimate of k will in principle be O(1/N). In this paper, we develop this theoretical possibility and demonstrate with numerical examples that implementations of the VVR method for criticality problems can approach O(1/N) convergence for sufficiently large values of N.
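The O(1/N) behaviour rests on the stationarity of variational functionals: a first-order error of size ε in the trial function perturbs the functional only at O(ε²). A small finite-dimensional analogue, using a Rayleigh quotient in place of the transport functional (the matrix and noise direction are chosen purely for illustration):

```python
import numpy as np

def rayleigh(A, v):
    """Variational (Rayleigh-quotient) estimate of an eigenvalue of symmetric A.
    Its gradient vanishes at an exact eigenvector, so trial-vector errors
    of size eps give eigenvalue errors of size eps**2."""
    return (v @ A @ v) / (v @ v)

A = np.diag([3.0, 1.0, 0.5])           # exact dominant eigenvalue: 3.0
exact_vec = np.array([1.0, 0.0, 0.0])  # its eigenvector
noise = np.array([0.0, 1.0, 1.0])      # fixed first-order error direction

errs = []
for eps in (1e-2, 1e-3):
    errs.append(abs(rayleigh(A, exact_vec + eps * noise) - 3.0))
# A 10x smaller trial-vector error yields a ~100x smaller eigenvalue error.
print(errs[0] / errs[1])
```

This quadratic error suppression is exactly what converts the O(1/√N) flux errors of the Monte Carlo estimates into an O(1/N) error in the functional.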
Energy Technology Data Exchange (ETDEWEB)
Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)
2015-09-08
Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, making it not only competitive with popular DVR methods, such as the Sinc-DVR, but also able to keep its advantages in treating the Coulomb singularity. The method is illustrated by calculations for the one-dimensional Coulomb potential and the vibrational states of the one-dimensional Morse potential, the two-dimensional Morse potential, and the two-dimensional Henon-Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.
Phenomenology of spinless adjoints in two universal extra dimensions
International Nuclear Information System (INIS)
Ghosh, Kirtiman; Datta, Anindya
2008-01-01
We discuss the phenomenology of (1,1)-mode adjoint scalars in the framework of two Universal Extra Dimensions. The Kaluza-Klein (KK) towers of these adjoint scalars arise in the 4-dimensional effective theory from the sixth component of the gauge fields after compactification. Adjoint scalars can have KK-number-conserving as well as KK-number-violating interactions. We calculate the KK-number-violating operators involving these scalars and two Standard Model fields. Decay widths of these scalars into different channels have been estimated. We also briefly discuss pair production and single production of such scalars at the Large Hadron Collider.
Measuring the surgical 'learning curve': methods, variables and competency.
Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran
2014-03-01
To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.
New complex variable meshless method for advection-diffusion problems
International Nuclear Information System (INIS)
Wang Jian-Fei; Cheng Yu-Min
2013-01-01
In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. The corresponding formulas of the ICVMM for advection-diffusion problems are then presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, with good convergence, accuracy, and computational efficiency.
Error response test system and method using test mask variable
Gender, Thomas K. (Inventor)
2006-01-01
An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
Marcotte, Christopher D; Grigoriev, Roman O
2016-09-01
This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.
Improvement of the variable storage coefficient method with water surface gradient as a variable
The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...
Recursive form of general limited memory variable metric methods
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Vlček, Jan
2013-01-01
Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365
The variability of piezoelectric measurements. Material and measurement method contributions
International Nuclear Information System (INIS)
Stewart, M.; Cain, M.
2002-01-01
The variability of piezoelectric materials measurements has been investigated in order to separate the contribution of intrinsic instrumental variability from that of the variability in materials. The work has pinpointed several areas where weaknesses in the measurement methods result in high variability, and also shows that good correlation between piezoelectric parameters allows simpler measurement methods to be used. The Berlincourt method has been shown to be unreliable when testing thin discs; however, when testing thicker samples there is good correlation between this and other methods. The high-field and low-field permittivities correlate well, so tolerances on low-field measurements would predict high-field performance. In trying to identify microstructural origins of samples that behave differently from others within a batch, no direct evidence was found to suggest that outliers originate from differences in either microstructure or crystallography. Some of the samples chosen as extreme outliers showed pin-holes, probably from electrical breakdown during poling, even though such defects would ordinarily be detrimental to piezoelectric output. (author)
Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.
Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert
2016-08-01
We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
Self-Adjointness Criterion for Operators in Fock Spaces
International Nuclear Information System (INIS)
Falconi, Marco
2015-01-01
In this paper we provide a criterion of essential self-adjointness for operators in the tensor product of a separable Hilbert space and a Fock space. The class of operators we consider may contain a self-adjoint part, a part that preserves the number of Fock space particles and a non-diagonal part that is at most quadratic with respect to the creation and annihilation operators. The hypotheses of the criterion are satisfied in several interesting applications
International Nuclear Information System (INIS)
Yoo, Sua; Kowalok, Michael E; Thomadsen, Bruce R; Henderson, Douglass L
2003-01-01
We have developed an efficient treatment-planning algorithm for prostate implants that is based on region of interest (ROI) adjoint functions and a greedy heuristic. For this work, we define the adjoint function for an ROI as the sensitivity of the average dose in the ROI to a unit-strength brachytherapy source at any seed position. The greedy heuristic uses a ratio of target and critical structure adjoint functions to rank seed positions according to their ability to irradiate the target ROI while sparing critical structure ROIs. This ratio is computed once for each seed position prior to the optimization process. Optimization is performed by a greedy heuristic that selects seed positions according to their ratio values. With this method, clinically acceptable treatment plans are obtained in less than 2 s. For comparison, a branch-and-bound method to solve a mixed integer-programming model took more than 50 min to arrive at a feasible solution. Both methods achieved good treatment plans, but the speedup provided by the greedy heuristic was a factor of approximately 1500. This attribute makes this algorithm suitable for intra-operative real-time treatment planning
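The rank-then-select loop described above might be sketched as follows. The adjoint values, per-seed dose contributions, and the simple stopping rule are hypothetical simplifications of the authors' method:

```python
def greedy_seed_selection(target_adj, critical_adj, dose_per_seed, required_dose):
    """Rank candidate seed positions by the target/critical adjoint ratio
    (computed once, before optimization), then greedily add seeds until
    the target dose requirement is met."""
    ratios = sorted(
        range(len(target_adj)),
        key=lambda p: target_adj[p] / critical_adj[p],
        reverse=True,
    )
    chosen, dose = [], 0.0
    for p in ratios:
        if dose >= required_dose:
            break
        chosen.append(p)
        dose += dose_per_seed[p]  # contribution of a unit-strength seed at p
    return chosen, dose
```

Because the ratio is computed once per candidate position and the selection is a single sorted pass, the run time stays far below that of an integer-programming search, which is the speedup the abstract reports.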
Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1999-01-01
A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concept of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve the two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure-eight trajectory was achieved in under 500 iterations compared to the 20000 previously required. The trajectories computed using the new method are much closer to the target trajectories than was reported in previous studies.
Assessment of hip dysplasia and osteoarthritis: Variability of different methods
International Nuclear Information System (INIS)
Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren
2010-01-01
Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: By each of four observers, two assessments were done by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment were analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis were assessed. All measures were compared to those made on computed tomography scan. Results: Intra- and interobserver variability of angle assessment was less when angles were drawn compared with assessment by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scan. Conclusion: We recommend that angles always should be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3
Energy Technology Data Exchange (ETDEWEB)
Cessenat, M.; Genta, P.
1996-12-31
We use a method based on a separation of variables for solving a system of first-order partial differential equations, in a very simple model of MHD. The method consists of introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution that is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions obtained for the system of equations correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. We thus generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)
Chaos synchronization using single variable feedback based on backstepping method
International Nuclear Information System (INIS)
Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang
2004-01-01
In recent years, the backstepping method has been developed in the field of nonlinear control, for tasks such as controller design, observer design, and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. The method has several advantages for synchronizing chaotic systems: (a) the synchronization error is exponentially convergent; (b) only one variable of the master system is needed; and (c) it provides a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Roessler system demonstrate that the method is very effective
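Single-variable synchronization can be demonstrated with a short numerical sketch. This is not the paper's backstepping controller; it is the classic Pecora-Carroll drive-response scheme for the Lorenz system, which likewise transmits only one master variable (x) to the slave.

```python
import numpy as np

# Pecora-Carroll drive-response sketch (illustrative, not backstepping):
# the slave (y, z) subsystem is driven by the master's x signal alone.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 1e-3, 60000

xm, ym, zm = 1.0, 1.0, 1.0      # master state
ys, zs = -5.0, 12.0             # slave state, deliberately mismatched

for _ in range(steps):
    # master Lorenz system (explicit Euler)
    dx = sigma * (ym - xm)
    dy = xm * (rho - zm) - ym
    dz = xm * ym - beta * zm
    # slave subsystem driven only by the master's x
    dys = xm * (rho - zs) - ys
    dzs = xm * ys - beta * zs
    xm += dt * dx; ym += dt * dy; zm += dt * dz
    ys += dt * dys; zs += dt * dzs

err = abs(ym - ys) + abs(zm - zs)
print(err)  # the synchronization error decays toward zero
```

For this drive, the error dynamics admit the Lyapunov function e_y^2 + e_z^2, so convergence is exponential, mirroring advantage (a) in the abstract.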
Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua
2018-01-01
We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Unlike the conventional steps of measuring phase dispersion, this technique refines 2-D shear wave speeds along the profile directly from the REGFs. Starting from an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated with the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures, including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggest possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented for a 2-D medium. Compared with the intensive computational cost and storage requirements of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.
Self-adjoint extensions and spectral analysis in the generalized Kratzer problem
Energy Technology Data Exchange (ETDEWEB)
Baldiotti, M C; Gitman, D M [Institute of Physics, University of Sao Paulo (Brazil); Tyutin, I V; Voronov, B L, E-mail: baldiott@fma.if.usp.br, E-mail: gitman@dfn.if.usp.br, E-mail: tyutin@lpi.ru, E-mail: voronov@lpi.ru [Lebedev Physical Institute, Moscow (Russian Federation)
2011-06-01
We present a mathematically rigorous quantum-mechanical treatment of the one-dimensional non-relativistic motion of a particle in the potential field V(x) = g_1 x^{-1} + g_2 x^{-2}, x ∈ R_+ = [0, ∞). For g_2 > 0 and g_1 < 0, the potential is known as the Kratzer potential V_K(x) and is usually used to describe molecular energy and structure, interactions between different molecules, and interactions between non-bonded atoms. We construct all self-adjoint Schroedinger operators with the potential V(x) and present rigorous solutions of the corresponding spectral problems. For the first part of the problem, we use a method of specifying self-adjoint extensions by (asymptotic) self-adjoint boundary conditions. For the spectral problems, we follow Krein's method of guiding functionals. This work is a continuation of our previous works devoted to the Coulomb, Calogero and Aharonov-Bohm potentials.
Directory of Open Access Journals (Sweden)
C. Nabert
2017-05-01
The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using magnetohydrodynamic (MHD) simulations. The method is developed with a view to the upcoming BepiColombo mission to planet Mercury, which aims to determine the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit between spacecraft data and simulation results with a gradient-based optimization. Because the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model, generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost of determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms) magnetosheath data to estimate Earth's dipole moment.
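The cost argument above can be made concrete with a toy stand-in: for a least-squares misfit with a linear forward model (a hypothetical matrix G, not the actual MHD operator), the adjoint gradient costs one forward plus one transpose application, independent of the parameter count, whereas finite differences need extra forward runs per parameter.

```python
import numpy as np

# Toy misfit J(p) = 0.5*||G p - d||^2 with a hypothetical linear forward model.
# Adjoint gradient: G^T (G p - d), i.e. 1 forward + 1 adjoint application.
rng = np.random.default_rng(1)
n_params, n_data = 20, 50
G = rng.standard_normal((n_data, n_params))
d = G @ rng.standard_normal(n_params)       # synthetic "spacecraft data"
p = np.zeros(n_params)                      # current parameter guess

def misfit(q):
    r = G @ q - d
    return 0.5 * r @ r

grad_adjoint = G.T @ (G @ p - d)            # cost independent of n_params

eps = 1e-6
I = np.eye(n_params)
grad_fd = np.array([(misfit(p + eps * I[i]) - misfit(p - eps * I[i])) / (2 * eps)
                    for i in range(n_params)])  # 2*n_params extra forward runs

print(np.max(np.abs(grad_adjoint - grad_fd)))   # agreement to roundoff
```

Automatic differentiation of a nonlinear MHD code plays the role of the transpose here; the scaling conclusion is the same.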
Manipulating Rayleigh-Taylor Growth Using Adjoints
Kord, Ali; Capecelatro, Jesse
2017-11-01
It has been observed that initial interfacial perturbations affect the growth of Rayleigh-Taylor (RT) instabilities, but it remains to be seen to what extent such perturbations alter the RT growth rate. Direct numerical simulations (DNS) provide a powerful means for studying the effects of initial conditions (ICs) on the growth rate. However, a brute-force search for optimal initial perturbations is not practical via DNS, and identifying the sensitivity of RT growth to the large number of parameters defining the ICs is computationally expensive. A discrete adjoint is formulated to measure sensitivities of multi-mode RT growth to the ICs in a high-order finite difference framework. The sensitivity is used as a search direction for adjusting the initial perturbations to both maximize and suppress the RT growth rate during its nonlinear regime. The modes that contribute the greatest sensitivity are identified, and optimized perturbation energy spectra are reported. PhD Student, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI.
A streamlined artificial variable free version of simplex method.
Directory of Open Access Journals (Sweden)
Syed Inayatullah
This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any artificial variables or artificial constraints, and it can start from any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.
Variable scaling method and Stark effect in hydrogen atom
International Nuclear Information System (INIS)
Choudhury, R.K.R.; Ghosh, B.
1983-09-01
By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and the variable scaling method has then been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)
Variable importance and prediction methods for longitudinal problems with missing variables.
Directory of Open Access Journals (Sweden)
Iván Díaz
We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that use only a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool for making care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but they do not depend on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, under causal and statistical assumptions they can be causally interpreted as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or the LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance in the area under the curve (AUC) for a receiver-operator characteristic (ROC) curve. Thus, given that 1 our VIM
Instrumental variable methods in comparative safety and effectiveness research.
Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian
2010-06-01
Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be useful in studies of intended effects where uncontrolled confounding may be substantial.
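The core IV idea can be shown in a few lines with simulated (entirely hypothetical) data: an unmeasured confounder u biases the naive regression estimate of a treatment effect, while a valid instrument z recovers it via the Wald/2SLS ratio cov(z, y) / cov(z, x).

```python
import numpy as np

# Hypothetical simulation of confounding and instrumental-variable estimation.
rng = np.random.default_rng(42)
n = 200_000
beta = 0.5                                  # true causal effect of x on y
u = rng.standard_normal(n)                  # unmeasured confounder
z = rng.standard_normal(n)                  # instrument: affects x, not y directly
x = z + u + 0.5 * rng.standard_normal(n)    # exposure
y = beta * x + u + 0.5 * rng.standard_normal(n)

beta_ols = np.cov(x, y)[0, 1] / np.var(x)           # confounded estimate
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # Wald / 2SLS estimate

print(beta_ols, beta_iv)  # OLS biased upward; IV close to the true 0.5
```

The IV estimate is valid only under the assumptions the abstract lists (relevance, exclusion, no instrument-outcome confounding); with a weak instrument its variance grows, which is the underpowering concern raised for rare outcomes.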
Data and Workflow Management Challenges in Global Adjoint Tomography
Lei, W.; Ruan, Y.; Smith, J. A.; Modrak, R. T.; Orsvuran, R.; Krischer, L.; Chen, Y.; Balasubramanian, V.; Hill, J.; Turilli, M.; Bozdag, E.; Lefebvre, M. P.; Jha, S.; Tromp, J.
2017-12-01
It is crucial to take the complete physics of wave propagation into account in seismic tomography to further improve the resolution of tomographic images. The adjoint method is an efficient way of incorporating 3D wave simulations in seismic tomography. However, global adjoint tomography is computationally expensive, requiring thousands of wavefield simulations and massive data processing. Through our collaboration with the Oak Ridge National Laboratory (ORNL) computing group and an allocation on Titan, ORNL's GPU-accelerated supercomputer, we are now performing our global inversions by assimilating waveform data from over 1,000 earthquakes. The first challenge we encountered is dealing with the sheer amount of seismic data. Data processing based on conventional data formats and processing tools (such as SAC), which are not designed for parallel systems, became our major bottleneck. To facilitate the data processing procedures, we designed the Adaptive Seismic Data Format (ASDF) and developed a set of Python-based processing tools to replace legacy FORTRAN-based software. These tools greatly enhance reproducibility and accountability while taking full advantage of highly parallel systems, and they show superior scaling on modern computational platforms. The second challenge is that the data processing workflow contains more than 10 sub-procedures, making it difficult to manage and prone to human error. To reduce human intervention as much as possible, we are developing a framework specifically designed for seismic inversion based on state-of-the-art workflow management research, specifically the Ensemble Toolkit (EnTK), in collaboration with the RADICAL team from Rutgers University. Using the initial developments of the EnTK, we are able to utilize the full computing power of the data processing cluster RHEA at ORNL while keeping human interaction to a minimum and greatly reducing the data processing time. Thanks to all the improvements, we are now able to
Wind resource in metropolitan France: assessment methods, variability and trends
International Nuclear Information System (INIS)
Jourdier, Benedicte
2015-01-01
France has one of the largest wind potentials in Europe, yet it is far from fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming at predicting the future electricity production. Any over-estimation in the assessment process jeopardizes the project's profitability. This has been the case in recent years, when wind farm managers have noticed that they produced less than expected. The under-production problem calls into question both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. The first part investigates the errors linked to the assessment methods, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and to ensure it is better taken into account in the future. (author) [fr]
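The Weibull-modelling step mentioned above can be sketched briefly. This is a generic illustration on synthetic data, not the thesis's procedure: fit the shape k and scale c with the common moment-based approximation k ≈ (σ/μ)^(-1.086) (the Justus formula), then c = μ / Γ(1 + 1/k); the values used are made up but typical.

```python
import numpy as np
from math import gamma

# Fit a Weibull distribution to a synthetic wind-speed record by moments.
rng = np.random.default_rng(7)
k_true, c_true = 2.0, 8.0                    # hypothetical shape, scale (m/s)
v = c_true * rng.weibull(k_true, 100_000)    # synthetic wind-speed sample

mu, sigma = v.mean(), v.std()
k = (sigma / mu) ** -1.086                   # Justus moment approximation
c = mu / gamma(1 + 1 / k)                    # scale from the mean

# mean of v^3 (proportional to wind power density) implied by the fit
e_v3_fit = c**3 * gamma(1 + 3 / k)
print(k, c, e_v3_fit, np.mean(v**3))
```

Because the wind power density scales with E[v^3], a small bias in k or c is amplified in the energy-yield estimate, which is exactly why the fitting step matters for the over-estimation problem the thesis studies.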
International Nuclear Information System (INIS)
Kelsey IV, Charles T.; Prinja, Anil K.
2011-01-01
We evaluate the Monte Carlo calculation efficiency for multigroup transport relative to continuous energy transport using the MCNPX code system to evaluate secondary neutron doses from a proton beam. We consider both fully forward simulation and application of a midway forward adjoint coupling method to the problem. Previously we developed tools for building coupled multigroup proton/neutron cross section libraries and showed consistent results for continuous energy and multigroup proton/neutron transport calculations. We observed that forward multigroup transport could be more efficient than continuous energy. Here we quantify solution efficiency differences for a secondary radiation dose problem characteristic of proton beam therapy problems. We begin by comparing figures of merit for forward multigroup and continuous energy MCNPX transport and find that multigroup is 30 times more efficient. Next we evaluate efficiency gains for coupling out-of-beam adjoint solutions with forward in-beam solutions. We use a variation of a midway forward-adjoint coupling method developed by others for neutral particle transport. Our implementation makes use of the surface source feature in MCNPX and we use spherical harmonic expansions for coupling in angle rather than solid angle binning. The adjoint out-of-beam transport for organs of concern in a phantom or patient can be coupled with numerous forward, continuous energy or multigroup, in-beam perturbations of a therapy beam line configuration. Out-of-beam dose solutions are provided without repeating out-of-beam transport. (author)
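The figure-of-merit comparison used above can be made concrete with a toy Monte Carlo example: FOM = 1 / (R^2 T), where R is the relative standard error and T the cost. Here the "methods" are crude versus antithetic sampling of a simple integral (a stand-in, not a transport calculation), with cost counted as integrand evaluations so the comparison is deterministic.

```python
import numpy as np

# Figure of merit FOM = 1/(R^2 * cost): a more efficient estimator has a
# larger FOM. Toy comparison on the integral of e^x over [0, 1].
rng = np.random.default_rng(0)
n = 200_000                                    # total integrand evaluations

def fom(samples, cost):
    rel_err = samples.std(ddof=1) / np.sqrt(len(samples)) / samples.mean()
    return 1.0 / (rel_err ** 2 * cost)

u = rng.random(n)
crude = np.exp(u)                              # n evaluations

u = rng.random(n // 2)
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))     # also n evaluations, in n/2 pairs

ratio = fom(anti, n) / fom(crude, n)
print(ratio)  # variance reduction raises the FOM at equal cost
```

The abstract's factor-of-30 advantage for multigroup transport is a statement of exactly this kind: same response, lower variance per unit runtime.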
International Nuclear Information System (INIS)
Finn, John M.
2015-01-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)].
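The statement that composing a scheme with its adjoint yields a second-order method can be checked numerically in a few lines. The adjoint of explicit Euler is implicit Euler; their half-step composition is second order. The test problem x' = -x is linear, so the implicit half-step has a closed form (this is a generic sketch, not the paper's field-line integrator).

```python
import numpy as np

# Order check: explicit Euler vs its composition with the adjoint scheme.
def euler(h, n):
    x = 1.0
    for _ in range(n):
        x *= (1 - h)                 # explicit Euler step for x' = -x
    return x

def composed(h, n):
    x = 1.0
    for _ in range(n):
        x *= (1 - h / 2)             # explicit Euler half-step
        x /= (1 + h / 2)             # adjoint (implicit Euler) half-step
    return x

exact = np.exp(-1.0)                 # solution at t = 1
ratios = {}
for scheme in (euler, composed):
    e1 = abs(scheme(0.1, 10) - exact)
    e2 = abs(scheme(0.05, 20) - exact)
    ratios[scheme.__name__] = e1 / e2   # ~2 for first order, ~4 for second
print(ratios)
```

Halving the step halves the error of Euler but quarters the error of the composed scheme, confirming the order claim; for the linear test the composition coincides with implicit midpoint.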
Implementation of Generalized Adjoint Equation Solver for DeCART
International Nuclear Information System (INIS)
Han, Tae Young; Cho, Jin Young; Lee, Hyun Chul; Noh, Jae Man
2013-01-01
In this paper, a generalized adjoint solver based on generalized perturbation theory is implemented in DeCART, and verification calculations were carried out. The adjoint flux for a general response coincides with the reference solution, and it is expected that the solver can produce the parameters needed for sensitivity and uncertainty analysis. Recently, MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) was developed for the uncertainty analysis of the PMR200 core, and a fundamental adjoint solver was implemented in DeCART. However, the application of the code was limited to the uncertainty of the multiplication factor, k_eff, because it was based on classical perturbation theory. For uncertainty analysis of a general response such as the power density, it is necessary to develop an analysis module based on generalized perturbation theory, which requires generalized adjoint solutions from DeCART. In this paper, the generalized adjoint solver is implemented in DeCART and the calculation results are compared with results from TSUNAMI of SCALE 6.1
An adjoint-based scheme for eigenvalue error improvement
International Nuclear Information System (INIS)
Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.
2011-01-01
A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and the adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed, in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalizations are applied to the eigenvector. Two such normalizations are considered: the first is a fission-source type of normalization and the second is an eigenvector normalization. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretizations. It is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves the fidelity of the calculation, it allows the reliability of numerical schemes to be assessed and could be used to guide mesh adaptation algorithms or to automate mesh generation schemes. (author)
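A toy matrix version of this adjoint-weighted defect correction (not the paper's transport discretization) reads as follows: for an approximate eigenpair (lam, x) of a matrix A with residual r = A x - lam x, the adjoint (left) eigenvector y supplies the correction lam + (y @ r) / (y @ x). The matrix below is built with a known eigenstructure so the improvement can be measured.

```python
import numpy as np

# Adjoint-based eigenvalue defect correction on a small matrix example.
rng = np.random.default_rng(3)
D = np.diag([3.0, 1.0, 0.5, -0.2, -1.0])     # prescribed eigenvalues
Q = rng.standard_normal((5, 5))
A = Q @ D @ np.linalg.inv(Q)

lam_true = 3.0
x = Q[:, 0] + 1e-3 * rng.standard_normal(5)  # perturbed right eigenvector
lam = lam_true + 1e-3                        # perturbed eigenvalue
y = np.linalg.inv(Q)[0]                      # adjoint (left) eigenvector

r = A @ x - lam * x                          # equation residual
lam_corr = lam + (y @ r) / (y @ x)           # defect-corrected eigenvalue

print(abs(lam - lam_true), abs(lam_corr - lam_true))
```

With an exact adjoint vector the eigenvalue correction here is exact to roundoff; in a real calculation the adjoint is itself approximate, so the correction removes the leading-order error, which is the behavior the abstract reports.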
Surface spectra of Weyl semimetals through self-adjoint extensions
Seradjeh, Babak; Vennettilli, Michael
2018-02-01
We apply the method of self-adjoint extensions of Hermitian operators to the low-energy, continuum Hamiltonians of Weyl semimetals in bounded geometries and derive the spectrum of the surface states on the boundary. This allows for the full characterization of boundary conditions and the surface spectra on surfaces both normal and parallel to the Weyl node separation. We show that the boundary conditions for quadratic bulk dispersions are, in general, specified by a U(2) matrix relating the wave function and its derivatives normal to the surface. We give a general procedure to obtain the surface spectra from these boundary conditions and derive them in specific cases of bulk dispersion. We consider the role of global symmetries in the boundary conditions and their effect on the surface spectrum. We point out several interesting features of the surface spectra for different choices of boundary conditions, such as a Mexican-hat shaped dispersion on the surface normal to the Weyl node separation. We find that the existence of bound states, Fermi arcs, and the shape of their dispersion depend on the choice of boundary conditions. This illustrates the importance of the physics at and near the boundaries in the general statement of bulk-boundary correspondence.
Modeling intraindividual variability with repeated measures data methods and applications
Hershberger, Scott L
2013-01-01
This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods for modeling repeated measures data, as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no earthquake (EQ) cycle simulations based on rate- and state-dependent friction (RSF) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and found nearly the same cycles as in elastic cases. Viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and hence incurs huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method used in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables that satisfy first-order differential equations, no hereditary integrals are needed in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed an iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys an RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. Using a smaller viscosity reduces the recurrence time to a minimum value: a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a shorter recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half
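The replacement of the hereditary integral by a memory variable can be checked with a minimal sketch (made-up material constants, not the presentation's fault model). For a standard linear solid with relaxation function G(t) = Ginf + (G0 - Ginf) exp(-t/tau), the stress from the full convolution over the strain-rate history equals Ginf*eps + q, where the single memory variable q obeys dq/dt = -q/tau + (G0 - Ginf) deps/dt.

```python
import numpy as np

# Hereditary integral vs memory-variable stepping for an SLS material.
G0, Ginf, tau = 2.0, 1.0, 0.5      # hypothetical moduli and relaxation time
dt, n = 1e-3, 5000
t = dt * np.arange(n + 1)
eps = np.sin(t)                    # imposed strain history, eps(0) = 0
deps = np.cos(t)                   # its rate (known analytically here)

# hereditary integral at the final time: O(n) history per evaluation
s_hist = np.sum((Ginf + (G0 - Ginf) * np.exp(-(t[-1] - t) / tau)) * deps) * dt

# memory-variable time stepping: O(1) storage, no history needed
q = 0.0
for k in range(n):
    q += dt * (-q / tau + (G0 - Ginf) * deps[k])
s_mem = Ginf * eps[-1] + q

print(s_hist, s_mem)               # agree to discretization error
```

The memory-variable path never touches the past rates, which is precisely what makes viscoelastic cycle simulations affordable in this approach.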
Directory of Open Access Journals (Sweden)
Corey Sparks
2009-07-01
This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries, using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in its reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.
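The basic bookkeeping behind a growth-rate decomposition of this kind can be sketched with entirely hypothetical numbers (not Orkney data): the average growth rate of a subpopulation between two censuses is r = ln(N2/N1)/T, and subtracting the net migration rate isolates the natural-increase component.

```python
import numpy as np

# Hypothetical subpopulation counts at two censuses, with net out-migration.
T = 10.0                        # years between censuses
N1, N2 = 5000.0, 4600.0         # subpopulation counts (made up)
net_migrants_per_year = -30.0   # net out-migration (made up)

r_total = np.log(N2 / N1) / T                 # observed growth rate
mean_pop = (N2 - N1) / np.log(N2 / N1)        # person-years approximation
r_mig = net_migrants_per_year / mean_pop      # migration component
r_natural = r_total - r_mig                   # natural-increase component

print(r_total, r_natural)  # decline is partly migration, partly natural
```

Separating the two components in this way is what lets the paper compare the reproductive behavior of farming and non-farming segments despite differing migration.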
Application of adjoint sensitivity analysis to nuclear reactor fuel rod performance
International Nuclear Information System (INIS)
Wilderman, S.J.; Was, G.S.
1984-01-01
Adjoint sensitivity analysis in nuclear fuel behavior modeling is extended to operate on the entire power history for both Zircaloy and stainless steel cladding via the computer codes FCODE-ALPHA/SS and SCODE/SS. The sensitivities of key variables to input parameters are found to be highly non-intuitive and strongly dependent on the fuel-clad gap status and the history of the fuel during the cycle. The sensitivities of five key variables (clad circumferential stress and strain, fission gas release, fuel centerline temperature, and fuel-clad gap) to eleven input parameters are studied. The most important input parameters (yielding significances between 1 and 100) are the fabricated clad inner and outer radii and the fuel radius. The least important significances (less than 0.01) are the time since reactor start-up and the fuel-burnup densification rate. Intermediate to these are fabricated fuel porosity, linear heat generation rate, the power history scale factor, clad outer temperature, fill gas pressure, and coolant pressure. Stainless steel and Zircaloy have similar sensitivities at start-up, but these diverge as burnup proceeds due to the higher creep rate of Zircaloy, which causes the system to be more responsive to changes in input parameters. The value of adjoint sensitivity analysis lies in its capability of uncovering dependencies of fuel variables on input parameters that cannot be determined by a sequential thought process. (orig.)
Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems
Garg, Vikram V; Prudhomme, Serge; van der Zee, Kris G; Carey, Graham F
2014-01-01
Models based on the Helmholtz `slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint
Normal and adjoint integral and integrodifferential neutron transport equations. Pt. 2
International Nuclear Information System (INIS)
Velarde, G.
1976-01-01
Using the simplifying hypotheses of the integrodifferential Boltzmann equations of neutron transport, given in the JEN 334 report, several integral equations and their adjoints are obtained. Relations between the different normal and adjoint eigenfunctions are established; in particular, starting from the integrodifferential Boltzmann equation, the relation is found between the solutions of the adjoint of its integral equation and the solutions of the integral equation of its adjoint. (author)
Directory of Open Access Journals (Sweden)
Sandvik Leiv
2011-04-01
Full Text Available Abstract Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well in almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
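The recommended analysis can be reproduced with standard tools; a sketch using SciPy on simulated data, with the Welch confidence interval assembled by hand from the Welch-Satterthwaite degrees of freedom (group sizes and outcome probabilities are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two groups of a discrete numerical variable (e.g. number of events per
# individual, outcomes restricted to {0, 1, 2, 3}); data are simulated.
a = rng.choice([0, 1, 2, 3], size=120, p=[0.4, 0.3, 0.2, 0.1])
b = rng.choice([0, 1, 2, 3], size=100, p=[0.2, 0.3, 0.3, 0.2])

# Welch U test: the T test without the equal-variance assumption
t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)

# Welch confidence interval for the difference between the means,
# using the Welch-Satterthwaite degrees of freedom
va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
diff = a.mean() - b.mean()
se = np.sqrt(va + vb)
df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
half = stats.t.ppf(0.975, df) * se
ci = (diff - half, diff + half)
```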
Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
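The modified PIE reconstruction referred to here keeps the usual PIE structure, with the aperture size rather than the probe position varying between exposures. A minimal single-update sketch; the function name, the far-field FFT model, and the alpha weighting are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def pie_object_update(obj, probe, measured_amp, alpha=1.0):
    """One PIE-style update of the sample estimate for one illumination.

    obj, probe: complex 2-D arrays (sample transmission and illumination);
    measured_amp: recorded diffraction amplitude (sqrt of the intensity).
    In vaPIE the successive 'probes' come from varying the aperture size
    rather than translating it. Far field is modeled by an FFT.
    """
    psi = probe * obj                                   # exit wave
    Psi = np.fft.fft2(psi)
    Psi = measured_amp * np.exp(1j * np.angle(Psi))     # replace modulus only
    psi_new = np.fft.ifft2(Psi)
    # standard PIE weighting of the correction by the probe
    upd = alpha * np.conj(probe) / (np.abs(probe) ** 2).max() * (psi_new - psi)
    return obj + upd
```

Looping this update over the recorded aperture sizes, in sequence, plays the role of the positional scan in common PIE.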
Method for curing polymers using variable-frequency microwave heating
Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.
1998-01-01
A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first iterations of conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some chosen iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
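The iteration structure described above can be sketched abstractly: an interpolation between the current iterate and the decoding operator's output, with an iteration-dependent weight. Here a toy contractive map stands in for the fractal transform; the schedule and test operator are illustrative assumptions:

```python
import numpy as np

def interpolated_decode(T, x0, lambdas):
    """Interpolation decoding: x_{n+1} = (1 - l_n) x_n + l_n T(x_n).

    With a contractive decoding operator T, any l_n in (0, 1] preserves the
    fixed point; small l_n early slows the addition of detail, larger l_n
    later accelerates convergence (the variable-parameter schedule).
    """
    x = x0
    for lam in lambdas:
        x = (1.0 - lam) * x + lam * T(x)
    return x

# toy contractive 'decoder' standing in for the fractal transform
c = np.array([1.0, -2.0])
T = lambda x: 0.5 * x + c            # fixed point x* = 2c
lams = np.concatenate([np.full(10, 0.1), np.full(30, 1.0)])  # slow, then fast
x = interpolated_decode(T, np.zeros(2), lams)
```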
Non-self-adjoint hamiltonians defined by Riesz bases
Energy Technology Data Exchange (ETDEWEB)
Bagarello, F., E-mail: fabio.bagarello@unipa.it [Dipartimento di Energia, Ingegneria dell' Informazione e Modelli Matematici, Facoltà di Ingegneria, Università di Palermo, I-90128 Palermo, Italy and INFN, Università di Torino, Torino (Italy); Inoue, A., E-mail: a-inoue@fukuoka-u.ac.jp [Department of Applied Mathematics, Fukuoka University, Fukuoka 814-0180 (Japan); Trapani, C., E-mail: camillo.trapani@unipa.it [Dipartimento di Matematica e Informatica, Università di Palermo, I-90123 Palermo (Italy)
2014-03-15
We discuss some features of non-self-adjoint Hamiltonians with real discrete simple spectrum under the assumption that the eigenvectors form a Riesz basis of Hilbert space. Among other things, we give conditions under which these Hamiltonians can be factorized in terms of generalized lowering and raising operators.
Nefness of adjoint bundles for ample vector bundles
Directory of Open Access Journals (Sweden)
Hidetoshi Maeda
1995-11-01
Full Text Available Let E be an ample vector bundle of rank r > 1 on a smooth complex projective variety X of dimension n. This paper gives a classification of pairs (X,E) whose adjoint bundles K_X + det E are not nef in the case r = n-2.
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
Continuous energy adjoint Monte Carlo for coupled neutron-photon transport
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.
2001-07-01
Although the theory for adjoint Monte Carlo calculations with continuous energy treatment for neutrons as well as for photons is known, coupled neutron-photon transport problems present fundamental difficulties because of the discrete energies of the photons produced by neutron reactions. This problem was solved by forcing the energy of the adjoint photon to the required discrete value by an adjoint Compton scattering reaction or an adjoint pair production reaction. A mathematical derivation shows the exact procedures to follow for the generation of an adjoint neutron and its statistical weight. A numerical example demonstrates that correct detector responses are obtained compared to a standard forward Monte Carlo calculation. (orig.)
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the real-time ECG, is produced in order to minimize the inconvenience of wearing. The ECG signal is detected using a potential measurement system and transmitted to a personal computer via an ultra-low-power wireless data communication unit with a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and the R-peaks are especially important. R-peak detection generally uses a fixed threshold value, which produces detection errors when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage comprising differentiation and the Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than the fixed-threshold method. R-peak detection on the MIT-BIH databases and on long-term real-time ECG recordings is performed in this research in order to evaluate the performance.
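The detection pipeline described (differentiation, Hilbert envelope, then a threshold that adapts to the recent signal) can be sketched as follows. The window length, threshold fraction, floor, and refractory period are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, win_s=2.0, frac=0.5):
    """Variable-threshold R-peak detection (sketch of the pipeline).

    Preprocessing: differentiation followed by the Hilbert-transform
    envelope. The threshold tracks a moving maximum of the envelope (with a
    small global floor), so baseline drift and amplitude changes do not
    break detection, unlike a fixed threshold.
    """
    d = np.diff(ecg, prepend=ecg[0])             # differentiation
    env = np.abs(hilbert(d))                     # Hilbert envelope
    win, refr = int(win_s * fs), int(0.25 * fs)  # window, refractory period
    floor = 0.2 * env.max()
    peaks, i = [], 1
    while i < len(env) - 1:
        thr = frac * max(env[max(0, i - win):i + 1].max(), floor)
        if env[i] > thr and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            peaks.append(i)                      # accept local max above thr
            i += refr                            # skip the refractory period
        else:
            i += 1
    return np.array(peaks)
```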
Feasibility of wavelet expansion methods to treat the energy variable
International Nuclear Information System (INIS)
Van Rooijen, W. F. G.
2012-01-01
This paper discusses the use of the Discrete Wavelet Transform (DWT) to implement a functional expansion of the energy variable in neutron transport. The motivation of the work is to investigate the possibility of adapting the expansion level of the neutron flux in a material region to the complexity of the cross section in that region. If such an adaptive treatment is possible, 'simple' material regions (e.g., moderator regions) require little effort, while a detailed treatment is used for 'complex' regions (e.g., fuel regions). Our investigations show that in fact adaptivity cannot be achieved. The most fundamental reason is that in a multi-region system, the energy dependence of the cross section in a material region does not imply that the neutron flux in that region has a similar energy dependence. If it is chosen to sacrifice adaptivity, then the DWT method can be very accurate, but the complexity of such a method is higher than that of an equivalent hyper-fine group calculation. The conclusion is thus that, unfortunately, the DWT approach is not very practical. (authors)
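A one-level Haar DWT makes the premise concrete: detail coefficients are large only where the cross section varies rapidly, which is the property an adaptive energy expansion would try to exploit. Here a toy Lorentzian resonance sits on a smooth background; all values are illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; large detail coefficients
    flag regions where the function varies rapidly (e.g. resonances).
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# smooth 'moderator-like' background plus a narrow 'resonance'
E = np.linspace(0.0, 1.0, 256)
xs = 1.0 / (1.0 + E)                          # smooth background
xs += 5.0 / (1.0 + ((E - 0.5) / 0.01) ** 2)   # narrow Lorentzian resonance
a, d = haar_dwt(xs)
# detail coefficients are negligible except near the resonance at E = 0.5
```

As the abstract notes, however, the flux in a region need not share the cross section's energy structure, so sparsity of these coefficients for the cross section does not by itself justify truncating the flux expansion.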
Radiation source reconstruction with known geometry and materials using the adjoint
International Nuclear Information System (INIS)
Hykes, Joshua M.; Azmy, Yousry Y.
2011-01-01
We present a method to estimate an unknown isotropic source distribution, in space and energy, using detector measurements when the geometry and material composition are known. The estimated source distribution minimizes the difference between the measured and computed responses of detectors located at a selected number of points within the domain. In typical methods, a forward flux calculation is performed for each source guess in an iterative process. In contrast, we use the adjoint flux to compute the responses. Potential applications of the proposed method include determining the distribution of radio-contaminants following a nuclear event, monitoring the flow of radioactive fluids in pipes to determine hold-up locations, and retroactive reconstruction of radiation fields using workers' detectors' readings. After presenting the method, we describe a numerical test problem to demonstrate the preliminary viability of the method. As expected, using the adjoint flux reduces the number of transport solves to be proportional to the number of detector measurements, in contrast to methods using the forward flux that require a typically larger number proportional to the number of spatial mesh cells. (author)
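With one adjoint solve per detector, each measurement becomes an inner product of that detector's adjoint flux with the unknown source, and estimation reduces to a linear fit. A sketch with smooth stand-in kernels in place of real adjoint transport solutions (all shapes and positions are illustrative):

```python
import numpy as np

# Detector responses via the adjoint: m_d = <phi_dagger_d, q>, one adjoint
# flux per detector. Source estimation then reduces to fitting the linear
# system A q = m, where row d of A is detector d's adjoint flux.
x = np.linspace(0.0, 1.0, 50)
det_pos = np.linspace(0.05, 0.95, 12)
A = np.exp(-np.abs(x[None, :] - det_pos[:, None]) / 0.15)   # (12, 50)

q_true = np.zeros(50)
q_true[10], q_true[35] = 2.0, 1.0          # two unknown sources
m = A @ q_true                             # simulated detector readings

# 12 measurements vs 50 cells: underdetermined, so take the minimum-norm
# least-squares solution (regularization would be used in practice)
q_est, *_ = np.linalg.lstsq(A, m, rcond=None)
```

The number of rows of A, i.e. of transport solves, scales with the number of detectors, not with the number of mesh cells, which is the efficiency argument made above.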
Discrete adjoint of fractional step Navier-Stokes solver in generalized coordinates
Wang, Mengze; Mons, Vincent; Zaki, Tamer
2017-11-01
Optimization and control in transitional and turbulent flows require evaluation of gradients of the flow state with respect to the problem parameters. Using adjoint approaches, these high-dimensional gradients can be evaluated with a similar computational cost as the forward Navier-Stokes simulations. The adjoint algorithm can be obtained by discretizing the continuous adjoint Navier-Stokes equations or by deriving the adjoint to the discretized Navier-Stokes equations directly. The latter algorithm is necessary when the forward-adjoint relations must be satisfied to machine precision. In this work, our forward model is the fractional step solution to the Navier-Stokes equations in generalized coordinates, proposed by Rosenfeld, Kwak & Vinokur. We derive the corresponding discrete adjoint equations. We also demonstrate the accuracy of the combined forward-adjoint model, and its application to unsteady wall-bounded flows. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542).
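The distinction drawn above (adjoint of the discretized equations vs. discretization of the continuous adjoint) matters when forward-adjoint duality must hold to machine precision. A toy check with a generic linear step standing in for the fractional-step operator:

```python
import numpy as np

# Discrete-adjoint consistency: if the forward step is u_{n+1} = A u_n, the
# discrete adjoint steps v_n = A^T v_{n+1}, and the duality relation
# <A^k u0, vT> = <u0, (A^T)^k vT> holds to machine precision; a discretized
# continuous adjoint would satisfy it only to truncation-error accuracy.
rng = np.random.default_rng(2)
n, steps = 40, 5
A = rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in for one solver step

u0 = rng.standard_normal(n)
vT = rng.standard_normal(n)                   # adjoint 'terminal' condition

u = u0.copy()
for _ in range(steps):                        # forward sweep
    u = A @ u
v = vT.copy()
for _ in range(steps):                        # adjoint (backward) sweep
    v = A.T @ v

lhs = u @ vT                                  # <A^steps u0, vT>
rhs = u0 @ v                                  # <u0, (A^T)^steps vT>
```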
International Nuclear Information System (INIS)
Nazareth, J. L.
1979-01-01
1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: R^n -> R with n-dimensional domains. OCOPTR requires the user to provide only function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted, subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n^2/2) + O(n) storage locations, where n is the problem dimension.
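Davidon's variable metric method stores a factored inverse-Hessian approximation and uses an optimally conditioned update; the sketch below substitutes the more familiar BFGS update to show the overall structure of such a quasi-Newton iteration (names and the test problem are illustrative):

```python
import numpy as np

def quasi_newton_minimize(f, grad, x0, iters=100):
    """Minimal variable metric (quasi-Newton) iteration with a BFGS update
    of the inverse Hessian approximation H; a sketch, not Davidon's exact
    optimally conditioned factored-form algorithm.
    """
    x = x0.astype(float)
    n = len(x)
    H = np.eye(n)                                # initial metric
    g = grad(x)
    for _ in range(iters):
        p = -H @ g                               # quasi-Newton direction
        t = 1.0                                  # Armijo backtracking
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
            if t < 1e-12:
                break
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                           # curvature condition
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# quadratic test problem with known minimizer Q x = b
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star = quasi_newton_minimize(f, grad, np.zeros(2))
```

Replacing `grad` by a finite-difference approximation of `f` would mirror the derivative-free OCOPTR mode; supplying it exactly mirrors DRVOCR.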
International Nuclear Information System (INIS)
Hoogenboom, J.E.
1981-01-01
An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived. The optimum treatment of an analog of a non-velocity thermal group has also been derived. The method is extended to multiplying systems, especially for eigenfunction problems to enable the estimate of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written, based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs
Four-loop vacuum energy density of the SU(Nc) + adjoint Higgs theory
International Nuclear Information System (INIS)
Kajantie, K.; Rummukainen, K.; Schroder, Y.; Laine, M.
2003-01-01
We compute the dimensionally regularised four-loop vacuum energy density of the SU(N_c) gauge + adjoint Higgs theory, in the disordered phase. 'Scalarisation', or reduction to a small set of master integrals of the type appearing in scalar field theories, is carried out in d dimensions, employing general partial integration identities through an algorithm developed by Laporta, while the remaining scalar integrals are evaluated in d = 3 - 2ε dimensions, by expanding in ε. The results yield contributions of orders O(g^6 ln(1/g)) and O(g^6) to the pressure, while the general methods are applicable also to studies of critical phenomena in QED-like statistical physics systems. (author)
Gonzales, Matthew Alejandro
The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging, as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process and require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh, in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run-time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research
Electromagnetic variable degrees of freedom actuator systems and methods
Montesanti, Richard C [Pleasanton, CA; Trumper, David L [Plaistow, NH; Kirtley, Jr., James L.
2009-02-17
The present invention provides a variable reluctance actuator system and method that can be adapted for simultaneous rotation and translation of a moving element by applying a normal-direction magnetic flux on the moving element. In a beneficial example arrangement, the moving element includes a swing arm that carries a cutting tool at a set radius from an axis of rotation so as to produce a rotary fast tool servo that provides a tool motion in a direction substantially parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. An actuator rotates the swing arm such that the cutting tool moves toward and away from a mounted rotating workpiece in a controlled manner in order to machine the workpiece. Position sensors provide rotation and displacement information for the swing arm to a control system. The control system commands and coordinates motion of the fast tool servo with the motion of the spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
Energy Technology Data Exchange (ETDEWEB)
Pereira, Valmir; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2002-07-01
The objective of this work is to obtain the mathematical adjoint flux, having as its support the nodal expansion method (NEM) for coarse mesh problems. Since there are difficulties in evaluating this flux by using NEM directly, a coarse mesh finite difference program was developed to obtain it. The coarse mesh finite difference formulation (DFMG) adopted uses results of the direct calculation (node-averaged flux and node-face-averaged currents) obtained by NEM. These quantities (flux and currents) are used to obtain the correction factors which modify the classical finite difference formulation. Since the DFMG formulation is also capable of calculating the direct flux, it was also tested to obtain this flux, and it was verified that it reproduces with good accuracy both the flux and the currents obtained via NEM. In this way, only matrix transposition is needed to calculate the mathematical adjoint flux. (author)
Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach
Aguilar, José G.; Magri, Luca; Juniper, Matthew P.
2017-07-01
Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
Adjoint-Based Climate Model Tuning: Application to the Planet Simulator
Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef
2018-01-01
The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H
2017-01-01
Context: Substantial variability between different antibody titration methods prompted development and introduction of uniform methods in 2008. Objective: To determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing. Design: Proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by its standard method: gel or tube, by uniform or other methods, at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. Results: A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during the 5-year period. The 3 most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. Conclusions: Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.
International Nuclear Information System (INIS)
Goncalves, Glenio A.; Orengo, Gilberto; Vilhena, Marco Tullio M.B. de; Graca, Claudio O.
2002-01-01
In this work we present the LTS_N solution of the adjoint transport equation for an arbitrary source, testing the aptness of this analytical solution for high orders of quadrature in transport problems and comparing some preliminary results with ANISN computations in a homogeneous slab geometry. To that end we apply the new formulation of the LTS_N method based on the invariance projection property, which makes it possible to handle problems with arbitrary sources that demand high orders of quadrature or deep penetration. This new approach to the LTS_N method is important for both direct and adjoint transport calculations, and its development was inspired by the need to use generalized adjoint sources for importance calculations. Although mathematical convergence has been proved for an arbitrary source, when high quadrature order or deep penetration is required the LTS_N method presents computational overflow even for simple sources (sin, cos, exp, polynomial). With the new formulation we eliminate this drawback, and in this work we report numerical simulations testing the new approach.
Perturbation of self-adjoint operators by Dirac distributions
International Nuclear Information System (INIS)
Zorbas, J.
1980-01-01
The existence of a family of self-adjoint Hamiltonians H_θ, θ ∈ [0, 2π), corresponding to the formal expression H_0 + νδ(x) is shown for a general class of self-adjoint operators H_0. Expressions for the Green's function and wavefunction corresponding to H_θ are obtained in terms of the Green's function and wavefunction corresponding to H_0. Similar results are shown for the perturbation of H_0 by a finite sum of Dirac distributions. A prescription is given for obtaining H_θ as the strong resolvent limit of a family of momentum-cutoff Hamiltonians H^N. The relationship between the scattering theories corresponding to H^N and H_θ is examined.
Field calculations. Part I: Choice of variables and methods
International Nuclear Information System (INIS)
Turner, L.R.
1981-01-01
Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case: calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel, and the numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable.
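The conductor-driven contribution mentioned above is a direct Biot-Savart evaluation. A sketch for a discretized circular loop, checked against the analytic field at the loop center, B_z = mu0*I/(2R); discretization and parameter values are illustrative:

```python
import numpy as np

def biot_savart(seg_pos, seg_dl, I, point):
    """Field of a discretized current path via the Biot-Savart law:
    B = (mu0*I/4pi) * sum_i dl_i x r_i / |r_i|^3, the direct evaluation
    usable for the conductor-driven part of a two-potential calculation.
    """
    mu0 = 4e-7 * np.pi
    r = point - seg_pos                       # vectors segment -> field point
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = np.cross(seg_dl, r) / rnorm ** 3
    return mu0 * I / (4.0 * np.pi) * dB.sum(axis=0)

# circular loop of radius R in the xy-plane, carrying current I
R, I, n = 0.1, 100.0, 2000
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)], axis=1)
dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros(n)], axis=1) \
    * (2.0 * np.pi / n)                       # tangent vectors times arc step
B = biot_savart(pts, dl, I, np.array([0.0, 0.0, 0.0]))
```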
Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.
2018-05-01
Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
Forward and adjoint sensitivity computation of chaotic dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Wang, Qiqi, E-mail: qiqi@mit.edu [Department of Aeronautics and Astronautics, MIT, 77 Mass Ave., Cambridge, MA 02139 (United States)
2013-02-15
This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivatives of long-time-averaged “statistical” quantities with respect to infinitesimal perturbations of the system parameters. The algorithms are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor.
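The paper's point of departure is that conventional tangent/adjoint sensitivities diverge for chaotic systems. As an orientation to the quantity being differentiated, the sketch below estimates the sensitivity d⟨z⟩/dρ of the long-time-averaged z coordinate of the Lorenz attractor by brute-force finite differences of two long trajectories. This is an illustrative assumption-laden stand-in, not the paper's algorithm; all parameter values are the standard Lorenz choices.

```python
import numpy as np

def lorenz_rhs(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def mean_z(rho, T=300.0, dt=0.01, transient=20.0):
    """Long-time average of z on the Lorenz attractor, via classical RK4."""
    u = np.array([1.0, 1.0, 20.0])
    n_trans, n_avg = int(transient / dt), int(T / dt)
    acc = 0.0
    for i in range(n_trans + n_avg):
        k1 = lorenz_rhs(u, rho=rho)
        k2 = lorenz_rhs(u + 0.5 * dt * k1, rho=rho)
        k3 = lorenz_rhs(u + 0.5 * dt * k2, rho=rho)
        k4 = lorenz_rhs(u + dt * k3, rho=rho)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= n_trans:  # discard the transient before averaging
            acc += u[2]
    return acc / n_avg

# Naive central finite difference of the statistic; the literature places
# d<z>/d(rho) near 1 in this regime.
sens = (mean_z(30.0) - mean_z(26.0)) / 4.0
```

Because the averaging window is finite, this estimate is noisy and its cost grows with the averaging time; the forward and adjoint algorithms of the paper are designed to obtain such derivatives from a single short trajectory instead.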
Numerical study of dense adjoint 2-color matter
International Nuclear Information System (INIS)
Hands, S.; Scorzato, L.; Oevers, M.
2000-11-01
We study the global symmetries of SU(2) gauge theory with N flavors of staggered fermions in the presence of a chemical potential. We motivate the special interest of the case N=1 (staggered) with fermions in the adjoint representation of the gauge group. We present results from numerical simulations with both hybrid Monte Carlo and the two-step multi-bosonic algorithm. (orig.)
Four-fermi anomalous dimension with adjoint fermions
Del Debbio, Luigi; Ruano, Carlos Pena
2014-01-01
The four-fermi interaction can play an important role in models of strong dynamical EW symmetry breaking if the anomalous dimensions of the four-fermi operators become large in the IR. We discuss a number of issues that are relevant for the nonperturbative computation of the four-fermi anomalous dimensions for the SU(2) gauge theory with two flavors of Dirac fermions in the adjoint representation, using a Schrödinger functional formalism.
DEFF Research Database (Denmark)
Xie, Zhinan; Komatitsch, Dimitri; Martin, Roland
2014-01-01
In recent years, the application of time-domain adjoint methods to improve large, complex underground tomographic models at the regional scale has led to new challenges for the numerical simulation of forward or adjoint elastic wave propagation problems. An important challenge is to design […] We make the convolution formulation of the complex-frequency-shifted unsplit-field perfectly matched layer (CFS-UPML) derived in previous work more flexible by providing a new treatment to analytically remove singular parameters in the formulation. We also extend this new formulation to 3-D. Furthermore, we derive […] With perfectly matched absorbing layers we introduce a computationally efficient boundary storage strategy by saving information along the interface between the CFS-UPML and the main domain only, thus avoiding the need to solve a backward wave propagation problem inside the CFS-UPML, which is known to be highly […]
Spectral monodromy of non-self-adjoint operators
International Nuclear Information System (INIS)
Phan, Quang Sang
2014-01-01
In the present paper, we build a combinatorial invariant, called the “spectral monodromy” from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc [“Quantum monodromy in integrable systems,” Commun. Math. Phys. 203(2), 465–479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat [“On global action-angle coordinates,” Commun. Pure Appl. Math. 33(6), 687–706 (1980)]
Optimization of a neutron detector design using adjoint transport simulation
International Nuclear Information System (INIS)
Yi, C.; Manalo, K.; Huang, M.; Chin, M.; Edgar, C.; Applegate, S.; Sjoden, G.
2012-01-01
A synthetic aperture approach has been developed and investigated for Special Nuclear Materials (SNM) detection in vehicles passing a checkpoint at highway speeds. SNM is postulated to be stored in a moving vehicle, and detector assemblies are placed on the roadside or in chambers embedded below the road surface. Besides high efficiency, neutron and gamma spectral awareness is important for the detector assembly design, so that different SNMs can be detected and identified under various possible shielding settings. The detector assembly design is composed of a CsI gamma-ray detector block and five neutron detector blocks, with peak efficiencies targeting different energy ranges determined by adjoint simulations. In this study, formulations are derived using adjoint transport simulations to estimate detector efficiencies. The formulation is applied to investigate several neutron detector designs for Block IV, which has its peak efficiency in the thermal range, and Block V, designed to maximize the total neutron counts over the entire energy spectrum. The other blocks detect different neutron energies. All five neutron detector blocks and the gamma-ray block are assembled in both MCNP and deterministic simulation models, with detector responses calculated to validate the fully assembled design using a 30-group library. The simulation results show that the 30-group library, collapsed from an 80-group library using an adjoint-weighting approach with the YGROUP code, significantly reduced the computational cost while maintaining accuracy. (authors)
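The abstract does not spell out the adjoint-weighting formula used by YGROUP. A common choice for collapsing a fine-group cross-section set is bilinear (flux times adjoint) weighting, sketched below with purely hypothetical numbers; the function name and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def collapse_adjoint_weighted(sigma, flux, adjoint, group_map):
    """Collapse fine-group cross sections to coarse groups using
    flux/adjoint (bilinear) weighting, one standard collapsing choice:
        sigma_G = sum_g sigma_g * phi_g * phi*_g / sum_g phi_g * phi*_g
    group_map[g] gives the coarse group index of fine group g."""
    sigma, flux, adjoint = map(np.asarray, (sigma, flux, adjoint))
    n_coarse = max(group_map) + 1
    out = np.zeros(n_coarse)
    w = flux * adjoint  # bilinear weight per fine group
    for G in range(n_coarse):
        sel = np.array([g for g, m in enumerate(group_map) if m == G])
        out[G] = np.sum(sigma[sel] * w[sel]) / np.sum(w[sel])
    return out

# Hypothetical 4-group data collapsed to 2 coarse groups
sigma   = [2.0, 1.0, 0.5, 0.25]
flux    = [1.0, 2.0, 4.0, 8.0]
adjoint = [1.0, 1.0, 0.5, 0.5]
coarse = collapse_adjoint_weighted(sigma, flux, adjoint, [0, 0, 1, 1])
```

Adjoint weighting biases the collapse toward the fine groups that contribute most to the detector response, which is why the collapsed 30-group library can preserve accuracy for the detector calculation.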
Application of the adjoint function methodology for neutron fluence determination
International Nuclear Information System (INIS)
Haghighat, A.; Nanayakkara, B.; Livingston, J.; Mahgerefteh, M.; Luoma, J.
1991-01-01
In previous studies, the neutron fluence at a reactor pressure vessel has been estimated based on consolidation of transport theory calculations and experimental data obtained from in-vessel capsules and/or cavity dosimeters. Normally, a forward neutron transport calculation is performed for each fuel cycle and the neutron fluxes are integrated over the reactor operating time to estimate the neutron fluence. Such calculations are performed for a geometrical model which is composed of one-eighth (0 to 45 deg) of the reactor core and its surroundings; i.e., core barrel, thermal shield, downcomer, reactor vessel, cavity region, concrete wall, and instrumentation well. Because the model is large, transport theory calculations generally require a significant amount of computer memory and time; hence, more efficient methodologies such as the adjoint transport approach have been proposed. These studies, however, do not address the sensitivity studies needed for adjoint function calculations. The adjoint methodology has been employed to estimate the activity of a cavity dosimeter and that of an in-vessel capsule. A sensitivity study has been performed on the mesh distribution used in and around the cavity dosimeter and the in-vessel capsule. Further, since a major portion of the detector response is due to the neutrons originating in the peripheral fuel assemblies, a study on the use of a smaller calculational model has been performed
Spectral monodromy of non-self-adjoint operators
Energy Technology Data Exchange (ETDEWEB)
Phan, Quang Sang, E-mail: quang.phan@uj.edu.pl [Université de Rennes 1, Institut de Recherche Mathématique de Rennes (UMR 6625), Campus de Beaulieu, 35042 Rennes (France)
2014-01-15
In the present paper, we build a combinatorial invariant, called the “spectral monodromy” from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc [“Quantum monodromy in integrable systems,” Commun. Math. Phys. 203(2), 465–479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat [“On global action-angle coordinates,” Commun. Pure Appl. Math. 33(6), 687–706 (1980)].
International Nuclear Information System (INIS)
Metcalfe, D.E.; Campbell, J.E.; RamaRao, B.S.; Harper, W.V.; Battelle Project Management Div., Columbus, OH
1985-01-01
Sensitivity and uncertainty analysis are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of geostatistical and adjoint sensitivity techniques to aid in the calibration of an existing conceptual model of ground-water flow is demonstrated for the Leadville Limestone in Paradox Basin, Utah. The geostatistical method called kriging is used to statistically analyze the measured potentiometric data for the Leadville. This analysis consists of identifying anomalous data and data trends and characterizing the correlation structure between data points. Adjoint sensitivity analysis is then performed to aid in the calibration of a conceptual model of ground-water flow to the Leadville measured potentiometric data. Sensitivity derivatives of the fit between the modeled Leadville potentiometric surface and the measured potentiometric data to model parameters and boundary conditions are calculated by the adjoint method. These sensitivity derivatives are used to determine which model parameter and boundary condition values should be modified to most efficiently improve the fit of modeled to measured potentiometric conditions
Validation of a new midway forward-adjoint coupling option in MCNP
Energy Technology Data Exchange (ETDEWEB)
Serov, I.V.; John, T.M.; Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.
1996-09-01
The new midway Monte Carlo method is based on the coupling of scores from a forward and an adjoint Monte Carlo calculation on a surface between the source and the detector. The method is implemented in MCNP. Utilization of the method is fairly straightforward and does not require any substantial expertise. The midway Monte Carlo method was tested against the gamma-ray skyshine MCNP benchmark problem, which involves deep penetration and streaming along complicated paths. The midway method supplied results that agree with those of the reference calculation within the limits of the estimated statistical uncertainties. The efficiency of the easy-to-implement midway calculation is higher than that of the reference calculation, which is already optimized by use of an importance function. The midway method proves to be efficient in problems with complicated streaming paths towards small detectors. (author)
Validation of a new midway forward-adjoint coupling option in MCNP
International Nuclear Information System (INIS)
Serov, I.V.; John, T.M.; Hoogenboom, J.E.
1996-01-01
The new midway Monte Carlo method is based on the coupling of scores from a forward and an adjoint Monte Carlo calculation on a surface between the source and the detector. The method is implemented in MCNP. Utilization of the method is fairly straightforward and does not require any substantial expertise. The midway Monte Carlo method was tested against the gamma-ray skyshine MCNP benchmark problem, which involves deep penetration and streaming along complicated paths. The midway method supplied results that agree with those of the reference calculation within the limits of the estimated statistical uncertainties. The efficiency of the easy-to-implement midway calculation is higher than that of the reference calculation, which is already optimized by use of an importance function. The midway method proves to be efficient in problems with complicated streaming paths towards small detectors. (author)
Directory of Open Access Journals (Sweden)
Stephen C. Anco
2017-02-01
Full Text Available A conservation law theorem stated by N. Ibragimov, along with its subsequent extensions, is shown to be a special case of a standard formula that uses a pair consisting of a symmetry and an adjoint-symmetry to produce a conservation law through a well-known Fréchet derivative identity. Furthermore, the connection of this formula (and of Ibragimov’s theorem) to the standard action of symmetries on conservation laws is explained, which accounts for a number of major drawbacks that have appeared in recent work using the formula to generate conservation laws. In particular, the formula can generate trivial conservation laws and does not always yield all non-trivial conservation laws unless the symmetry action on the set of these conservation laws is transitive. It is emphasized that all local conservation laws for any given system of differential equations can instead be found by a general method using adjoint-symmetries. This general method is a kind of adjoint version of the standard Lie method to find all local symmetries and is completely algorithmic. The relationship between this method, Noether’s theorem and the symmetry/adjoint-symmetry formula is discussed.
Adjoint sensitivity theory for steady-state ground-water flow
International Nuclear Information System (INIS)
1983-11-01
In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady-state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah and the Wolfcamp carbonate/sandstone aquifer of the Palo Duro Basin in the Texas Panhandle. Two performance measures are evaluated: local heads and velocity in the vicinity of potential high-level nuclear waste repositories. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Local velocity-related performance measures are more sensitive to hydraulic conductivities. The uncertainty in the performance measure is a function of the parameter sensitivity, parameter variance and the correlation between parameters. Given a parameter covariance matrix, the uncertainty of the performance measure can be calculated. Although no results are presented here, the implications of uncertainty calculations for the two studies are discussed. 18 references, 25 figures
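The abstract notes that the uncertainty of a performance measure combines parameter sensitivity, variance and correlation; to first order this is the standard "sandwich" propagation formula var(P) ≈ gᵀCg. The sketch below uses purely hypothetical numbers (in the study, g would hold the adjoint-computed sensitivity derivatives and C the estimated parameter covariance):

```python
import numpy as np

# First-order (sandwich) uncertainty propagation: var(P) ~ g^T C g, where
# g holds the sensitivity derivatives dP/dp of the performance measure and
# C is the parameter covariance matrix. Values below are illustrative only.
g = np.array([0.5, -1.2, 0.3])      # sensitivities of the performance measure
C = np.array([[0.04, 0.01, 0.00],   # variances on the diagonal,
              [0.01, 0.09, 0.02],   # covariances (correlations) off it
              [0.00, 0.02, 0.01]])
var_P = g @ C @ g                   # variance of the performance measure
std_P = np.sqrt(var_P)
```

The off-diagonal covariance terms are what couple correlated parameters into the overall uncertainty; with a diagonal C the formula reduces to a weighted sum of the individual parameter variances.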
Application of the adjoint optimisation of shock control bump for ONERA-M6 wing
Nejati, A.; Mazaheri, K.
2017-11-01
This article is devoted to the numerical investigation of the shock wave/boundary layer interaction (SWBLI) as the main factor influencing the aerodynamic performance of transonic bumped airfoils and wings. The numerical analysis is conducted for the ONERA-M6 wing through a shock control bump (SCB) shape optimisation process using the adjoint optimisation method. SWBLI is analyzed for both clean and bumped airfoils and wings, and it is shown how the modified wave structure originating upstream of the SCB reduces the wave drag by improving the boundary layer velocity profile downstream of the shock wave. The numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm are used to find the optimum location and shape of the SCB for the ONERA-M6 airfoil and wing. Two different geometrical models are introduced for the 3D SCB, one with linear variations, and another with periodic variations. Both configurations result in drag reduction and improvement in the aerodynamic efficiency, but the periodic model is more effective. Although the three-dimensional flow structure involves many more complexities, the overall results are shown to be similar to the two-dimensional case.
An adjoint-based framework for maximizing mixing in binary fluids
Eggl, Maximilian; Schmid, Peter
2017-11-01
Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above mentioned regime (Reynolds number Re = 1000 , Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.
Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids
Nielsen, Eric J.; Diskin, Boris
2012-01-01
A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
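The complex-variable verification mentioned above is, in its simplest form, the complex-step derivative: perturb an input along the imaginary axis and read the derivative off the imaginary part of the output, with no subtractive cancellation even for tiny step sizes. A minimal sketch on a toy scalar function (an illustrative stand-in, not the authors' flow solver):

```python
import numpy as np

def f(x):
    """Toy smooth objective standing in for an aerodynamic cost function."""
    return np.exp(x) * np.sin(x) / (x**2 + 1.0)

def complex_step(f, x, h=1e-30):
    """Complex-step derivative f'(x) ~ Im(f(x + i*h)) / h.
    Because no difference of nearly equal numbers is formed, h can be
    taken absurdly small and the result is accurate to machine precision,
    making it an excellent independent check on adjoint gradients."""
    return f(x + 1j * h).imag / h

d_cs = complex_step(f, 0.7)
```

The same trick applies to a whole solver, provided every operation in it propagates complex arithmetic, which is why it serves as an independent (if expensive, one run per design variable) verification of discretely consistent adjoint sensitivities.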
Self-adjointness of the fast flux in a pressurized water reactor
International Nuclear Information System (INIS)
Mosteller, R.D.
1985-01-01
Most computer codes for the analysis of systems transients rely on a simplified representation of the active core, typically employing either a one-dimensional or a point kinetics model. The collapsing of neutronics data from multidimensional steady-state calculations normally employs flux/flux-adjoint weighting. The multidimensional calculations, however, usually are performed only for the forward problem, not the adjoint. The collapsing methodologies employed in generating the neutronics input for transient codes typically construct adjoint fluxes from the assumption that the fast flux is self-adjoint. Until now, no further verification of this assumption has been undertaken for thermal reactors. As part of the verification effort for EPRI's reactor analysis support package, the validity of this assumption now has been investigated for a modern pressurized water reactor (PWR). The PDQ-7 code was employed to perform two-group fine-mesh forward and adjoint calculations for a two-dimensional representation of Zion Unit 2 at beginning of life, based on the standard PWR ARMP model. It has been verified that the fast flux is very nearly self-adjoint in a PWR. However, a significant error can arise during the subsequent construction of the thermal adjoint flux unless allowance is made for the difference between the forward and adjoint thermal buckling terms. When such a difference is included, the thermal adjoint flux can be estimated very accurately
Hoteit, Ibrahim
2010-03-02
An eddy-permitting adjoint-based assimilation system has been implemented to estimate the state of the tropical Pacific Ocean. The system uses the Massachusetts Institute of Technology's general circulation model and its adjoint. The adjoint method is used to adjust the model to observations by controlling the initial temperature and salinity; temperature, salinity, and horizontal velocities at the open boundaries; and surface fluxes of momentum, heat, and freshwater. The model is constrained with most of the available data sets in the tropical Pacific, including Tropical Atmosphere and Ocean, ARGO, expendable bathythermograph, and satellite SST and sea surface height data, and climatologies. Results of hindcast experiments in 2000 suggest that the iterated adjoint-based descent is able to significantly improve the model consistency with the multivariate data sets, providing a dynamically consistent realization of the tropical Pacific circulation that generally matches the observations to within specified errors. The estimated model state is evaluated both by comparisons with observations and by checking the controls, the momentum balances, and the representation of small-scale features that were not well sampled by the observations used in the assimilation. As part of these checks, the estimated controls are smoothed and applied in independent model runs to check that small changes in the controls do not greatly change the model hindcast. This is a simple ensemble-based uncertainty analysis. In addition, the original and smoothed controls are applied to a version of the model with doubled horizontal resolution resulting in a broadly similar “downscaled” hindcast, showing that the adjustments are not tuned to a single configuration (meaning resolution, topography, and parameter settings). The time-evolving model state and the adjusted controls should be useful for analysis or to supply the forcing, initial, and boundary conditions for runs of other models.
International Nuclear Information System (INIS)
Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.
2007-01-01
In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and is able to reach a higher quality final solution as compared to the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas
A moving mesh method with variable relaxation time
Soheili, Ali Reza; Stockie, John M.
2006-01-01
We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in order…
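As a concrete illustration of the MMPDE idea (a hedged sketch under simplifying assumptions, not the authors' scheme): relax an initially uniform 1-D mesh toward equidistribution of an arc-length monitor function, with the relaxation time τ setting how quickly the mesh responds.

```python
import numpy as np

def equidistribute(monitor, n=41, tau=1.0, dt=0.005, steps=20000):
    """Relax a uniform mesh on [0,1] toward equidistribution of a monitor
    function M(x) with a simple explicit semi-discrete MMPDE:
        tau * dx_i/dt = M_{i+1/2}(x_{i+1}-x_i) - M_{i-1/2}(x_i-x_{i-1})
    At steady state M*dx is constant per cell, so high-M regions get
    small spacings. tau is the mesh relaxation time."""
    x = np.linspace(0.0, 1.0, n)
    for _ in range(steps):
        xm = 0.5 * (x[1:] + x[:-1])      # cell midpoints
        M = monitor(xm)                  # monitor at midpoints
        flux = M * np.diff(x)            # M_{i+1/2} * (x_{i+1} - x_i)
        x[1:-1] += (dt / tau) * (flux[1:] - flux[:-1])
    return x

# Arc-length monitor sqrt(1 + u_x^2) for a steep tanh front at x = 0.5
u_x = lambda x: 50.0 / np.cosh(50.0 * (x - 0.5)) ** 2
mesh = equidistribute(lambda x: np.sqrt(1.0 + u_x(x) ** 2))
```

With dt/τ small enough that each update is a convex combination of neighboring nodes, the mesh stays monotone while points migrate into the front; a time-dependent or adaptively chosen τ, as the abstract describes, controls how fast the mesh chases a moving solution feature.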
International Nuclear Information System (INIS)
Matthes, W.K.
1998-01-01
The 'adjoint transport equation in its integro-differential form' is derived for the radiation damage produced by atoms injected into solids. We reduce it to the one-dimensional form and prepare it for a numerical solution by discretizing the continuous variables energy, space and direction, replacing the partial differential quotients by finite differences, and evaluating the collision integral by a double sum. By a proper manipulation of this double sum the adjoint transport equation turns into a (very large) set of linear equations with a tridiagonal matrix, which can be solved by a special (simple and fast) algorithm. The solution of this set of linear equations contains complete information on a specified damage type (e.g. the energy deposited in a volume V) in terms of the function D(i,E,c,x), which gives the damage produced by all particles generated in a cascade initiated by a particle of type i starting at x with energy E in direction c. It is essential to remark that one calculation gives the damage function D for the complete ranges of the variables i, E, c and x (for numerical reasons, of course, on grid points in the (E,c,x)-space). This is most useful in applications where a general source distribution S(i,E,c,x) of particles is given by the experimental setup (e.g. beam window and target in proton accelerator work: the beam protons, along their path through the window or target material, generate recoil atoms by elastic collisions or nuclear reactions; these recoil atoms form the particle source S). The total damage produced is then given by D = Σ_i ∫∫∫ S(i,E,c,x) D(i,E,c,x) dE dc dx. A Fortran-77 program running on a PC-486 was written for the overall procedure and applied to some problems
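The abstract does not name its "special (simple and fast) algorithm"; the standard O(n) solver for tridiagonal linear systems is the Thomas algorithm (forward elimination plus back substitution), sketched here on a small illustrative system rather than the paper's large discretized one.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.
    a = sub-diagonal (a[0] unused), b = main diagonal,
    c = super-diagonal (c[-1] unused). O(n) work, no fill-in."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small 1-D Laplacian-like system tridiag(-1, 2, -1), whose exact
# solution for this right-hand side is x = (1, 1, 1, 1)
x = solve_tridiagonal([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1])
```

The algorithm is stable without pivoting for the diagonally dominant matrices that finite-difference discretizations like the one described here typically produce.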
Design Method of Active Disturbance Rejection Variable Structure Control System
Directory of Open Access Journals (Sweden)
Yun-jie Wu
2015-01-01
Full Text Available Based on lines cluster approaching theory and inspired by the traditional exponent reaching law method, a new control method, lines cluster approaching mode control (LCAMC), is designed to improve the parameter simplicity and structure optimization of the control system. The design guidelines and mathematical proofs are also given. To further improve the tracking performance and the inhibition of white noise, the active disturbance rejection control (ADRC) method is connected with the LCAMC method to create the extended state observer based lines cluster approaching mode control (ESO-LCAMC) method. Taking a traditional servo control system as an example, two control schemes are constructed and two kinds of comparison are carried out. Computer simulation results show that the LCAMC method, having better tracking performance than the traditional sliding mode control (SMC) system, makes the servo system track the command signal quickly and accurately in spite of persistent equivalent disturbances, and that the ESO-LCAMC method further reduces the tracking error and filters the white noise added on the system states. Simulation results verify the robust property and comprehensive performance of the control schemes.
Variable-mesh method of solving differential equations
Van Wyk, R.
1969-01-01
Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
Sarvari, S. M. Hosseini
2017-09-01
The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, so that the direction cosines of radiative rays remain invariant through each control volume; the directions of the discrete ordinates are then changed locally on passing each control volume, according to Snell's law of refraction. The results are compared with previous studies in this field. Despite its simplicity, the variable discrete ordinate method shows good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.
Apparatus and method for variable angle slant hole collimator
Lee, Seung Joon; Kross, Brian J.; McKisson, John E.
2017-07-18
A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.
Combustion engine variable compression ratio apparatus and method
Lawrence, Keith E [Peoria, IL]; Strawbridge, Bryan E [Dunlap, IL]; Dutart, Charles H [Washington, IL]
2006-06-06
An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.
Spectral Solutions of Self-adjoint Elliptic Problems with Immersed Interfaces
International Nuclear Information System (INIS)
Auchmuty, G.; Klouček, P.
2011-01-01
This paper describes a spectral representation of solutions of self-adjoint elliptic problems with immersed interfaces. The interface is assumed to be a simple non-self-intersecting closed curve that obeys some weak regularity conditions. The problem is decomposed into two problems, one with zero interface data and the other with zero exterior boundary data. The problem with zero interface data is solved by standard spectral methods. The problem with non-zero interface data is solved by introducing an interface space H Γ (Ω) and constructing an orthonormal basis of this space. This basis is constructed using a special class of orthogonal eigenfunctions analogously to the methods used for standard trace spaces by Auchmuty (SIAM J. Math. Anal. 38, 894–915, 2006). Analytical and numerical approximations of these eigenfunctions are described and some simulations are presented.
Scattering theory for self-adjoint extensions
International Nuclear Information System (INIS)
Kuperin, Yu.A.; Pavlov, B.S.; Kurasov, P.B.; Makarov, K.A.; Melnikov, Yu. B.; Yevstratov, V.V
1989-01-01
In this paper a new approach is suggested to the construction of a wide class of exactly solvable quantum-mechanical models of scattering, quantum-mechanical models of solids and an exactly solvable quantum-stochastical model. For most of the models the spectral analysis is performed in an explicit form, for many body problems it is reduced to one-dimensional integral equations. The construction of all models is based on a new version of extension theory, which uses the boundary forms for abstract operators. This version gives a simple and general method to join the pair of operators, one of them abstract, and the other one differential. The solvability of these models is based on Krein's formula for quasiresolvents
Fast analytical method for the addition of random variables
International Nuclear Information System (INIS)
Senna, V.; Milidiu, R.L.; Fleming, P.V.; Salles, M.R.; Oliveria, L.F.S.
1983-01-01
Using the minimal cut set representation of a fault tree, a new approach to the method of moments is proposed in order to estimate confidence bounds on the top event probability. The method uses two or three moments either to fit a distribution (the normal and lognormal families) or to evaluate bounds from standard inequalities (e.g., Markov, Tchebycheff). Examples indicate that the results obtained with the lognormal family are in good agreement with those obtained by Monte Carlo simulation
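The two-moment lognormal fit this abstract relies on is standard and compact; a hedged sketch follows, with illustrative numbers rather than any fault-tree data from the paper:

```python
import math

def lognormal_bounds(mean, var, z=1.645):
    """Fit a lognormal distribution to the first two moments (mean,
    variance) of the top-event probability and return (lower, upper)
    confidence bounds; z = 1.645 gives an approximate two-sided 90%
    interval."""
    sigma2 = math.log(1.0 + var / mean**2)          # log-space variance
    mu = math.log(mean) - 0.5 * sigma2              # log-space mean
    s = math.sqrt(sigma2)
    return math.exp(mu - z * s), math.exp(mu + z * s)
```

The geometric mean of the two bounds equals the fitted median exp(mu), a quick consistency check on the fit.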
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor network. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...
Convergence problems associated with the iteration of adjoint equations in nuclear reactor theory
International Nuclear Information System (INIS)
Ngcobo, E.
2003-01-01
Convergence problems associated with the iteration of adjoint equations based on two-group neutron diffusion theory approximations in slab geometry are considered. For this purpose first-order variational techniques are adopted to minimise numerical errors involved. The importance of deriving the adjoint source from a breeding ratio is illustrated. The results obtained are consistent with the expected improvement in accuracy
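The distinction between the forward eigenproblem and its mathematical adjoint, central to iterating adjoint equations, can be sketched with a toy power iteration; the two-group matrices below are illustrative stand-ins, not a real reactor model:

```python
import numpy as np

# Toy discretized two-group eigenproblem A*phi = (1/k) F*phi and its
# mathematical adjoint A^T phi* = (1/k) F^T phi*, which must reproduce
# the same eigenvalue k.  Matrix entries are invented for illustration.
A = np.array([[0.30, 0.00],
              [-0.02, 0.12]])           # removal + downscatter operator
F = np.array([[0.005, 0.14],
              [0.000, 0.00]])           # fission production operator

def power_iteration(A, F, tol=1e-12, max_iter=10_000):
    """Standard fission-source (power) iteration for the k-eigenvalue."""
    phi, k = np.ones(A.shape[0]), 1.0
    for _ in range(max_iter):
        phi_new = np.linalg.solve(A, F @ phi) / k
        k_new = k * phi_new.sum() / phi.sum()
        if abs(k_new - k) < tol:
            break
        k, phi = k_new, phi_new
    return k_new, phi_new / phi_new.sum()

k_fwd, flux = power_iteration(A, F)             # forward problem
k_adj, importance = power_iteration(A.T, F.T)   # mathematical adjoint
```

The forward flux and the adjoint (importance) vectors differ, but the eigenvalue agrees, which is the property perturbation calculations depend on.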
Energy Technology Data Exchange (ETDEWEB)
Martinez, Aquilino Senra; Silva, Fernando Carvalho da; Cardoso, Carlos Eduardo Santos [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2000-07-01
In some applications of perturbation theory, it is necessary to know the adjoint neutron flux, which is obtained by solving the adjoint neutron diffusion equation. However, the multigroup constants used for this are weighted only by the direct neutron flux, obtained from the solution of the direct P1 equations. In this work, this procedure is questioned, and the adjoint P1 equations are derived from the neutron transport equation, the operator reversion rules, and analogies between direct and adjoint parameters. (author)
Variational method for objective analysis of scalar variable and its ...
Indian Academy of Sciences (India)
e-mail: sinha@tropmet.res.in. In this study real time data have been used to compare the standard and triangle method ...
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
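The optimization model described above, a finite state, finite action, infinite discrete time horizon Markov decision process, can be sketched with value iteration; the transition probabilities and rewards below are invented placeholders, not outputs of the shrub-production model:

```python
import numpy as np

# Minimal value-iteration sketch for a discounted Markov decision
# process with 2 states and 2 actions.  P[a][s, s'] are transition
# probabilities and R[a][s] are per-step rewards (e.g. biomass yields);
# all numbers here are illustrative stand-ins.
P = [np.array([[0.9, 0.1], [0.6, 0.4]]),   # action 0: light defoliation
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # action 1: heavy defoliation
R = [np.array([1.0, 0.2]), np.array([2.0, 0.5])]
gamma = 0.95                               # discount factor

def value_iteration(P, R, gamma, tol=1e-10):
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and optimal policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

The returned policy maps each state (e.g. plant-vigor class) to the defoliation action that maximizes discounted long-run yield.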
Methods for Analyzing Electric Load Shape and its Variability
Energy Technology Data Exchange (ETDEWEB)
Price, Philip
2010-05-12
Current methods of summarizing and analyzing electric load shape are discussed briefly and compared. Simple rules of thumb for graphical display of load shapes are suggested. We propose a set of parameters that quantitatively describe the load shape in many buildings. Using the example of a linear regression model to predict load shape from time and temperature, we show how quantities such as the load's sensitivity to outdoor temperature, and the effectiveness of demand response (DR), can be quantified. Examples are presented using real building data.
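The regression idea can be sketched on synthetic data; the data-generating numbers below are invented, not the report's building measurements, and the point is only that the temperature sensitivity falls out as a coefficient:

```python
import numpy as np

# Synthetic example: regress hourly load on outdoor temperature and an
# occupied-hours indicator, then read the load's temperature sensitivity
# off the fitted coefficient.
rng = np.random.default_rng(0)
hours = np.tile(np.arange(24), 30)                      # 30 days, hourly
temp = 15 + 10 * np.sin(2 * np.pi * (hours - 14) / 24) + rng.normal(0, 1, hours.size)
occupied = ((hours >= 8) & (hours <= 18)).astype(float)
load = 50 + 2.0 * temp + 5.0 * occupied + rng.normal(0, 2, hours.size)

# Design matrix: intercept, temperature, occupied-hours indicator.
X = np.column_stack([np.ones_like(temp), temp, occupied])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
temp_sensitivity = coef[1]      # load change per degree of outdoor temp
```

A DR-style question ("how much load does the occupied period add?") is answered the same way, by the indicator's coefficient.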
Oscillator representations for self-adjoint Calogero Hamiltonians
Energy Technology Data Exchange (ETDEWEB)
Gitman, D M [Institute of Physics, University of Sao Paulo (Brazil); Tyutin, I V; Voronov, B L, E-mail: gitman@dfn.if.usp.br, E-mail: tyutin@lpi.ru, E-mail: voronov@lpi.ru [Lebedev Physical Institute, Moscow (Russian Federation)
2011-10-21
In Gitman et al (2010 J. Phys. A: Math. Theor. 43 145205), we presented a mathematically rigorous quantum-mechanical treatment of a one-dimensional motion of a particle in the Calogero potential V(x) = αx⁻². We described all possible self-adjoint (s.a.) operators (s.a. Hamiltonians) associated with the differential operation H = −d²/dx² + αx⁻² for the Calogero Hamiltonian. Here, we discuss a new aspect of the problem, the so-called oscillator representations for the Calogero Hamiltonians. As is known, operators of the form N-hat = a-hat⁺a-hat and A-hat = a-hat a-hat⁺ are called operators of oscillator type. Oscillator-type operators possess a number of useful properties in the case when the elementary operators a-hat are closed. It turns out that some s.a. Calogero Hamiltonians allow oscillator-type representations. We describe such Hamiltonians and find the corresponding mutually adjoint elementary operators a-hat and a-hat⁺. An oscillator-type representation for a given Hamiltonian is generally not unique. (paper)
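The oscillator-type structure N = a⁺a, A = aa⁺ is easy to exhibit with truncated ladder-operator matrices; this is a generic textbook illustration of the definition used in the abstract, not the Calogero-specific construction:

```python
import numpy as np

# Truncated matrix representation of the oscillator ladder operators,
# a|n> = sqrt(n)|n-1>.  The operator N = a^+ a is then self-adjoint
# with spectrum {0, 1, ..., d-1}, illustrating "operators of
# oscillator type" N-hat = a-hat^+ a-hat and A-hat = a-hat a-hat^+.
d = 6
a = np.diag(np.sqrt(np.arange(1, d)), k=1)    # lowering operator
N = a.conj().T @ a                            # number operator a^+ a
A = a @ a.conj().T                            # companion operator a a^+
```

In the untruncated theory A = N + 1; in the truncated matrices this holds on all but the last basis vector, a standard finite-dimension edge effect.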
Numerical study of dense adjoint matter in two color QCD
International Nuclear Information System (INIS)
Hands, S.; Morrison, S.; Scorzato, L.; Oevers, M.
2000-06-01
We identify the global symmetries of SU(2) lattice gauge theory with N flavors of staggered fermion in the presence of a quark chemical potential μ, for fermions in both fundamental and adjoint representations, and anticipate likely patterns of symmetry breaking at both low and high densities. Results from numerical simulations of the model with N=1 adjoint flavor on a 4³ × 8 lattice are presented, using both hybrid Monte Carlo and two-step multi-boson algorithms. It is shown that the sign of the fermion determinant starts to fluctuate once the model enters a phase with non-zero baryon charge density. HMC simulations are not ergodic in this regime, but TSMB simulations retain ergodicity even in the dense phase, and in addition appear to show superior decorrelation. The HMC results for the equation of state and the pion mass show good quantitative agreement with the predictions of chiral perturbation theory, which should hold only for N≥2. The TSMB results incorporating the sign of the determinant support a delayed onset transition, consistent with the pattern of symmetry breaking expected for N=1. (orig.)
Self-adjoint oscillator operator from a modified factorization
Energy Technology Data Exchange (ETDEWEB)
Reyes, Marco A. [Departamento de Fisica, DCI Campus Leon, Universidad de Guanajuato, Apdo. Postal E143, 37150 Leon, Gto. (Mexico); Rosu, H.C., E-mail: hcr@ipicyt.edu.mx [IPICyT, Instituto Potosino de Investigacion Cientifica y Tecnologica, Apdo. Postal 3-74 Tangamanga, 78231 San Luis Potosi, S.L.P. (Mexico); Gutierrez, M. Ranferi [Departamento de Fisica, DCI Campus Leon, Universidad de Guanajuato, Apdo. Postal E143, 37150 Leon, Gto. (Mexico)
2011-05-30
By using an alternative factorization, we obtain a self-adjoint oscillator operator of the form L_δ = d/dx ( p_δ(x) d/dx ) − ( x²/p_δ(x) + p_δ(x) − 1 ), where p_δ(x) = 1 + δe^(−x²), with δ ∈ (−1, ∞) an arbitrary real factorization parameter. At positive values of δ, this operator interpolates between the quantum harmonic oscillator Hamiltonian for δ=0 and a scaled Hermite operator at high values of δ. For negative values of δ, the eigenfunctions look like deformed quantum mechanical Hermite functions. Possible applications are mentioned. -- Highlights: → We present a generalization of the Mielnik factorization. → We study the case of linear relationship between the factorization coefficients. → We introduce a new one-parameter self-adjoint oscillator operator. → We show its properties depending on the values of the parameter.
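Because L_δ is in Sturm-Liouville form, a symmetric finite-difference discretization makes its self-adjointness and its δ=0 harmonic-oscillator limit easy to check numerically; the grid parameters below are arbitrary choices for the sketch:

```python
import numpy as np

# Symmetric finite-difference discretization of
#   L_delta = d/dx( p_delta(x) d/dx ) - ( x^2/p_delta(x) + p_delta(x) - 1 ),
#   p_delta(x) = 1 + delta*exp(-x^2),
# with zero (Dirichlet) boundary values on a truncated interval.
# The Sturm-Liouville form yields a symmetric matrix; at delta = 0,
# -L_0 = -d^2/dx^2 + x^2 has the harmonic-oscillator spectrum 1, 3, 5, ...
def build_L(delta, n=401, xmax=8.0):
    x, h = np.linspace(-xmax, xmax, n, retstep=True)
    p = lambda y: 1.0 + delta * np.exp(-y * y)
    pl, pr = p(x - h / 2), p(x + h / 2)        # p at half-grid points
    q = x**2 / p(x) + p(x) - 1.0
    return (np.diag(pr[:-1], 1) + np.diag(pl[1:], -1)
            - np.diag(pl + pr)) / h**2 - np.diag(q)
```

The off-diagonal entries coincide because p at x_i + h/2 equals p at x_(i+1) − h/2, which is exactly what the divergence-form discretization guarantees.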
The Variability and Evaluation Method of Recycled Concrete Aggregate Properties
Directory of Open Access Journals (Sweden)
Zhiqing Zhang
2017-01-01
Full Text Available Even with the same sources and regeneration techniques, the properties of recycled aggregate (RA) may display large variations, and two sets with the same value of a single property index may still differ greatly in their overall properties. How, then, can the overall property of RA be evaluated accurately? Eight groups of RAs from pavement and building sources were used to investigate methods of evaluating the holistic characteristics of RA. After testing and investigation, the physical and mechanical property data show distinct dispersion and instability; thus, it is difficult to express the overall characteristics through any single property parameter. The Euclidean distance can express the similarity of samples: the closer the distance, the more similar the property. The standard deviation of the whole-property Euclidean distances for the two types of RA is Sk=7.341 and Sk=2.208, respectively, which shows that the property of building RA fluctuates greatly, while pavement RA is more stable. There are certain correlations among the apparent density, water absorption, and crushed value of RAs, and the Mahalanobis distance method can evaluate the overall property directly using its parameters (mean, variance, and covariance) and can provide a grade evaluation model for RAs.
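The Mahalanobis distance mentioned above accounts for exactly the correlations among property indices (apparent density, water absorption, crushed value) that defeat single-index comparison; a generic sketch, not the paper's grading model:

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of a property vector x from the cloud of
    property vectors in `data` (rows = aggregate groups, columns =
    property indices).  Unlike Euclidean distance, it uses the sample
    mean, variance, and covariance, so correlated indices are not
    double-counted."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

A group whose vector sits at the sample mean scores zero; grading thresholds on this distance would then separate stable from highly variable aggregates.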
Bernhardt, Jase; Carleton, Andrew M.
2018-05-01
The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
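The two averaging definitions compared in the study are simple to state in code; the temperatures below are synthetic, not the CONUS station data:

```python
import numpy as np

def daily_means(hourly_temps):
    """Compare the two daily-mean definitions for one day of 24 hourly
    temperatures: (Tmax + Tmin)/2 versus the mean of all 24 readings.
    Their difference reflects the asymmetry of the diurnal curve."""
    t = np.asarray(hourly_temps, dtype=float)
    twice_daily = (t.max() + t.min()) / 2.0
    hourly_avg = t.mean()
    return twice_daily, hourly_avg, twice_daily - hourly_avg
```

For a symmetric sinusoidal day the two definitions agree; any skew in the diurnal curve, of the kind the surface and atmosphere variables in the study induce, makes them diverge.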
Running coupling in SU(2) gauge theory with two adjoint fermions
DEFF Research Database (Denmark)
Rantaharju, Jarno; Rantalaiho, Teemu; Rummukainen, Kari
2016-01-01
We study SU(2) gauge theory with two Dirac fermions in the adjoint representation of the gauge group on the lattice. Using clover improved Wilson fermion action with hypercubic truncated stout smearing we perform simulations at larger coupling than before. We measure the evolution of the coupling constant using the step scaling method with the Schrödinger functional and study the remaining discretization effects. At weak coupling we observe significant discretization effects, which make it difficult to obtain a fully controlled continuum limit. Nevertheless, the data remains consistent with the existence of a fixed point in the interval 2.2 ≲ g∗² ≲ 3. We also measure the anomalous dimension and find that its value at the fixed point is γ∗ ≃ 0.2 ± 0.03.
Four-loop vacuum energy density of the SU($N_c$) + adjoint Higgs theory
Kajantie, Keijo; Rummukainen, K; Schröder, Y
2003-01-01
We compute the dimensionally regularised four-loop vacuum energy density of the SU(N_c) gauge + adjoint Higgs theory, in the disordered phase. "Scalarisation", or reduction to a small set of master integrals of the type appearing in scalar field theories, is carried out in d dimensions, employing general partial integration identities through an algorithm developed by Laporta, while the remaining scalar integrals are evaluated in d = 3 − 2ε dimensions, by expanding in ε ≪ 1 and evaluating a number of coefficients. The results have implications for the thermodynamics of finite temperature QCD, allowing one to determine perturbative contributions of orders O(g^6 ln(1/g)) and O(g^6) to the pressure, while the general methods are applicable also to studies of critical phenomena in QED-like statistical physics systems.
Partial differential equations with variable exponents variational methods and qualitative analysis
Radulescu, Vicentiu D
2015-01-01
Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive methods.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity
Li, Dunzhu; Gurnis, Michael; Stadler, Georg
2017-04-01
We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
Miyoshi, Takayuki; Obayashi, Masayuki; Peter, Daniel; Tono, Yoko; Tsuboi, Seiji
2017-01-01
A three-dimensional seismic wave speed model in the Kanto region of Japan was developed using adjoint tomography for application in the effective reproduction of observed waveforms. Starting with a model based on previous travel time tomographic results, we inverted the waveforms obtained at seismic broadband stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. Additionally, all centroid times of the source solutions were determined before the structural inversion. The synthetic displacements were calculated using the spectral-element method (SEM) in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton’s method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. Computations of the forward and adjoint simulations were conducted on the K computer in Japan. The optimized SEM code required a total of 6720 simulations using approximately 62,000 node hours to obtain the final model after 16 iterations. The proposed model reveals several anomalous areas with extremely low-Vs values in comparison with those of the initial model. These anomalies were found to correspond to geological features, earthquake sources, and volcanic regions with good data coverage and resolution. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes showed better fit than the initial model to the observed waveforms in different period ranges within 5–30 s. This result indicates that the model can accurately predict actual waveforms.
International Nuclear Information System (INIS)
Millwater, Harry; Singh, Gulshan; Cortina, Miguel
2012-01-01
There are many methods to identify the important variable out of a set of random variables, i.e., “inter-variable” importance; however, to date there are no comparable methods to identify the “region” of importance within a random variable, i.e., “intra-variable” importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. - Highlights: ► Probabilistic sensitivity methodology. ► Determines the “region” of importance within random variables such as left tail, near tail, center, right tail, etc. ► Uses the Score Function approach to reuse the samples, hence, negligible cost. ► No restrictions on the random variable types or limit states.
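The sample-reuse idea behind the Score Function approach can be shown in its basic (non-localized) form; this is a simplified illustration of score-function sensitivity for a single distribution parameter, not the paper's kernel-based intra-variable method, and the limit state is an invented toy:

```python
import numpy as np

# Score-function sensitivity reusing existing samples: the derivative
# of P[g(X) > 0] with respect to a distribution parameter theta is
# E[ I(g(X) > 0) * d(ln f)/d(theta) ].  Here X ~ N(mu, sigma^2) and
# theta = mu, so the score is (x - mu)/sigma^2; no new simulations of
# g are needed, only a reweighting of the same sample set.
rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0
x = rng.normal(mu, sigma, 200_000)
fail = x > 2.0                        # toy limit state g(x) = x - 2
score = (x - mu) / sigma**2
sens = np.mean(fail * score)          # estimate of d P(fail) / d mu
```

The exact value here is the standard normal density at 2 (about 0.054), so the estimator can be checked directly; the intra-variable version replaces the global score with kernel-localized perturbations of the CDF.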
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2002-07-01
In some applications of perturbation theory, it is necessary to know the adjoint neutron flux, which is obtained by solving the adjoint neutron diffusion equation. However, the multigroup constants used for this are weighted only by the direct neutron flux, obtained from the solution of the direct P1 equations. In this work, the adjoint P1 equations are derived from the neutron transport equation, the operator reversion rules, and analogies between direct and adjoint parameters. The direct and adjoint neutron fluxes resulting from the solution of the P1 equations were used in three different weighting processes to obtain the macrogroup macroscopic cross sections. Noticeable differences were found among them. (author)
Energy Technology Data Exchange (ETDEWEB)
Yoo, Sua [Department of Radiation Oncology, Duke University Medical Center, Box 3295, Durham, NC 27710 (United States); Kowalok, Michael E [Department of Radiation Oncology, Virginia Commonwealth University Health System, 401 College St., PO Box 980058, Richmond, VA 23298-0058 (United States); Thomadsen, Bruce R [Department of Medical Physics, University of Wisconsin-Madison, 1530 MSC, 1300 University Ave., Madison, WI 53706 (United States); Henderson, Douglass L [Department of Engineering Physics, University of Wisconsin-Madison, 153 Engineering Research Bldg., 1500 Engineering Dr., Madison, WI 53706 (United States)
2007-02-07
We continue our work on the development of an efficient treatment-planning algorithm for prostate seed implants by incorporation of an automated seed and needle configuration routine. The treatment-planning algorithm is based on region of interest (ROI) adjoint functions and a greedy heuristic. As defined in this work, the adjoint function of an ROI is the sensitivity of the average dose in the ROI to a unit-strength brachytherapy source at any seed position. The greedy heuristic uses a ratio of target and critical structure adjoint functions to rank seed positions according to their ability to irradiate the target ROI while sparing critical structure ROIs. Because seed positions are ranked in advance and because the greedy heuristic does not modify previously selected seed positions, the greedy heuristic constructs a complete seed configuration quickly. Isodose surface constraints determine the search space and the needle constraint limits the number of needles. This study additionally includes a methodology that scans possible combinations of these constraint values automatically. This automated selection scheme saves the user the effort of manually searching constraint values. With this method, clinically acceptable treatment plans are obtained in less than 2 min. For comparison, the branch-and-bound method used to solve a mixed integer-programming model took close to 2.5 h to arrive at a feasible solution. Both methods achieved good treatment plans, but the speedup provided by the greedy heuristic was a factor of approximately 100. This attribute makes this algorithm suitable for intra-operative real-time treatment planning.
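The greedy ranking step described above is what makes the planner fast: positions are scored once and never revisited. A hedged sketch follows, in which the adjoint values and the dose constraint are random stand-ins, not real ROI adjoint functions or clinical constraints:

```python
import numpy as np

# Greedy seed selection: each candidate position gets a figure of merit
# equal to (target-ROI adjoint) / (critical-ROI adjoint), positions are
# ranked once in advance, and seeds are taken in rank order until a
# (hypothetical) target-dose requirement is met.
rng = np.random.default_rng(2)
n_pos = 100
target_adj = rng.uniform(0.5, 2.0, n_pos)    # dose to target per unit seed
critical_adj = rng.uniform(0.1, 1.0, n_pos)  # dose to critical ROI per seed
ratio = target_adj / critical_adj
order = np.argsort(-ratio)                   # best ratio first

def greedy_select(order, target_adj, required_dose):
    chosen, dose = [], 0.0
    for i in order:                          # never revisit earlier picks
        if dose >= required_dose:
            break
        chosen.append(int(i))
        dose += target_adj[i]
    return chosen, dose

seeds, dose = greedy_select(order, target_adj, required_dose=20.0)
```

Because no previously selected position is ever reconsidered, the whole configuration is built in one pass, which is the source of the roughly 100x speedup over branch-and-bound reported above.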
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with one another, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002 the ozone concentrations were influenced predominantly by one of the meteorological variables, whereas the model indicated that the ozone concentrations for the year 2000 were not influenced predominantly by the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
Adjoint Inversion for Extended Earthquake Source Kinematics From Very Dense Strong Motion Data
Ampuero, J. P.; Somala, S.; Lapusta, N.
2010-12-01
Addressing key open questions about earthquake dynamics requires a radical improvement in the robustness and resolution of seismic observations of large earthquakes. Proposals for a new generation of earthquake observation systems include the deployment of “community seismic networks” of low-cost accelerometers in urban areas and the extraction of strong ground motions from high-rate optical images of the Earth's surface recorded by a large space telescope in geostationary orbit. Both systems could deliver strong motion data with a spatial density orders of magnitude higher than current seismic networks. In particular, a “space seismometer” could sample the seismic wave field at a spatio-temporal resolution of 100 m and 1 Hz over areas several hundred kilometers wide, with an amplitude resolution of a few cm/s in ground velocity. The amount of data to process would be immensely larger than what current extended-source inversion algorithms can handle, which hampers the quantitative assessment of the cost-benefit trade-offs that could guide the practical design of the proposed earthquake observation systems. We report here on the development of a scalable source imaging technique based on iterative adjoint inversion and its application to the proof of concept of a space seismometer. We generated synthetic ground motions for M7 earthquake rupture scenarios based on dynamic rupture simulations on a vertical strike-slip fault embedded in an elastic half-space. The scenarios include increasing levels of complexity and interesting features such as supershear rupture speed. The resulting ground shaking is then processed according to what would be captured by an optical satellite. Based on the resulting data, we perform source inversion by an adjoint/time-reversal method. The gradient of a cost function quantifying the waveform misfit between data and synthetics is efficiently obtained by applying the time-reversed ground velocity residuals as surface force sources, back
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as the linear reaction rate ratio and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnected neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function obtained by the CP method is mathematically equivalent to the adjoint function obtained by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M
2007-07-01
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as the linear reaction rate ratio and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnected neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function obtained by the CP method is mathematically equivalent to the adjoint function obtained by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
International Nuclear Information System (INIS)
Assawaroongruengchot, M.
2007-01-01
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as the linear reaction rate ratio and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnected neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function obtained by the CP method is mathematically equivalent to the adjoint function obtained by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the
International Nuclear Information System (INIS)
Le Coq, G.; Boudsocq, G.; Raymond, P.
1983-03-01
The Control Variable Method is extended to multidimensional fluid flow transient computations. In this paper the basic principles of the method are given. The method uses a fully implicit space discretization and is based on the decomposition of the momentum flux tensor into scalar, vectorial, and tensorial terms. Finally, some computations of viscous-driven and buoyancy-driven flow in a cavity are presented
Variable selection methods in PLS regression - a comparison study on metabolomics data
DEFF Research Database (Denmark)
Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach
Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant variables) is therefore essential. Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subsets of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with Jack-knifing [2] was applied to the data in order to achieve variable selection prior to an integrated approach. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when…
Harris, John Richardson; Caporaso, George J; Sampayan, Stephen E
2013-10-22
A system and method for producing modulated electrical signals. The system uses a variable resistor of photoconductive wide-bandgap semiconductor construction whose conduction response to changes in the amplitude of incident radiation is substantially linear throughout a non-saturation region, enabling operation in non-avalanche mode. The system also includes a modulated radiation source, such as a modulated laser, for producing amplitude-modulated radiation to direct upon the variable resistor and modulate its conduction response. A voltage source and an output port are both operably connected to the variable resistor so that an electrical signal may be produced at the output port by way of the variable resistor, either generated by activation of the variable resistor or propagating through the variable resistor. In this manner, the electrical signal is modulated by the variable resistor so as to have a waveform substantially similar to the amplitude-modulated radiation.
The relationship between glass ceiling and power distance as a cultural variable by a new method
Naide Jahangirov; Guler Saglam Ari; Seymur Jahangirov; Nuray Guneri Tosunoglu
2015-01-01
Glass ceiling symbolizes the variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the two concepts. In addition to conventional correlation analysis, we employed a new method to investigate ...
Visualising Earth's Mantle based on Global Adjoint Tomography
Bozdag, E.; Pugmire, D.; Lefebvre, M. P.; Hill, J.; Komatitsch, D.; Peter, D. B.; Podhorszki, N.; Tromp, J.
2017-12-01
Recent advances in 3D wave propagation solvers and high-performance computing have enabled regional and global full-waveform inversions. Interpretation of tomographic models is often done visually. Robust and efficient visualization tools are necessary to thoroughly investigate large model files, particularly at the global scale. In collaboration with Oak Ridge National Laboratory (ORNL), we have developed effective visualization tools and used them for the visualization of our first-generation global model, GLAD-M15 (Bozdag et al. 2016). VisIt (https://wci.llnl.gov/simulation/computer-codes/visit/) is used for initial exploration of the models and for extraction of seismological features. The broad capability of VisIt and its demonstrated scalability proved valuable for experimenting with different visualization techniques and for the timely creation of results. Utilizing VisIt's plugin architecture, a data reader plugin was developed which reads the ADIOS (https://www.olcf.ornl.gov/center-projects/adios/) format of our model files. Blender (https://www.blender.org) is used for the setup of lighting, materials, camera paths and the rendering of geometry. Python scripting was used to control the orchestration of different geometries, as well as camera animation for 3D movies. While we continue producing 3D contour plots and movies for various seismic parameters to better visualize plume- and slab-like features as well as anisotropy throughout the mantle, our aim is to make visualization an integral part of our global adjoint tomography workflow, routinely producing various 2D cross-sections to facilitate examination of our models after each iteration. This will ultimately form the basis for the use of pattern recognition techniques in our investigations. Simulations for global adjoint tomography are performed on ORNL's Titan system, and visualization is done in parallel on ORNL's post-processing cluster Rhea.
Objective-function Hybridization in Adjoint Seismic Tomography
Yuan, Y. O.; Bozdag, E.; Simons, F.; Gao, F.
2016-12-01
In the realm of seismic tomography, we are at the threshold of a new era of huge seismic datasets. However, how to assimilate as much information as possible from every seismogram is still a challenge. Cross-correlation measurements are generally tailored to window selection algorithms, such as FLEXWIN (Maggi et al. 2008), to balance amplitude differences between seismic phases. However, these measurements naturally favor the maximum picks in selected windows. It is also difficult to select all usable portions of seismograms in an optimal way, so much information is generally lost, particularly the scattered waves. Instantaneous-phase misfits extract information from every wiggle without cutting seismograms into small pieces; however, dealing with cycle skips at short periods can be challenging. For this purpose, we introduce a flexible hybrid approach for adjoint seismic tomography that combines various objective functions. We initially focus on phase measurements and propose using the instantaneous phase to take into account relatively small-magnitude scattered waves at long periods, while using cross-correlation measurements on FLEXWIN windows to select distinct body-wave arrivals without complicating the measurements due to non-linearities at short periods. To better deal with cycle skips and reliably measure instantaneous phases, we design a new misfit function that incorporates instantaneous phase information implicitly instead of measuring it explicitly, through the use of normalized analytic signals. We present in our synthetic experiments how instantaneous phase, cross-correlation and their hybridization affect tomographic results. The combination of two different phase measurements in a hybrid approach constitutes progress towards using "anything and everything" in a data set, addressing data quality and measurement challenges simultaneously. We further extend the hybridization of misfit functions to amplitude measurements such as cross-correlation amplitude
Directory of Open Access Journals (Sweden)
Said Broumi
2015-03-01
Interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method with more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for interval neutrosophic uncertain linguistic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is defined, the attribute weights are calculated by the maximizing-deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.
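The core TOPSIS machinery this abstract extends (ideal solutions, distances, closeness coefficients) can be sketched with plain crisp numbers. The interval neutrosophic version replaces the floats below with set-valued arithmetic; this sketch and its data are illustrative only.

```python
import math

# Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.
# matrix[i][j]: score of alternative i on (benefit) criterion j.
def topsis(matrix, weights):
    # Weighted, vector-normalized decision matrix
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(weights))]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    anti = [min(col) for col in zip(*v)]    # negative ideal solution
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)       # distance to the ideal
        d_neg = math.dist(row, anti)        # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in (0, 1)
    return scores

scores = topsis([[7, 9], [8, 7], [9, 6]], [0.6, 0.4])
best = max(range(len(scores)), key=scores.__getitem__)
```

A higher closeness coefficient means the alternative sits relatively nearer the ideal and farther from the anti-ideal; the paper additionally derives the weights themselves via maximizing deviation rather than taking them as given.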
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controllers that use PID parameters require a good tuning method to improve control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in the hydraulic positioning system.
A Comparison of Methods to Test Mediation and Other Intervening Variable Effects
MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil
2010-01-01
A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
Propulsion and launching analysis of variable-mass rockets by analytical methods
D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh
2013-01-01
In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical solution is observed, which reveals that the analytical methods are effective and convenient. Also a paramet...
Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling
DEFF Research Database (Denmark)
Zimmermann, Ralf; Bertram, Anna
2018-01-01
Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...
DEFF Research Database (Denmark)
Burgess, Stephen; Thompson, Simon G; Thompson, Grahame
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...
Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection
DEFF Research Database (Denmark)
Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald
2013-01-01
The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length
DEFF Research Database (Denmark)
Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny
2014-01-01
Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1998-01-01
Although the Monte Carlo method is considered to be the most accurate method available for solving radiation transport problems, its applicability is limited by its computational expense. Thus, biasing techniques, which require intuition, guesswork, and iterations involving manual adjustments, are employed to make reactor shielding calculations feasible. To overcome this difficulty, the authors have developed a method for using the S_N adjoint function for automated variance reduction of Monte Carlo calculations through source biasing and consistent transport biasing with the weight window technique. They describe the implementation of this method in the standard production Monte Carlo code MCNP and its application to a realistic calculation, namely, the reactor cavity dosimetry calculation. The computational effectiveness of the method is demonstrated and quantified through the increase in calculational efficiency. Important issues associated with this method and its efficient use are addressed and analyzed. Additional benefits in terms of the reduction in the time and effort required of the user are difficult to quantify but are possibly as important as the computational efficiency. In general, the automated variance reduction method presented is capable of increases in computational performance on the order of thousands, while at the same time significantly reducing the current requirements for user experience, time, and effort. Therefore, this method can substantially increase the applicability and reliability of Monte Carlo for large, real-world shielding applications
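The source-biasing half of such a scheme can be sketched as follows. This is a CADIS-style illustration of how an adjoint (importance) function yields a biased source distribution and compensating particle weights, not the MCNP implementation; the real method uses the discrete-ordinates adjoint flux over the full phase space, and all numbers here are invented.

```python
# Adjoint-based source biasing (hypothetical three-cell toy problem).
# q[i]:   true source probability of cell i.
# phi[i]: adjoint (importance) of a particle born in cell i.

def cadis_parameters(source_q, adjoint_phi):
    """Biased source pdf and starting weights from an adjoint function."""
    # Response estimate R = sum_i q_i * phi*_i
    R = sum(q * p for q, p in zip(source_q, adjoint_phi))
    # Sample sources proportionally to their contribution to the response
    biased_pdf = [q * p / R for q, p in zip(source_q, adjoint_phi)]
    # Starting weight w_i = q_i / q^biased_i = R / phi*_i keeps the game fair
    weights = [R / p for p in adjoint_phi]
    return biased_pdf, weights

q = [0.5, 0.3, 0.2]     # true source pdf over three cells
phi = [1.0, 4.0, 10.0]  # adjoint importance of each cell
pdf, w = cadis_parameters(q, phi)
```

Sampling cell i from `pdf` with starting weight `w[i]` is unbiased because `pdf[i] * w[i] == q[i]`: important cells are sampled more often, but each such particle carries proportionally less weight.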
International Nuclear Information System (INIS)
Qin Maochang; Fan Guihong
2008-01-01
There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter, which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method
Comparison of different calibration methods suited for calibration problems with many variables
DEFF Research Database (Denmark)
Holst, Helle
1992-01-01
This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...
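One of the three many-variable calibration methods compared above, ridge regression, admits a compact closed form: the normal equations are regularized by adding a penalty `lam` on the diagonal, which shrinks the coefficients and stabilizes the fit when predictors are many or collinear. A sketch on invented data:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate: solve (X'X + lam*I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))        # 40 samples, 10 predictors
beta_true = np.zeros(10)
beta_true[:2] = [3.0, -2.0]          # only two predictors matter
y = X @ beta_true + 0.1 * rng.normal(size=40)

beta_0 = ridge(X, y, 0.0)            # lam = 0 reduces to ordinary least squares
beta_l = ridge(X, y, 100.0)          # heavy penalty shrinks coefficients toward 0
```

Increasing `lam` monotonically decreases the norm of the coefficient vector, trading a little bias for (often much) lower variance, which is exactly the property that makes ridge competitive with PCR and PLS in many-variable calibration.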
Using traditional methods and indigenous technologies for coping with climate variability
Stigter, C.J.; Zheng Dawei,; Onyewotu, L.O.Z.; Mei Xurong,
2005-01-01
In agrometeorology and management of meteorology related natural resources, many traditional methods and indigenous technologies are still in use or being revived for managing low external inputs sustainable agriculture (LEISA) under conditions of climate variability. This paper starts with the
International Nuclear Information System (INIS)
Bosevski, T.
1971-01-01
The polynomial interpolation of the neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. Solutions of this system are the needed values of the flux for the chosen values of the space and energy variables. The proposed improved method for solving the neutron transport problem, including the mathematical formalism, is simple and efficient, since the number of needed input data is decreased in treating both the spatial and energy variables. The mathematical method based on this approach gives more stable solutions with a significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to the results of other very precise calculations. The proposed method performed better in convergence rate, computing time and required computer memory. Discretization of the variables enabled direct comparison of theoretical and experimental results
Almost commuting self-adjoint matrices: The real and self-dual cases
Loring, Terry A.; Sørensen, Adam P. W.
2016-08-01
We show that a pair of almost commuting self-adjoint, symmetric matrices is close to a pair of commuting self-adjoint, symmetric matrices (in a uniform way). Moreover, we prove that the same holds with self-dual in place of symmetric and also for paths of self-adjoint matrices. Since a symmetric, self-adjoint matrix is real, we get a real version of Huaxin Lin’s famous theorem on almost commuting matrices. Similarly, the self-dual case gives a version for matrices over the quaternions. To prove these results, we develop a theory of semiprojectivity for real C*-algebras and also examine various definitions of low-rank for real C*-algebras.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing methods under the forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
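The first two stages of such a pipeline can be sketched in a few lines. This is a hedged numpy illustration only: mean imputation stands in for the paper's five imputation methods, a simple correlation threshold stands in for its factor-analysis-based selection, and the final Random Forest stage is omitted (any regressor can be plugged in afterwards). All data are invented.

```python
import numpy as np

def impute_mean(X):
    """Fill NaNs with their column means."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def select_by_correlation(X, y, threshold=0.3):
    """Keep columns whose |correlation| with the target meets the threshold."""
    keep = [j for j in range(X.shape[1])
            if abs(np.corrcoef(X[:, j], y)[0, 1]) >= threshold]
    return X[:, keep], keep

X = np.array([[1.0, 5.0, np.nan],
              [2.0, np.nan, 1.0],
              [3.0, 4.0, 1.0],
              [4.0, 6.0, 2.0]])
y = np.array([1.1, 2.0, 3.1, 3.9])

X_full = impute_mean(X)                      # stage 1: imputation
X_sel, kept = select_by_correlation(X_full, y)  # stage 2: variable selection
print(kept)  # -> [0, 2] (the weakly correlated middle column is dropped)
```

The reduced matrix `X_sel` would then be handed to the forecasting model, mirroring the impute-select-forecast structure of the abstract.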
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Directory of Open Access Journals (Sweden)
Jun-He Yang
2017-01-01
Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing methods under the forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
Model reduction method using variable-separation for stochastic saddle point problems
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor-product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of the technique is to construct a reduced-basis approximation of the solution of the SSP problems. The VS method seeks a low-rank separated representation of the solution of the SSP in a systematic enrichment manner; no iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computational efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
Latent variable method for automatic adaptation to background states in motor imagery BCI
Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei
2018-02-01
Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method based on a probabilistic model with a discrete latent variable. In order to estimate the model's parameters, we suggest using the expectation-maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of recognizing background states (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of background states at the prediction stage.
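The expectation-maximization loop for a model with a discrete latent variable can be illustrated with a toy one-dimensional two-state Gaussian mixture; this is a minimal stand-in, not the authors' BCI model:

```python
# Minimal EM for a discrete latent "background state": a 1-D two-component
# Gaussian mixture fitted by alternating E- and M-steps (illustrative only).
import numpy as np

def em_two_state(x, n_iter=50):
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])                        # state prior
    for _ in range(n_iter):
        # E-step: posterior probability of each latent state for each sample
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
mu, sigma, pi = em_two_state(x)
```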
International Nuclear Information System (INIS)
Proriol, J.
1994-01-01
Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
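As an illustration of one of the five criteria, an F-test ranking might be sketched with scikit-learn's `f_classif` on synthetic event variables; the HEP data and network architectures are not reproduced here:

```python
# F-test ranking of candidate input variables for a two-class problem
# (synthetic stand-in for high energy physics event variables).
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(2)
n = 500
y = rng.integers(0, 2, size=n)        # two event classes
X = rng.normal(size=(n, 4))
X[:, 0] += 1.5 * y                    # only variable 0 discriminates classes

F, pval = f_classif(X, y)
ranking = np.argsort(F)[::-1]         # most discriminating variable first
```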
International Nuclear Information System (INIS)
Tang, Bo; He, Yinnian; Wei, Leilei; Zhang, Xindong
2012-01-01
In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, this method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. -- Highlights: ► Study of fractional differential equations with variable coefficients plays a role in applied physical sciences. ► It is shown that the proposed algorithm is effective for solving fractional differential equations with variable coefficients. ► The obtained solutions may give insight into many considerable physical processes.
Aygunes, Gunes
2017-07-01
The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investment in a country. The literature relates venture capitalists' quality to countries' venture capital investments. This paper examines the relationship between venture capital investment and macroeconomic variables via a statistical computation method: using country-level data, we derive correlations between venture capital investments and macroeconomic variables. According to a logistic regression (logit) model, the macroeconomic variables are correlated with each other in three groups. Venture capitalists may regard these correlations as an indicator. Finally, we give the correlation matrix of our results.
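The correlation computation described might look like the following sketch; the macroeconomic series and their relationships are fabricated for illustration:

```python
# Correlation matrix between VC investment and macroeconomic indicators;
# the series names and data-generating relationships are illustrative.
import numpy as np

rng = np.random.default_rng(3)
gdp_growth = rng.normal(2.0, 1.0, 40)         # 40 country-year observations
interest_rate = rng.normal(3.0, 0.5, 40)
vc_investment = (0.8 * gdp_growth - 0.3 * interest_rate
                 + rng.normal(0, 0.2, 40))

data = np.vstack([vc_investment, gdp_growth, interest_rate])
corr = np.corrcoef(data)                      # 3x3 correlation matrix
```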
International Nuclear Information System (INIS)
Talamo, A.; Gohar, Y.; Aliberti, G.; Zhong, Z.; Bournos, V.; Fokov, Y.; Kiyavitskaya, H.; Routkovskaya, C.; Serafimovich, I.
2010-01-01
In 1997, Bretscher calculated the effective delayed neutron fraction by the k-ratio method. Bretscher's approach is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor due to delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and prompt multiplication factors. Bretscher evaluated the effective delayed neutron fraction as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied using deterministic nuclear codes. The ENDF/B nuclear data libraries of the fuel isotopes (235U and 238U) have been processed by the NJOY code with and without the delayed neutron data to prepare multigroup WIMSD nuclear data libraries for the DRAGON code. The DRAGON code has been used to prepare the PARTISN macroscopic cross sections. This calculation methodology has been applied to the YALINA-Thermal assembly of Belarus. The assembly has been modeled and analyzed using the PARTISN code with 69 energy groups and 60 different material zones. The deterministic and Monte Carlo results for the effective delayed neutron fraction obtained by the k-ratio method agree very well. The results also agree with the values obtained by using the adjoint flux. (authors)
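Once the total and prompt multiplication factors are available, the k-ratio evaluation itself is simple arithmetic; the values below are illustrative, not YALINA results:

```python
# k-ratio evaluation of the effective delayed neutron fraction:
# beta_eff = (k_total - k_prompt) / k_total.
def beta_eff_k_ratio(k_total, k_prompt):
    """Effective delayed neutron fraction from total and prompt k."""
    k_delayed = k_total - k_prompt      # multiplication due to delayed neutrons
    return k_delayed / k_total

# Illustrative values: k_total = 1.0000, k_prompt = 0.9925
beta = beta_eff_k_ratio(1.0000, 0.9925)
```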
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
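The backward-elimination strategy discussed above can be sketched as follows (a simplified version assuming scikit-learn; not the exact procedure, data, or stopping rule of the study):

```python
# Backward elimination for Random Forest variable selection: repeatedly
# drop the least important predictor until a target model size is reached.
# Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only 2 of the 8 predictors matter

cols = list(range(X.shape[1]))
while len(cols) > 2:                      # stop at a target variable count
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[:, cols], y)
    least = int(np.argmin(rf.feature_importances_))
    cols.pop(least)                       # drop the least important predictor
```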
A stochastic Galerkin method for the Euler equations with Roe variable transformation
Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan
2014-01-01
The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.
Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián
2016-04-01
Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electrical resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat the electrical data. This method is a common technique to obtain derivatives of an objective function, depending on potentials, with respect to model parameters. Its main advantages are its simplicity in stationary problems and its reduced computational cost with respect to other methodologies. The relationship between the concentration of chlorides and the resistivity values of the field is well known; these resistivities are in turn related to the potentials measured using ERT. Taking this into account, it is possible to define the different resistivity zones from the field data of the potential distribution using inverse problem theory. The studied zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride concentrations measured in some wells are too high. The adjoint-state method will be used to invert the measured data using a new finite element code in C++ developed in an open-source framework called Kratos. Finally, the information obtained numerically with our code will be checked against the information obtained with other codes.
Full Waveform Adjoint Seismic Tomography of the Antarctic Plate
Lloyd, A. J.; Wiens, D.; Zhu, H.; Tromp, J.; Nyblade, A.; Anandakrishnan, S.; Aster, R. C.; Huerta, A. D.; Winberry, J. P.; Wilson, T. J.; Dalziel, I. W. D.; Hansen, S. E.; Shore, P.
2017-12-01
Recent studies investigating the response and influence of the solid Earth on the evolution of the cryosphere demonstrate the need to account for 3D rheological structure to better predict ice sheet dynamics, stability, and future sea level impact, as well as to improve glacial isostatic adjustment models and more accurately measure ice mass loss. Critical rheological properties like mantle viscosity and lithospheric thickness may be estimated from shear wave velocity models that, for Antarctica, would ideally possess regional-scale resolution extending down to at least the base of the transition zone (i.e. 670 km depth). However, current global- and continental-scale seismic velocity models are unable to obtain both the resolution and spatial coverage necessary, do not take advantage of the full set of available Antarctic data, and, in most instances, employ traditional seismic imaging techniques that utilize limited seismogram information. We utilize 3-component earthquake waveforms from almost 300 Antarctic broadband seismic stations and 26 southern mid-latitude stations from 270 earthquakes (5.5 ≤ Mw ≤ 7.0) between 2001-2003 and 2007-2016 to conduct a full-waveform adjoint inversion for Antarctica and surrounding regions of the Antarctic plate. Necessary forward and adjoint wavefield simulations are performed utilizing SPECFEM3D_GLOBE with the aid of the Texas Advanced Computing Center. We utilize phase observations from seismogram segments containing P, S, Rayleigh, and Love waves, including reflections and overtones, which are autonomously identified using FLEXWIN. The FLEXWIN analysis is carried out over short (15-50 s) and long (initially 50-150 s) period bands that target body waves, or body and surface waves, respectively. As our model is iteratively refined, the short-period corner of the long period band is gradually reduced to 25 s as the model converges over 20 linearized inversion iterations. We will briefly present this new high
Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)
Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.
2013-12-01
The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed keeping in mind three distinct applications: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. Therefore, it is of importance to investigate the possibility of automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely. SWfMS provides additional advantages. In particular, it is possible to group independent simulations in a single job to fit the available computational resources. They also give a basic level of fault resilience as the workflow can be resumed at the correct state preceding a failure. Some of the best candidates for our particular workflow
Energy Technology Data Exchange (ETDEWEB)
Gu, Grace [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brown, Judith Alice [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bishop, Joseph E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-08-01
The texture of a polycrystalline material refers to the preferred orientation of the grains within the material. In metallic materials, texture can significantly affect the mechanical properties such as elastic moduli, yield stress, strain hardening, and fracture toughness. Recent advances in additive manufacturing of metallic materials offer the possibility in the not too distant future of controlling the spatial variation of texture. In this work, we investigate the advantages, in terms of mechanical performance, of allowing the texture to vary spatially. We use an adjoint-based gradient optimization algorithm within a finite element solver (COMSOL) to optimize several engineering quantities of interest in a simple structure (hole in a plate) and loading (uniaxial tension) condition. As a first step to general texture optimization, we consider the idealized case of a pure fiber texture in which the homogenized properties are transversely isotropic. In this special case, the only spatially varying design variables are the three Euler angles that prescribe the orientation of the homogenized material at each point within the structure. This work paves a new way to design metallic materials for tunable mechanical properties at the microstructure level.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
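A one-dimensional toy version of the predict/constrain/correct cycle described above might look like this; the IGCC dynamic model and sensor suite are not reproduced, and the constraint bound `lower` is a hypothetical example (e.g. a flow that cannot be negative):

```python
# Minimal EKF-style step with a preemptive constraint on the state estimate,
# in the spirit of the description above (1-D toy, not the IGCC plant model).
def ekf_step(x, P, z, f=0.9, h=1.0, Q=0.01, R=0.1, lower=0.0):
    # Predict through the (linearized) dynamic model
    x_pred = f * x
    P_pred = f * P * f + Q
    # Preemptively constrain the state estimate before the correction
    x_pred = max(x_pred, lower)
    # Measurement-correction using the sensed output z
    K = P_pred * h / (h * P_pred * h + R)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

x, P = 1.0, 1.0
for z in [0.8, 0.7, 0.65]:       # illustrative measurement sequence
    x, P = ekf_step(x, P, z)
```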
VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS
Directory of Open Access Journals (Sweden)
V. V. Dolotov
2015-01-01
Full Text Available Within the framework of cadastral beach evaluation, a volumetric method for a natural variability index is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting of both the common beach contour and specific areas at each time.
Control Method for Variable Speed Wind Turbines to Support Temporary Primary Frequency Control
DEFF Research Database (Denmark)
Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan
2014-01-01
This paper develops a control method for variable speed wind turbines (VSWTs) to support temporary primary frequency control of a power system. The control method contains two parts: (1) up-regulation support control when a frequency drop event occurs; (2) down-regulation support control when a frequency rise event occurs.
A design method of compensators for multi-variable control system with PID controllers 'CHARLY'
International Nuclear Information System (INIS)
Fujiwara, Toshitaka; Yamada, Katsumi
1985-01-01
A systematic design method of compensators for a multi-variable control system having usual PID controllers in its loops is presented in this paper. The method is able: to determine the main manipulating variable corresponding to each controlled variable with a sensitivity analysis in the frequency domain; to tune the PID controllers sufficiently to realize adequate control actions, using a search technique for minimum values of cost functionals; to design compensators improving the control performance; and to simulate the total system to confirm the designed compensators. In the compensator design phase, the state-variable feedback gain is obtained by means of optimal regulator theory for the composite system of plant and PID controllers. Transfer-function-type compensators, whose configurations were previously given, are then designed to approximate the frequency responses of the above-mentioned state-feedback system. An example is illustrated for convenience. (author)
A Novel Flood Forecasting Method Based on Initial State Variable Correction
Directory of Open Access Journals (Sweden)
Kuang Li
2017-12-01
Full Text Available The influence of initial state variables on flood forecasting accuracy by using conceptual hydrological models is analyzed in this paper and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction. The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. The historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve the flood forecasting accuracy in most cases.
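The ISVC idea, optimizing the initial state so that early forecasts match early observations, can be sketched with a toy linear-reservoir model and a small particle swarm optimizer; the model, parameters, and data are illustrative, not the authors' conceptual hydrological model:

```python
# Toy ISVC sketch: PSO corrects the initial storage of a fake
# linear-reservoir model by minimizing the early-period flow residual.
import numpy as np

def simulate(s0, rain, k=0.3):
    """Linear reservoir: rain adds to storage, which drains at rate k."""
    s, flows = s0, []
    for r in rain:
        s = s + r
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

def pso_correct_s0(rain, observed, n_particles=20, n_iter=60):
    rng = np.random.default_rng(7)
    pos = rng.uniform(0, 50, n_particles)   # candidate initial storages
    vel = np.zeros(n_particles)

    def cost(s0):
        # Residual between forecasted and observed early flows
        return np.sum((simulate(s0, rain) - observed) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 50)
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest

rain = np.array([5.0, 0.0, 2.0, 0.0, 0.0])
observed = simulate(10.0, rain)       # "truth" generated with s0 = 10
s0_est = pso_correct_s0(rain, observed)
```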
International Nuclear Information System (INIS)
Ibragimov, N Kh; Avdonina, E D
2013-01-01
The method of nonlinear self-adjointness, which was recently developed by the first author, gives a generalization of Noether's theorem. This new method significantly extends approaches to constructing conservation laws associated with symmetries, since it does not require the existence of a Lagrangian. In particular, it can be applied to any linear equations and any nonlinear equations that possess at least one local conservation law. The present paper provides a brief survey of results on conservation laws which have been obtained by this method and published mostly in recent preprints of the authors, along with a method for constructing exact solutions of systems of partial differential equations with the use of conservation laws. In most cases the solutions obtained by the method of conservation laws cannot be found as invariant or partially invariant solutions. Bibliography: 23 titles
International Nuclear Information System (INIS)
Sumer, Kutluk Kagan; Goktas, Ozlem; Hepsag, Aycan
2009-01-01
In this study, we used ARIMA, seasonal ARIMA (SARIMA) and, alternatively, a regression model with a seasonal latent variable to forecast electricity demand, using data belonging to the Kayseri and Vicinity Electricity Joint-Stock Company over the 1997:1-2005:12 period. The study compares the advantages of forecasting with the ARIMA and SARIMA methods against the model with a seasonal latent variable. The results indicate that the ARIMA and SARIMA models are unsuccessful in forecasting electricity demand. The regression model with a seasonal latent variable used in this study gives more successful results than the ARIMA and SARIMA models because it can also account for seasonal fluctuations and structural breaks.
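A seasonal-variable regression of this kind can be approximated by ordinary least squares with monthly dummy variables, sketched here on synthetic demand data (not the Kayseri series):

```python
# OLS regression with a trend and monthly dummy variables, a stand-in
# for the seasonal-latent-variable regression; data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_years, season = 8, 12
t = np.arange(n_years * season)
month = t % season
# True process: level 100, trend 0.5/month, +10 demand spike in month 6
demand = 100 + 0.5 * t + 10 * (month == 6) + rng.normal(0, 1, len(t))

# Design matrix: intercept, trend, one dummy per month (month 0 as baseline)
X = np.column_stack([np.ones_like(t), t] +
                    [(month == m).astype(float) for m in range(1, season)])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
fitted = X @ coef
```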
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
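A one-sided variables acceptance rule in its simplest normal-theory form might be sketched as follows; the acceptability constant `k` and the specification limit are illustrative, not values from the NESC assessment:

```python
# Simplest one-sided acceptance-sampling-by-variables rule: accept the lot
# if the sample mean plus k sample standard deviations stays below the
# upper specification limit. Values are illustrative.
import math

def accept_lot(sample, upper_spec, k):
    """One-sided variables acceptance test (normal theory, k-method)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean + k * math.sqrt(var) <= upper_spec

good_lot = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7]
bad_lot = [10.8, 10.9, 11.0, 10.7, 11.1, 10.6]
```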
A method to forecast quantitative variables relating to nuclear public acceptance
International Nuclear Information System (INIS)
Ohnishi, T.
1992-01-01
A methodology is proposed for forecasting the future trend of quantitative variables profoundly related to the public acceptance (PA) of nuclear energy. The social environment influencing PA is first modeled by breaking it down into a finite number of fundamental elements; the interactive formulae between the quantitative variables, which are attributed to and characterize each element, are then determined by using the actual values of the variables in the past. Inputting the estimated values of exogenous variables into these formulae, forecast values of the endogenous variables can finally be obtained. Using this method, the problem of nuclear PA in Japan is treated as an example, where the context is considered to comprise a public sector and the general social environment and socio-psychology. The public sector is broken down into three elements: the general public, the inhabitants living around nuclear facilities, and the activists of anti-nuclear movements, whereas the social environment and socio-psychological factors are broken down into several elements, such as news media and psychological factors. Twenty-seven endogenous and seven exogenous variables are introduced to quantify these elements. After quantitatively formulating the interactions between them and extrapolating the exogenous variables into the future, estimates are made of the growth or attenuation of the endogenous variables, such as the pro- and anti-nuclear fractions in public opinion polls and the frequency of occurrence of anti-nuclear movements. (author)
Resistance Torque Based Variable Duty-Cycle Control Method for a Stage II Compressor
Zhong, Meipeng; Zheng, Shuiying
2017-07-01
The resistance torque of a piston stage II compressor fluctuates severely within a rotational period, which can degrade the working performance of the compressor. To suppress these fluctuations, a variable duty-cycle control method based on the resistance torque is proposed. A dynamic model of a stage II compressor is set up, and the resistance torque and other characteristic parameters are acquired as the control targets. Then, a variable duty-cycle control method is applied to track the resistance torque, thereby improving the working performance of the compressor. Simulated results show that the compressor, driven by the proposed method, requires a lower current, while the rotating speed and the output torque remain comparable to those of traditional variable-frequency control methods. A variable duty-cycle control system was developed, and the experimental results prove that the proposed method can reduce the specific power, input power, and working noise of the compressor by 0.97 kW·m-3·min-1, 0.09 kW, and 3.10 dB, respectively, under the same conditions of a discharge pressure of 2.00 MPa and a discharge volume of 0.095 m3/min. The proposed variable duty-cycle control method tracks the resistance torque dynamically and improves the working performance of a stage II compressor. It can be applied to other compressors and can provide theoretical guidance for compressor design.
A survey of variable selection methods in two Chinese epidemiology journals
Directory of Open Access Journals (Sweden)
Lynn Henry S
2010-09-01
Full Text Available Abstract Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.
Energy Technology Data Exchange (ETDEWEB)
Schanen, Michel; Marin, Oana; Zhang, Hong; Anitescu, Mihai
2016-01-01
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable both for forward propagation in complex media and for tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, 3D attenuation Q models, topography, and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Fully automatic time-window selection using machine learning for global adjoint tomography
Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.
2017-12-01
Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected every day around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon the huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem: all possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from each window are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a nonlinear optimization problem. We use backpropagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors, and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error.
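The classification setup described above (five window features, a multilayer perceptron, backpropagation, mini-batch SGD) can be sketched in plain NumPy. This is a minimal stand-in, not the authors' MPNN: architecture, learning rate, and feature scaling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_window_classifier(X, y, hidden=8, lr=0.1, epochs=200, batch=32):
    """One-hidden-layer perceptron trained by backpropagation with
    mini-batch stochastic gradient descent. Each row of X holds the
    features of one candidate window (e.g. cross-correlation value,
    time lag, amplitude ratio, window length, minimum STA/LTA);
    y is 1 for 'select' and 0 for 'reject'."""
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), hidden); b2 = 0.0
    for _ in range(epochs):
        order = rng.permutation(n)
        for s in range(0, n, batch):
            j = order[s:s + batch]
            h = np.tanh(X[j] @ W1 + b1)               # hidden activations
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # selection probability
            g = (p - y[j]) / len(j)                   # dLoss/dlogit (cross-entropy)
            gW2, gb2 = h.T @ g, g.sum()
            gh = np.outer(g, W2) * (1.0 - h ** 2)     # backpropagate through tanh
            W1 -= lr * (X[j].T @ gh); b1 -= lr * gh.sum(axis=0)
            W2 -= lr * gW2;           b2 -= lr * gb2
    return lambda Xq: 1.0 / (1.0 + np.exp(-(np.tanh(Xq @ W1 + b1) @ W2 + b2)))
```

Trained on labeled windows, the returned closure maps a feature row to a selection probability; thresholding at 0.5 gives the select/reject decision.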
Quantification and variability in colonic volume with a novel magnetic resonance imaging method
DEFF Research Database (Denmark)
Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke
2015-01-01
Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum ...) ... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous over conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable ...
2015-11-10
... approach is the absence of the necessity to develop and maintain tangent linear and adjoint codes and its flexibility in adaptation to various ... quadratic term in the right-hand side of (B.10) is negligible. In the reported experiments we kept it in place since the value of ε was close to 0.01 and ...
Energy Technology Data Exchange (ETDEWEB)
Laboure, Vincent M.; Wang, Yaqi; DeHart, Mark D.
2016-05-01
In this paper, we study the Least-Squares (LS) PN form of the transport equation compatible with voids [1] in the context of Continuous Finite Element Methods (CFEM). We first derive weakly imposed boundary conditions which make the LS weak formulation equivalent to the Self-Adjoint Angular Flux (SAAF) variational formulation with a void treatment [2], in the particular case of constant cross-sections and a uniform mesh. We then implement this method in Rattlesnake with the Multiphysics Object Oriented Simulation Environment (MOOSE) framework [3] using a spherical harmonics (PN) expansion to discretize in angle. We test our implementation using the Method of Manufactured Solutions (MMS) and find the expected convergence behavior both in angle and space. Lastly, we investigate the impact of the global non-conservation of LS by comparing the method with SAAF on a heterogeneous test problem.
Energy Technology Data Exchange (ETDEWEB)
Vincent M. Laboure; Yaqi Wang; Mark D. DeHart
2016-05-01
In this paper, we study the Least-Squares (LS) PN form of the transport equation compatible with voids in the context of Continuous Finite Element Methods (CFEM). We first derive weakly imposed boundary conditions which make the LS weak formulation equivalent to the Self-Adjoint Angular Flux (SAAF) variational formulation with a void treatment, in the particular case of constant cross-sections and a uniform mesh. We then implement this method in Rattlesnake with the Multiphysics Object Oriented Simulation Environment (MOOSE) framework using a spherical harmonics (PN) expansion to discretize in angle. We test our implementation using the Method of Manufactured Solutions (MMS) and find the expected convergence behavior both in angle and space. Lastly, we investigate the impact of the global non-conservation of LS by comparing the method with SAAF on a heterogeneous test problem.
Fan, Dong-Dong; Kuang, Yan-Hui; Dong, Li-Hua; Ye, Xiao; Chen, Liang-Mian; Zhang, Dong; Ma, Zhen-Shan; Wang, Jin-Yu; Zhu, Jing-Jing; Wang, Zhi-Min; Wang, De-Qin; Li, Chu-Yuan
2017-04-01
To optimize the purification process of gynostemma pentaphyllum saponins (GPS) based on "adjoint marker" online control technology, with GPS as the testing index. UPLC-QTOF-MS technology was used for qualitative analysis. "Adjoint marker" online control showed that the end point of sample loading was reached when the UV absorbance of the effluent was equal to half that of the loaded sample solution, and that the absorbance was essentially stable once the end point was reached. In the UPLC-QTOF-MS qualitative analysis, 16 saponins were identified from GPS, including 13 known gynostemma saponins and 3 new saponins. The optimized method proved to be simple, scientific, reasonable, and easy to monitor online with real-time recording, and it can readily be applied to mass production and automation. The qualitative analysis results indicated that the "adjoint marker" online control technology retains the main efficacy components of the medicinal materials well, and provides analysis tools for process control and quality traceability. Copyright© by the Chinese Pharmaceutical Association.
A sizing method for stand-alone PV installations with variable demand
Energy Technology Data Exchange (ETDEWEB)
Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica Para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada, Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)
2008-05-15
The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method given that the characterization will solely depend on the distribution function of the daily global irradiation on the tilted surface H{sub g{beta}}{sub i}. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter ({sigma}{sub LLP}) as well as two new parameters: annual number of system failures (f) and the standard deviation of annual number of system failures ({sigma}{sub f}). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)
Wegner, Franz
2016-01-01
This text presents the mathematical concepts of Grassmann variables and the method of supersymmetry to a broad audience of physicists interested in applying these tools to disordered and critical systems, as well as related topics in statistical physics. Based on many courses and seminars held by the author, one of the pioneers in this field, the reader is given a systematic and tutorial introduction to the subject matter. The algebra and analysis of Grassmann variables is presented in part I. The mathematics of these variables is applied to a random matrix model, path integrals for fermions, dimer models and the Ising model in two dimensions. Supermathematics - the use of commuting and anticommuting variables on an equal footing - is the subject of part II. The properties of supervectors and supermatrices, which contain both commuting and Grassmann components, are treated in great detail, including the derivation of integral theorems. In part III, supersymmetric physical models are considered. While supersym...
KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method
International Nuclear Information System (INIS)
Westley, G.W.
1975-01-01
1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user
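The variable metric iteration described in point 2 can be sketched as follows. This is an unconstrained BFGS-style sketch only; KEELE itself additionally handles linear equality/inequality constraints, which are omitted here.

```python
import numpy as np

def variable_metric_minimize(f, grad, x0, iters=50):
    """Variable metric (quasi-Newton) minimization: the search direction
    is the negative gradient multiplied by a positive definite matrix H
    approximating the inverse of the Hessian, updated from gradient
    differences (BFGS formula). Unconstrained sketch of KEELE's point 2."""
    x = np.asarray(x0, float)
    H = np.eye(len(x))
    g = np.asarray(grad(x), float)
    for _ in range(iters):
        p = -H @ g                                # descent direction
        t = 1.0
        for _ in range(30):                       # backtracking Armijo search
            if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
                break
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = np.asarray(grad(x_new), float)
        yv = g_new - g
        sy = s @ yv
        if sy > 1e-12:                            # update keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

On a smooth convex objective the iterates converge rapidly because H comes to approximate the inverse Hessian along the directions actually explored.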
International Nuclear Information System (INIS)
Shadid, J.N.; Smith, T.M.; Cyr, E.C.; Wildey, T.M.; Pawlowski, R.P.
2016-01-01
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, the understanding of numerical error, of the sensitivity of the solution to parameters associated with input data, of boundary condition uncertainty, and of the mathematical models themselves is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer, using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
Energy Technology Data Exchange (ETDEWEB)
Shadid, J.N., E-mail: jnshadi@sandia.gov [Sandia National Laboratories, Computational Mathematics Department (United States); Department of Mathematics and Statistics, University of New Mexico (United States); Smith, T.M. [Sandia National Laboratories, Multiphysics Applications Department (United States); Cyr, E.C. [Sandia National Laboratories, Computational Mathematics Department (United States); Wildey, T.M. [Sandia National Laboratories, Optimization and UQ Department (United States); Pawlowski, R.P. [Sandia National Laboratories, Multiphysics Applications Department (United States)
2016-09-15
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, the understanding of numerical error, of the sensitivity of the solution to parameters associated with input data, of boundary condition uncertainty, and of the mathematical models themselves is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer, using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy
International Nuclear Information System (INIS)
Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A.; Rijn, Rick R. van; Henneman, Onno D.F.; Heijmans, Jarom; Reitsma, Johannes B.
2006-01-01
The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, ''the Leech method'', for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)
Miyoshi, Takayuki
2017-04-01
The Japanese metropolitan area has high risks of earthquakes and volcanoes associated with convergent tectonic plates. It is important to clarify the detailed three-dimensional structure for understanding tectonics and predicting strong motion. Classical tomographic studies based on ray theory have revealed the seismotectonics and volcanic tectonics of the region; however, it is unknown whether their models reproduce observed seismograms. In the present study, we construct a new seismic wave-speed model by using waveform inversion. Adjoint tomography and the spectral element method (SEM) were used in the inversion (e.g. Tape et al. 2009; Peter et al. 2011). We used broadband seismograms obtained at NIED F-net stations for 140 earthquakes that occurred beneath the Kanto district. We selected four frequency bands between 5 and 30 sec and proceeded from the seismograms of the longer period bands in the inversion. Tomographic iteration was conducted until the misfit between data and synthetics was minimized. Our SEM model has 16 million grid points covering the metropolitan area of the Kanto district. The model parameters were the Vp and Vs of the grid points; density and attenuation were updated to new values depending on the new Vs in each iteration. The initial model was the tomographic model (Matsubara and Obara 2011) based on ray theory. The source parameters were basically taken from the F-net catalog, while the centroid times were inferred from comparison between data and synthetics. We simulated the forward and adjoint wavefields of each event and obtained Vp and Vs misfit kernels from their interaction. The large computation was conducted on the K computer, RIKEN. We obtained the final model (m16) after 16 iterations in the present study. In terms of waveform improvement, m16 is clearly better than the initial model; the seismograms especially improved in the frequency bands longer than 8 sec, and improved for seismograms of events that occurred deeper than a
A method based on a separation of variables in magnetohydrodynamics (MHD)
International Nuclear Information System (INIS)
Cessenat, M.; Genta, P.
1996-01-01
We use a method based on a separation of variables for solving a system of first order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. We thus generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and using a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region or multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
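The regularized inverse problem above (quadratic error term plus sparseness-inducing ℓ1 penalty) can be illustrated with classical iterative soft-thresholding. This is not the authors' IVTCG solver, which updates only a subset of variables per iteration; it is a minimal sketch of the same objective.

```python
import numpy as np

def ista(A, b, lam=0.01, iters=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    i.e. a quadratic error term plus an l1 sparsity penalty. A maps the
    (sparse) source vector to surface measurements b; both are synthetic
    stand-ins here, not a BLT forward model."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L             # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

Even with fewer measurements than unknowns, the ℓ1 term recovers the few active sources, which is exactly why it suits the sparse-source BLT setting.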
Approaches for developing a sizing method for stand-alone PV systems with variable demand
Energy Technology Data Exchange (ETDEWEB)
Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada. Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)
2008-05-15
Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic system (SAPV). Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between variables of interest to the sizing problem, are one of these approaches. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method that has been designed for systems with variable monthly energy demands. Following previous approaches, the method proposed is based on the concept of loss of load probability (LLP) - a parameter that is used to characterize system design. The method includes information on the standard deviation of loss of load probability ({sigma}{sub LLP}) and on two new parameters: annual number of system failures (f) and standard deviation of annual number of failures ({sigma}{sub f}). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP<10{sup -2}. We demonstrate that reliability depends not only on the sizing variables and on the distribution function of solar radiation, but also on the minimum value of total solar radiation achieved on the receiver surface in a given location with a given monthly average clearness index. (author)
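The loss-of-load probability at the heart of these two sizing papers can be made concrete with a brute-force daily energy balance. This simulation counterpart is an assumption-laden toy (names, units, and the battery model are illustrative), not the papers' analytical method.

```python
import numpy as np

def loss_of_load_probability(daily_irradiation, demand, pv_area, eff,
                             batt_capacity):
    """Daily energy balance of a stand-alone PV system with a battery:
    LLP = (energy not served) / (energy demanded). Irradiation in
    kWh/m^2/day, demand in kWh/day, battery initially full. All
    parameter names here are illustrative."""
    soc = batt_capacity                       # state of charge, kWh
    unserved = 0.0
    for H, load in zip(daily_irradiation, demand):
        soc = min(soc + pv_area * eff * H, batt_capacity)  # charge, clamp at full
        served = min(load, soc)
        unserved += load - served
        soc -= served
    return unserved / float(np.sum(demand))
```

Sweeping panel area and battery capacity and contouring LLP reproduces the familiar sizing curves; the analytical methods above obtain the same information from the distribution function of daily irradiation instead of day-by-day simulation.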
A fast collocation method for a variable-coefficient nonlocal diffusion model
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
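The O(N log N)-per-iteration claim rests on applying a structured dense matrix without forming it. A minimal sketch of the standard device, assuming a Toeplitz structure (as arises from the constant-coefficient part of a nonlocal operator on a uniform grid; the paper's variable-coefficient handling is more involved):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Apply an N x N Toeplitz matrix (first column c, first row r,
    with r[0] == c[0]) to a vector in O(N log N) work and O(N) memory,
    by embedding it in a 2N x 2N circulant matrix and multiplying via
    the FFT instead of forming the dense matrix."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant
    pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(pad))
    return y[:n].real
```

Used as the matrix-vector product inside a Krylov iteration, this replaces the O(N^2) dense apply, which is where the overall O(N log N) per-iteration cost comes from.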
Directory of Open Access Journals (Sweden)
Jovković Biljana
2012-12-01
The aim of this paper is to present the procedure of audit sampling using variable sampling methods for conducting tests of income from insurance premiums in the insurance company 'Takovo'. Since the incomes from vehicle insurance (VI) and third-party vehicle insurance (TPVI) premiums have the dominant share of the insurance company's income, the application of this method is shown in the audit examination of these incomes - the incomes from VI and TPVI premiums. To investigate the applicability of these methods to testing the income of other insurance companies, we implement the method of variable sampling in the audit testing of the premium income of the three leading insurance companies in Serbia: 'Dunav', 'DDOR' and 'Delta Generali' Insurance.
A method to standardize gait and balance variables for gait velocity.
Iersel, M.B. van; Olde Rikkert, M.G.M.; Borm, G.F.
2007-01-01
Many gait and balance variables depend on gait velocity, which seriously hinders the interpretation of gait and balance data derived from walks at different velocities. However, as far as we know there is no widely accepted method to correct for effects of gait velocity on other gait and balance
Directory of Open Access Journals (Sweden)
Hongwu Zhang
2011-08-01
In this article, we study a Cauchy problem for an elliptic equation with variable coefficients. It is well known that such a problem is severely ill-posed; i.e., the solution does not depend continuously on the Cauchy data. We propose a modified quasi-boundary value regularization method to solve it. Convergence estimates are established under two a priori assumptions on the exact solution. A numerical example is given to illustrate our proposed method.
Komatitsch, Dimitri
2016-06-13
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
Coolant void reactivity adjustments in advanced CANDU lattices using adjoint sensitivity technique
International Nuclear Information System (INIS)
Assawaroongruengchot, M.; Marleau, G.
2008-01-01
Coolant void reactivity (CVR) is an important factor in reactor accident analysis. Here we study the adjustments of CVR at beginning of burnup cycle (BOC) and k_eff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice using the optimization and adjoint sensitivity techniques. The sensitivity coefficients are evaluated using the perturbation theory based on the integral neutron transport equations. The neutron and flux importance transport solutions are obtained by the method of cyclic characteristics (MOCC). Three sets of parameters for CVR-BOC and k_eff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but the objective is to obtain a negative checkerboard CVR-BOC (CBCVR-BOC). To approximate the EOC sensitivity coefficient, we perform constant-power burnup/depletion calculations using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Our aim is to achieve a desired negative CVR-BOC of -2 mk and k_eff-EOC of 0.900 for the first two cases, and a CBCVR-BOC of -2 mk and k_eff-EOC of 0.900 for the last case. Sensitivity analyses of CVR and eigenvalue are also included in our study.
Coolant void reactivity adjustments in advanced CANDU lattices using adjoint sensitivity technique
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M. [Institut de Genie Nucleaire, Ecole Polytechnique de Montreal, P.O. Box 6079, stn. Centre-ville, Montreal, H3C3A7 (Canada)], E-mail: monchaia@gmail.com; Marleau, G. [Institut de Genie Nucleaire, Ecole Polytechnique de Montreal, P.O. Box 6079, stn. Centre-ville, Montreal, H3C3A7 (Canada)], E-mail: guy.marleau@polymtl.ca
2008-03-15
Coolant void reactivity (CVR) is an important factor in reactor accident analysis. Here we study the adjustments of CVR at beginning of burnup cycle (BOC) and k{sub eff} at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice using the optimization and adjoint sensitivity techniques. The sensitivity coefficients are evaluated using the perturbation theory based on the integral neutron transport equations. The neutron and flux importance transport solutions are obtained by the method of cyclic characteristics (MOCC). Three sets of parameters for CVR-BOC and k{sub eff}-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but the objective is to obtain a negative checkerboard CVR-BOC (CBCVR-BOC). To approximate the EOC sensitivity coefficient, we perform constant-power burnup/depletion calculations using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Our aim is to achieve a desired negative CVR-BOC of -2 mk and k{sub eff}-EOC of 0.900 for the first two cases, and a CBCVR-BOC of -2 mk and k{sub eff}-EOC of 0.900 for the last case. Sensitivity analyses of CVR and eigenvalue are also included in our study.
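The perturbation-theory eigenvalue sensitivities used in these CVR studies have a compact matrix analogue: the first-order change of a dominant eigenvalue under a perturbation of the operator is obtained from the forward and adjoint (left) eigenvectors. The sketch below shows only this matrix analogue, not the integral-transport machinery of the paper.

```python
import numpy as np

def eigenvalue_sensitivity(A, dA):
    """First-order sensitivity of the dominant eigenvalue of A to a
    perturbation dA, via the adjoint (left) eigenvector:
    d(lambda) = y^T dA x / (y^T x). This is the matrix analogue of the
    perturbation-theory expressions used for reactivity coefficients."""
    w, V = np.linalg.eig(A)
    x = V[:, np.argmax(w.real)].real          # forward (right) eigenvector
    wl, U = np.linalg.eig(A.T)
    y = U[:, np.argmax(wl.real)].real         # adjoint (left) eigenvector
    return (y @ dA @ x) / (y @ x)
```

One adjoint solve thus prices the eigenvalue response to any number of candidate perturbations (here: poison density or enrichment changes), which is exactly why the adjoint route is preferred for adjustment studies.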
Metabolic Flux Analysis in Isotope Labeling Experiments Using the Adjoint Approach.
Mottelet, Stephane; Gaullier, Gil; Sadaka, Georges
2017-01-01
Comprehension of metabolic pathways is considerably enhanced by metabolic flux analysis in isotope labeling experiments (MFA-ILE). The balance equations are given by hundreds of algebraic (stationary MFA) or ordinary differential equations (nonstationary MFA), and reducing the number of operations is therefore a crucial part of reducing the computation cost. The main bottleneck for deterministic algorithms is the computation of derivatives, particularly for nonstationary MFA. In this article, we explain how the overall identification process may be sped up by using the adjoint approach to compute the gradient of the residual sum of squares. The proposed approach shows significant improvements in terms of complexity and computation time when it is compared with the usual (direct) approach. Numerical results are obtained for the central metabolic pathways of Escherichia coli and are validated against reference software in the stationary case. The methods and algorithms described in this paper are included in the sysmetab software package distributed under an Open Source license at http://forge.scilab.org/index.php/p/sysmetab/.
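The cost advantage of the adjoint gradient can be illustrated on a toy stationary balance system S(v)x = b with measurements y = Cx: one adjoint solve yields the full gradient of the residual sum of squares regardless of the number of parameters. All matrices below are random stand-ins, not E. coli flux data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3                       # states, measurements (toy sizes)
S0 = np.eye(n) * 2.0              # well-conditioned base operator
S1 = rng.standard_normal((n, n)) * 0.1   # parameter sensitivities of S
S2 = rng.standard_normal((n, n)) * 0.1
b  = rng.standard_normal(n)
C  = rng.standard_normal((m, n))
y_meas = rng.standard_normal(m)

def cost(v):
    S = S0 + v[0] * S1 + v[1] * S2
    x = np.linalg.solve(S, b)
    r = C @ x - y_meas
    return 0.5 * r @ r            # residual sum of squares

def grad_adjoint(v):
    S = S0 + v[0] * S1 + v[1] * S2
    x = np.linalg.solve(S, b)
    r = C @ x - y_meas
    p = np.linalg.solve(S.T, C.T @ r)   # ONE adjoint solve for any #params
    return np.array([-(p @ (S1 @ x)), -(p @ (S2 @ x))])

v = np.array([0.3, -0.2])
g = grad_adjoint(v)
```

The direct approach would instead solve one extra linear system per parameter; with hundreds of fluxes this is exactly the saving the abstract describes.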
Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; de Andrade, Elliott Sales; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-01-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-09-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis
2016-09-08
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
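The strong stability (here, total-variation-diminishing) property that SSP methods preserve can be demonstrated with the forward Euler building block on first-order upwind advection; SSP multistep methods guarantee the same bound for convex combinations of such steps whenever the step size respects the SSP restriction. A minimal sketch:

```python
import numpy as np

def upwind_rhs(u, dx):
    # first-order upwind discretization of u_t + u_x = 0, periodic boundary
    return -(u - np.roll(u, 1)) / dx

def total_variation(u):
    # periodic total variation
    return np.abs(np.diff(np.append(u, u[0]))).sum()

nx = 100
dx = 1.0 / nx
x = np.arange(nx) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # square wave, TV = 2

dt = 0.9 * dx              # within the SSP (CFL) step-size restriction
tv0 = total_variation(u)
for _ in range(200):
    u = u + dt * upwind_rhs(u, dx)   # forward Euler: the SSP building block
```

With dt ≤ dx each update is a convex combination of neighboring values, so the total variation never grows; exceeding the restriction destroys the bound, which is why the permissible step size must be tracked from step to step.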
Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise
2012-04-01
Fabric smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stage of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute the epipole using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimation. We propose that a fibrous structure can be considered as a two-layer structure, and then we show how variable homography combined with epipolar geometry can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method of quality control for important industrial fabrics.
A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method
Directory of Open Access Journals (Sweden)
Aiqian Zhang
2012-05-01
Full Text Available A large number of descriptors were employed to characterize the molecular structures of 53 natural, synthetic, and environmental chemicals, which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple-linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers together with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.
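The leave-multiple-out cross-validation idea used for subset selection can be sketched with ordinary least squares on synthetic data in which only two of six descriptors carry signal (a generic LMO-CV illustration, not the VSMVI algorithm itself):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p = 40, 6
X = rng.standard_normal((n, p))
# only descriptors 0 and 3 actually drive the response
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

def lmo_press(Xs, y, n_splits=50, n_out=8):
    # leave-multiple-out CV: average squared prediction error (PRESS)
    press = 0.0
    for _ in range(n_splits):
        out = rng.choice(len(y), size=n_out, replace=False)
        mask = np.ones(len(y), bool); mask[out] = False
        A = np.c_[np.ones(mask.sum()), Xs[mask]]
        beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.c_[np.ones(n_out), Xs[out]] @ beta
        press += ((y[out] - pred) ** 2).sum()
    return press / n_splits

# score every candidate subset of up to 3 descriptors, keep the best
best = min((lmo_press(X[:, list(s)], y), s)
           for k in (1, 2, 3) for s in combinations(range(p), k))
```

Because each candidate MLR model is judged on held-out samples, subsets that merely overfit noise descriptors are penalized, and the informative descriptors are retained.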
A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System
Directory of Open Access Journals (Sweden)
Xu Hui-xiang
2016-01-01
Full Text Available To overcome the disadvantages of traditional variable step-size MPPT algorithms, a power-based variable step-size tracking method is proposed that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy corrects the voltage-fluctuation problem caused by the perturb-and-observe method while retaining the advantage of the constant-voltage method and simplifying the circuit topology. Following the theoretical derivation, the output power of the photovoltaic modules is controlled by changing the duty cycle of the main switch. This achieves stable maximum-power output, effectively reduces energy loss due to fluctuation, and improves the inversion efficiency [3,4]. Experimental test results based on the theoretical derivation, together with the MPPT curve of the working prototype, are given.
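A minimal sketch of a power-based variable step-size perturb-and-observe loop on a simplified single-diode PV model (all parameters below are illustrative assumptions, not the prototype's values):

```python
import math

def pv_current(v, isc=5.0, i0=1e-4, vt=1.0):
    # simplified single-diode PV model (illustrative module-level parameters)
    return isc - i0 * (math.exp(v / vt) - 1.0)

def pv_power(v):
    return v * pv_current(v)

def mppt_variable_step(v0=4.0, k=0.05, n_iter=200):
    # perturb-and-observe with a power-based variable step: the step scales
    # with the measured |dP/dV|, large far from the MPP, small near it
    v_prev, v = v0, v0 + 0.1
    p_prev = pv_power(v_prev)
    for _ in range(n_iter):
        p = pv_power(v)
        dp, dv = p - p_prev, v - v_prev
        step = k * abs(dp / dv) if dv != 0.0 else 0.01
        direction = 1.0 if dp * dv > 0.0 else -1.0
        v_prev, p_prev = v, p
        v += direction * step
    return v, pv_power(v)

v_mpp, p_mpp = mppt_variable_step()
```

Because the perturbation shrinks with |ΔP/ΔV|, the operating point settles at the maximum power point with far less steady-state oscillation than fixed-step P&O; in the real converter the voltage command would be realized through the duty cycle of the main switch.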
International Nuclear Information System (INIS)
Xu, Yuenong; Smooke, M.D.
1993-01-01
In this paper we present a primitive variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame sheet approximation. The numerical solution agrees well with experimental data in the major chemical species. The comparison of three sets of numerical results indicates that the stream function-vorticity solution using the approximate boundary conditions reported in the previous calculations predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive variable and stream function-vorticity solutions. The primitive variable Newton's method converges much faster than the other two methods. Because much less computer memory is required for the block-line tridiagonal solver compared to a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation.
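The damped Newton iteration at the heart of such solvers can be sketched on a small nonlinear system (a generic toy system, not the flame-sheet equations):

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    # damped Newton iteration: solve J dx = -f, then backtrack on the step
    x = x0.astype(float)
    for it in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x, it
        dx = np.linalg.solve(jac(x), -fx)
        lam = 1.0
        while np.linalg.norm(f(x + lam * dx)) > np.linalg.norm(fx) and lam > 1e-4:
            lam *= 0.5            # backtracking keeps the residual decreasing
        x = x + lam * dx
    return x, max_iter

# toy coupled system: x0^2 + x1^2 = 4,  exp(x0) + x1 = 1
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [np.exp(x[0]), 1.0]])
root, iters = newton(f, jac, np.array([1.0, 1.0]))
```

In the reacting-flow setting the linear solve inside each iteration is where the block-line tridiagonal structure pays off: the Jacobian is solved line by line instead of with a dense direct factorization.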
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
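Of the four categories, look-up-table inversion of a radiative transfer model is the simplest to sketch; the "canopy model" below is a saturating toy function standing in for a real RTM, and LAI stands in for the retrieved biophysical variable:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_canopy_model(lai, wavelengths):
    # stand-in for a radiative transfer model: reflectance saturates with LAI
    k = 0.4 + 0.3 * np.sin(wavelengths)       # wavelength-dependent extinction
    return 0.5 * (1.0 - np.exp(-k * lai))

wl = np.linspace(0.0, np.pi, 20)              # 20 "bands"
lut_lai = np.linspace(0.0, 8.0, 401)          # grid over the sought variable
lut = np.array([toy_canopy_model(l, wl) for l in lut_lai])  # pre-computed LUT

true_lai = 3.2
obs = toy_canopy_model(true_lai, wl) + 0.002 * rng.standard_normal(wl.size)

# invert: pick the LUT entry with minimum RMSE against the observed spectrum
rmse = np.sqrt(((lut - obs) ** 2).mean(axis=1))
lai_hat = lut_lai[np.argmin(rmse)]
```

Real processing chains refine this with regularized cost functions, multiple best-matching entries, and per-pixel uncertainty estimates, but the LUT search above is the core of the physically based category.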
Exploring the use of a deterministic adjoint flux calculation in criticality Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.; Miss, J.; Richet, Y.; Martin, N.; Hebert, A.
2011-01-01
The paper presents a preliminary study on the use of a deterministic adjoint flux calculation to improve source convergence issues by reducing the number of iterations needed to reach the converged distribution in criticality Monte Carlo calculations. Slow source convergence in Monte Carlo eigenvalue calculations may lead to underestimates of the effective multiplication factor or reaction rates. The convergence speed depends on the initial distribution and the dominance ratio. We propose using an adjoint flux estimation to modify the transition kernel according to the Importance Sampling technique. This adjoint flux is also used as the initial guess of the first generation distribution for the Monte Carlo simulation. The calculated variance of a local current estimator is also checked. (author)
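The effect of a good initial distribution on source convergence can be illustrated with deterministic power iteration on a toy fission matrix; here the exact eigenvector (slightly contaminated) stands in for the deterministic adjoint/diffusion estimate of the fundamental mode:

```python
import numpy as np

# toy fission matrix: smooth spatial coupling on [0, 1]
n = 50
x = np.linspace(0.0, 1.0, n)
M = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)

def iterations_to_converge(M, v0, tol=1e-10):
    # plain power iteration; count iterations until the source shape settles
    v = v0 / np.linalg.norm(v0)
    for it in range(1, 100000):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return it
        v = w
    return it

flat_start = np.ones(n)                  # uninformed initial distribution
# stand-in for a deterministic estimate of the fundamental mode:
eigvecs = np.linalg.eigh(M)[1]           # M is symmetric here
good_start = np.abs(eigvecs[:, -1]) + 0.01

it_flat = iterations_to_converge(M, flat_start)
it_good = iterations_to_converge(M, good_start)
```

The closer the initial guess is to the fundamental mode, the fewer generations are wasted decaying the higher harmonics; the gain grows as the dominance ratio approaches one.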
Zwinderman, A. H.; Cleophas, T. J.
2005-01-01
BACKGROUND: Clinical investigators, although they are generally familiar with testing differences between averages, have difficulty testing differences between variabilities. OBJECTIVE: To give examples of situations where variability is more relevant than averages and to describe simple methods for
Selecting minimum dataset soil variables using PLSR as a regressive multivariate method
Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.
2017-04-01
Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
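A single-component PLS regression and the associated VIP criterion referenced at the end of the abstract can be sketched in a few lines; for one latent component VIP_j reduces to sqrt(p)·|w_j|. The data below are synthetic, not the soil dataset:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 8
X = rng.standard_normal((n, p))
# only variables 1 and 4 drive the response (stand-in for "wheat yield")
y = 3.0 * X[:, 1] + 2.0 * X[:, 4] + 0.2 * rng.standard_normal(n)

# mean-center and variance-scale predictors and response, as in the study
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()

# one-component PLS (NIPALS): weights maximize covariance with the response
w = Xs.T @ ys
w /= np.linalg.norm(w)
t = Xs @ w                           # X-scores
q = (ys @ t) / (t @ t)               # y-loading
r2 = 1.0 - ((ys - q * t) ** 2).sum() / (ys ** 2).sum()

# variable importance in projection; with one component VIP_j = sqrt(p)*|w_j|
vip = np.sqrt(p * w ** 2)
selected = np.where(vip > 1.0)[0]    # the usual VIP > 1 retention rule
```

Variables whose VIP exceeds one contribute more than average to the latent component, which is the standard basis for reducing a large soil-variable set to a minimum dataset.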
Read margin analysis of crossbar arrays using the cell-variability-aware simulation method
Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon
2018-02-01
This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array because the read margin depends on the number of word lines and bit lines. However, an excessively high-CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristic of the crossbar memory array considering the statistical variations in the cell characteristics.
Biological variables for the site survey of surface ecosystems - existing data and survey methods
International Nuclear Information System (INIS)
Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt
2000-06-01
In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is
Biological variables for the site survey of surface ecosystems - existing data and survey methods
Energy Technology Data Exchange (ETDEWEB)
Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt [SwedPower AB, Stockholm (Sweden)
2000-06-01
In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is
International Nuclear Information System (INIS)
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, functional ones being dependent. The second feature is that the probability distribution of functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables to both model the dependency between variables and their link to another variable, called co-variate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or meta model, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the meta model. Finally, a new approximation approach for expensive codes with functional outputs has been
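The meta-model idea, a cheap surrogate fitted to a small learning basis of expensive runs, can be sketched with a one-dimensional stand-in simulator and a polynomial surrogate (both are illustrative assumptions, not the industrial codes of the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_code(x):
    # stand-in for the computationally demanding simulator
    return np.sin(3.0 * x) + 0.5 * x ** 2

# small learning basis: the handful of runs the computational budget allows
x_train = np.linspace(-1.0, 1.0, 12)
y_train = expensive_code(x_train)

# cheap meta-model: a least-squares polynomial fitted to the learning basis
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=8))

# the surrogate then replaces the code in sensitivity/uncertainty studies
x_test = rng.uniform(-1.0, 1.0, 200)
max_err = np.max(np.abs(surrogate(x_test) - expensive_code(x_test)))
```

Once validated, thousands of surrogate evaluations cost essentially nothing, which is what makes quantitative global sensitivity analysis tractable for an expensive code.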
Propulsion and launching analysis of variable-mass rockets by analytical methods
Directory of Open Access Journals (Sweden)
D.D. Ganji
2013-09-01
Full Text Available In this study, applications of several analytical methods to the nonlinear equation governing the launch of a variable-mass rocket are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical and numerical results is observed, which reveals that the analytical methods are effective and convenient. A parametric study is also performed, which includes the effects of exhaust velocity (Ce), burn rate (BR) of fuel and diameter (d) of the cylindrical rocket on the motion of a sample rocket, and contours showing the sensitivity to these parameters are plotted. The main results indicate that the rocket velocity and altitude increase with increasing Ce and BR, and decrease with increasing rocket diameter and drag coefficient.
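For comparison with such analytical solutions, the variable-mass launch equation can be integrated numerically in a few lines; the parameter values below are illustrative, not those of the paper:

```python
import math

# illustrative rocket parameters (assumed for this sketch)
m0 = 50.0          # initial mass [kg], including the fuel
m_fuel = 30.0      # fuel mass [kg]
BR = 2.0           # burn rate [kg/s]
Ce = 2000.0        # exhaust velocity [m/s]
Cd, d = 0.5, 0.2   # drag coefficient and rocket diameter [m]
rho, g = 1.225, 9.81
A = math.pi * d ** 2 / 4.0
t_burn = m_fuel / BR

def accel(t, v):
    # variable-mass equation of motion: m(t) dv/dt = Ce*BR - drag - m(t)*g
    m = m0 - BR * min(t, t_burn)
    thrust = Ce * BR if t < t_burn else 0.0
    drag = 0.5 * rho * Cd * A * v * abs(v)
    return (thrust - drag) / m - g

# explicit Euler integration of the vertical launch through the burn
t, v, h, dt = 0.0, 0.0, 0.0, 0.001
while t < t_burn:
    v, h, t = v + accel(t, v) * dt, h + v * dt, t + dt
```

Re-running the loop with different Ce, BR or d reproduces the qualitative trends reported in the paper: velocity and altitude grow with Ce and BR and fall with diameter and drag coefficient.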
A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results
Larsen, Curtis E.; Irvine, Tom
2013-01-01
A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
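The Rayleigh (narrow-band) starting point of the review can be computed directly from PSD moments, combining the mean upcrossing rate, a power-law S-N curve N = C/S^b, and Palmgren-Miner summation; all numbers below are illustrative:

```python
import numpy as np
from math import gamma, sqrt

# one-sided stress PSD [MPa^2/Hz]: band-limited process around 10 Hz
f = np.linspace(0.1, 50.0, 2000)
df = f[1] - f[0]
G = np.where((f > 9.0) & (f < 11.0), 100.0, 0.0)

lam0 = (G * df).sum()               # spectral moment m0 = stress variance
lam2 = (f ** 2 * G * df).sum()      # spectral moment m2
nu0 = sqrt(lam2 / lam0)             # mean upcrossing (cycle) rate [Hz]

# Rayleigh (narrow-band) damage rate: with S-N curve N = C / S^b and
# Miner summation over the Rayleigh distribution of peak amplitudes,
#   E[D]/sec = nu0 / C * (sqrt(2*lam0))^b * Gamma(1 + b/2)
b, C = 3.0, 1.0e12                  # assumed S-N parameters
damage_rate = nu0 / C * sqrt(2.0 * lam0) ** b * gamma(1.0 + b / 2.0)
hours_to_failure = 1.0 / damage_rate / 3600.0
```

For broad-band spectra this estimate is conservative, which is exactly why the corrections of Wirsching-Light, Ortiz-Chen, Dirlik and the single-moment method reviewed above were developed.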
The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy
Energy Technology Data Exchange (ETDEWEB)
Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)
2006-01-01
The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the other scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)
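The reported area under the ROC curve can be computed without any curve plotting via its rank-statistic (Mann-Whitney) interpretation; the scores below are simulated stand-ins with the study's group sizes, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
# illustrative Leech-style scores: FC cases score only slightly higher
scores_fc = rng.normal(10.0, 3.0, 52)     # 52 functional constipation cases
scores_ctrl = rng.normal(8.5, 3.0, 37)    # 6 FAP + 31 FNRFI controls

# AUC = probability that a random case outscores a random control,
# i.e. the Mann-Whitney U statistic normalized by n_cases * n_controls
wins = (scores_fc[:, None] > scores_ctrl[None, :]).sum()
ties = (scores_fc[:, None] == scores_ctrl[None, :]).sum()
auc = (wins + 0.5 * ties) / (scores_fc.size * scores_ctrl.size)
```

An AUC near 0.5 means the score barely separates the groups; a value around 0.68, as reported above, is why the method is judged to have poor diagnostic accuracy.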
Use of a variable tracer infusion method to determine glucose turnover in humans
International Nuclear Information System (INIS)
Molina, J.M.; Baron, A.D.; Edelman, S.V.; Brechtel, G.; Wallace, P.; Olefsky, J.M.
1990-01-01
The single-compartment pool fraction model, when used with the hyperinsulinemic glucose clamp technique to measure rates of glucose turnover, sometimes underestimates true rates of glucose appearance (Ra), resulting in negative values for hepatic glucose output (HGO). We focused our attention on isotope discrimination and model error as possible explanations for this underestimation. We found no difference in [3-3H] glucose specific activity in samples obtained simultaneously from the femoral artery and vein (2,400 +/- 455 vs. 2,454 +/- 522 dpm/mg) in 6 men during a hyperinsulinemic euglycemic clamp study where insulin was infused at 40 mU.m-2.min-1 for 3 h; therefore, isotope discrimination did not occur. We compared the ability of a constant (0.6 microCi/min) vs. variable tracer infusion method (tracer added to the glucose infusate) to measure non-steady-state Ra during hyperinsulinemic clamp studies. Plasma specific activity fell during the constant tracer infusion studies but did not change from base line during the variable tracer infusion studies. By maintaining a constant plasma specific activity, the variable tracer infusion method eliminates uncertainty about changes in glucose pool size. This overcomes the modeling error and more accurately measures non-steady-state Ra (P less than 0.001 by analysis of variance vs. constant infusion method). In conclusion, underestimation of Ra determined isotopically during hyperinsulinemic clamp studies is largely due to modeling error that can be overcome by use of the variable tracer infusion method. This method allows more accurate determination of Ra and HGO under non-steady-state conditions.
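The non-steady-state Steele equation behind these calculations makes the paper's point directly: when the variable tracer infusion holds specific activity constant, the pool-size (model error) term vanishes and Ra no longer depends on the uncertain distribution volume. All numbers below are illustrative:

```python
def steele_ra(F_tracer, sa, dsa_dt, conc, pV):
    # non-steady-state Steele (single-pool fraction) equation:
    #   Ra = (F - pV * C * dSA/dt) / SA
    # F: tracer infusion rate [dpm/min], SA: specific activity [dpm/mg],
    # C: glucose concentration [mg/dl], pV: effective pool volume [dl]
    return (F_tracer - pV * conc * dsa_dt) / sa

# constant tracer infusion: SA falls, so the model-dependent term matters
ra_const = steele_ra(6.0e5, 2400.0, -30.0, 90.0, 95.0)

# variable tracer infusion: SA held constant (dSA/dt = 0); the result is
# the same whatever effective pool volume is assumed
ra_a = steele_ra(6.0e5, 2400.0, 0.0, 90.0, 50.0)
ra_b = steele_ra(6.0e5, 2400.0, 0.0, 90.0, 150.0)
```

With dSA/dt = 0 the equation collapses to Ra = F/SA, so the single-compartment assumptions about pool size can no longer bias the estimate.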
BPS Center Vortices in Nonrelativistic SU(N) Gauge Models with Adjoint Higgs Fields
International Nuclear Information System (INIS)
Oxman, L. E.
2015-01-01
We propose a class of SU(N) Yang-Mills models, with adjoint Higgs fields, that accept BPS center vortex equations. The lack of a local magnetic flux that could serve as an energy bound is circumvented by including a new term in the energy functional. This term tends to align, in the Lie algebra, the magnetic field and one of the adjoint Higgs fields. Finally, a reduced set of equations for the center vortex profile functions is obtained (for N=2,3). In particular, Z(3) BPS vortices come in three colours and three anticolours, obtained from an ansatz based on the defining representation and its conjugate.
Adjoint shape optimization for fluid-structure interaction of ducted flows
Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.
2018-03-01
Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
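The primal solve / adjoint solve / gradient step cycle can be sketched on a discrete linear-quadratic model problem, minimizing a tracking functional subject to a 1-D Poisson state equation (a simple stand-in for the coupled FSI problem and its surface sensitivity):

```python
import numpy as np

# state equation K y = u: 1-D Laplacian as the "flow solver"
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
y_d = np.sin(np.linspace(0.0, np.pi, n))      # target ("desired") state
alpha = 1e-4                                  # control regularization

def cost_and_grad(u):
    y = np.linalg.solve(K, u)                 # primal solve
    p = np.linalg.solve(K.T, y - y_d)         # adjoint solve
    J = 0.5 * (y - y_d) @ (y - y_d) + 0.5 * alpha * (u @ u)
    return J, p + alpha * u                   # gradient from the adjoint

def hess_vec(g):
    # Hessian action of this quadratic cost, used for an exact line search
    return np.linalg.solve(K.T, np.linalg.solve(K, g)) + alpha * g

u = np.zeros(n)
J_hist = []
for _ in range(100):                          # steepest descent on J
    J, grad = cost_and_grad(u)
    J_hist.append(J)
    if grad @ grad < 1e-20:
        break
    step = (grad @ grad) / (grad @ hess_vec(grad))
    u -= step * grad
```

As in the partitioned FSI setting, each descent step costs one primal and one adjoint solve regardless of how many design parameters the control vector contains.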
On the use of flux-adjoint condensed nuclear data for 1-group AGR kinetics
International Nuclear Information System (INIS)
Hutt, P.K.
1979-03-01
Following previous work on the differences between one- and two-neutron-group AGR kinetics, the possible advantages of flux-adjoint condensed lattice data over the simple flux condensation procedure are investigated. Analytic arguments are given for expecting flux-adjoint condensation to give a better representation of rod worth slopes and of the flux shape changes associated with partially rodded cores; these areas have previously been found to yield most of the one- to two-group differences. The validity of these arguments is demonstrated by comparing various calculations. (U.K.)
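The difference between the two condensation recipes is easy to state in code: flux weighting averages a group cross-section with φ alone, while flux-adjoint (bilinear) condensation weights each group by φφ†. The two-group numbers below are assumed for illustration, not AGR lattice data:

```python
# Condense a two-group absorption cross-section to one group, comparing
# simple flux weighting with bilinear flux-adjoint weighting.
sigma_a = [0.002, 0.060]     # fast, thermal absorption [1/cm] (assumed)
phi     = [1.00, 0.25]       # group fluxes (relative)
phi_adj = [0.90, 1.40]       # group adjoint fluxes / importances (relative)

# flux condensation: sum(sigma_g * phi_g) / sum(phi_g)
flux_weighted = sum(s * f for s, f in zip(sigma_a, phi)) / sum(phi)

# flux-adjoint condensation: sum(sigma_g * phi_g * phi_adj_g) / sum(phi_g * phi_adj_g)
bilinear = (sum(s * f, ) for s, f in ())  # placeholder removed below
bilinear = (sum(s * f * a for s, f, a in zip(sigma_a, phi, phi_adj))
            / sum(f * a for f, a in zip(phi, phi_adj)))
```

Because the thermal group carries more importance here, the bilinear value exceeds the flux-weighted one; it is precisely this importance weighting that better preserves reactivity-related quantities such as rod worths in the one-group model.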
Mass anomalous dimension of adjoint QCD at large N from twisted volume reduction
Energy Technology Data Exchange (ETDEWEB)
Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Nicolás Cabrera 13-15, Universidad Autónoma de Madrid,E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Nicolás Cabrera 13-15, Universidad Autónoma de Madrid,E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI, Universidad Autónoma de Madrid,E-28049-Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan)
2015-08-07
In this work we consider the SU(N) gauge theory with two Dirac fermions in the adjoint representation, in the limit of large N. In this limit the infinite-volume physics of this model can be studied by means of the corresponding twisted reduced model defined on a single site lattice. Making use of this strategy we study the reduced model for various values of N up to 289. By analyzing the eigenvalue distribution of the adjoint Dirac operator we test the conformality of the theory and extract the corresponding mass anomalous dimension.
Application of adjoint sensitivity theory to performance assessment of hydrogeologic concerns
International Nuclear Information System (INIS)
Metcalfe, D.E.; Harper, W.V.
1986-01-01
Sensitivity and uncertainty analyses are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of the adjoint sensitivity technique is demonstrated for the Leadville Limestone in the Paradox Basin, Utah. The adjoint technique is used sequentially, first to assist in the calibration of the regional conceptual ground-water flow model to measured potentiometric data, and second to evaluate the sensitivities of the calculated pressures, which define local-scale boundary conditions, to regional parameters and boundary conditions.
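The appeal of the adjoint technique in this setting is that a single extra solve yields the sensitivity of one response to every model parameter at once. The sketch below uses a 1-D finite-difference flow model as a stand-in for the regional model; node count and transmissivity values are hypothetical.

```python
import numpy as np

# Forward model: A(T) h = q, a 1-D steady groundwater flow discretization
# with cell transmissivities T. Response R = h[m] (head at one node).
# One adjoint solve A^T lam = c gives dR/dT_k for *all* cells via
#   dR/dT_k = -lam^T (dA/dT_k) h

n = 20                         # interior nodes
T = np.full(n + 1, 2.0)        # cell transmissivities (hypothetical values)
q = np.full(n, 0.1)            # recharge

def assemble(T):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    return A

A = assemble(T)
h = np.linalg.solve(A, q)          # forward (primal) solve
m = n // 2
c = np.zeros(n); c[m] = 1.0        # response R = h[m]
lam = np.linalg.solve(A.T, c)      # single adjoint solve

sens = np.empty(n + 1)
for k in range(n + 1):
    e = np.zeros(n + 1); e[k] = 1.0
    sens[k] = -lam @ assemble(e) @ h   # assemble() is linear in T, so dA/dT_k = assemble(e_k)
```

A finite-difference perturbation of any single transmissivity reproduces the corresponding entry of `sens`, at the cost of one forward solve per parameter instead of one adjoint solve overall.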
The relationship between glass ceiling and power distance as a cultural variable by a new method
Directory of Open Access Journals (Sweden)
Naide Jahangirov
2015-12-01
The glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between these concepts. In addition to conventional correlation analysis, we employed a new method to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operated as part of a non-profit private university in Ankara, Turkey. The relationship between the variables was revealed by a new method developed as an addition to the correlation analysis of the survey. The analysis revealed that female staff perceived the glass ceiling and power distance more intensely than male staff. In addition, a medium-level relationship was found between power distance and glass ceiling perception among female staff.
Improved flux calculations for viscous incompressible flow by the variable penalty method
International Nuclear Information System (INIS)
Kheshgi, H.; Luskin, M.
1985-01-01
The Navier-Stokes system for viscous, incompressible flow is considered, with the continuity equation replaced by a perturbed continuity equation. This approximation allows the pressure variable to be eliminated, yielding a system of equations for the approximate velocity alone. The penalty approximation is often applied in numerical discretizations since it reduces the size and bandwidth of the system of equations. Attention is given to error estimates and to two numerical experiments that illustrate them. It is found that the variable penalty method provides an accurate solution over a much wider range of epsilon than the classical penalty method. 8 references
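The pressure elimination at the heart of the penalty idea can be sketched on small stand-in matrices (a generic penalty formulation, not the paper's finite element discretization): replacing the constraint B u = 0 by eps*p + B u = 0 gives p = -(1/eps) B u, leaving a single smaller system in u whose error shrinks with eps.

```python
import numpy as np

# Penalty method for a constrained quadratic (Stokes-like) problem:
#   min 0.5 u'Au - f'u  subject to  B u = 0   (B = discrete divergence)
# Exact solution via the KKT saddle-point system; penalty solution via
#   (A + (1/eps) B'B) u = f    after eliminating the pressure.

rng = np.random.default_rng(2)
nu, nc = 12, 4                          # velocity and constraint dimensions (toy sizes)
G = rng.standard_normal((nu, nu))
A = G @ G.T + nu * np.eye(nu)           # SPD "viscous" operator
B = rng.standard_normal((nc, nu))       # full-rank constraint matrix
f = rng.standard_normal(nu)

# reference: exact saddle-point (KKT) solution
K = np.block([[A, B.T], [B, np.zeros((nc, nc))]])
u_exact = np.linalg.solve(K, np.concatenate([f, np.zeros(nc)]))[:nu]

def penalty_solve(eps):
    # pressure eliminated: only a (smaller, banded in FEM practice) system in u remains
    return np.linalg.solve(A + (1.0 / eps) * B.T @ B, f)

err = [np.linalg.norm(penalty_solve(e) - u_exact) for e in (1e-2, 1e-4, 1e-6)]
# err decreases roughly linearly in eps
```

The trade-off the abstract alludes to is visible here: smaller eps means better constraint satisfaction but worse conditioning, which is what motivates a variable (rather than fixed) penalty parameter.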
Directory of Open Access Journals (Sweden)
Hongfen Gao
2014-01-01
This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.
A new hydraulic regulation method on district heating system with distributed variable-speed pumps
International Nuclear Information System (INIS)
Wang, Hai; Wang, Haiying; Zhu, Tong
2017-01-01
Highlights: • A hydraulic regulation method was presented for district heating with distributed variable speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on the conventional central circulating pump, a district heating system with a distributed variable-speed-pumps configuration can often save 30–50% of the power consumption of circulating pumps with frequency inverters. However, hydraulic regulation of the distributed variable-speed-pumps configuration is considerably more complicated, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during the regulations. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pumps configuration. The proposed method comprised a new hydraulic model, developed to suit the distributed variable-speed-pumps configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated. In Scenario I, the
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
It is necessary to monitor daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for mental arithmetic and the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method is suitable for real-time assessment.
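The two sliding-window indexes can be sketched as follows (our reading of the method, run on synthetic RR-interval data; the window width of 20 beats is from the abstract, everything else is illustrative).

```python
import numpy as np

# For each new beat, compute over the last 20 beats:
#  - instantaneous heart rate (bpm) from the mean RR interval
#  - ratio of extreme points (local maxima/minima of the RR series) to beats

def extreme_points(rr):
    """Count local maxima and minima in an RR-interval sequence."""
    rr = np.asarray(rr)
    inner = rr[1:-1]
    return int(np.sum(((inner > rr[:-2]) & (inner > rr[2:])) |
                      ((inner < rr[:-2]) & (inner < rr[2:]))))

def window_indexes(rr_all, width=20):
    """Return a list of (HR in bpm, NEP/beats ratio), one per sliding window."""
    out = []
    for i in range(width, len(rr_all) + 1):
        w = rr_all[i - width:i]
        hr = 60.0 / np.mean(w)                  # RR intervals in seconds
        out.append((hr, extreme_points(w) / width))
    return out

# synthetic RR intervals: rest (slow, variable) vs. mental load (fast, flat)
rng = np.random.default_rng(1)
rest = 1.0 + 0.08 * rng.standard_normal(100)
load = 0.7 + 0.01 * rng.standard_normal(100)
hr_rest = np.mean([h for h, _ in window_indexes(rest)])
hr_load = np.mean([h for h, _ in window_indexes(load)])
```

With only 20 beats of history per update, each index is available essentially beat-by-beat, which is what makes the continuous assessment feasible.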
Directory of Open Access Journals (Sweden)
Mário Mestria
2013-08-01
The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum-cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking, and Variable Neighborhood Descent (VND). The heuristic methods were tested on Euclidean instances with up to 2000 vertices and clusters varying from 4 to 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
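The VND component can be sketched generically: cycle through a fixed ordering of neighborhood structures, restarting from the first whenever any neighborhood yields an improvement. The skeleton below runs on a plain small TSP with two common neighborhoods (city swap and 2-opt); it is illustrative only, not the paper's CTSP-specific heuristic.

```python
import itertools
import random

random.seed(42)
n = 10
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return ((pts[a][0] - pts[b][0])**2 + (pts[a][1] - pts[b][1])**2) ** 0.5

def length(t):
    return sum(dist(t[i], t[(i + 1) % n]) for i in range(n))

def swap_moves(t):       # neighborhood 1: exchange two cities
    for i, j in itertools.combinations(range(n), 2):
        s = t[:]; s[i], s[j] = s[j], s[i]
        yield s

def two_opt_moves(t):    # neighborhood 2: reverse a segment
    for i, j in itertools.combinations(range(1, n), 2):
        yield t[:i] + t[i:j][::-1] + t[j:]

def vnd(t, neighborhoods=(swap_moves, two_opt_moves)):
    best, best_len = t, length(t)
    k = 0
    while k < len(neighborhoods):
        cand = min(neighborhoods[k](best), key=length)
        if length(cand) < best_len - 1e-12:
            best, best_len, k = cand, length(cand), 0   # improvement: back to first neighborhood
        else:
            k += 1                                      # no improvement: try next neighborhood
    return best

tour0 = list(range(n))
tour = vnd(tour0)
```

In the hybrid heuristic of the paper, a skeleton like this serves as the local-search phase inside GRASP iterations, with CTSP-aware neighborhoods that keep each cluster contiguous.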
Locating disease genes using Bayesian variable selection with the Haseman-Elston method
Directory of Open Access Journals (Sweden)
He Qimei
2003-12-01
Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures that incorporate the relationship among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it performs a smart search over the entire model space.
Method of nuclear reactor control using a variable temperature load dependent set point
International Nuclear Information System (INIS)
Kelly, J.J.; Rambo, G.E.
1982-01-01
A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon the percent of full-power load demand. A manually actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop a predetermined amount below the set point temperature, whereupon control is switched from the reactor control rods exclusively to feedwater flow.
Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis
Directory of Open Access Journals (Sweden)
Ueki Masao
2012-05-01
Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), involves difficulty in setting an overall p-value because of the complicated correlation structure, namely the multiple testing problem, which causes unacceptable false negative results. A larger number of SNP-SNP pairs than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes these issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
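The core of sure independence screening is simple: rank every candidate predictor by its marginal association with the outcome and keep only the top d << p for subsequent modeling. The sketch below uses a continuous outcome and plain correlations on synthetic data; in the paper the predictors are coded SNP-SNP interaction terms feeding a logistic regression.

```python
import numpy as np

# Sure independence screening (SIS), minimal form: componentwise
# correlation screening to reduce p = 2000 candidates to d = 20.

rng = np.random.default_rng(7)
n_samples, p = 200, 2000
X = rng.standard_normal((n_samples, p))
# only predictors 5 and 42 carry signal (hypothetical "causal" columns)
y = 2.0 * X[:, 5] - 1.5 * X[:, 42] + rng.standard_normal(n_samples)

def sis(X, y, d):
    """Return indices of the d predictors with the largest |marginal correlation|."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / len(y)   # componentwise sample correlations
    return np.argsort(score)[::-1][:d]

keep = sis(X, y, d=20)                   # signal columns 5 and 42 survive screening
```

A penalized or Bayesian selection step would then run on the d retained predictors only, which is what makes the exhaustive interaction search tractable.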
Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.
Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara
2018-02-01
Objective The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus test its efficacy in predicting the risk of low-back pain (LBP). Background A validation study of the VLI method was published in this journal reporting promising results. Although several studies highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method Both VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to the high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM). Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group formed by 1,028 unexposed subjects. Results Data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant (p = .6526). Conclusion When comparing VLI_CMM with raw VLI, the former did not prove to be a better predictor of LBP risk. Application To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, authors of future studies should investigate the potential association between the VLI and other damage variables.
Method of collective variables with reference system for the grand canonical ensemble
International Nuclear Information System (INIS)
Yukhnovskii, I.R.
1989-01-01
A method of collective variables with special reference system for the grand canonical ensemble is presented. An explicit form is obtained for the basis sixth-degree measure density needed to describe the liquid-gas phase transition. Here the author presents the fundamentals of the method, which are as follows: (1) the functional form for the partition function in the grand canonical ensemble; (2) derivation of thermodynamic relations for the coefficients of the Jacobian; (3) transition to the problem on an adequate lattice; and (4) obtaining of the explicit form for the functional of the partition function
Application of Muskingum routing method with variable parameters in ungauged basin
Directory of Open Access Journals (Sweden)
Xiao-meng Song
2011-03-01
This paper describes a flood routing method applied in an ungauged basin, utilizing the Muskingum model with variable parameters of wave travel time K and weight coefficient of discharge x, based on the physical characteristics of the river reach and flood, including the reach slope, length, width, and flood discharge. Three formulas for estimating the parameters of wide rectangular, triangular, and parabolic cross sections are proposed. The influence of the flood on channel flow routing parameters is taken into account. The HEC-HMS hydrological model and the geospatial hydrologic analysis module HEC-GeoHMS were used to extract channel and watershed characteristics and to divide sub-basins. In addition, the initial and constant-rate method, user synthetic unit hydrograph method, and exponential recession method were used to estimate runoff volumes, the direct runoff hydrograph, and the baseflow hydrograph, respectively. The Muskingum model with variable parameters was then applied in the Louzigou Basin in Henan Province of China; the percentages of flood events with a relative error of peak discharge less than 20% and of runoff volume less than 10% were both 100%. The results also show that the percentages of flood events with coefficients of determination greater than 0.8 were 83.33%, 91.67%, and 87.5% for rectangular, triangular, and parabolic cross sections, respectively, in 24 flood events. Therefore, this method is applicable to ungauged basins.
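The underlying routing step is the standard Muskingum recurrence; the paper's contribution is recomputing K and x at each step from reach geometry and discharge, whereas the sketch below keeps them fixed at hypothetical values.

```python
# Muskingum routing (standard form):
#   O_{t+1} = C0*I_{t+1} + C1*I_t + C2*O_t,   with C0 + C1 + C2 = 1,
# where the coefficients follow from K (wave travel time), x (weight
# coefficient), and the time step dt.

def muskingum_coeffs(K, x, dt):
    d = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / d
    c1 = (dt + 2.0 * K * x) / d
    c2 = (2.0 * K * (1.0 - x) - dt) / d
    return c0, c1, c2

def route(inflow, K=12.0, x=0.2, dt=6.0):
    """Route an inflow hydrograph through one reach (K, dt in hours)."""
    c0, c1, c2 = muskingum_coeffs(K, x, dt)
    out = [inflow[0]]                      # assume initial outflow = initial inflow
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

# triangular flood wave, m^3/s
inflow = [10, 20, 50, 80, 60, 40, 25, 15, 10, 10]
outflow = route(inflow)
# the routed peak is attenuated and delayed relative to the inflow peak
```

In the variable-parameter version, `muskingum_coeffs` would be called inside the loop with K and x re-estimated from the current discharge and the cross-section formula for the reach.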
A symmetrized quasi-diffusion method for solving multidimensional transport problems
International Nuclear Information System (INIS)
Miften, M.M.; Larsen, E.W.
1992-01-01
In this paper, the authors propose a 'symmetrized' QD (SQD) method in which the non-self-adjoint QD diffusion problem is replaced by two self-adjoint diffusion problems. These problems are more easily discretized and more efficiently solved than in the standard QD method. They also give SQD calculational results for transport problems in x-y geometry
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific regarding the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated ~40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.
Self-adjointness and spectral properties of Dirac operators with magnetic links
DEFF Research Database (Denmark)
Portmann, Fabian; Sok, Jérémy; Solovej, Jan Philip
2018-01-01
We define Dirac operators on $\\mathbb{S}^3$ (and $\\mathbb{R}^3$) with magnetic fields supported on smooth, oriented links and prove self-adjointness of certain (natural) extensions. We then analyze their spectral properties and show, among other things, that these operators have discrete spectrum...
q-structure algebra of Uq(g-circumflex) from its adjoint action
International Nuclear Information System (INIS)
El Hassouni, A.; Hassouni, Y.; Zakkari, M.
1994-08-01
We prove that the adjoint action of the quantum affine Lie algebra U_q(ĝ), where g is a simple finite-dimensional Lie algebra, reproduces the q-commutation relations of U_q(ĝ) if and only if g is of type A_n, n ≥ 1. (author). 4 refs
Feynman's Operational Calculi: Spectral Theory for Noncommuting Self-adjoint Operators
International Nuclear Information System (INIS)
Jefferies, Brian; Johnson, Gerald W.; Nielsen, Lance
2007-01-01
The spectral theorem for commuting self-adjoint operators along with the associated functional (or operational) calculus is among the most useful and beautiful results of analysis. It is well known that forming a functional calculus for noncommuting self-adjoint operators is far more problematic. The central result of this paper establishes a rich functional calculus for any finite number of noncommuting (i.e. not necessarily commuting) bounded, self-adjoint operators A_1, ..., A_n and associated continuous Borel probability measures μ_1, ..., μ_n on [0,1]. Fix A_1, ..., A_n. Then each choice of an n-tuple (μ_1, ..., μ_n) of measures determines one of Feynman's operational calculi acting on a certain Banach algebra of analytic functions, even when A_1, ..., A_n are just bounded linear operators on a Banach space. The Hilbert space setting along with self-adjointness allows us to extend the operational calculi well beyond the analytic functions. Using results and ideas drawn largely from the proof of our main theorem, we also establish a family of Trotter product type formulas suitable for Feynman's operational calculi.
ADGEN: An automated adjoint code generator for large-scale sensitivity analysis
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.; Horwedel, J.E.; Lucius, J.L.
1987-01-01
This paper describes a new automated system, named ADGEN, which makes use of the strengths of computer calculus to automate the costly and time-consuming calculation of derivatives in FORTRAN computer codes, and automatically generate adjoint solutions of computer codes
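The principle behind automated adjoint generators such as ADGEN is reverse-mode differentiation: record each operation during the forward sweep, then propagate adjoints backwards, obtaining the derivative of one output with respect to every input in a single reverse pass. A minimal sketch (a generic illustration, not ADGEN's FORTRAN transformation):

```python
import math

# Tiny reverse-mode ("adjoint") differentiation: each Var remembers its
# parents and the local partial derivatives; a reverse sweep accumulates
# adjoints from the output back to the inputs.

class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adj = value, parents, 0.0
    def __add__(self, o):
        return Var(self.value + o.value, [(self, 1.0), (o, 1.0)])
    def __mul__(self, o):
        return Var(self.value * o.value, [(self, o.value), (o, self.value)])
    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

def gradients(out):
    """Fill in v.adj = d(out)/dv for every node reachable from out."""
    order, seen = [], set()
    def visit(v):                        # topological order via DFS
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(out)
    out.adj = 1.0
    for v in reversed(order):            # reverse (adjoint) sweep
        for p, local in v.parents:
            p.adj += v.adj * local

# f(x, y) = x*y + sin(x);  df/dx = y + cos(x), df/dy = x
x, y = Var(2.0), Var(3.0)
f = x * y + x.sin()
gradients(f)
```

The cost of the reverse pass is proportional to the cost of the forward evaluation, not to the number of inputs, which is why adjoint code is attractive for large-scale sensitivity analysis.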
Regional Program Assistant (m/f) | CRDI - Centre de recherches ...
International Development Research Centre (IDRC) Digital Library (Canada)
The program assistant must establish priorities among the multiple ... in a tracking system, setting the order of priority so as to meet the ... As needed, assist the program management officers in maintaining and updating the ...
Spectral analysis of non-self-adjoint Jacobi operator associated with Jacobian elliptic functions
Czech Academy of Sciences Publication Activity Database
Siegl, Petr; Štampach, F.
2017-01-01
Roč. 11, č. 4 (2017), s. 901-928 ISSN 1846-3886 Grant - others:GA ČR(CZ) GA13-11058S Institutional support: RVO:61389005 Keywords : Non-self-adjoint Jacobi operator * Weyl m-function * Jacobian elliptic functions Subject RIV: BE - Theoretical Physics OBOR OECD: Pure mathematics Impact factor: 0.440, year: 2016
Self-adjoint Hamiltonians with a mass jump: General matching conditions
International Nuclear Information System (INIS)
Gadella, M.; Kuru, S.; Negro, J.
2007-01-01
The simplest position-dependent mass Hamiltonian in one dimension, where the mass has the form of a step function with a jump discontinuity at one point, is considered. The most general matching conditions at the jumping point for the solutions of the Schroedinger equation that provide a self-adjoint Hamiltonian are characterized
Bounded solutions of self-adjoint second order linear difference equations with periodic coefficients
Directory of Open Access Journals (Sweden)
Encinas A.M.
2018-02-01
In this work we obtain easy characterizations of the boundedness of the solutions of discrete, self-adjoint, second-order linear one-dimensional equations with periodic coefficients, including the analysis of the so-called discrete Mathieu equations as particular cases.
Modeling the solute transport by particle-tracing method with variable weights
Jiang, J.
2016-12-01
The particle-tracing method is usually used to simulate solute transport in fracture media. In this method, the concentration at a point is proportional to the number of particles visiting it. However, the method is inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or yields a concentration of zero. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The weight factors are adjusted during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies, each simulated independently with weight W/Int(W/C). If the weight W is less than C, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
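The weight-adjustment rule described above is a splitting / Russian-roulette scheme: it evens out the number of walkers across high- and low-concentration regions while keeping the expected transported weight unchanged. A minimal sketch of the rule itself (the transport step and the estimation of C are omitted):

```python
import random

def adjust(weight, C, rng):
    """Apply the split/roulette rule; return the list of surviving particle weights."""
    if weight > C:                     # split into Int(W/C) near-equal copies
        n = int(weight / C)
        return [weight / n] * n
    if weight < C:                     # roulette: survive with probability W/C
        return [C] if rng.random() < weight / C else []
    return [weight]

rng = random.Random(0)

# splitting conserves the weight exactly
parts = adjust(3.7, 1.0, rng)          # -> 3 copies summing to 3.7

# roulette conserves the weight in expectation: E[total] = (W/C)*C = W
trials = 200_000
total = sum(sum(adjust(0.2, 1.0, rng)) for _ in range(trials)) / trials
```

Because both branches are unbiased, the concentration estimate (sum of weights per site) is unchanged in expectation; only its variance at rarely visited sites is reduced.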
Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables
Directory of Open Access Journals (Sweden)
Jéssica Bassi da Silva
2018-03-01
The process of mucoadhesion has been studied using a wide variety of methods, which are influenced by instrumental variables and experiment design, making comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and gave different adhesion values depending on whether a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It was suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.
Directory of Open Access Journals (Sweden)
Mohammad Hadi Jalali
2018-01-01
Elastic stress analysis of a rotating variable-thickness annular disk made of functionally graded material (FGM) is presented. The elasticity modulus, density, and thickness of the disk are assumed to vary radially according to a power-law function. The radial stress, circumferential stress, and radial deformation of the rotating FG annular disk of variable thickness with clamped-clamped (C-C), clamped-free (C-F), and free-free (F-F) boundary conditions are obtained using the numerical finite difference method, and the effects of the graded index, thickness variation, and rotating speed on the stresses and deformation are evaluated. It is shown that using FG material can decrease the value of radial stress and increase the radial displacement in a rotating thin disk. It is also demonstrated that increasing the rotating speed strongly increases the stress in the FG annular disk.
Gopalakrishnan, Ganesh; Cornuelle, Bruce D.; Hoteit, Ibrahim; Rudnick, Daniel L.; Owens, W. Brechner
2013-01-01
An ocean state estimate has been developed for the Gulf of Mexico (GoM) using the MIT general circulation model and its adjoint. The estimate has been tested by forecasting loop current (LC) evolution and eddy shedding in the GoM. The adjoint (or four-dimensional variational) method was used to match the model evolution to observations by adjusting model temperature and salinity initial conditions, open boundary conditions, and atmospheric forcing fields. The model was fit to satellite-derived along-track sea surface height, separated into temporal mean and anomalies, and gridded sea surface temperature for 2-month periods. The optimized state at the end of the assimilation period was used to initialize the forecast for 2 months. Forecasts explore practical LC predictability and provide a cross-validation test of the state estimate by comparing it to independent future observations. The model forecast was tested for several LC eddy separation events, including Eddy Franklin in May 2010 during the Deepwater Horizon oil spill disaster in the GoM. The forecast used monthly climatological open boundary conditions, atmospheric forcing, and run-off fluxes. The model performance was evaluated by computing the model-observation root-mean-square difference (rmsd) during both the hindcast and forecast periods. The rmsd metrics for the forecast generally outperformed persistence (keeping the initial state fixed) and reference (forecast initialized using the assimilated 1/12° global Hybrid Coordinate Ocean Model analysis) model simulations during LC eddy separation events for a period of 1-2 months.
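The adjoint (4D-Var) machinery used for the state estimate can be illustrated on a linear toy problem: the adjoint of the dynamics propagates observation misfits backwards in time, giving the gradient of the whole-window cost with respect to the initial condition in one reverse sweep. Everything below (dimensions, dynamics, noise level) is illustrative, not the MITgcm configuration.

```python
import numpy as np

# Linear 4D-Var sketch: dynamics x_{k+1} = M x_k, observations y_k = x_k + noise
# (observation operator H = I for simplicity). Cost J(x0) = 0.5 * sum ||x_k - y_k||^2.
# The gradient is dJ/dx0 = sum_k (M^k)^T m_k, computed by a backward recursion.

rng = np.random.default_rng(3)
n, steps = 4, 6
M = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # stable toy dynamics
truth0 = rng.standard_normal(n)
obs, x = [], truth0.copy()
for _ in range(steps):
    x = M @ x
    obs.append(x + 0.01 * rng.standard_normal(n))

def cost_and_grad(x0):
    misfits, x = [], x0.copy()
    for k in range(steps):               # forward sweep
        x = M @ x
        misfits.append(x - obs[k])
    J = 0.5 * sum(m @ m for m in misfits)
    lam = np.zeros(n)
    for k in reversed(range(steps)):     # adjoint (reverse) sweep
        lam = M.T @ (lam + misfits[k])
    return J, lam

x0 = np.zeros(n)                         # first-guess initial condition
J_first, _ = cost_and_grad(x0)
for _ in range(100):
    _, g = cost_and_grad(x0)
    x0 -= 0.2 * g
# x0 converges toward the true initial state, up to observation noise
```

In the real system the descent direction comes from the adjoint model generated for the full nonlinear GCM, and the control vector also includes boundary conditions and atmospheric forcing, but the forward/adjoint sweep structure is the same.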
Lung lesion doubling times: values and variability based on method of volume determination
International Nuclear Information System (INIS)
Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory
2008-01-01
Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with two or more thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated, including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139, and 137 days, respectively. Lung cancer DTs ranged from 23 to 2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs.
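As a rough illustration of how such doubling times are derived (the paper's exact volumetry pipeline is not described here), the standard exponential-growth formula can be applied to two volume measurements taken some days apart; the sphere-volume helper is a common diameter-based approximation, not necessarily the authors' method:

```python
import math

def doubling_time(v1, v2, dt_days):
    """Doubling time under exponential growth, from volumes v1 and v2
    measured dt_days apart: DT = dt * ln 2 / ln(v2 / v1)."""
    if v2 <= v1:
        raise ValueError("no growth between scans; DT is undefined")
    return dt_days * math.log(2) / math.log(v2 / v1)

def volume_from_diameter(d):
    """Sphere-volume approximation from a single diameter measurement."""
    return math.pi * d ** 3 / 6.0
```

A lesion that exactly doubles (100 to 200 mm³) over 90 days yields a DT of 90 days.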
A variable pressure method for characterizing nanoparticle surface charge using pore sensors.
Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff
2012-04-03
A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.
THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON
Directory of Open Access Journals (Sweden)
Gabriel da Silva Oliveira
2015-12-01
This work aimed to evaluate the accuracy of estimates of abundance, basal area, and commercial volume per hectare obtained by the quadrants method applied to a 1,000-hectare area of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The amounts estimated by the samples were compared with the parametric values recorded in the census. The analysis considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for basal area and commercial volume, overestimating the values recorded in the census. The accuracy of the abundance estimates, however, was satisfactory for applying the method in forest inventories for management plans in the Amazon.
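For readers unfamiliar with the quadrants (point-centred quarter) method, the classic Cottam-Curtis density estimator can be sketched as follows; the four-distances-per-point layout and the estimator form are the standard textbook version, not details taken from this study:

```python
def pcq_density(distances_by_point):
    """Cottam-Curtis point-centred quarter estimator: at each sampling
    point, record the distance (m) to the nearest tree in each of four
    quadrants.  Density is 1 / dbar**2 in trees per m^2, scaled by
    10,000 m^2 per hectare."""
    all_d = [d for quad in distances_by_point for d in quad]
    dbar = sum(all_d) / len(all_d)
    return 10_000.0 / dbar ** 2
```

If every nearest tree sits 10 m from its sampling point, the mean distance is 10 m and the estimate is 100 trees/ha.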
Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.
Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F
2015-05-01
Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. 
For the subset of repeatability cases, inter-reconstruction-method
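The proportional-difference statistics reported above can be computed as in this minimal sketch; defining the difference relative to the pair mean is an assumption about the authors' convention:

```python
import statistics

def proportional_differences(repeat1, repeat2):
    """Pairwise proportional differences (in %) between two repeated
    volume measurements, taken relative to the pair mean."""
    return [200.0 * (b - a) / (a + b) for a, b in zip(repeat1, repeat2)]

def summarize(diffs):
    """Mean and sample standard deviation of the differences."""
    return statistics.mean(diffs), statistics.stdev(diffs)
```

Repeatability and reproducibility are then compared through the means and SDs of these distributions, as in the abstract.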
The Effect of 4-week Difference Training Methods on Some Fitness Variables in Youth Handball Players
Directory of Open Access Journals (Sweden)
Abdolhossein a Parnow
2016-09-01
Handball is a team sport involving activities such as sprinting, arm throwing, and hitting. This Olympic team sport requires a high standard of preparation in order to complete sixty minutes of competitive play and to achieve success. This study, therefore, was done to determine the effect of 4-week different training programs on some physical fitness variables in youth handball players. Thirty high-school students participated in the study and were assigned to the Resistance Training (RT; n = 10: 16.75 ± 0.36 yr; 63.14 ± 4.19 kg; 174.8 ± 5.41 cm), Plyometric Training (PT; n = 10: 16.57 ± 0.26 yr; 65.52 ± 6.79 kg; 173.5 ± 5.44 cm), and Complex Training (CT; n = 10: 16.23 ± 0.50 yr; 58.43 ± 10.50 kg; 175.2 ± 8.19 cm) groups. Subjects were evaluated on anthropometric and physiological characteristics 48 hours before and after the 4-week protocol. Statistical analyses consisted of repeated-measures ANOVA and one-way ANOVA. Comparing pre- to post-test changes within the groups, data analysis showed that body fat, strength, speed, agility, and explosive power were affected by the training protocols (P < 0.05). In conclusion, complex training had an advantageous effect on variables such as strength, explosive power, speed, and agility in youth handball players compared with resistance and plyometric training, although positive effects of the other training methods were also observed. Coaches and players, therefore, could consider complex training as an alternative to other training methods.
Frank, Andrew A.
1984-01-01
A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
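A toy sketch of the control split described in the abstract above: fueling strictly as a function of measured engine speed along an ideal operating line, and CVT ratio from commanded power versus measured load. The lookup table, gains, and ratio limits are purely hypothetical, not values from the patent:

```python
# Hypothetical ideal-operating-line table: engine speed (rpm) -> throttle (%)
IOL = [(1000, 15.0), (2000, 30.0), (3000, 55.0), (4000, 80.0)]

def throttle_for_speed(rpm):
    """Fueling is strictly a function of measured engine speed:
    linear interpolation along the ideal operating line."""
    lo_rpm, lo_th = IOL[0]
    for hi_rpm, hi_th in IOL[1:]:
        if rpm <= hi_rpm:
            f = (rpm - lo_rpm) / (hi_rpm - lo_rpm)
            return lo_th + f * (hi_th - lo_th)
        lo_rpm, lo_th = hi_rpm, hi_th
    return IOL[-1][1]

def cvt_ratio(commanded_power_kw, measured_load_kw,
              r_min=0.5, r_max=2.5, gain=0.05):
    """CVT ratio commanded from power demand vs. measured load
    (a simple saturated proportional law, purely illustrative)."""
    r = 1.0 + gain * (commanded_power_kw - measured_load_kw)
    return max(r_min, min(r_max, r))
```

The key design point survives the simplification: the two controllers share no inputs, so the engine can track its ideal characteristic independently of the transmission.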
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
International Nuclear Information System (INIS)
Eaker, C.W.; Schatz, G.C.; De Leon, N.; Heller, E.J.
1984-01-01
Two methods for calculating the good action variables and semiclassical eigenvalues for coupled oscillator systems are presented, both of which relate the actions to the coefficients appearing in the Fourier representation of the normal coordinates and momenta. The two methods differ in that one is based on the exact expression for the actions together with the EBK semiclassical quantization condition, while the other is derived from the Sorbie-Handy (SH) approximation to the actions. However, they are also very similar in that the actions in both methods are related to the same set of Fourier coefficients and both require determining the perturbed frequencies when calculating actions. These frequencies are also determined from the Fourier representations, which means that the actions in both methods are determined from information entirely contained in the Fourier expansion of the coordinates and momenta. We show how these expansions can very conveniently be obtained from fast Fourier transform (FFT) methods and that numerical filtering methods can be used to remove spurious Fourier components associated with the finite trajectory integration duration. In the case of the SH-based method, we find that the use of filtering enables us to relax the usual periodicity requirement on the calculated trajectory. Application to two standard Hénon-Heiles models is considered, and both are shown to give semiclassical eigenvalues in good agreement with previous calculations for nondegenerate and 1:1 resonant systems. In comparing the two methods, we find that although the exact method is quite general in its ability to be used for systems exhibiting complex resonant behavior, it converges more slowly with increasing trajectory integration duration and is more sensitive to the algorithm for choosing perturbed frequencies than the SH-based method.
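The extraction of frequencies from an FFT of a trajectory coordinate, with a crude amplitude filter for spurious components, might look like this minimal sketch (the threshold choice and normalization are illustrative assumptions, not the paper's filtering scheme):

```python
import numpy as np

def dominant_frequencies(signal, dt, threshold=0.1):
    """Return (frequency, amplitude) pairs from the Fourier
    representation of a trajectory coordinate, keeping only peaks
    above `threshold` times the largest amplitude -- a crude filter
    for spurious components from the finite integration time."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=dt)
    keep = amps >= threshold * amps.max()
    return freqs[keep], amps[keep]

# A coordinate sampled from cos(2*pi*0.25*t) over 64 s at dt = 0.25 s
# has a single spectral line at 0.25 (cycles per unit time).
t = np.arange(0, 64, 0.25)
f, a = dominant_frequencies(np.cos(2 * np.pi * 0.25 * t), 0.25)
```

Because 0.25 falls exactly on an FFT bin here, the filter retains a single component; off-bin frequencies would leak across bins, which is where windowing and filtering matter.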
Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves
Yuan, Y. O.; Simons, F. J.; Bozdag, E.
2014-12-01
We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map the heterogeneities with great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments, and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model 100 m in depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
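A minimal illustration of the coarse-to-fine wavelet idea, using a hand-rolled Haar transform (the paper explores richer wavelet families and vanishing moments; Haar is chosen here only to keep the sketch self-contained):

```python
import numpy as np

def haar_decompose(x, levels):
    """One-dimensional Haar wavelet analysis: repeatedly split the
    signal into a coarse approximation (pairwise averages) and detail
    (pairwise differences).  Returns [detail_1, ..., detail_L, approx]."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail (fine scale)
        a = (even + odd) / np.sqrt(2)              # approximation (coarse)
    coeffs.append(a)
    return coeffs

def haar_reconstruct(coeffs):
    """Exact inverse of haar_decompose."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a
```

A multi-scale inversion in this spirit would first fit only the coarse approximation of the residual seismogram, then progressively reintroduce the detail bands.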
Directory of Open Access Journals (Sweden)
Jing Lei
2013-01-01
The paper considers the problem of variable structure control for nonlinear systems with uncertainty and time delays under persistent disturbance by using the optimal sliding mode surface approach. Through functional transformation, the original time-delay system is transformed into a delay-free one. The approximating sequence method is applied to solve the nonlinear optimal sliding mode surface problem, which is reduced to a linear two-point boundary value problem of approximating sequences. The optimal sliding mode surface is obtained from the convergent solutions by solving a Riccati equation, a Sylvester equation, and the state and adjoint vector differential equations of approximating sequences. Then, the variable structure disturbance rejection control is presented by adopting an exponential trending law, where the state and control memory terms are designed to compensate the state and control delays, a feedforward control term is designed to reject the disturbance, and an adjoint compensator is designed to compensate the effects generated by the nonlinearity and the uncertainty. Furthermore, an observer is constructed to make the feedforward term physically realizable, and thus the dynamical observer-based variable structure disturbance rejection control law is produced. Finally, simulations are demonstrated to verify the effectiveness of the presented controller and the simplicity of the proposed approach.
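The exponential trending (reaching) law mentioned above can be sketched in discrete time; the gains and step size here are illustrative, not the paper's design values:

```python
def simulate_reaching_law(s0, k=2.0, eps=0.5, dt=1e-3, steps=3000):
    """Discrete-time integration of the exponential trending law
    s' = -k*s - eps*sign(s): the sliding variable s decays
    exponentially and then chatters in a band of width ~ eps*dt."""
    s = s0
    for _ in range(steps):
        sign = (s > 0) - (s < 0)
        s += dt * (-k * s - eps * sign)
    return s
```

The constant term `eps*sign(s)` is what forces s to reach (a neighborhood of) zero in finite time rather than only asymptotically.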
A variable capacitance based modeling and power capability predicting method for ultracapacitor
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Methods of accurate modeling and power capability predicting for ultracapacitors are of great significance in management and application of lithium-ion battery/ultracapacitor hybrid energy storage system. To overcome the simulation error coming from constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulating results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on ultracapacitor's power capability.
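A single-segment sketch of the variable-capacitance idea: if C(v) = c0 + k*v on a segment of the piecewise-linear model, the stored charge integrates in closed form and state of charge follows directly. The parameter values are illustrative, not the paper's fitted model:

```python
def charge(v, c0=1500.0, k=500.0):
    """Stored charge for a capacitance rising linearly with voltage,
    C(v) = c0 + k*v (one segment of a piecewise-linear model;
    c0 in F and k in F/V are illustrative values):
        Q(v) = c0*v + k*v**2 / 2
    """
    return c0 * v + 0.5 * k * v * v

def soc(v, v_max=2.7, **kw):
    """State of charge as the fraction of charge stored at v_max."""
    return charge(v, **kw) / charge(v_max, **kw)
```

With k = 0 this collapses to the constant-capacitance model Q = C*V, which is exactly the simulation-error baseline the paper improves on.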
Gas permeation measurement under defined humidity via constant volume/variable pressure method
Jan Roman, Pauls
2012-02-01
Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. But also conventional polymeric membrane materials can vary their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
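The pressure-increase evaluation itself reduces to a standard formula: permeability follows from the downstream pressure rise rate, cell volume, membrane geometry, and feed pressure. The unit constants below follow the usual Barrer convention, and the numerical inputs in the test are illustrative assumptions:

```python
R_CM3_CMHG = 6236.4   # gas constant in cm^3*cmHg/(mol*K)
V_STP = 22414.0       # molar volume at STP in cm^3(STP)/mol

def permeability(dpdt_cmHg_s, v_cm3, l_cm, a_cm2, p_feed_cmHg, t_k):
    """Permeability from the downstream pressure rise dp/dt in a
    constant-volume/variable-pressure cell:
        P = (V * l) / (A * p_feed * R * T) * dp/dt
    in cm^3(STP)*cm / (cm^2*s*cmHg); divide by 1e-10 for Barrer."""
    molar_rate = v_cm3 * dpdt_cmHg_s / (R_CM3_CMHG * t_k)  # mol/s
    stp_flux = molar_rate * V_STP                          # cm^3(STP)/s
    return stp_flux * l_cm / (a_cm2 * p_feed_cmHg)
```

In the humidified variant described above, the measured dp/dt must additionally be corrected for the water-vapour partial pressure, which this sketch does not attempt.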
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
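One of the known simplex-projection algorithms the abstract connects to duality theory is the classic sort-based method, sketched here (this is the textbook algorithm, not necessarily the exact variant analyzed in the paper):

```python
def project_to_simplex(y):
    """Euclidean projection of a point y onto the standard simplex
    {x : x_i >= 0, sum_i x_i = 1}, via the classic sort-based rule:
    find the largest support for which the shifted coordinates stay
    positive, then clip."""
    u = sorted(y, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:          # support of size i is still feasible
            theta = t
    return [max(yi - theta, 0.0) for yi in y]
```

Projecting [2, 0] gives [1, 0]: the excess mass is removed from the large coordinate and the result lies on the simplex.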
Study of input variables in group method of data handling methodology
International Nuclear Information System (INIS)
Pereira, Iraci Martinez; Bueno, Elaine Inacio
2013-01-01
The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer by layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by using an approximation of the Volterra Series or Kolmorgorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and ANN methodologies, and applied to the IPEN research Reactor IEA-1. The system performs the monitoring by comparing the GMDH and ANN calculated values with measured ones. As the GMDH is a self-organizing methodology, the input variables choice is made automatically. On the other hand, the results of ANN methodology are strongly dependent on which variables are used as neural network input. (author)
Tam, Vincent H; Kabbara, Samer
2006-10-01
Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy
Directory of Open Access Journals (Sweden)
Yongxin Chou
2017-01-01
Base scale entropy analysis (BSEA) is a nonlinear method for analyzing the heart rate variability (HRV) signal. However, the time consumption of BSEA is too long, and it was unknown whether BSEA is suitable for analyzing the pulse rate variability (PRV) signal. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA with sliding window iterative theory. Blood pressure signals of healthy young and old subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. The BSEA and the SWIBSEA were then used to analyze the experimental data; the results show that the SWIBSEA reduces the time consumption and the buffer cache space while producing the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. Therefore, the SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in some portable and wearable medical devices.
Variability of bronchial measurements obtained by sequential CT using two computer-based methods
International Nuclear Information System (INIS)
Brillet, Pierre-Yves; Fetita, Catalin I.; Mitrea, Mihai; Preteux, Francoise; Capderou, Andre; Dreuil, Serge; Simon, Jean-Marc; Grenier, Philippe A.
2009-01-01
This study aimed to evaluate the variability of lumen area (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm², containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between -1.59 and 1.5 mm² for LA, and -3.31 and 2.96 mm² for WA. The values of the coefficient of measurement variation (CV10, i.e., the percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72). EDCE yielded lower LA values for small-lumen bronchi and lower WA values for thin-walled bronchi; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements. Therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]
2012-05-15
Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
Salonen, K; Leisola, M; Eerikäinen, T
2009-01-01
Determination of metabolites from an anaerobic digester by acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line-compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This nonlinear, PI-controller-like algorithm does not require any preliminary information about the sample, and its performance is superior to that of traditional linear PI-controllers. In addition, a simplification for presenting polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic-strength effect with stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already required information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
Perkó, Z.
2015-01-01
This thesis presents novel adjoint and spectral methods for the sensitivity and uncertainty (S&U) analysis of multi-physics problems encountered in the field of reactor physics. The first part focuses on the steady state of reactors and extends the adjoint sensitivity analysis methods well
Zhai, Shixian; An, Xingqin; Zhao, Tianliang; Sun, Zhaobin; Wang, Wei; Hou, Qing; Guo, Zengyuan; Wang, Chao
2018-05-01
sensitivity simulations of the Models-3/CMAQ system. The two modeling approaches are highly comparable in their assessments of atmospheric pollution control schemes for critical emission regions, but the adjoint method has higher computational efficiency than the forward sensitivity method. The results also imply that critical regional emission reduction could be more efficient than individual peak emission control for improving regional PM2.5 air quality.
Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
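The inflation mechanism can be reproduced in a few lines: simulate a null exposure effect, adjust only for a dichotomized confounder, and count false rejections. The effect sizes, sample size, and median split below are illustrative choices, not the paper's exact 9600-simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sim(n=500):
    """One dataset under the null: the exposure x has NO effect on y,
    but both correlate with a continuous confounder z.  Adjusting only
    for a median-dichotomized z leaves residual confounding."""
    z = rng.standard_normal(n)                  # continuous confounder
    x = 0.8 * z + 0.6 * rng.standard_normal(n)  # exposure, corr. with z
    y = z + rng.standard_normal(n)              # outcome: no true x effect
    z_cat = (z > np.median(z)).astype(float)    # categorized confounder
    X = np.column_stack([np.ones(n), x, z_cat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(beta[1] / se) > 1.96             # falsely reject H0?

rejection_rate = np.mean([one_sim() for _ in range(200)])
# Nominal Type-I error is 5%; categorization drives it far higher.
```

Adjusting for the continuous z instead of z_cat in the same code restores the nominal rate, which is the paper's central contrast.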
Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method.
Directory of Open Access Journals (Sweden)
Haoshi Zhang
The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12-24 hours) and short-term recordings (2-5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapped HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Because a time increment as short as 10 s caused indented time courses of the four measures, a 1-min time increment (4-min overlap) is suggested for the analysis of mHRV. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of dynamical changes in cardiac activity than conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and further investigation is warranted.
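The overlapping-window scheme can be sketched with SDNN (standard deviation of RR intervals) as the HRV measure; the paper evaluates four time-course measures, so SDNN and the windowing details below are illustrative only:

```python
import statistics

def momentary_sdnn(rr_ms, window_s=300, step_s=60):
    """SDNN over overlapping analysis windows: 5-min windows advanced
    in 1-min increments (4-min overlap), built from a list of
    consecutive RR intervals in milliseconds."""
    beats = []
    t = 0.0
    for rr in rr_ms:
        t += rr / 1000.0            # beat time in seconds
        beats.append((t, rr))
    out = []
    start = 0.0
    while start + window_s <= t:
        w = [rr for bt, rr in beats if start <= bt < start + window_s]
        if len(w) > 1:
            out.append(statistics.stdev(w))
        start += step_s
    return out
```

Shrinking `step_s` yields a denser time course of the measure, which is exactly the trade-off between temporal resolution and smoothness discussed above.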
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2003-01-01
Adjoint Monte Carlo may be a useful alternative to regular Monte Carlo calculations in cases where a small detector inhibits an efficient Monte Carlo calculation as only very few particle histories will cross the detector. However, in general purpose Monte Carlo codes, normally only the multigroup form of adjoint Monte Carlo is implemented. In this article the general methodology for continuous-energy adjoint Monte Carlo neutron transport is reviewed and extended for photon and coupled neutron-photon transport. In the latter cases the discrete photons generated by annihilation or by neutron capture or inelastic scattering prevent a direct application of the general methodology. Two successive reaction events must be combined in the selection process to accommodate the adjoint analog of a reaction resulting in a photon with a discrete energy. Numerical examples illustrate the application of the theory for some simplified problems
Bergner, Georg; Piemonte, Stefano
2018-04-01
Non-Abelian gauge theories with fermions transforming in the adjoint representation of the gauge group (AdjQCD) are a fundamental ingredient of many models that describe the physics beyond the Standard Model. Two relevant examples are N = 1 supersymmetric Yang-Mills (SYM) theory and minimal walking technicolor, which are gauge theories coupled to one adjoint Majorana and two adjoint Dirac fermions, respectively. While confinement is a property of N = 1 SYM, minimal walking technicolor is expected to be infrared conformal. We study the propagators of ghost and gluon fields in the Landau gauge to compute the running coupling in the MiniMom scheme. We analyze several different ensembles of lattice Monte Carlo simulations for SU(2) adjoint QCD with N_f = 1/2, 1, 3/2, and 2 Dirac fermions. We show how the running of the coupling changes as the number of interacting fermions is increased towards the conformal window.
Simon, Moritz
2014-11-14
© 2014, Springer Science+Business Media New York. With the target of optimizing CO2 sequestration in underground reservoirs, we investigate constrained optimal control problems with partially miscible two-phase flow in porous media. Our objective is to maximize the amount of trapped CO2 in an underground reservoir after a fixed period of CO2 injection, while time-dependent injection rates in multiple wells are used as control parameters. We describe the governing two-phase two-component Darcy flow PDE system, formulate the optimal control problem and derive the continuous adjoint equations. For the discretization we apply a variant of the so-called BOX method, a locally conservative control-volume FE method that we further stabilize by a periodic averaging feature to reduce oscillations. The timestep-wise Lagrange function of the control problem is implemented as a variational form in Sundance, a toolbox for rapid development of parallel FE simulations, which is part of the HPC software Trilinos. We discuss the BOX method and our implementation in Sundance. The MPI parallelized Sundance state and adjoint solvers are linked to the interior point optimization package IPOPT, using limited-memory BFGS updates for approximating second derivatives. Finally, we present and discuss different types of optimal control results.
Directory of Open Access Journals (Sweden)
Tomaž Vrtovec
2015-06-01
Objective measurement of coronal vertebral inclination (CVI) is of significant importance for evaluating spinal deformities in the coronal plane. The purpose of this study is to systematically analyze and compare manual and computerized measurements of CVI in cross-sectional and volumetric computed tomography (CT) images. Three observers independently measured CVI in 14 CT images of normal and 14 CT images of scoliotic vertebrae by using six manual and two computerized measurements. Manual measurements were obtained in coronal cross-sections by manually identifying the vertebral body corners, which served to measure CVI according to the superior and inferior tangents, left and right tangents, and mid-endplate and mid-wall lines. Computerized measurements were obtained in two dimensions (2D) and in three dimensions (3D) by manually initializing an automated method in vertebral centroids and then searching for the planes of maximal symmetry of vertebral anatomical structures. The mid-endplate lines were the most reproducible and reliable manual measurements (intra- and inter-observer variability of 0.7° and 1.2° standard deviation (SD), respectively). The computerized measurements in 3D were more reproducible and reliable (intra- and inter-observer variability of 0.5° and 0.7° SD, respectively), and were most consistent with the mid-wall lines (2.0° SD and 1.4° mean absolute difference). The manual CVI measurements based on mid-endplate lines and the computerized CVI measurements in 3D resulted in the lowest intra-observer and inter-observer variability; moreover, computerized CVI measurement reduces observer interaction.
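Each manual tangent measurement ultimately reduces to the inclination of a line through two landmark points relative to the horizontal image axis. A minimal sketch of that step (the coordinates and mm units are hypothetical; the study's symmetry-plane-based 2D/3D method is far more involved):

```python
import math

def inclination_deg(p_left, p_right):
    """Coronal inclination (degrees) of the line through two landmark
    points, e.g. the left and right corners of a vertebral endplate,
    measured against the horizontal image axis."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

# an endplate whose right corner is 5 mm higher over a 50 mm width
print(inclination_deg((0.0, 0.0), (50.0, 5.0)))  # ≈ 5.71 degrees
```

The sub-degree observer variabilities reported above should be read against angles of this magnitude.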
International Nuclear Information System (INIS)
Balabin, Roman M.; Smirnov, Sergey V.
2011-01-01
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIR). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), the successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), the Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
Methods for assessment of climate variability and climate changes in different time-space scales
International Nuclear Information System (INIS)
Lobanov, V.; Lobanova, H.
2004-01-01
The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and for an area; and how to use the detected climate change in the computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for the assessment of water resources, maximum or minimum runoff, etc.) or a new one characterizing the intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed that has two coefficients, connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter, which characterizes the intensity of synoptic and macro-synoptic fluctuations within a year. Effective statistical methods have been developed for the separation of climate variability from climate change and for the extraction of homogeneous components on three time scales from observed long-term time series: intra-annual, decadal, and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with the generalization of the results of detected climate changes over the area and with spatial modeling. For the determination of a homogeneous region with the same
International Nuclear Information System (INIS)
Zhang Jiefang; Dai Chaoqing; Zong Fengde
2007-01-01
In this paper, with the variable separation approach and based on the general reduction theory, we generalize the extended tanh-function method to obtain new types of variable separation solutions of the Nizhnik-Novikov-Veselov (NNV) equation. Among the solutions, two are new types of variable separation solutions, while the last is similar to the solution given by the Darboux transformation in Hu et al 2003 Chin. Phys. Lett. 20 1413.
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
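SIVA/DIVA's variable-order Adams machinery is considerably more elaborate, but the underlying predictor-corrector idea can be sketched at fixed order and fixed step size. The following is an illustrative second-order Adams-Bashforth/Adams-Moulton pair, not the package's algorithm:

```python
import math

def adams_pc(f, t0, y0, h, steps):
    """Fixed-step, fixed-order sketch of an Adams predictor-corrector
    for y' = f(t, y): AB2 predicts, AM2 (trapezoidal) corrects."""
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap the first step with Heun's method (also second order)
    y_euler = y + h * f_prev
    y = y + h / 2 * (f_prev + f(t + h, y_euler))
    t += h
    history = [(t0, y0), (t, y)]
    for _ in range(steps - 1):
        f_n = f(t, y)
        y_pred = y + h / 2 * (3 * f_n - f_prev)    # AB2 predictor
        y = y + h / 2 * (f(t + h, y_pred) + f_n)   # AM2 corrector
        f_prev = f_n
        t += h
        history.append((t, y))
    return history

# y' = -y, y(0) = 1: compare with exp(-1) at t = 1
sol = adams_pc(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(abs(sol[-1][1] - math.exp(-1.0)))  # global error is O(h^2)
```

A production code like SIVA/DIVA additionally varies the order and step size dynamically from local error estimates, which is where most of its efficiency comes from.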
International Nuclear Information System (INIS)
Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny
2016-01-01
A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach was to utilize a multilevel strategy that targets different model parameters: it first changes the major-order variables, the mass transfer inputs, and then calibrates the minor-order variables, the crud source terms, according to available plant data. In this manner, the mass transfer inputs are effectively simplified as 'dependent' on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit, with the difference between the runs being the number of model runs using BOA allowed for adjusting the crud source terms, thereby reducing the uncertainty in the calibration. The result of the first case showed that the current best-estimate values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the run limit of BOA was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best-estimate values. (author)
Energy Technology Data Exchange (ETDEWEB)
Baeza, A.; Corbacho, J.A. [LARUEX, Caceres (Spain). Environmental Radioactivity Lab.
2013-07-01
Determining the gross alpha activity concentration of water samples is one way to screen for waters whose radionuclide content is so high that its consumption could imply surpassing the Total Indicative Dose as defined in European Directive 98/83/EC. One of the most commonly used methods to prepare the sources to measure gross alpha activity in water samples is desiccation. Its main advantages are the simplicity of the procedure, the low cost of source preparation, and the possibility of simultaneously determining the gross beta activity. The preparation of the source, the construction of the calibration curves, and the measurement procedure itself involve, however, various factors that may introduce sufficient variability into the results to significantly affect the screening process. We here identify the main sources of this variability, and propose specific procedures to follow in the desiccation process that will reduce the uncertainties, and ensure that the result is indeed representative of the sum of the activities of the alpha emitters present in the sample. (orig.)
DEFF Research Database (Denmark)
Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie
2016-01-01
... and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer ... variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method ...
Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control
Nguyen, Nhan T. (Inventor)
2016-01-01
An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segment. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of desired lift distribution to determine the improved aerodynamic deflection of the wings. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.
de Sá, Joceline Cássia Ferezini; Costa, Eduardo Caldas; da Silva, Ester; Azevedo, George Dantas
2013-09-01
Polycystic ovary syndrome (PCOS) is an endocrine disorder associated with several cardiometabolic risk factors, such as central obesity, insulin resistance, type 2 diabetes, metabolic syndrome, and hypertension. These factors are associated with adrenergic overactivity, which is an important prognostic factor for the development of cardiovascular disorders. Given the common cardiometabolic disturbances occurring in PCOS women, over the last years studies have investigated the cardiac autonomic control of these patients, mainly based on heart rate variability (HRV). Thus, in this review, we will discuss the recent findings of the studies that investigated the HRV of women with PCOS, as well as noninvasive methods of analysis of autonomic control starting from basic indexes related to this methodology.
Mustapha, K.
2017-06-03
Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes substantially more difficult the mathematical analysis of these models and the establishment of suitable numerical schemes. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative when the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. Our finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order $h$ are demonstrated, where $h$ denotes the maximum space step size. The numerical tests illustrate the global $O(h)$ accuracy of our scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
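For intuition about such schemes, the classic first-order Grünwald-Letnikov approximation of a one-sided fractional derivative (a standard construction, not necessarily the exact discretization used in this paper) generates its weights by a simple recurrence and reduces to the ordinary backward difference at α = 1:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k) for the
    first-order approximation of the left-sided fractional derivative:
    D^alpha f(x) ~ h**(-alpha) * sum_k w_k * f(x - k*h).
    Computed by the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

# alpha = 1 recovers weights 1, -1, 0, 0, ...: the standard backward difference
print(gl_weights(1.0, 3))
print(gl_weights(0.5, 3))
```

This degeneration to a classical difference at integer order is the same consistency property the abstract notes for its scheme (reduction to the central difference when fractional derivatives are absent).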
Directory of Open Access Journals (Sweden)
Bai Shiye
2016-05-01
An objective function defined by the minimum compliance of topology optimization for a 3D continuum structure was established to search for the optimal material distribution under a predetermined volume constraint. Based on the improved SIMP (solid isotropic microstructures with penalization) model and a new sensitivity filtering technique, the basic iteration equations of 3D finite element analysis were deduced and solved by the optimization criterion method. All the above procedures were written in the MATLAB programming language, and topology optimization design examples of a 3D continuum structure with a reserved hole were examined repeatedly by observing various indexes, including compliance, maximum displacement, and density index. The influence of mesh, penalty factors, and filter radius on the topology results was analyzed. Computational results showed that the coarser the mesh was, the larger the compliance, maximum displacement, and density index would be. When the filtering radius was larger than 1.0, the topology shape no longer exhibited the checkerboard problem, suggesting that the presented sensitivity filtering method was valid. The penalty factor should be an integer, because iteration steps increased greatly when it was a noninteger. The above modified variable density method could provide technical routes for the topology optimization design of more complex 3D continuum structures in the future.
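The sensitivity filtering mentioned above can be sketched with the standard distance-weighted filter used in SIMP-based topology optimization (a common formulation, familiar from educational MATLAB codes; not necessarily the exact variant used here), transcribed to Python for a 2D element grid:

```python
import numpy as np

def filter_sensitivities(x, dc, rmin):
    """Mesh-independency (sensitivity) filter from SIMP topology
    optimization: each element sensitivity is replaced by a distance-
    weighted average over neighbours within radius rmin, with weights
    H = rmin - dist. x, dc: 2D arrays of densities and sensitivities."""
    ny, nx = x.shape
    dcf = np.zeros_like(dc)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            num, den = 0.0, 0.0
            for ii in range(max(i - r, 0), min(i + r + 1, ny)):
                for jj in range(max(j - r, 0), min(j + r + 1, nx)):
                    fac = rmin - np.hypot(i - ii, j - jj)
                    if fac > 0:
                        num += fac * x[ii, jj] * dc[ii, jj]
                        den += fac
            dcf[i, j] = num / (max(x[i, j], 1e-3) * den)
    return dcf

# a uniform field is left unchanged by the filter
x = np.full((8, 8), 0.5)
dc = np.full((8, 8), -2.0)
print(np.allclose(filter_sensitivities(x, dc, 1.5), dc))  # True
```

Averaging sensitivities over a neighbourhood larger than one element is what suppresses the checkerboard patterns noted in the abstract once the filter radius exceeds 1.0.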
Development and validation of a new fallout transport method using variable spectral winds
International Nuclear Information System (INIS)
Hopkins, A.T.
1984-01-01
A new method was developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in a two-step method to compute the dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through the spectral winds, to the ground. The line connecting the particle landing points is the hotline. Second, the dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing the calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass per unit area on the ground. The ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud
Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients
Energy Technology Data Exchange (ETDEWEB)
Bhaskar, Roy, E-mail: imbhaskarall@gmail.com [Indian Institute of Technology (India); University of Connecticut, Farmington, CT (United States); Ghatak, Sobhendu [Indian Institute of Technology (India)
2013-10-15
Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincare plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.
Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients
International Nuclear Information System (INIS)
Bhaskar, Roy; Ghatak, Sobhendu
2013-01-01
Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincare plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients
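The lagged Poincaré descriptors used in the two studies above follow directly from the inter-beat interval series. A minimal sketch using the standard lag-m SD1/SD2 definitions (illustrative, not the authors' implementation; the synthetic series parameters are arbitrary):

```python
import numpy as np

def lagged_poincare(rr, m=1):
    """SD1/SD2 descriptors of the lag-m Poincare plot of RR intervals.
    Each interval rr[n] is plotted against rr[n+m]; SD1 is the spread
    across the identity line (short-term variability), SD2 along it
    (long-term variability)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-m], rr[m:]
    sd1 = np.sqrt(np.var(y - x, ddof=1) / 2.0)
    sd2 = np.sqrt(np.var(y + x, ddof=1) / 2.0)
    return sd1, sd2

rng = np.random.default_rng(0)
rr = rng.normal(800, 50, size=2000)      # uncorrelated synthetic RR series
sd1, sd2 = lagged_poincare(rr, m=1)
# for white noise SD1 ≈ SD2 ≈ the series' standard deviation;
# real HRV shows SD1 < SD2, and the SD1/SD2 ratio shifts with lag m
```

Departures of the SD1/SD2 ratio from healthy reference patterns across lags are the kind of parameter change the study associates with reduced parasympathetic modulation.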
Mustapha, K.; Furati, K.; Knio, Omar; Maitre, O. Le
2017-01-01
Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes substantially more difficult the mathematical analysis of these models and the establishment of suitable numerical schemes. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative when the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. Our finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order $h$ are demonstrated, where $h$ denotes the maximum space step size. The numerical tests illustrate the global $O(h)$ accuracy of our scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems
Asner, Liya; Tavener, Simon; Kay, David
2012-01-01
We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.
Adjoint Airfoil Optimization of Darrieus-Type Vertical Axis Wind Turbine
Fuchs, Roman; Nordborg, Henrik
2012-11-01
We present the feasibility of using an adjoint solver to optimize the torque of a Darrieus-type vertical axis wind turbine (VAWT). We start with a 2D cross section of a symmetrical airfoil and restrict ourselves to low solidity ratios to minimize blade-vortex interactions. The adjoint solver of the ANSYS FLUENT software package computes the sensitivities of the airfoil surface forces based on a steady flow field. Hence, we find the torque of a full revolution using a weighted average of the sensitivities at different wind speeds and angles of attack. The weights are computed analytically, and the range of angles of attack is given by the tip speed ratio. Then the airfoil geometry is evolved, and the proposed methodology is evaluated by transient simulations.
Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces
Mantile, Andrea; Posilicano, Andrea; Sini, Mourad
2016-07-01
The theory of self-adjoint extensions of symmetric operators is used to construct self-adjoint realizations of a second-order elliptic differential operator on Rn with linear boundary conditions on (a relatively open part of) a compact hypersurface. Our approach allows us to obtain Kreĭn-like resolvent formulae where the reference operator coincides with the "free" operator with domain H2(Rn); this provides a useful tool for the scattering problem from a hypersurface. Concrete examples of this construction are developed in connection with the standard boundary conditions, Dirichlet, Neumann, Robin, δ and δ′-type, assigned either on an (n - 1)-dimensional compact boundary Γ = ∂Ω or on a relatively open part Σ ⊂ Γ. Schatten-von Neumann estimates for the difference of the powers of the resolvents of the free and the perturbed operators are also proven; these give existence and completeness of the wave operators of the associated scattering systems.
Factorization of the 3d superconformal index with an adjoint matter
Energy Technology Data Exchange (ETDEWEB)
Hwang, Chiung [Department of Physics, POSTECH,Pohang 790-784 (Korea, Republic of); Park, Jaemo [Department of Physics, POSTECH,Pohang 790-784 (Korea, Republic of); Postech Center for Theoretical Physics (PCTP), POSTECH,Pohang 790-784 (Korea, Republic of)
2015-11-05
We work out the factorization of the 3d superconformal index for N = 2 U(N_c) gauge theory with one adjoint chiral multiplet as well as N_f fundamental and N_a anti-fundamental chiral multiplets. Using the factorization, one can prove the Seiberg-like duality for N = 4 U(N_c) theory with N_f hypermultiplets at the index level. We explicitly show that monopole operators violating the unitarity bound in a bad theory are mapped to free hypermultiplets on the dual side. For N = 2 U(N_c) theory with one adjoint matter X and N_f fundamental, N_a anti-fundamental chiral multiplets with superpotential W = tr X^(n+1), we work out the Seiberg-like duality for this theory. The index computation provides combinatorial identities for a dual pair, for which we carry out intensive numerical checks.
International Nuclear Information System (INIS)
Bakosi, Jozsef; Ristorcelli, Raymond J.
2010-01-01
Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
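The beta-PDF material mixing model mentioned above rests on matching the assumed distribution to the first two moments of the mixing variable. A minimal method-of-moments sketch for a scalar normalized to [0, 1] (a standard construction, not the paper's full model):

```python
def beta_from_moments(mean, var):
    """Method-of-moments beta-distribution parameters (a, b) for a
    bounded scalar (e.g. a mixture fraction or normalized density) on
    [0, 1]. Realizability requires 0 < var < mean * (1 - mean)."""
    if not 0.0 < mean < 1.0 or var <= 0.0 or var >= mean * (1.0 - mean):
        raise ValueError("moments not realizable by a beta distribution")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

a, b = beta_from_moments(0.3, 0.03)
# recover the moments from (a, b) as a consistency check
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1.0))
```

Because the beta family includes skewed and bimodal shapes, tracking the evolving (mean, variance) pair lets such a model capture the mixing asymmetry through transition, fully developed turbulence, and decay, as the abstract describes.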
Henze, D. K.; Seinfeld, J. H.; Shindell, D. T.
2009-08-01
Influences of specific sources of inorganic PM2.5 on peak and ambient aerosol concentrations in the US are evaluated using a combination of inverse modeling and sensitivity analysis. First, sulfate and nitrate aerosol measurements from the IMPROVE network are assimilated using the four-dimensional variational (4D-Var) method into the GEOS-Chem chemical transport model in order to constrain emissions estimates in four separate month-long inversions (one per season). Of the precursor emissions, these observations primarily constrain ammonia (NH3). While the net result is a decrease in estimated US NH3 emissions relative to the original inventory, there is considerable variability in adjustments made to NH3 emissions in different locations, seasons and source sectors, such as focused decreases in the midwest during July, broad decreases throughout the US in January, increases in eastern coastal areas in April, and an effective redistribution of emissions from natural to anthropogenic sources. Implementing these constrained emissions, the adjoint model is applied to quantify the influences of emissions on representative PM2.5 air quality metrics within the US. The resulting sensitivity maps display a wide range of spatial, sectoral and seasonal variability in the susceptibility of the air quality metrics to absolute emissions changes and the effectiveness of incremental emissions controls of specific source sectors. NH3 emissions near sources of sulfur oxides (SOx) are estimated to most influence peak inorganic PM2.5 levels in the East; thus, the most effective controls of NH3 emissions are often disjoint from locations of peak NH3 emissions. Controls of emissions from industrial sectors of SOx and NOx are estimated to be more effective than surface emissions, and changes to NH3 emissions in regions dominated by natural sources are disproportionately more effective than regions dominated by anthropogenic sources. NOx controls are most effective in northern states in
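The adjoint sensitivity idea behind the abstract above can be sketched on a toy linear transport model: one backward (adjoint) sweep yields the gradient of an air-quality cost metric with respect to every emission simultaneously, at the price of roughly one extra model run. The matrices and weights below are illustrative stand-ins, not GEOS-Chem.

```python
import numpy as np

# Toy linear transport-decay model: c_{k+1} = A c_k + E (emissions),
# with cost J = sum over time of w.c (an "air-quality metric").
n, T = 4, 20
rng = np.random.default_rng(1)
A = 0.9 * np.eye(n) + 0.02 * rng.random((n, n))   # decay + weak mixing
w = np.ones(n)
E = 0.1 * np.ones(n)

def forward(E):
    c, J = np.zeros(n), 0.0
    for _ in range(T):
        c = A @ c + E
        J += w @ c
    return J

# Adjoint recursion: lam_T = 0; lam_k = A^T lam_{k+1} + w, and
# dJ/dE = sum_k lam_k. One backward sweep gives all n sensitivities.
lam = np.zeros(n)
grad = np.zeros(n)
for _ in range(T):
    lam = A.T @ lam + w
    grad += lam

# Check one component against central differences (exact here: linear model).
h = 1e-3
e = np.zeros(n); e[0] = h
fd = (forward(E + e) - forward(E - e)) / (2 * h)
print(grad[0], fd)
```

The same check against central finite differences is what the plasmonics paper in the header performs against its FDTD adjoint sensitivities.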
Directory of Open Access Journals (Sweden)
Mike D.R. Zhang
2001-01-01
Full Text Available In this paper, a method for analyzing the dynamic response of a structural system with variable mass, damping and stiffness is first presented. The dynamic equations of the structural system with variable mass and stiffness are derived according to the whole working process of a bridge bucket unloader. At the end of the paper, an engineering numerical example is given.
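A single-degree-of-freedom version of the variable-mass, variable-stiffness dynamic equation described above can be time-stepped directly. The coefficient functions below are invented for illustration (a mass that decreases as the bucket unloads), not taken from the paper.

```python
import numpy as np

# Sketch of m(t)*x'' + c(t)*x' + k(t)*x = f(t) with time-varying mass,
# integrated by semi-implicit Euler. All coefficients are hypothetical.
def m(t): return 2.0 - 0.05 * t          # mass decreases during unloading
def c(t): return 0.3                     # damping
def k(t): return 50.0                    # stiffness
def f(t): return 10.0 * np.sin(3.0 * t)  # harmonic excitation

dt, T = 1e-4, 10.0
x, v = 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    a = (f(t) - c(t) * v - k(t) * x) / m(t)  # acceleration from the ODE
    v += a * dt                              # update velocity first
    x += v * dt                              # then displacement
print(x, v)
```

As the mass drops, the natural frequency sqrt(k/m) rises, so the response to a fixed-frequency excitation changes over the working cycle, which is exactly why a constant-coefficient analysis is inadequate for such machinery.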
New Methods for Prosodic Transcription: Capturing Variability as a Source of Information
Directory of Open Access Journals (Sweden)
Jennifer Cole
2016-06-01
Full Text Available Understanding the role of prosody in encoding linguistic meaning and in shaping phonetic form requires the analysis of prosodically annotated speech drawn from a wide variety of speech materials. Yet obtaining accurate and reliable prosodic annotations for even small datasets is challenging due to the time and expertise required. We discuss several factors that make prosodic annotation difficult and impact its reliability, all of which relate to 'variability': in the patterning of prosodic elements (features and structures) as they relate to the linguistic and discourse context, in the acoustic cues for those prosodic elements, and in the parameter values of the cues. We propose two novel methods for prosodic transcription that capture variability as a source of information relevant to the linguistic analysis of prosody. The first is 'Rapid Prosody Transcription' (RPT), which can be performed by non-experts using a simple set of unary labels to mark prominence and boundaries based on immediate auditory impression. Inter-transcriber variability is used to calculate continuous-valued prosody 'scores' that are assigned to each word and represent the perceptual salience of its prosodic features or structure. RPT can be used to model the relative influence of top-down factors and acoustic cues in prosody perception, and to model prosodic variation across many dimensions, including language variety, speech style, or speaker's affect. The second proposed method is the identification of individual cues to the contrastive prosodic elements of an utterance. Cue specification provides a link between the contrastive symbolic categories of prosodic structures and the continuous-valued parameters in the acoustic signal, and offers a framework for investigating how factors related to the grammatical and situational context influence the phonetic form of spoken words and phrases. While cue specification as a transcription tool has not yet been explored as
Gradient flow coupling in the SU(2) gauge theory with two adjoint fermions
DEFF Research Database (Denmark)
Rantaharju, Jarno
2016-01-01
We study SU(2) gauge theory with two fermion flavors in the adjoint representation. Using a clover improved HEX smeared action and the gradient flow running coupling allows us to simulate with larger lattice size than before. We find an infrared fixed point after a continuum extrapolation, in the range 4.3 ≲ g∗² ≲ 4.8. We also measure the mass anomalous dimension and find the value 0.25 ≲ γ∗ ≲ 0.28 at the fixed point.
Green's matrix for a second-order self-adjoint matrix differential operator
International Nuclear Information System (INIS)
Sisman, Tahsin Cagri; Tekin, Bayram
2010-01-01
A systematic construction of the Green's matrix for a second-order self-adjoint matrix differential operator from the linearly independent solutions of the corresponding homogeneous differential equation set is carried out. We follow the general approach of extracting the Green's matrix from the Green's matrix of the corresponding first-order system. This construction is required in the cases where the differential equation set cannot be turned to an algebraic equation set via transform techniques.
International Nuclear Information System (INIS)
Bayramoglu, Mehmet; Tasci, Fatih; Zeynalov, Djafar
2004-01-01
We study the discrete part of the spectrum of a singular non-self-adjoint second-order differential equation on a semi-axis with an operator coefficient. Its boundedness is proved. The result is applied to the Schroedinger boundary value problem -Δu + q(x)u = λ²u, u|_∂D = 0, with a complex potential q(x) in an angular domain
Absence of singular continuous spectrum for certain self-adjoint operators
International Nuclear Information System (INIS)
Mourre, E.
1979-01-01
A sufficient condition is given for a self-adjoint operator to exhibit, in the vicinity of a point E of its spectrum, the following properties: its point spectrum is finite; its singular continuous spectrum is empty. As new applications, the absence of singular continuous spectrum is demonstrated in the following two cases: perturbations of pseudo-differential operators; Schroedinger operators of a three-body system [fr
Adjoint-based Sensitivity of Jet Noise to Near-nozzle Forcing
Chung, Seung Whan; Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan
2017-11-01
Past efforts have used optimal control theory, based on the numerical solution of the adjoint flow equations, to perturb turbulent jets in order to reduce their radiated sound. These efforts have been successful in that sound is reduced, with concomitant changes to the large-scale turbulence structures in the flow. However, they have also been inconclusive, in that the ultimate level of reduction seemed to depend upon the accuracy of the adjoint-based gradient rather than a physical limitation of the flow. The chaotic dynamics of the turbulence can degrade the smoothness of the cost functional in the control-parameter space, which is necessary for gradient-based optimization. We introduce a route to overcoming this challenge, in part by leveraging the regularity and accuracy of a dual-consistent, discrete-exact adjoint formulation. We confirm its properties and use it to study the sensitivity and controllability of the acoustic radiation from a simulation of a M = 1.3 turbulent jet, whose statistics match data. The smoothness of the cost functional over time is quantified by a minimum optimization step size beyond which the gradient cannot have a certain degree of accuracy. Based on this, we achieve a moderate level of sound reduction in the first few optimization steps. This material is based [in part] upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
Classical gluon and graviton radiation from the bi-adjoint scalar double copy
Goldberger, Walter D.; Prabhu, Siddharth G.; Thompson, Jedidiah O.
2017-09-01
We find double-copy relations between classical radiating solutions in Yang-Mills theory coupled to dynamical color charges and their counterparts in a cubic bi-adjoint scalar field theory which interacts linearly with particles carrying bi-adjoint charge. The particular color-to-kinematics replacements we employ are motivated by the Bern-Carrasco-Johansson double-copy correspondence for on-shell amplitudes in gauge and gravity theories. They are identical to those recently used to establish relations between classical radiating solutions in gauge theory and in dilaton gravity. Our explicit bi-adjoint solutions are constructed to second order in a perturbative expansion, and map under the double copy onto gauge theory solutions which involve at most cubic gluon self-interactions. If the correspondence is found to persist to higher orders in perturbation theory, our results suggest the possibility of calculating gravitational radiation from colliding compact objects, directly from a scalar field with vastly simpler (purely cubic) Feynman vertices.
Methods to quantify variable importance: implications for the analysis of noisy ecological data
Murray, Kim; Conner, Mary M.
2009-01-01
Determining the importance of independent variables is of practical relevance to ecologists and managers concerned with allocating limited resources to the management of natural systems. Although techniques that identify explanatory variables having the largest influence on the response variable are needed to design management actions effectively, the use of various indices to evaluate variable importance is poorly understood. Using Monte Carlo simulations, we compared six different indices c...
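One widely used importance index of the kind compared above is the permutation drop in explained variance: permute one predictor, refit, and record how much R² falls. The sketch below applies it to a synthetic noisy linear response; the coefficients and index choice are illustrative, not the six indices evaluated in the paper.

```python
import numpy as np

# Permutation-importance sketch for a noisy linear response with one strong,
# one weak, and one irrelevant predictor. All coefficients are invented.
rng = np.random.default_rng(11)
n = 2000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2] + rng.normal(0, 2.0, n)

def r2(X, y):
    """Ordinary-least-squares R^2 (data are zero-mean, so no intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

base = r2(X, y)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # destroy predictor j's signal
    importance.append(base - r2(Xp, y))
print([round(v, 2) for v in importance])
```

In noisy data the ranking is usually recovered, but the absolute values fluctuate between Monte Carlo replicates, which is the instability the paper's simulations are designed to quantify.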
Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony
2005-12-01
Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincare (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincare plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the
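The first two CGM analysis methods listed above can be sketched in a few lines: the BG rate of change is a first difference over the sampling interval, and the Low/High BG Indices apply the published logarithmic risk transform of the BG scale before averaging the squared risk on each side. The constants follow the Kovatchev risk formula for BG in mg/dL; the sample trace is invented.

```python
import numpy as np

def bg_rate_of_change(bg, minutes_apart=5.0):
    """BG rate of change (mg/dL/min) between consecutive CGM readings."""
    return np.diff(bg) / minutes_apart

def low_high_bg_indices(bg):
    """Low/High BG Indices via the symmetrizing risk transform of the BG scale."""
    f = 1.509 * (np.log(bg) ** 1.084 - 5.381)   # f < 0 in hypo-, > 0 in hyperglycemia
    risk = 10.0 * f ** 2
    lbgi = np.mean(np.where(f < 0, risk, 0.0))
    hbgi = np.mean(np.where(f > 0, risk, 0.0))
    return lbgi, hbgi

# Hypothetical 5-minute trace descending toward hypoglycemia:
bg = np.array([180, 160, 140, 120, 100, 85, 70, 60], dtype=float)
rates = bg_rate_of_change(bg)
lbgi, hbgi = low_high_bg_indices(bg)
print(rates.min(), lbgi > hbgi)
```

The steepest observed rate here, -4 mg/dL/min, sits at the edge of the 99% sustained-rate confidence band quoted in the abstract, which is how such bands flag physiologically implausible sensor jumps.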
An Analysis of Variable-Speed Wind Turbine Power-Control Methods with Fluctuating Wind Speed
Directory of Open Access Journals (Sweden)
Seung-Il Moon
2013-07-01
Full Text Available Variable-speed wind turbines (VSWTs typically use a maximum power-point tracking (MPPT method to optimize wind-energy acquisition. MPPT can be implemented by regulating the rotor speed or by adjusting the active power. The former, termed speed-control mode (SCM, employs a speed controller to regulate the rotor, while the latter, termed power-control mode (PCM, uses an active power controller to optimize the power. They are fundamentally equivalent; however, since they use a different controller at the outer control loop of the machine-side converter (MSC controller, the time dependence of the control system differs depending on whether SCM or PCM is used. We have compared and analyzed the power quality and the power coefficient when these two different control modes were used in fluctuating wind speeds through computer simulations. The contrast between the two methods was larger when the wind-speed fluctuations were greater. Furthermore, we found that SCM was preferable to PCM in terms of the power coefficient, but PCM was superior in terms of power quality and system stability.
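The equivalence of the two MPPT modes described above comes from the optimum power curve P = k·ω³: SCM commands the optimal rotor speed for the measured wind, while PCM commands the optimal power for the measured rotor speed. The turbine parameters below are illustrative, not from the paper.

```python
import math

# Illustrative turbine constants: air density, rotor radius, peak power
# coefficient, and optimal tip-speed ratio.
RHO, R, CP_MAX, TSR_OPT = 1.225, 40.0, 0.48, 8.1
AREA = math.pi * R ** 2
K_OPT = 0.5 * RHO * AREA * CP_MAX * (R / TSR_OPT) ** 3   # P_opt = K_OPT * w^3

def scm_reference(wind_speed):
    """Speed-control mode: command the optimal rotor speed for this wind (rad/s)."""
    return TSR_OPT * wind_speed / R

def pcm_reference(rotor_speed):
    """Power-control mode: command the optimal power for this rotor speed (W)."""
    return K_OPT * rotor_speed ** 3

w = scm_reference(10.0)                  # steady 10 m/s wind
p = pcm_reference(w)                     # power at that optimal speed
p_avail = 0.5 * RHO * AREA * CP_MAX * 10.0 ** 3
print(w, p / p_avail)
```

At steady wind both references land on the same operating point (the ratio printed is exactly 1); the paper's contrast between the modes only emerges when the wind fluctuates and the outer-loop dynamics differ.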
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2001-07-01
Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
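The core advantage of permanent plots reported above, the variance reduction from positively correlated revisit counts, can be reproduced with a small Monte Carlo sketch. The densities and correlation below are invented, not the simulation settings of the paper.

```python
import numpy as np

# Estimate the change in mean density between two surveys using permanent
# plots (same plots revisited, counts correlated across years) vs temporary
# plots (independent new plots each year).
rng = np.random.default_rng(42)
n_plots, n_sims, rho = 30, 2000, 0.8

def simulate(permanent):
    ests = []
    for _ in range(n_sims):
        y1 = rng.normal(10.0, 2.0, n_plots)
        if permanent:   # year-2 counts correlated with year-1 on the same plots
            y2 = 12.0 + rho * (y1 - 10.0) \
                 + rng.normal(0.0, 2.0 * np.sqrt(1 - rho ** 2), n_plots)
        else:           # independent new plots in year 2
            y2 = rng.normal(12.0, 2.0, n_plots)
        ests.append(y2.mean() - y1.mean())
    return np.array(ests)

perm, temp = simulate(True), simulate(False)
print(perm.mean(), perm.std() < temp.std())
```

Both designs are unbiased for the true change of 2.0, but the paired differences on permanent plots cancel the shared plot-level variation, which is why permanent plots dominated across all factors in the study.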
Directory of Open Access Journals (Sweden)
Qian Wang
2017-01-01
Full Text Available Different configurations of coupling strategies greatly influence the accuracy and convergence of the simulation results in the hybrid atomistic-continuum method. This study aims to quantitatively investigate this effect and offer guidance on how to choose the proper configuration of coupling strategies in the hybrid atomistic-continuum method. We first propose a hybrid molecular dynamics (MD)-continuum solver in LAMMPS and OpenFOAM that exchanges state variables between the atomistic region and the continuum region and evaluate different configurations of coupling strategies using the sudden-start Couette flow, aiming to find the preferable configuration that delivers better accuracy and efficiency. The major findings are as follows: (1) the C→A region plays the most important role in the overlap region, and the "4-layer-1" combination achieves the best precision with a fixed width of the overlap region; (2) the data exchanging operation only needs a few sampling points close to the occasions of interactions, and decreasing the coupling exchange operations can reduce the computational load with acceptable errors; (3) the nonperiodic boundary force model with a smoothing parameter of 0.1 and a finer parameter of 20 can not only achieve the minimum disturbance near the MD-continuum interface but also keep the simulation precision.
Energy Technology Data Exchange (ETDEWEB)
Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, James [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-01
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes the (1) contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data is available, greatly improving the representation of challenges
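The two CEM parameters discussed above can be computed chronologically over all 8760 hours, as the alternate methodology advocates: curtailment is VG output exceeding load net of must-run generation, and capacity value is the VG contribution during the highest net-load hours. The synthetic load and VG profiles below are invented for illustration, not ReEDS inputs.

```python
import numpy as np

# Hourly sketch of curtailment and capacity value for a variable-generation
# (VG) fleet over one synthetic year (all values in GW).
rng = np.random.default_rng(7)
hours = 8760
load = 70 + 15 * np.sin(2 * np.pi * np.arange(hours) / 24) \
       + rng.normal(0, 3, hours)                       # diurnal load + noise
vg = np.clip(rng.normal(20, 12, hours), 0, None)       # variable generation
must_run = 30.0                                        # inflexible thermal floor

curtailment = np.maximum(0.0, vg - (load - must_run))  # VG that cannot be absorbed
delivered = vg - curtailment
net_load = load - delivered

top = np.argsort(net_load)[-100:]                      # 100 highest net-load hours
capacity_value = delivered[top].mean() / vg.mean()
curtail_rate = curtailment.sum() / vg.sum()
print(round(curtail_rate, 3), round(capacity_value, 3))
```

Because every hour is evaluated, tail events (coincident high VG and low load, or the reverse) are captured directly instead of being inferred from a representative subset of hours.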
Platelet-rich plasma differs according to preparation method and human variability.
Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut
2012-02-15
Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the level of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods (on blood samples from eight subjects with a mean age [and standard deviation] of 31.6 ± 10.9 years) were used: two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors for each method of separation differed significantly. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than platelet and white blood-cell concentrations produced by the so-called single-step PRPLP and the so-called two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. Comparing the results of the three blood draws with regard to the reliability of platelet number and cell counts, wide variations of intra-individual numbers were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in
Hauck, Yolande; Soler, Charles; Gérôme, Patrick; Vong, Rithy; Macnab, Christine; Appere, Géraldine; Vergnaud, Gilles; Pourcel, Christine
2015-07-01
Propionibacterium acnes plays a central role in the pathogenesis of acne and is responsible for severe opportunistic infections. Numerous typing schemes have been developed that allow the identification of phylotypes, but they are often insufficient to differentiate subtypes. To better understand the genetic diversity of this species and to perform epidemiological analyses, high-throughput discriminant genotyping techniques are needed. Here we describe the development of a multiple-locus variable-number tandem-repeat (VNTR) analysis (MLVA) method. Thirteen VNTRs were identified in the genome of P. acnes and were used to genotype a collection of clinical isolates. In addition, publicly available sequencing data for 102 genomes were analyzed in silico, providing an MLVA genotype. The clustering of MLVA data was in perfect congruence with whole-genome-based clustering. Analysis of the clustered regularly interspaced short palindromic repeat (CRISPR) element uncovered new spacers, a supplementary source of genotypic information. The present MLVA13 scheme and associated internet database represent a first-line genotyping assay for investigating large numbers of isolates. Particular strains may then be submitted to full genome sequencing in order to better analyze their pathogenic potential. Copyright © 2015 Elsevier B.V. All rights reserved.
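An MLVA genotype of the kind described above is simply a vector of repeat counts, one per VNTR locus, and isolates are clustered by a categorical distance over those vectors. The sketch below uses invented five-locus profiles (the real scheme has thirteen loci) to show the comparison.

```python
# MLVA sketch: each genotype is a tuple of VNTR repeat counts. The profiles
# and isolate names are hypothetical, not real P. acnes data.
profiles = {
    "isolate_A": (3, 5, 2, 7, 4),
    "isolate_B": (3, 5, 2, 6, 4),   # differs from A at one locus
    "isolate_C": (1, 2, 8, 6, 1),   # differs from A at every locus
}

def mlva_distance(p, q):
    """Fraction of VNTR loci with differing repeat counts (Hamming-like)."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

d_ab = mlva_distance(profiles["isolate_A"], profiles["isolate_B"])
d_ac = mlva_distance(profiles["isolate_A"], profiles["isolate_C"])
print(d_ab, d_ac)
```

Feeding such pairwise distances to a standard clustering algorithm reproduces the kind of dendrogram the paper checks against whole-genome-based clustering.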
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water resources research. Given the existence of hydrological dependence variability, we propose a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
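The correlation-coefficient test described above can be sketched for the first-order case: fit an AR(1) dependence component to a series, correlate it with the original series, and grade the dependence against thresholds. The thresholds below are illustrative placeholders, not the cutoffs derived in the paper, and the series is synthetic.

```python
import numpy as np

# AR(1) sketch of the dependence-significance test. For an AR(1) series the
# correlation between the series and its fitted dependence component reduces
# to the lag-1 autocorrelation, consistent with the deduced relationship.
rng = np.random.default_rng(3)
n, phi = 500, 0.7
x = np.zeros(n)
for t in range(1, n):                      # synthetic AR(1) series
    x[t] = phi * x[t - 1] + rng.normal()

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation estimate
dependence = r1 * x[:-1]                   # fitted AR(1) dependence component
r = np.corrcoef(x[1:], dependence)[0, 1]   # corr(original, dependence part)

# Hypothetical grading thresholds (placeholders, not the paper's values):
grades = [(0.2, "no"), (0.4, "weak"), (0.6, "mid"), (0.8, "strong"), (1.01, "drastic")]
grade = next(g for thr, g in grades if abs(r) < thr)
print(round(r, 2), grade)
```

With phi = 0.7 the test grades the series as strongly dependent, matching the intuition that the grade tracks the magnitude of the low-order autocorrelations.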