International Nuclear Information System (INIS)
Flip, A.; Pang, H.F.; D'Angelo, A.
1995-01-01
Due to persistent uncertainties of ∼5% (the uncertainty, here and hereafter, is at 1σ) in the prediction of the 'reactivity scale' (β_eff) for a fast power reactor, an international project was recently initiated in the framework of the OECD/NEA activities for the reevaluation, new measurement and integral benchmarking of delayed neutron (DN) data and related kinetic parameters (principally β_eff). Considering that the major part of this uncertainty is due to uncertainties in the DN yields (ν_d), and given the difficulty of further improving the precision of differential (e.g. Keepin's method) measurements, an international cooperative strategy was adopted, aiming at extracting and consistently interpreting information from both differential (nuclear) and integral (in-reactor) measurements. The main problem arises on the integral side; the idea was therefore to perform β_eff-like measurements (both deterministic and noise) in 'clean' assemblies. The 'clean' calculational context permitted the authors to develop a theory that links this integral experimental level explicitly with the differential one, via a unified 'Master Model' which relates ν_d and measurable quantities (on both levels) linearly. The combined error analysis is consequently simplified considerably and the final uncertainty drastically reduced (theoretically, by a factor of √3). On the other hand, the same theoretical development leading to the 'Master Model' also resulted in a structured scheme of approximations of the general (stochastic) Boltzmann equation, allowing a consistent analysis of the large range of measurements concerned (stochastic, dynamic, static, ...). This paper focuses on the main results of this theoretical development and its application to the analysis of the preliminary results of the BERENICE program (β_eff measurements in MASURCA, the first assembly, in CADARACHE, France).
Integrating the Toda Lattice with Self-Consistent Source via Inverse Scattering Method
International Nuclear Information System (INIS)
Urazboev, Gayrat
2012-01-01
In this work, it is shown that the solutions of the Toda lattice with a self-consistent source can be found by the inverse scattering method for the discrete Sturm-Liouville operator. For the problem considered, the one-soliton solution is obtained.
International Nuclear Information System (INIS)
Leyendecker, Sigrid; Betsch, Peter; Steinmann, Paul
2008-01-01
In the present work, the unified framework for the computational treatment of rigid bodies and nonlinear beams developed by Betsch and Steinmann (Multibody Syst. Dyn. 8, 367-391, 2002) is extended to the realm of nonlinear shells. In particular, a specific constrained formulation of shells is proposed which leads to semi-discrete equations of motion characterized by a set of differential-algebraic equations (DAEs). The DAEs provide a uniform description for rigid bodies, semi-discrete beams and shells and, consequently, flexible multibody systems. The constraints may be divided into two classes: (i) internal constraints, which are intimately connected with the assumption of rigidity of the bodies, and (ii) external constraints, related to the presence of joints in a multibody framework. The present approach thus circumvents the use of rotational variables throughout the whole time discretization, facilitating the design of energy-momentum methods for flexible multibody dynamics. After the discretization has been completed, a size reduction of the discrete system is performed by eliminating the constraint forces. Numerical examples dealing with a spatial slider-crank mechanism and with intersecting shells illustrate the performance of the proposed method.
Consistency among integral measurements of aggregate decay heat power
Energy Technology Data Exchange (ETDEWEB)
Takeuchi, H.; Sagisaka, M.; Oyamatsu, K.; Kukita, Y. [Nagoya Univ. (Japan)]
1998-03-01
Persisting discrepancies between summation calculations and integral measurements force us to assume large uncertainties in the recommended decay heat power. In this paper, we develop a hybrid method to calculate the decay heat power of a fissioning system from those of different fissioning systems. This method is then applied to examine the consistency among measured decay heat powers of ²³²Th, ²³³U, ²³⁵U, ²³⁸U and ²³⁹Pu at YAYOI. The consistency among the measured values is found to be satisfactory for the β component and fairly good for the γ component, except for cooling times longer than 4000 s. (author)
Discrete differential geometry. Consistency as integrability
Bobenko, Alexander I.; Suris, Yuri B.
2005-01-01
A new field of discrete differential geometry is presently emerging on the border between differential and discrete geometry. Whereas classical differential geometry investigates smooth geometric shapes (such as surfaces), and discrete geometry studies geometric shapes with a finite number of elements (such as polyhedra), discrete differential geometry aims at the development of discrete equivalents of the notions and methods of smooth surface theory. Current interest in this field derives not ...
A method for consistent precision radiation therapy
International Nuclear Information System (INIS)
Leong, J.
1985-01-01
Using a meticulous setup procedure in which repeated portal films were taken before each treatment until satisfactory portal verifications were obtained, a high degree of precision in patient positioning was achieved. A fluctuation from treatment to treatment, over 11 treatments, of less than ±0.10 cm (S.D.) for anatomical points inside the treatment field was obtained. This, however, only applies to the specific anatomical points selected for this positioning procedure and does not apply to all points within the portal. We have generalized this procedure and have suggested a means by which any target volume can be consistently positioned, which may approach this degree of precision. (orig.)
Consistency of differential and integral thermonuclear neutronics data
International Nuclear Information System (INIS)
Reupke, W.A.
1978-01-01
To increase the accuracy of the neutronics analysis of nuclear reactors, physicists and engineers have employed a variety of techniques, including the adjustment of multigroup differential data to improve consistency with integral data. Of the various adjustment strategies, a generalized least-squares procedure which adjusts the combined differential and integral data can significantly improve the accuracy of neutronics calculations compared to calculations employing only differential data. This investigation analyzes 14 MeV neutron-driven integral experiments, using a more extensively developed methodology and a newly developed computer code, to extend the domain of adjustment from the energy range of fission reactors to the energy range of fusion reactors
Integrated communications: From one look to normative consistency
DEFF Research Database (Denmark)
Torp, Simon
2009-01-01
ambitious interpretations of the concept the integration endeavour extends from the external integration of visual design to the internal integration of the organization's culture and "soul". Design/methodology/approach The paper is based on a critical and thematic reading of the integrated marketing...
Quasiparticle self-consistent GW method: a short summary
International Nuclear Information System (INIS)
Kotani, Takao; Schilfgaarde, Mark van; Faleev, Sergey V; Chantis, Athanasios
2007-01-01
We have developed the quasiparticle self-consistent GW method (QSGW), a new self-consistent method to calculate the electronic structure within the GW approximation (GWA). The method is formulated on the idea of a self-consistent perturbation: the non-interacting Green function G_0, which is the starting point for the GWA to obtain G, is determined self-consistently so as to minimize the perturbative correction generated by the GWA. After self-consistency is attained, we have G_0, W (the screened Coulomb interaction) and G, all mutually consistent. This G_0 can be interpreted as the optimum non-interacting propagator for the quasiparticles. We summarize some theoretical arguments to justify QSGW, and then survey the results obtained so far: e.g., band gaps for normal semiconductors are predicted to a precision of 0.1-0.3 eV, and self-consistency including the off-diagonal part is required for NiO and MnO. Some disagreements with experiments remain; however, they are very systematic and can be explained by the neglect of excitonic effects.
Linear augmented plane wave method for self-consistent calculations
International Nuclear Information System (INIS)
Takeda, T.; Kuebler, J.
1979-01-01
O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)
An algebraic method for constructing stable and consistent autoregressive filters
International Nuclear Information System (INIS)
Harlim, John; Hong, Hoon; Robbins, Jacob L.
2015-01-01
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams-Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model, and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden-Julian Oscillation, a dominant tropical atmospheric wave pattern.
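As an illustrative aside, the two requirements named in this abstract can be sketched in a few lines of Python: classical AR stability (eigenvalues of the companion matrix inside the unit circle) and an Adams-Bashforth order-two consistency condition. This is a minimal sketch for a scalar linear test problem dx/dt = λx; the helper `ab2_coefficients` is an assumed simplification, not the paper's general parameterization.

```python
import numpy as np

def is_stable(phi):
    """Classical AR stability: all eigenvalues of the companion matrix
    of the AR coefficients phi lie strictly inside the unit circle."""
    p = len(phi)
    companion = np.zeros((p, p))
    companion[0, :] = phi
    companion[1:, :-1] = np.eye(p - 1)
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

def ab2_coefficients(lam, dt):
    """AR(2) coefficients consistent with the order-two Adams-Bashforth
    discretization of dx/dt = lam * x:
        x_{n+1} = x_n + dt * (1.5 * lam * x_n - 0.5 * lam * x_{n-1})
    (an illustrative scalar case, not the paper's general construction)."""
    return np.array([1.0 + 1.5 * lam * dt, -0.5 * lam * dt])

phi = ab2_coefficients(lam=-1.0, dt=0.1)
print(phi, is_stable(phi))
```

For a decaying signal (λ < 0) and a small time step the resulting AR(2) model is stable, while too large a step violates the stability condition, which mirrors the paper's point that the discretization interval controls the existence of a stable, consistent model.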
Consistency or Discrepancy? Rethinking Schools from Organizational Hypocrisy to Integrity
Kiliçoglu, Gökhan
2017-01-01
Consistency in statements, decisions and practices is highly important for both organization members and the image of an organization. It is expected from organizations, especially from their administrators, to "walk the talk"--in other words, to try to practise what they preach. However, in the process of gaining legitimacy and adapting…
Two new integrable couplings of the soliton hierarchies with self-consistent sources
International Nuclear Information System (INIS)
Tie-Cheng, Xia
2010-01-01
A kind of integrable coupling of soliton equation hierarchies with self-consistent sources associated with s̃l(4) has been presented (Yu F J and Li L 2009 Appl. Math. Comput. 207 171; Yu F J 2008 Phys. Lett. A 372 6613). Based on this method, we construct two integrable couplings of the soliton hierarchy with self-consistent sources by using the loop algebra s̃l(4). In this paper, we also point out that there are some errors in these references; we have corrected them and set up a new formula. The method can be generalized to other soliton hierarchies with self-consistent sources. (general)
Consistent forcing scheme in the cascaded lattice Boltzmann method
Fei, Linlin; Luo, Kai Hong
2017-11-01
In this paper, we give an alternative derivation for the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM degrades into an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the nonslip rule, the second-order convergence rate in space, and the property of isotropy for the consistent forcing scheme is demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between MRT LBM and CLBM under a general framework.
Consistent Posttest Calculations for LOCA Scenarios in LOBI Integral Facility
Directory of Open Access Journals (Sweden)
F. Reventós
2012-01-01
Integral test facilities (ITFs) are one of the main tools for the validation of best-estimate thermal-hydraulic system codes. The experimental data are also of great value when compared to the experiment-scaled conditions in a full NPP. LOBI was a single-loop plus a triple-loop (simulated by one loop) test facility, electrically heated to simulate a 1300 MWe PWR. The scaling factor was 712 for the core power, volume, and mass flow. The primary and secondary sides contained all main active elements. Tests were performed to characterize the phenomenologies relevant to large- and small-break LOCAs and special transients in PWRs. The paper presents the results of three posttest calculations of LOBI experiments. The selected experiments are BL-30, BL-44, and A1-84: LOCA scenarios of different break sizes and with different availability of safety injection components. The goal of the analysis is to improve knowledge of the phenomena that occurred in the facility, in order to use it in further studies related to qualifying nodalizations of actual plants or to establishing accuracy databases for uncertainty methodologies. An example of the procedure for implementing changes in a common nodalization, valid for simulating tests performed in a specific ITF, is presented along with its confirmation based on posttest results.
Bootstrap embedding: An internally consistent fragment-based method
Energy Technology Data Exchange (ETDEWEB)
Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States)
2016-08-21
Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large, complicated system into smaller, manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one-dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find that Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high-accuracy treatment of electron correlation in large molecular systems.
Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.
Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth
2017-02-01
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
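The basic quantity this abstract analyzes, the squared Euclidean distance between k-mer frequency vectors, can be sketched as follows. This is an illustrative Python fragment: `kmer_vector` is a hypothetical helper, and none of the paper's model-based distance corrections are applied.

```python
from itertools import product
import numpy as np

def kmer_vector(seq, k=2, alphabet="ACGT"):
    """Frequency vector of all k-mers over the alphabet, in a fixed
    lexicographic order (hypothetical helper for illustration)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return np.array([counts[km] / total for km in kmers])

def squared_euclidean(u, v):
    """Squared Euclidean distance between two k-mer frequency vectors."""
    return float(np.sum((u - v) ** 2))

d = squared_euclidean(kmer_vector("ACGTACGT"), kmer_vector("ACGTACGA"))
```

The paper's point is that using `d` directly as a proxy for evolutionary distance can be statistically inconsistent; the corrected, model-based distances it derives are what restore consistent tree inference.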
Statistical Methods in Integrative Genomics
Richardson, Sylvia; Tseng, George C.; Sun, Wei
2016-01-01
Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531
Diverse methods for integrable models
Fehér, G.
2017-01-01
This thesis is centered around three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics; the last chapter describes an integrable quantum chain.
A self-consistent nodal method in response matrix formalism for the multigroup diffusion equations
International Nuclear Information System (INIS)
Malambu, E.M.; Mund, E.H.
1996-01-01
We develop a nodal method for the multigroup diffusion equations, based on the transverse integration procedure (TIP). The efficiency of the method rests upon the convergence properties of a high-order multidimensional nodal expansion and upon numerical implementation aspects. The discrete 1D equations are cast in response matrix formalism. The derivation of the transverse leakage moments is self-consistent, i.e., it does not require additional assumptions. An outstanding feature of the method lies in the linear spatial shape of the local transverse leakage for the first-order scheme. The method is described in the two-dimensional case and validated on some classical benchmark problems. (author)
Don't Trust the Cloud, Verify: Integrity and Consistency for Cloud Object Stores
Brandenburger, Marcus; Cachin, Christian; Knežević, Nikola
2015-01-01
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud ob...
Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics
Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.
2017-11-01
We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.
Fully consistent CFD methods for incompressible flow computations
DEFF Research Database (Denmark)
Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.
2014-01-01
Nowadays, collocated-grid-based CFD methods are among the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods, special attention must be paid to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...
Simulating variable-density flows with time-consistent integration of Navier-Stokes equations
Lu, Xiaoyi; Pantano, Carlos
2017-11-01
In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation to solve pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations for turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.
Vujačić, Ivan; Dattner, Itai
In this paper we use the sieve framework to prove consistency of the ‘direct integral estimator’ of parameters for partially observed systems of ordinary differential equations, which are commonly used for modeling dynamic processes.
Efficient 3D/1D self-consistent integral-equation analysis of ICRH antennae
International Nuclear Information System (INIS)
Maggiora, R.; Vecchi, G.; Lancellotti, V.; Kyrytsya, V.
2004-01-01
This work presents a comprehensive account of the theory and implementation of a method for the self-consistent numerical analysis of plasma-facing ion-cyclotron resonance heating (ICRH) antenna arrays. The method is based on the integral-equation formulation of the boundary-value problem, solved via a weighted-residual scheme. The antenna geometry (including Faraday shield bars and a recess box) is fairly general and three-dimensional (3D), and the plasma is in the one-dimensional (1D) 'slab' approximation; finite-Larmor-radius effects, as well as plasma density and temperature gradients, are considered. Feeding via the voltages in the access coaxial lines is self-consistently accounted for throughout, and the impedance or scattering matrix of the antenna array is obtained therefrom. The problem is formulated in both the spatial (physical) and spectral (wavenumber) domains, which allows the extraction and simple handling of the terms that slow the convergence in the spectral domain usually employed. This paper includes validation tests of the developed code against measured data, both in vacuo and in the presence of plasma. An example of application to a complex geometry is also given. (author)
Integral equation methods for electromagnetics
Volakis, John
2012-01-01
This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo
Method used to test the imaging consistency of binocular camera's left-right optical system
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional testing procedures for optical systems lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from the multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a grayscale constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging-consistency testing of binocular cameras. When the 3σ distribution of the grayscale difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging-consistency testing of binocular cameras.
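The acceptance criterion described in this abstract can be sketched numerically. The following Python fragment is a minimal illustration only: the segmentation and boundary-matching steps are omitted, the images are synthetic, and the interpretation of "5%" as a fraction of an 8-bit full scale (255) is an assumption.

```python
import numpy as np

def imaging_consistency(left, right):
    """Standard deviation sigma of the grayscale difference D(x, y)
    between corresponding pixels of the left and right images."""
    D = left.astype(float) - right.astype(float)
    return float(np.std(D))

# Synthetic stand-ins for the two images of the same integrating-sphere scene.
rng = np.random.default_rng(0)
left = rng.integers(100, 110, size=(64, 64))
right = left + rng.integers(-1, 2, size=(64, 64))  # small per-pixel mismatch

sigma = imaging_consistency(left, right)
# Acceptance in the spirit of the paper: 3*sigma within 5% of full scale.
ok = 3 * sigma <= 0.05 * 255
```

With a near-identical pair of images the 3σ criterion passes comfortably; a systematic gain or vignetting difference between the two channels would inflate σ and fail it.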
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used… In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling…; an important part of this technique is the attaching of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiating the whole integration…
Thermodynamically self-consistent integral equations and the structure of liquid metals
International Nuclear Information System (INIS)
Pastore, G.; Kahl, G.
1987-01-01
We discuss the application of the new thermodynamically self-consistent integral equations to the determination of the structural properties of liquid metals. We present a detailed comparison of the structure (S(q) and g(r)) for models of liquid alkali metals, as obtained from two thermodynamically self-consistent integral equations, with some published exact computer simulation results; the range of states extends from the triple point to the expanded metal. The theories which only impose thermodynamic self-consistency, without any fitting of external data, show excellent agreement with the simulation results, thus demonstrating that this new type of integral equation is definitely superior to the conventional ones (hypernetted chain, Percus-Yevick, mean spherical approximation, etc.). (author)
Energy Technology Data Exchange (ETDEWEB)
Myrzakulov, R.; Mamyrbekova, G.K.; Nugmanova, G.N.; Yesmakhanova, K.R. [Eurasian International Center for Theoretical Physics and Department of General and Theoretical Physics, Eurasian National University, Astana 010008 (Kazakhstan); Lakshmanan, M., E-mail: lakshman@cnld.bdu.ac.in [Centre for Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirapalli 620 024 (India)
2014-06-13
Motion of curves and surfaces in R³ leads to nonlinear evolution equations which are often integrable. They are also intimately connected to the dynamics of spin chains in the continuum limit and to integrable soliton systems through geometric and gauge-symmetric connections/equivalence. Here we point out that a more general situation, in which the curves evolve in the presence of additional self-consistent vector potentials, can lead to interesting generalized spin systems with self-consistent potentials or soliton equations with self-consistent potentials. We obtain the general form of the evolution equations of the underlying curves and report specific examples of generalized spin chains and soliton equations. These include the principal chiral model and various Myrzakulov spin equations in (1+1) dimensions, and their geometrically equivalent generalized nonlinear Schrödinger (NLS) family of equations, including the Hirota-Maxwell-Bloch equations, all in the presence of self-consistent potential fields. The associated gauge-equivalent Lax pairs are also presented to confirm their integrability. Highlights: • Geometry of continuum spin chains with self-consistent potentials explored. • Mapping on moving space curves in R³ in the presence of potential fields carried out. • Equivalent generalized nonlinear Schrödinger (NLS) family of equations identified. • Integrability of identified nonlinear systems proved by deducing appropriate Lax pairs.
Efficient orbit integration by manifold correction methods.
Fukushima, Toshio
2005-12-01
Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine-epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
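The core trick of the manifold correction can be sketched with a toy Kepler integrator. This is an illustrative sketch, not Fukushima's production scheme: it assumes a two-body problem with GM = 1 and uses the simplest correction, rescaling the velocity after every step so the orbital energy is restored exactly.

```python
import numpy as np

def accel(r):
    # Kepler acceleration with GM = 1
    return -r / np.linalg.norm(r)**3

def rk4_step(r, v, h):
    # one classical RK4 step for the pair (position, velocity)
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5*h*k1v, accel(r + 0.5*h*k1r)
    k3r, k3v = v + 0.5*h*k2v, accel(r + 0.5*h*k2r)
    k4r, k4v = v + h*k3v, accel(r + h*k3r)
    return (r + (h/6)*(k1r + 2*k2r + 2*k3r + k4r),
            v + (h/6)*(k1v + 2*k2v + 2*k3v + k4v))

def energy(r, v):
    return 0.5*np.dot(v, v) - 1.0/np.linalg.norm(r)

# circular orbit: r = (1, 0), v = (0, 1), so E0 = -0.5
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E0 = energy(r, v)
h = 0.01
for _ in range(10000):
    r, v = rk4_step(r, v, h)
    # manifold correction: rescale the velocity so the orbital
    # energy of the integrated state matches E0 exactly
    s = np.sqrt(2.0*(E0 + 1.0/np.linalg.norm(r)) / np.dot(v, v))
    v = s*v

print(abs(energy(r, v) - E0))   # held at round-off level by the correction
```

The uncorrected RK4 energy error would drift secularly; the projection back onto the energy manifold removes that drift at negligible cost.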
Covariant and consistent anomalies in two dimensions in path-integral formulation
International Nuclear Information System (INIS)
Joglekar, S.D.; Saini, G.
1993-01-01
We give a definition of a one-parameter family of regularized chiral currents in a chiral non-Abelian gauge theory in two dimensions in the path-integral formulation. We show that the covariant and consistent currents are obtained from this family by selecting two specific values of the free parameter, and thus our regularization interpolates between the two. Our procedure uses chiral bases constructed from eigenfunctions of the same operator for ψ L and anti ψ L . Definition of the integration measure and regularization is done in terms of the same Hermitian operator D α =∂+iαA. The covariant and consistent currents (and indeed the entire family) are classically conserved. Differences from previous works are explained; in particular, the anomaly in the general basis does differ from the Jacobian contribution. (orig.)
International Nuclear Information System (INIS)
Galán, J; Verleysen, P; Lebensohn, R A
2014-01-01
A new algorithm for the solution of the deformation of a polycrystalline material using a self-consistent scheme, and its integration as part of the finite element software Abaqus/Standard are presented. The method is based on the original VPSC formulation by Lebensohn and Tomé and its integration with Abaqus/Standard by Segurado et al. The new algorithm has been implemented as a set of Fortran 90 modules, to be used either from a standalone program or from Abaqus subroutines. The new implementation yields the same results as VPSC7, but with a significantly better performance, especially when used in multicore computers. (paper)
ASPECTS OF INTEGRATION MANAGEMENT METHODS
Directory of Open Access Journals (Sweden)
Artemy Varshapetian
2015-10-01
Full Text Available For manufacturing companies to succeed in today's unstable economic environment, it is necessary to restructure the main components of their activities: designing innovative products, production using modern reconfigurable manufacturing systems, a business model that takes into account the global strategy, and management methods using modern management models and tools. The first three components are discussed in numerous publications, for example (Koren, 2010), and are therefore not considered in the article. A large number of publications are devoted to the methods and tools of production management, for example (Halevi, 2007). On this basis the article discusses the possibility of integrating only the three methods that have received the widest use in recent years, namely: the Six Sigma method - SS (George et al., 2005) and its supplement, Design for Six Sigma - DFSS (Taguchi, 2003); Lean production, transformed with development into "Lean management" and further into "Lean thinking" - Lean (Hirano et al., 2006); and the Theory of Constraints developed by E. Goldratt - TOC (Dettmer, 2001). The article investigates some aspects of this integration: applications in diverse fields, positive features, changes in management structure, etc.
Road Service Performance Based On Integrated Road Design Consistency (IC Along Federal Road F0023
Directory of Open Access Journals (Sweden)
Zainal Zaffan Farhana
2017-01-01
Full Text Available Road accidents are one of the world's largest public health and injury prevention problems. In Malaysia, the west coast area has been identified as having the highest motorcycle fatalities, and road accidents are one of the leading causes of death and injury in the country. The most common fatal accident is between a motorcycle and a passenger car. Most fatal accidents happen on Federal roads, with 44 fatal accidents reported, equal to 29%. Lack of road geometric design consistency, whereby drivers make errors induced by the road geometric features, has kept the accident count rising in Malaysia. Hence, models based on operating speed are used to calculate the design consistency of a road. The profiles were obtained as continuous speed profiles using GPS data. The continuous operating speed profile models were plotted based on the operating speed model (85th percentile). The study was conducted on F0023 from km 16 to km 20. The purpose of design consistency analysis is to establish the relationship between operating speed and the elements of geometric design on the road. As a result of the integrated design consistency analysis for motorcycles and cars along a segment of F0023, the threshold shows poor design quality for both motorcycles and cars.
Methods for enhancing numerical integration
International Nuclear Information System (INIS)
Doncker, Elise de
2003-01-01
We give a survey of common strategies for numerical integration (adaptive, Monte Carlo, quasi-Monte Carlo) and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, or extrapolation and convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications
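The different convergence behaviors of plain Monte Carlo and low-discrepancy point sets can be seen in a small experiment. This is an illustrative sketch, not taken from the survey; the van der Corput sequence and the test integrand are assumptions chosen so the example is self-contained.

```python
import numpy as np

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) point: a simple 1-D low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

f = lambda x: 4.0 / (1.0 + x * x)   # integral over [0, 1] is pi
N = 4096

# plain Monte Carlo: error decays like N**-0.5
rng = np.random.default_rng(0)
mc = f(rng.random(N)).mean()

# quasi-Monte Carlo on a low-discrepancy point set: roughly O(log(N)/N)
pts = np.array([van_der_corput(i) for i in range(1, N + 1)])
qmc = f(pts).mean()

print(abs(mc - np.pi), abs(qmc - np.pi))
```

At the same number of integrand evaluations, the low-discrepancy estimate is typically one to two orders of magnitude closer to the true value, consistent with the discrepancy-based error bounds mentioned above.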
Comment on the consistency of truncated nonlinear integral equation based theories of freezing
International Nuclear Information System (INIS)
Cerjan, C.; Bagchi, B.; Rice, S.A.
1985-01-01
We report the results of two studies of aspects of the consistency of truncated nonlinear integral equation based theories of freezing: (i) We show that the self-consistent solutions to these nonlinear equations are unfortunately sensitive to the level of truncation. For the hard sphere system, if the Wertheim--Thiele representation of the pair direct correlation function is used, the inclusion of part but not all of the triplet direct correlation function contribution, as has been common, worsens the predictions considerably. We also show that the convergence of the solutions found, with respect to number of reciprocal lattice vectors kept in the Fourier expansion of the crystal singlet density, is slow. These conclusions imply great sensitivity to the quality of the pair direct correlation function employed in the theory. (ii) We show the direct correlation function based and the pair correlation function based theories of freezing can be cast into a form which requires solution of isomorphous nonlinear integral equations. However, in the pair correlation function theory the usual neglect of the influence of inhomogeneity of the density distribution on the pair correlation function is shown to be inconsistent to the lowest order in the change of density on freezing, and to lead to erroneous predictions
An integrating factor matrix method to find first integrals
International Nuclear Information System (INIS)
Saputra, K V I; Quispel, G R W; Van Veen, L
2010-01-01
In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
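As a hedged illustration of what a first integral provides, the classical two-dimensional Lotka-Volterra system (without the constant terms treated in the paper) conserves H = d·x - c·ln(x) + b·y - a·ln(y); a numerical trajectory should hold this quantity fixed. The parameters and initial condition below are hypothetical.

```python
import numpy as np

# classical 2-D Lotka-Volterra: dx/dt = x*(a - b*y), dy/dt = y*(-c + d*x)
a, b, c, d = 1.0, 0.5, 0.75, 0.25

def rhs(s):
    x, y = s
    return np.array([x * (a - b * y), y * (-c + d * x)])

def first_integral(s):
    # conserved quantity H = d*x - c*ln(x) + b*y - a*ln(y)
    x, y = s
    return d * x - c * np.log(x) + b * y - a * np.log(y)

s = np.array([2.0, 1.0])
H0 = first_integral(s)
h = 1e-3
for _ in range(20000):          # integrate to t = 20 with classical RK4
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2)
    k4 = rhs(s + h * k3)
    s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(first_integral(s) - H0))   # stays near round-off: H is a first integral
```

That H is conserved follows from dH/dt = (d - c/x)·x(a - by) + (b - a/y)·y(-c + dx) = (dx - c)(a - by) - (a - by)(dx - c) = 0.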
International Nuclear Information System (INIS)
Lino, A.T.; Takahashi, E.K.; Leite, J.R.; Ferraz, A.C.
1988-01-01
The band structure of metallic sodium is calculated, using for the first time the self-consistent field variational cellular method. In order to implement the self-consistency in the variational cellular theory, the crystal electronic charge density was calculated within the muffin-tin approximation. The comparison between our results and those derived from other calculations leads to the conclusion that the proposed self-consistent version of the variational cellular method is fast and accurate. (author) [pt
Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling
Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang
2017-12-01
Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, the MHD equilibrium solver, and the source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and the resultant fluxes of all channels with increasing Z_eff can be traced to the impurity profile change by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and the increasing radiation that leads to a peak in the fusion performance at Z_eff ≈ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for best fusion performance.
Directory of Open Access Journals (Sweden)
Liyan Zhang
2017-01-01
Full Text Available The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, compared with a two-level hybrid model, a three-level multiresolution hybrid model has been chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on set theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) Volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision and smaller error, and the dispersion of its results is smaller. Compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and their errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.
A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method
Jose Miguel Albala-Bertrand
2001-01-01
There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.
International Nuclear Information System (INIS)
Pitkaenen, P.; Loefman, J.; Korkealaakso, J.; Koskinen, L.; Ruotsalainen, P.; Hautojaervi, A.; Aeikaes, T.
1999-01-01
In the assessment of the suitability and safety of a geological repository for radioactive waste, understanding the fluid flow at a site is essential. In order to build confidence in the assessment of the hydrogeological performance of a site in various conditions, integration of hydrological and hydrogeochemical methods and studies provides the primary means for investigating the evolution that has taken place in the past, and for predicting future conditions at the potential disposal site. A systematic geochemical sampling campaign was started at the beginning of the 1990s in the Finnish site investigation programme. This enabled the integration and evaluation of site-scale hydrogeochemical and groundwater flow models to begin. Hydrogeochemical information has been used to screen relevant external processes and variables for the definition of the initial and boundary conditions in hydrological simulations. The results obtained from interpreting and modelling the hydrogeochemical evolution have been employed in testing the hydrogeochemical consistency of conceptual flow models. Integration and testing of flow models with hydrogeochemical information are considered to improve significantly the hydrogeological understanding of a site and to increase confidence in conceptual hydrogeological models. (author)
Iterative quantum-classical path integral with dynamically consistent state hopping
Energy Technology Data Exchange (ETDEWEB)
Walters, Peter L.; Makri, Nancy [Department of Chemistry, University of Illinois, Urbana, Illinois 61801 (United States)
2016-01-28
We investigate the convergence of iterative quantum-classical path integral calculations in sluggish environments strongly coupled to a quantum system. The number of classical trajectories, thus the computational cost, grows rapidly (exponentially, unless filtering techniques are employed) with the memory length included in the calculation. We argue that the choice of the (single) trajectory branch during the time preceding the memory interval can significantly affect the memory length required for convergence. At short times, the trajectory branch associated with the reactant state improves convergence by eliminating spurious memory. We also introduce an instantaneous population-based probabilistic scheme which introduces state-to-state hops in the retained pre-memory trajectory branch, and which is designed to choose primarily the trajectory branch associated with the reactant at early times, but to favor the product state more as the reaction progresses to completion. Test calculations show that the dynamically consistent state hopping scheme leads to accelerated convergence and a dramatic reduction of computational effort.
Integral methods in low-frequency electromagnetics
Solin, Pavel; Karban, Pavel; Ulrych, Bohus
2009-01-01
A modern presentation of integral methods in low-frequency electromagnetics This book provides state-of-the-art knowledge on integral methods in low-frequency electromagnetics. Blending theory with numerous examples, it introduces key aspects of the integral methods used in engineering as a powerful alternative to PDE-based models. Readers will get complete coverage of: The electromagnetic field and its basic characteristics An overview of solution methods Solutions of electromagnetic fields by integral expressions Integral and integrodifferential methods
Measuring consistency in translation memories: a mixed-methods case study
Moorkens, Joss
2012-01-01
Introduced in the early 1990s, translation memory (TM) tools have since become widely used as an aid to human translation based on commonly‐held assumptions that they save time, reduce cost, and maximise consistency. The purpose of this research is twofold: it aims to develop a method for measuring consistency in TMs; and it aims to use this method to interrogate selected TMs from the localisation industry in order to find out whether the use of TM tools does, in fact, promote consistency in ...
An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method
Jose Miguel Albala-Bertrand
2003-01-01
There are alternative methods of estimating capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of some convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used ...
Automatic numerical integration methods for Feynman integrals through 3-loop
International Nuclear Information System (INIS)
De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K
2015-01-01
We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the PARINT package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. PARINT is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
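The iterated-integration idea can be sketched with SciPy, whose `quad` routine wraps QUADPACK's QAGS. The integrand below is a hypothetical example with an integrable corner singularity, not one of the paper's Feynman integrals; it shows QAGS tolerating a boundary singularity at each level of the iteration.

```python
import numpy as np
from scipy.integrate import quad

# Iterated integration: for each fixed x, integrate over y with QAGS,
# then integrate the resulting one-dimensional function over x.
# Test integrand: f(x, y) = 1/sqrt(x + y) on the unit square,
# singular (but integrable) at the corner (0, 0).
def inner(x):
    val, _ = quad(lambda y: (x + y) ** -0.5, 0.0, 1.0)
    return val

outer, abserr = quad(inner, 0.0, 1.0)

# closed form for comparison: (8/3) * (sqrt(2) - 1)
exact = (8.0 / 3.0) * (np.sqrt(2.0) - 1.0)
print(outer, exact, abserr)
```

For the UV-divergent cases in the paper this nesting is combined with extrapolation in a regularization parameter, which the sketch does not attempt.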
On Consistency Test Method of Expert Opinion in Ecological Security Assessment.
Gong, Zaiwu; Wang, Lihong
2017-09-04
Ecological security assessment is of great value for proactive security management and early safety warning. In the comprehensive evaluation of regional ecological security with the participation of experts, the experts' individual judgment levels and abilities, and the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. In the end, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments.
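A minimal sketch of one ingredient of such measures, assuming additive transitivity p_ik = p_ij + p_jk - 0.5 as the consistency property; the matrices are hypothetical, and the paper's optimization-based measures are more elaborate than this direct deviation count.

```python
import numpy as np

def additive_consistency_deviation(P):
    """Mean absolute violation of additive transitivity
    p_ik = p_ij + p_jk - 0.5 over all triples (i, j, k)."""
    n = P.shape[0]
    dev, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                dev += abs(P[i, j] + P[j, k] - 0.5 - P[i, k])
                count += 1
    return dev / count

# a perfectly additively consistent FPR built from hypothetical utilities u
u = np.array([0.9, 0.6, 0.4, 0.1])
P_cons = 0.5 + 0.5 * (u[:, None] - u[None, :])   # entries stay in [0, 1]

# the same FPR with random perturbations, breaking consistency
rng = np.random.default_rng(1)
P_noisy = np.clip(P_cons + rng.uniform(-0.1, 0.1, size=(4, 4)), 0.0, 1.0)

print(additive_consistency_deviation(P_cons))    # essentially zero
print(additive_consistency_deviation(P_noisy))   # clearly positive
```

A distance of this kind between a judgment matrix and its nearest consistent matrix is what the paper's consistency and consensus measures formalize.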
Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties
DEFF Research Database (Denmark)
Frank, Lars; Ulslev Pedersen, Rasmus
2014-01-01
In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...
Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P
2017-09-01
The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues. This is accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise (SWiPs) were identified. The disparity between SWiPs identified by the two SPFs was evident: 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
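The empirical Bayes adjustment used above can be sketched with the standard HSM weighting w = 1/(1 + k·μ), where μ is the SPF prediction over the study period and k is the negative binomial overdispersion parameter. The site data below are hypothetical, and real screening would work from fitted SPFs rather than hand-picked numbers.

```python
import numpy as np

def eb_expected(predicted, observed, k):
    """Empirical Bayes estimate of long-run crash frequency per site.
    predicted: SPF prediction over the study period; observed: crash count;
    k: negative binomial overdispersion parameter of the SPF."""
    w = 1.0 / (1.0 + k * predicted)          # weight on the SPF prediction
    return w * predicted + (1.0 - w) * observed

# hypothetical sites: SPF-predicted and observed crashes over 5 years
predicted = np.array([3.2, 1.1, 6.0, 2.4])
observed = np.array([7, 1, 5, 9])
eb = eb_expected(predicted, observed, k=0.8)

# rank sites by excess expected crashes (EB estimate minus SPF prediction)
excess = eb - predicted
ranking = np.argsort(excess)[::-1]
print(ranking)
```

Running the screening with two SPFs, one with and one without design consistency covariates, and comparing the resulting rankings is the comparison step the paper describes.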
RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS
Institute of Scientific and Technical Information of China (English)
Sun Youchao; Shi Jun
2004-01-01
The reliability assessment of the unit-system at two adjacent levels is the most important content in the reliability multi-level synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity together with the principle of equivalence of information quantity, an entropy method of data information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived based on the principle of information quantity equivalence. The general models of entropy-method synthesis assessment for approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. The applications of the entropy method are discussed by way of practical examples. Compared with the traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.
Consistency analysis of subspace identification methods based on a linear regression approach
DEFF Research Database (Denmark)
Knudsen, Torben
2001-01-01
In the literature, results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counterexample showing inconsistency under these assumptions and then gives new, stricter sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least squares approach, this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions which are sufficient for consistency and which include any model structure...
Energy Technology Data Exchange (ETDEWEB)
Tingjin, Liu; Zhengjun, Sun [Chinese Nuclear Data Center, Beijing, BJ (China)
1996-06-01
To meet the requirements of nuclear engineering, especially fusion reactors, the data in the major evaluated libraries are now given not only for the natural element but also for its isotopes. Inconsistency between element and isotope data is one of the main problems in present evaluated neutron libraries. The formulas for adjusting the data to simultaneously satisfy the two kinds of consistency relationships were derived by means of the least squares method, and the program system CABEI was developed. The program was tested by calculating the Fe data in CENDL-2.1. The results show that the adjusted values satisfy the two kinds of consistency relationships.
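The least-squares adjustment idea can be sketched for a single consistency relationship: the element value should equal the abundance-weighted sum of the isotope values. The closed-form Lagrange-multiplier solution below is a sketch with hypothetical numbers; CABEI's actual formulas handle two kinds of relationships simultaneously.

```python
import numpy as np

def adjust_for_consistency(m, var, c):
    """Minimal variance-weighted least-squares adjustment of measurements m
    (with variances var) so the linear constraint c @ x = 0 holds exactly.
    Closed form: x = m - V c (c^T V c)^-1 (c^T m), with V = diag(var)."""
    V = np.diag(var)
    lam = (c @ m) / (c @ V @ c)
    return m - V @ c * lam

# hypothetical data: element value and two isotope values (abundances 0.6, 0.4)
a = np.array([0.6, 0.4])
m = np.array([2.10, 2.00, 2.30])       # [sigma_element, sigma_iso1, sigma_iso2]
var = np.array([0.01, 0.04, 0.04])     # measurement variances
c = np.array([1.0, -a[0], -a[1]])      # constraint: sigma_el - sum(a_i*sigma_i) = 0

x = adjust_for_consistency(m, var, c)
print(x, c @ x)   # adjusted values; residual is now ~0
```

Better-known values (smaller variance) move less under the adjustment, which is the sense in which the correction is minimal.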
An eigenvalue approach to quantum plasmonics based on a self-consistent hydrodynamics method.
Ding, Kun; Chan, C T
2018-02-28
Plasmonics has attracted much attention not only because it has useful properties such as strong field enhancement, but also because it reveals the quantum nature of matter. To handle quantum plasmonics effects, ab initio packages or empirical Feibelman d-parameters have been used to explore the quantum correction of plasmonic resonances. However, most of these methods are formulated within the quasi-static framework. The self-consistent hydrodynamics model offers a reliable approach to study quantum plasmonics because it can incorporate the quantum effect of the electron gas into classical electrodynamics in a consistent manner. Instead of the standard scattering method, we formulate the self-consistent hydrodynamics method as an eigenvalue problem to study quantum plasmonics with electrons and photons treated on the same footing. We find that the eigenvalue approach must involve a global operator, which originates from the energy functional of the electron gas. This manifests the intrinsic nonlocality of the response of quantum plasmonic resonances. Our model gives the analytical forms of quantum corrections to plasmonic modes, incorporating quantum electron spill-out effects and electrodynamical retardation. We apply our method to study the quantum surface plasmon polariton for a single flat interface.
Fan, Linjun; Tang, Jun; Ling, Yunxiang; Li, Benxian
2014-01-01
This paper is concerned with the dynamic evolution analysis and quantitative measurement of primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in service's evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which presents difficulties in the usage of traditional analysis methods. To resolve these problems, a novel dynamic evolution model extended hierarchical service-finite state automata (EHS-FSA) is constructed based on finite state automata (FSA), which formally depict overall changing processes of service consistency states. And also the service consistency evolution algorithms (SCEAs) based on EHS-FSA are developed to quantitatively assess these impact factors. Experimental results show that the bad reusability (17.93% on average) is the biggest influential factor, the noncomposition of atomic services (13.12%) is the second biggest one, and the service version's confusion (1.2%) is the smallest one. Compared with previous qualitative analysis, SCEAs present good effectiveness and feasibility. This research can guide the engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA.
Genome scale models of yeast: towards standardized evaluation and consistent omic integration
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Nielsen, Jens
2015-01-01
Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently in use. All levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.
Assessing the consistency of optical properties measured in four integrating spheres
Czech Academy of Sciences Publication Activity Database
Lukeš, Petr; Homolová, Lucie; Navrátil, M.; Hanuš, Jan
2017-01-01
Roč. 38, č. 13 (2017), s. 3817-3830 ISSN 0143-1161 R&D Projects: GA MŠk(CZ) LO1415; GA MŠk(CZ) LM2015061 Institutional support: RVO:67179843 Keywords : Artificial material * Canopy radiative transfer * Directional hemispherical reflectances * Integrating spheres * Leaf optical property * Measurement protocol * Standard deviation * Statistically significant difference Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7) Impact factor: 1.724, year: 2016
Subdomain Precise Integration Method for Periodic Structures
Directory of Open Access Journals (Sweden)
F. Wu
2014-01-01
Full Text Available A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
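The precise integration step at the heart of the method above can be sketched with the classic 2^N matrix-exponential algorithm; this is a minimal NumPy illustration of that building block, not the authors' subdomain code:

```python
import numpy as np

def pim_expm(H, tau, N=20):
    """Precise integration method sketch: approximate exp(H*tau) by the
    2**N algorithm.  The increment Ta = exp(H*dt) - I is kept separate
    from the identity while squaring, which preserves numerical precision."""
    dt = tau / 2.0**N
    A = H * dt
    # 4th-order Taylor expansion of exp(A) - I for the tiny step dt
    Ta = A + A @ A / 2.0 + A @ A @ A / 6.0 + A @ A @ A @ A / 24.0
    for _ in range(N):
        # identity: exp(2x) - I = 2*(exp(x) - I) + (exp(x) - I)**2
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(H.shape[0]) + Ta
```

In the subdomain scheme each identical structural cell would reuse one such transfer matrix, which is where the memory and computation savings come from.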
Homogenization of Periodic Masonry Using Self-Consistent Scheme and Finite Element Method
Kumar, Nitin; Lambadi, Harish; Pandey, Manoj; Rajagopal, Amirtham
2016-01-01
Masonry is a heterogeneous anisotropic continuum, made up of the brick and mortar arranged in a periodic manner. Obtaining the effective elastic stiffness of the masonry structures has been a challenging task. In this study, the homogenization theory for periodic media is implemented in a very generic manner to derive the anisotropic global behavior of the masonry, through rigorous application of the homogenization theory in one step and through a full three-dimensional behavior. We have considered the periodic Eshelby self-consistent method and the finite element method. Two representative unit cells that represent the microstructure of the masonry wall exactly are considered for calibration and numerical application of the theory.
Self-consistent collective coordinate method for large amplitude collective motions
International Nuclear Information System (INIS)
Sakata, F.; Hashimoto, Y.; Marumori, T.; Une, T.
1982-01-01
A recent development of the self-consistent collective coordinate method is described. The self-consistent collective coordinate method was proposed on the basis of the fundamental principle called the invariance principle of the Schroedinger equation. If it is formulated within the framework of time dependent Hartree-Fock (TDHF) theory, a classical version of the theory is obtained; a quantum version is deduced by formulating it within the framework of the unitary transformation method with auxiliary bosons. In this report, the discussion concentrates on the relation between the classical theory and the quantum theory, and on the applicability of the classical theory. The aim of the classical theory is to extract a maximally decoupled collective subspace out of the huge-dimensional 1p - 1h parameter space introduced by the TDHF theory. An intimate similarity between the classical theory and a full quantum boson expansion method (BEM) was clarified, with the discussion concentrated on a simple Lipkin model. The relation between the BEM and the unitary transformation method with auxiliary bosons was then discussed. It became clear that the quantum version of the theory is closely related to the BEM, and that the BEM is nothing but a quantum analogue of the present classical theory. The present theory was compared with the full TDHF calculation by using a simple model. (Kato, T.)
Simplified DFT methods for consistent structures and energies of large systems
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.
A numerical method for resonance integral calculations
International Nuclear Information System (INIS)
Tanbay, Tayfun; Ozgener, Bilge
2013-01-01
A numerical method has been proposed for resonance integral calculations, and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)
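A cubic least-squares fit of the kind mentioned for the optimum Bell factor can be sketched generically; the (x, b) data points below are made up for illustration and do not reproduce the paper's tabulated values:

```python
import numpy as np

# Hypothetical data: x is some lattice/moderation parameter, b the
# optimum Bell factor found numerically at each x (illustrative only).
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
b = 1.10 + 0.20 * x - 0.05 * x**2 + 0.01 * x**3  # pretend "computed" data

coeffs = np.polyfit(x, b, deg=3)   # least-squares cubic fit
bell = np.poly1d(coeffs)           # callable fitted Bell-factor curve

print(bell(1.2))                   # interpolated optimum Bell factor at x = 1.2
```

Because the fit is least-squares, adding more tabulated points simply tightens the cubic without changing the procedure.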
Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo
2018-01-01
This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated by this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. As compared with existing variational approaches, although this PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach is shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
Yang, Yuyi; Wei, Buqing; Zhao, Yuhua; Wang, Jun
2013-02-01
Azo dyes are toxic and carcinogenic and are often present in industrial effluents. In this research, azoreductase and glucose 1-dehydrogenase were coupled for both continuous generation of the cofactor NADH and azo dye removal. The results show that 85% of the maximum relative activity of azoreductase in the integrated enzyme system was obtained under the conditions: 1 U azoreductase : 10 U glucose 1-dehydrogenase, 250 mM glucose, 1.0 mM NAD(+) and 150 μM methyl red. Sensitivity analysis of the factors in the enzyme system affecting dye removal, examined by an artificial neural network model, shows that the relative importance of the enzyme ratio between azoreductase and glucose 1-dehydrogenase was 22%, with dye concentration at 27%, NAD(+) concentration at 23% and glucose concentration at 22%, indicating that none of the variables could be ignored in the enzyme system. Batch results show that the enzyme system has application potential for dye removal. Copyright © 2012 Elsevier Ltd. All rights reserved.
SELF-CONSISTENT, INTEGRATED, ADVANCED TOKAMAK OPERATION ON DIII-D
International Nuclear Information System (INIS)
WADE, MR; MURAKAMI, M; LUCE, TC; FERRON, JR; PETTY, CC; BRENNAN, DP; GAROFALO, AM; GREENFIELD, CM; HYATT, AW; JAYAKUMAR, R; LAHAYE, RJ; LAO, LL; LOHR, J; POLITZER, PA; PRATER, R; STRAIT, EJ
2002-01-01
Recent experiments on DIII-D have demonstrated the ability to sustain plasma conditions that integrate the key ingredients of Advanced Tokamak (AT) operation: high β with q min >> 1, good energy confinement, and high current drive efficiency. Utilizing off-axis (ρ ≈ 0.4) electron cyclotron current drive (ECCD) to modify the current density profile in a plasma operating near the no-wall ideal stability limit with q min > 2.0, plasmas with β = 2.9% and 90% of the plasma current driven non-inductively have been sustained for nearly 2 s (limited only by the duration of the ECCD pulse). Separate experiments have demonstrated the ability to sustain a steady current density profile using ECCD for periods as long as 1 s with β = 3.3% and > 90% of the current driven non-inductively.
International Nuclear Information System (INIS)
Yang Deshan; Li Hua; Low, Daniel A; Deasy, Joseph O; Naqa, Issam El
2008-01-01
Deformable image registration is widely used in various radiation therapy applications, including daily treatment planning adaptation to map planned tissue or dose to changing anatomy. In this work, a simple and efficient inverse-consistent deformable registration method is proposed with the aims of higher registration accuracy and faster convergence. Instead of registering image I to a second image J, the two images are symmetrically deformed toward one another in multiple passes, until both deformed images are matched and correct registration is therefore achieved. In each pass, a delta motion field is computed by minimizing a symmetric optical flow system cost function using modified optical flow algorithms. The images are then further deformed with the delta motion field in the positive and negative directions respectively, and used for the next pass. The magnitude of the delta motion field is forced to be less than 0.4 voxel in every pass in order to guarantee smoothness and invertibility of the two overall motion fields that accumulate the delta motion fields in the positive and negative directions, respectively. The final motion fields to register the original images I and J, in either direction, are calculated by inverting one overall motion field and combining the inversion result with the other overall motion field. The final motion fields are inversely consistent, which is ensured by the symmetric way the registration is carried out. The proposed method is demonstrated with phantom images, artificially deformed patient images and 4D-CT images. Our results suggest that the proposed method is able to improve the overall accuracy (reducing registration error by 30% or more, compared to the original and inversely inconsistent optical flow algorithms), reduce the inverse consistency error (by 95% or more) and increase the convergence rate (by 100% or more). The overall computation speed may slightly decrease, or increase in most cases.
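The 0.4-voxel cap on each pass's delta motion field, which the method uses to keep the accumulated fields smooth and invertible, can be sketched as a per-voxel rescaling (a NumPy illustration of that one step, not the authors' registration code):

```python
import numpy as np

def cap_delta_field(delta, max_mag=0.4):
    """Rescale per-voxel displacement vectors so no vector exceeds
    max_mag voxels.  delta has shape (..., ndim); the last axis holds
    the displacement components."""
    mag = np.sqrt((delta ** 2).sum(axis=-1, keepdims=True))
    # scale factor is 1 where the vector is already short enough
    scale = np.minimum(1.0, max_mag / np.maximum(mag, 1e-12))
    return delta * scale
```

Scaling (rather than clipping each component) preserves the direction of the optical-flow update while bounding its magnitude.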
Classification Method in Integrated Information Network Using Vector Image Comparison
Directory of Open Access Journals (Sweden)
Zhou Yuan
2014-05-01
Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.
RPA method based on the self-consistent cranking model for 168Er and 158Dy
International Nuclear Information System (INIS)
Kvasil, J.; Cwiok, S.; Chariev, M.M.; Choriev, B.
1983-01-01
The low-lying nuclear states in 168 Er and 158 Dy are analysed within the random phase approximation (RPA) method based on the self-consistent cranking model (SCCM). The moment of inertia, the value of the chemical potential, and the strength constant k 1 have been obtained from the symmetry condition. The pairing strength constants Gsub(tau) have been determined from the experimental values of neutron and proton pairing energies for nonrotating nuclei. Quite good agreement with the experimental energies of positive-parity states was obtained without introducing two-phonon vibrational states.
Self-consistent study of nuclei far from stability with the energy density method
Tondeur, F
1981-01-01
The self-consistent energy density method has been shown to give good results with a small number of parameters for the calculation of nuclear masses, radii, deformations, neutron skins, and shell and subshell effects. It is here used to study the properties of nuclei far from stability, such as densities, shell structure, even-odd mass differences, single-particle potentials and nuclear deformations. A few possible consequences of the results for astrophysical problems are briefly considered. The predictions of the model in the superheavy region are summarised. (34 refs).
Analytical free energy gradient for the molecular Ornstein-Zernike self-consistent-field method
Directory of Open Access Journals (Sweden)
N.Yoshida
2007-09-01
Full Text Available An analytical free energy gradient for the molecular Ornstein-Zernike self-consistent-field (MOZ-SCF) method is presented. MOZ-SCF theory is one of the theories for treating solvent effects on the solute electronic structure in solution [Yoshida N. et al., J. Chem. Phys., 2000, 113, 4974]. Molecular geometries of water, formaldehyde, acetonitrile and acetone in water are optimized by the analytical energy gradient formula. The results are compared with those from the polarizable continuum model (PCM), the reference interaction site model (RISM-SCF) and the three-dimensional (3D) RISM-SCF.
Quasiparticle self-consistent GW method for the spectral properties of complex materials.
Bruneval, Fabien; Gatti, Matteo
2014-01-01
The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: this is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
A Dynamic Linear Hashing Method for Redundancy Management in Train Ethernet Consist Network
Directory of Open Access Journals (Sweden)
Xiaobo Nie
2016-01-01
Full Text Available Massive transportation systems like trains are considered critical systems because they use the communication network to control essential subsystems on board. A critical system requires zero recovery time when a failure occurs in the communication network. The newly published IEC 62439-3 defines the high-availability seamless redundancy protocol, which fulfills this requirement and ensures no frame loss in the presence of an error. This paper adopts these protocols for the train Ethernet consist network. The challenge is the management of circulating frames, which must cope with real-time processing requirements, fast switching times, high throughput, and deterministic behavior. The main contributions of this paper are an in-depth analysis of the network parameters imposed by applying the protocols to the train control and monitoring system (TCMS), and a redundant circulating-frame discarding method based on dynamic linear hashing, chosen as the fastest method for resolving the issues involved.
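The core duplicate-discard task can be sketched as a hash-keyed window of already-forwarded frames per source node; note that this sketch leans on Python's built-in hash set rather than the paper's dynamic linear hashing, and all names are illustrative:

```python
class DuplicateDiscard:
    """Sketch of redundant-frame discarding for seamless-redundancy-style
    networks, where every frame arrives twice (once per ring/path)."""

    def __init__(self, window=1024):
        self.window = window
        self.seen = {}  # source id -> set of recently forwarded sequence numbers

    def accept(self, source, seq):
        """Return True to forward the frame, False to drop the duplicate."""
        s = self.seen.setdefault(source, set())
        if seq in s:
            return False          # second copy of an already-forwarded frame
        s.add(seq)
        if len(s) > self.window:  # bound memory: forget the oldest entry
            s.discard(min(s))
        return True
```

A real implementation must also handle sequence-number wraparound and ageing timers, which are omitted here for brevity.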
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi:http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N^3. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems of up to 15 atoms per unit cell.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
International Nuclear Information System (INIS)
Kutepov, A. L.
2017-01-01
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Subramanian, Ramanathan Vishnampet Ganapathi
, and can be tailored to achieve global conservation up to arbitrary orders of accuracy. We again confirm that the sensitivity gradient for turbulent jet noise computed using our dual-consistent method is only limited by computing precision.
Directory of Open Access Journals (Sweden)
Karl W. Steininger
2016-03-01
Full Text Available Climate change triggers manifold impacts at the national to local level, which in turn have various economy-wide implications (e.g. on welfare, employment, or tax revenues). In its response, society needs to prioritize which of these impacts to address and what share of resources to spend on each respective adaptation. A prerequisite to achieving that end is an economic impact analysis that is consistent across sectors and acknowledges intersectoral and economy-wide feedback effects. Traditional Integrated Assessment Models (IAMs) usually operate at a level too aggregated for this end, while bottom-up impact models are most often not fully comprehensive, focusing on only a subset of climate-sensitive sectors and/or a subset of climate change impact chains. Thus, we develop here an approach which applies climate and socioeconomic scenario analysis, harmonized economic costing, and sector-explicit bandwidth analysis in a coupled framework of eleven (bio)physical impact assessment models and a uniform multi-sectoral computable general equilibrium model. In applying this approach to the alpine country of Austria, we find that macroeconomic feedbacks can magnify sectoral climate damages up to fourfold, and that by mid-century the costs of climate change clearly outweigh the benefits, with net costs rising two- to fourfold above current damage cost levels. The resulting specific impact information, differentiated by climate and economic drivers, can support sector-specific adaptation as well as adaptive capacity building. Keywords: climate impact, local impact, economic evaluation, adaptation
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames
Heye, Colin; Raman, Venkat
2012-11-01
A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as is well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.
Consistent calculation of the polarization electric dipole moment by the shell-correction method
International Nuclear Information System (INIS)
Denisov, V.Yu.
1992-01-01
Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs
Integral Methods in Science and Engineering
Constanda, Christian
2011-01-01
An enormous array of problems encountered by scientists and engineers are based on the design of mathematical models using many different types of ordinary differential, partial differential, integral, and integro-differential equations. Accordingly, the solutions of these equations are of great interest to practitioners and to science in general. Presenting a wealth of cutting-edge research by a diverse group of experts in the field, Integral Methods in Science and Engineering: Computational and Analytic Aspects gives a vivid picture of both the development of theoretical integral techniques
Method of manufacturing Josephson junction integrated circuits
International Nuclear Information System (INIS)
Jillie, D.W. Jr.; Smith, L.N.
1985-01-01
Josephson junction integrated circuits of the current injection type and magnetically controlled type utilize a superconductive layer that forms both Josephson junction electrode for the Josephson junction devices on the integrated circuit as well as a ground plane for the integrated circuit. Large area Josephson junctions are utilized for effecting contact to lower superconductive layers and islands are formed in superconductive layers to provide isolation between the groudplane function and the Josephson junction electrode function as well as to effect crossovers. A superconductor-barrier-superconductor trilayer patterned by local anodization is also utilized with additional layers formed thereover. Methods of manufacturing the embodiments of the invention are disclosed
Variational method for integrating radial gradient field
Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo
2014-12-01
We propose a variational method for integrating information obtained from circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup and then move to formulate the problem of recovering the wavefront using techniques from calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.
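As a baseline against which the variational formulation above can be contrasted, the naive way to integrate a sampled radial gradient is plain cumulative quadrature; this trapezoidal-rule sketch is an illustrative contrast, not the paper's method:

```python
import numpy as np

def integrate_radial(r, dwdr):
    """Recover the wavefront W(r) from its sampled radial gradient dW/dr
    by the trapezoidal rule, with the normalization W(r[0]) = 0."""
    w = np.zeros_like(r)
    # cumulative trapezoid: each step adds 0.5*(g_i + g_{i+1})*dr_i
    w[1:] = np.cumsum(0.5 * (dwdr[1:] + dwdr[:-1]) * np.diff(r))
    return w
```

The variational approach of the paper improves on this kind of direct integration by regularizing the noise that raw quadrature would accumulate along the radius.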
Egidi, Giovanna; Caramazza, Alfonso
2014-12-01
According to recent research on language comprehension, the semantic features of a text are not the only determinants of whether incoming information is understood as consistent. Listeners' pre-existing affective states play a crucial role as well. The current fMRI experiment examines the effects of happy and sad moods during comprehension of consistent and inconsistent story endings, focusing on brain regions previously linked to two integration processes: inconsistency detection, evident in stronger responses to inconsistent endings, and fluent processing (accumulation), evident in stronger responses to consistent endings. The analysis evaluated whether differences in the BOLD response for consistent and inconsistent story endings correlated with self-reported mood scores after a mood induction procedure. Mood strongly affected regions previously associated with inconsistency detection. Happy mood increased sensitivity to inconsistency in regions specific for inconsistency detection (e.g., left IFG, left STS), whereas sad mood increased sensitivity to inconsistency in regions less specific for language processing (e.g., right med FG, right SFG). Mood affected regions involved in accumulation of information more weakly. These results show that mood can influence activity in areas mediating well-defined language processes, and highlight that integration is the result of context-dependent mechanisms. The finding that language comprehension can involve different networks depending on people's mood highlights the brain's ability to reorganize its functions. Copyright © 2014 Elsevier Inc. All rights reserved.
Consistency analysis of Keratograph and traditional methods to evaluate tear film function
Directory of Open Access Journals (Sweden)
Pei-Yang Shen
2015-05-01
Full Text Available AIM: To investigate the repeatability and accuracy of a latest-generation Keratograph for evaluating tear film stability, and to compare its measurements with those of traditional examination methods. METHODS: Noninvasive tear film break-up time (NI-BUT), including the first tear film break-up time (BUT-f) and the average tear film break-up time (BUT-ave), was measured by Keratograph. The repeatability of the measurements was evaluated by the coefficient of variation (CV) and the intraclass correlation coefficient (ICC). The Wilcoxon signed-rank test was used to compare NI-BUT with fluorescein tear film break-up time (FBUT), and the correlation between NI-BUT and FBUT or Schirmer I test values was assessed. Bland-Altman analysis was used to evaluate consistency. RESULTS: The study recruited 48 subjects (48 eyes; mean age 38.7±15.2 years). The CV and ICC of BUT-f were 12.6% and 0.95, respectively; those of BUT-ave were 9.8% and 0.96. The value of BUT-f was lower than that of FBUT, and the difference was statistically significant (6.16±2.46s vs 7.46±1.92s). CONCLUSION: The Keratograph provides NI-BUT data with better repeatability and reliability, and has great application prospects in the diagnosis and treatment of dry eye and in refractive corneal surgery.
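The two repeatability statistics used above can be sketched as follows; the paper does not state which ICC form was computed, so a simple one-way random-effects ICC(1,1) is assumed here for illustration:

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation of repeated measurements, in percent."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

def icc_1_1(data):
    """One-way random-effects ICC(1,1) for an (n subjects x k repeats)
    array: (MSB - MSW) / (MSB + (k-1)*MSW)."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

A lower CV and an ICC near 1 together indicate the kind of repeatability the study reports for BUT-f and BUT-ave.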
Mining method selection by integrated AHP and PROMETHEE method.
Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana
2012-03-01
Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. The criteria are chosen to cover the most important parameters that influence mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine the weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to perform a sensitivity analysis by changing the weights. The results show that the proposed integrated method can be successfully used in solving mining engineering problems.
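The two stages of the integrated approach can be sketched end to end; the comparison matrix and alternative scores below are toy numbers, and the "usual" 0/1 preference function stands in for whatever generalized criteria the study actually used:

```python
import numpy as np

def ahp_weights(P, iters=200):
    """Criterion weights from an AHP pairwise-comparison matrix P,
    via power iteration toward the principal right eigenvector."""
    w = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        w = P @ w
        w /= w.sum()
    return w

def promethee_ii_flows(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' 0/1 preference
    function; all criteria are to be maximized.
    scores: (n alternatives) x (m criteria) array."""
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                # weighted count of criteria on which a beats b
                pref[a, b] = weights[scores[a] > scores[b]].sum()
    return (pref.sum(axis=1) - pref.sum(axis=0)) / (n - 1)

# Toy example: 3 candidate mining methods scored on 2 equally important criteria.
P = np.array([[1.0, 1.0], [1.0, 1.0]])               # "equal importance" judgments
w = ahp_weights(P)                                    # -> [0.5, 0.5]
scores = np.array([[0.9, 0.8], [0.4, 0.3], [0.7, 0.6]])
flows = promethee_ii_flows(scores, w)
best = int(np.argmax(flows))                          # highest net flow wins
```

The sensitivity analysis described in the abstract then amounts to re-running `promethee_ii_flows` with perturbed weight vectors and checking whether `best` changes.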
Chang, Chih-Hao; Deng, Xiaolong; Theofanous, Theo G.
2013-06-01
We present a conservative and consistent numerical method for solving the Navier-Stokes equations in flow domains that may be separated by any number of material interfaces, at arbitrarily high density/viscosity ratios and acoustic-impedance mismatches, subjected to strong shock waves and flow speeds that can range from highly supersonic to near-zero Mach numbers. A principal aim is the prediction of interfacial instabilities under the superposition of multiple potentially active modes (Rayleigh-Taylor, Kelvin-Helmholtz, Richtmyer-Meshkov), as found for example with shock-driven, immersed fluid bodies (locally oblique shocks); accordingly we emphasize fidelity supported by physics-based validation, including experiments. Consistency is achieved by satisfying the jump discontinuities at the interface within a conservative second-order scheme that is coupled, in a conservative manner, to the bulk-fluid motions. The jump conditions are embedded into a Riemann problem, solved exactly to provide the pressures and velocities along the interface, which is tracked by a level set function to accuracy of O(Δx^5, Δt^4). Subgrid representation of the interface is achieved by allowing curvature of its constituent interfacial elements to obtain O(Δx^3) accuracy in cut-cell volume, with attendant benefits in calculating cell-geometric features and interface curvature (O(Δx^3)). Overall the computation converges at near-theoretical O(Δx^2). Spurious currents are down to machine error and there is no time-step restriction due to surface tension. Our method is built upon a quadtree-like adaptive mesh refinement infrastructure. When necessary, this is supplemented by body-fitted grids to enhance resolution of the gas dynamics, including flow separation, shear layers, slip lines, and critical layers. Comprehensive comparisons with exact solutions for the linearized Rayleigh-Taylor and Kelvin-Helmholtz problems demonstrate excellent performance. Sample simulations of liquid drops subjected to
International Nuclear Information System (INIS)
Sakata, Fumihiko; Marumori, Toshio; Hashimoto, Yukio; Une, Tsutomu.
1983-05-01
The geometry of the self-consistent collective-coordinate (SCC) method formulated within the framework of the time-dependent Hartree-Fock (TDHF) theory is investigated by associating the variational parameters with a symplectic manifold (a TDHF manifold). With the use of a canonical-variables parametrization, it is shown that the TDHF equation is equivalent to the canonical equations of motion of classical mechanics on the TDHF manifold. This enables us to investigate the geometrical structure of the SCC method in the language of classical mechanics. The SCC method turns out to give a prescription for how to dynamically extract a "maximally-decoupled" collective submanifold (hypersurface) out of the TDHF manifold, in such a way that a certain kind of trajectories corresponding to the large-amplitude collective motion under consideration can be reproduced on the hypersurface as precisely as possible. The stability of the hypersurface at each point on it is investigated, in order to see whether the hypersurface obtained by the SCC method is really an approximate integral surface in the TDHF manifold or not. (author)
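The equivalence claimed above can be sketched schematically. In canonical variables (q_mu, p_mu) parametrizing the Slater determinant, the TDHF equation takes Hamilton's form, with the classical Hamiltonian given by the expectation value of the many-body Hamiltonian; this is a hedged restatement of the abstract's claim, not a formula quoted from the paper:

```latex
\dot{q}_\mu = \frac{\partial \mathcal{H}}{\partial p_\mu}, \qquad
\dot{p}_\mu = -\frac{\partial \mathcal{H}}{\partial q_\mu}, \qquad
\mathcal{H}(q,p) \equiv \langle \phi(q,p) \,|\, \hat{H} \,|\, \phi(q,p) \rangle .
```

The "maximally-decoupled" hypersurface is then a submanifold of (q, p) space on which these equations approximately close.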
Nonlinear structural analysis using integrated force method
Indian Academy of Sciences (India)
A new formulation termed the Integrated Force Method (IFM) was proposed by Patnaik ... designated "Structure (n, m)", where (n, m) are the force and displacement degrees of ... Patnaik S N, Yadagiri S 1976 Frequency analysis of structures.
International Nuclear Information System (INIS)
Buck, John W.; McDonald, John P.; Taira, Randal Y.
2002-01-01
To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders
Indirect methods for wake potential integration
International Nuclear Information System (INIS)
Zagorodnov, I.
2006-05-01
The development of modern accelerator and free-electron laser projects requires the consideration of wake fields of very short bunches in arbitrary three-dimensional structures. Obtaining the wake numerically by direct integration is difficult, since it takes a long time for the scattered fields to catch up to the bunch. On the other hand, no general algorithm for indirect wake field integration is available in the literature so far. In this paper we review the known indirect methods to compute wake potentials in rotationally symmetric and cavity-like three-dimensional structures. For arbitrary three-dimensional geometries we introduce several new techniques and test them numerically. (Orig.)
Numerical methods for engine-airframe integration
International Nuclear Information System (INIS)
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full-potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment
Permutation statistical methods an integrated approach
Berry, Kenneth J; Johnston, Janis E
2016-01-01
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
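The monograph's point that permutation methods "depend only on the data at hand" is easy to demonstrate: a two-sample permutation test builds its null distribution by reshuffling the observed values, with no normality assumption. A minimal sketch on made-up data:

```python
import random

def perm_test(x, y, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means.
    Returns an approximate p-value (add-one corrected)."""
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # relabel the pooled data
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# invented samples: a clearly shifted pair and a near-identical pair
p_shift = perm_test([5.1, 5.4, 5.0, 5.3, 5.2, 5.5],
                    [4.1, 4.0, 4.3, 4.2, 4.4, 3.9])
p_null = perm_test([5.1, 5.2, 5.0], [5.15, 5.05, 5.1])
```

The shifted samples give a very small p-value, the overlapping samples a large one; the classical t-test would be the comparison point discussed in the text.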
Analysis Method for Integrating Components of Product
Energy Technology Data Exchange (ETDEWEB)
Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)
2017-04-15
This paper presents methods for integrating the parts that constitute a product. A new relation-function concept and its structure are introduced to analyze the relationships among component parts. The relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs. As a result, an improved product consistent with the relation function structure was actually created.
Analysis Method for Integrating Components of Product
International Nuclear Information System (INIS)
Choi, Jun Ho; Lee, Kun Sang
2017-01-01
This paper presents methods for integrating the parts that constitute a product. A new relation-function concept and its structure are introduced to analyze the relationships among component parts. The relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs. As a result, an improved product consistent with the relation function structure was actually created.
First integral method for an oscillator system
Directory of Open Access Journals (Sweden)
Xiaoqian Gong
2013-04-01
Full Text Available In this article, we consider the nonlinear Duffing-van der Pol-type oscillator system by means of the first integral method. This system has physical relevance as a model in certain flow-induced structural vibration problems, and includes the van der Pol oscillator and the damped Duffing oscillator, among others, as particular cases. First, we apply the Division Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to explore a quasi-polynomial first integral of an equivalent autonomous system. Then, by solving an algebraic system, we derive the first integral of the Duffing-van der Pol-type oscillator system under a certain parametric condition.
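A first integral is a function that stays constant along every trajectory of the system. The paper's Duffing-van der Pol first integral holds only under a parametric condition not reproduced here, so as a stand-in we verify numerically that the undamped Duffing oscillator x'' + x + x^3 = 0 conserves I = v^2/2 + x^2/2 + x^4/4:

```python
def duffing(state):
    """Right-hand side of x'' + x + x^3 = 0 as a first-order system."""
    x, v = state
    return (v, -x - x**3)

def rk4_step(f, s, h):
    """One classical Runge-Kutta (RK4) step for a tuple-valued state."""
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h * (a + 2 * b + 2 * c + d) / 6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def first_integral(state):
    x, v = state
    return 0.5 * v * v + 0.5 * x * x + 0.25 * x**4

s = (1.0, 0.0)                 # initial condition x = 1, x' = 0
I0 = first_integral(s)         # = 0.75
for _ in range(10000):         # integrate to t = 10 with h = 1e-3
    s = rk4_step(duffing, s, 1e-3)
```

The value of the first integral after integration matches its initial value to near machine precision, which is the defining property the algebraic construction in the paper exploits.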
Minezawa, Noriyuki; Kato, Shigeki
2007-02-07
The authors present an implementation of the three-dimensional reference interaction site model self-consistent-field (3D-RISM-SCF) method. First, they introduce a robust and efficient algorithm for solving the 3D-RISM equation. The algorithm is a hybrid of the Newton-Raphson and Picard methods. The Jacobian matrix is analytically expressed in a computationally useful form. Second, they discuss the solute-solvent electrostatic interaction. For the solute-to-solvent route, the electrostatic potential (ESP) map on a 3D grid is constructed directly from the electron density; a charge-fitting procedure is not required to determine the ESP. For the solvent-to-solute route, the ESP acting on the solute molecule is derived from the solvent charge distribution obtained by solving the 3D-RISM equation. Matrix elements of the solute-solvent interaction are evaluated by direct numerical integration. A remarkable reduction in the computational time is observed for both routes. Finally, the authors implement the first derivatives of the free energy with respect to the solute nuclear coordinates. They apply the present method to "solute" water and formaldehyde in aqueous solvent using the simple point charge model, and the results are compared with those from other methods: the six-dimensional molecular Ornstein-Zernike SCF, the one-dimensional site-site RISM-SCF, and the polarizable continuum model. The authors also calculate the solvatochromic shifts of acetone, benzonitrile, and nitrobenzene using the present method and compare them with experimental and other theoretical results.
Guo, Wei-Li; Huang, De-Shuang
2017-08-22
Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. Moreover, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.
Multistep Methods for Integrating the Solar System
1988-07-01
Technical Report 1055: Multistep Methods for Integrating the Solar System. Panayotis A. Skordos, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, supported by the Advanced Research Projects
International Nuclear Information System (INIS)
Carmo, E.G.D. do; Galeao, A.C.N.R.
1986-01-01
A new method specially designed to solve highly convective transport problems is proposed. Using a variational approach, it is shown that this weighted-residual method belongs to the class of Petrov-Galerkin approximations. Some examples are presented in order to demonstrate the adequacy of the method in predicting internal or external boundary layers. (Author) [pt
International Nuclear Information System (INIS)
Garcia, A.L.; Alexander, F.J.; Alder, B.J.
1997-01-01
The consistent Boltzmann algorithm (CBA) for dense, hard-sphere gases is generalized to obtain the van der Waals equation of state and the corresponding exact viscosity at all densities except at the highest temperatures. A general scheme for adjusting any transport coefficients to higher values is presented
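The target equation of state named above is easy to state in code. A minimal sketch of the van der Waals pressure in number-density form and reduced units (k = 1); the CBA particle algorithm itself is not reproduced here:

```python
def vdw_pressure(n, T, a, b, k=1.0):
    """van der Waals equation of state, p = n k T / (1 - n b) - a n^2.
    n: number density; b: excluded-volume parameter; a: attraction strength."""
    return n * k * T / (1.0 - n * b) - a * n * n

def ideal_pressure(n, T, k=1.0):
    """Ideal-gas reference, p = n k T."""
    return n * k * T
```

Setting a = b = 0 recovers the ideal gas; a finite b raises the pressure (excluded volume) while a finite a lowers it (attraction), which is the behavior the generalized CBA is tuned to reproduce.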
Bosons system with finite repulsive interaction: self-consistent field method
International Nuclear Information System (INIS)
Renatino, M.M.B.
1983-01-01
Some static properties of a boson system at T = 0 K under the action of a repulsive potential are studied. For the repulsive potential, a model was adopted consisting of a region where it is constant (r < r_c) and a 1/r decay (r > r_c). The self-consistent field approximation used takes short-range correlations into account through a local field correction, which leads to an effective field. The static structure factor S(q) and the effective potential ψ(q) are obtained through a self-consistent calculation. The pair-correlation function g(r) and the energy of the collective excitations E(q) are also obtained from the structure factor. The density of the system and the parameters of the repulsive potential, that is, its height and the size of the constant region, were used as variables for the problem. The results obtained for S(q), g(r) and E(q) for a fixed ratio r_0/r_c and variable λ indicate the emergence of structure in the system, which becomes more noticeable as the potential becomes more repulsive. (author)
High sensitive quench detection method using an integrated test wire
International Nuclear Information System (INIS)
Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.
1981-01-01
A highly sensitive quench detection method which works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference between the voltage at the superconducting winding terminals and the voltage at the terminals of a secondary winding strongly coupled to the primary. The secondary winding can consist of a "zero-current strand" of the superconducting cable not connected to one of the winding terminals, or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field
A RTS-based method for direct and consistent calculating intermittent peak cooling loads
International Nuclear Information System (INIS)
Chen Tingyao; Cui, Mingxian
2010-01-01
The RTS method currently recommended by ASHRAE Handbook is based on continuous operation. However, most of air-conditioning systems, if not all, in commercial buildings, are intermittently operated in practice. The application of the current RTS method to intermittent air-conditioning in nonresidential buildings could result in largely underestimated design cooling loads, and inconsistently sized air-conditioning systems. Improperly sized systems could seriously deteriorate the performance of system operation and management. Therefore, a new method based on both the current RTS method and the principles of heat transfer has been developed. The first part of the new method is the same as the current RTS method in principle, but its calculation procedure is simplified by the derived equations in a close form. The technical data available in the current RTS method can be utilized to compute zone responses to a change in space air temperature so that no efforts are needed for regenerating new technical data. Both the overall RTS coefficients and the hourly cooling loads computed in the first part are used to estimate the additional peak cooling load due to a change from continuous operation to intermittent operation. It only needs one more step after the current RTS method to determine the intermittent peak cooling load. The new RTS-based method has been validated by EnergyPlus simulations. The root mean square deviation (RMSD) between the relative additional peak cooling loads (RAPCLs) computed by the two methods is 1.8%. The deviation of the RAPCL varies from -3.0% to 5.0%, and the mean deviation is 1.35%.
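The core of any RTS calculation is a periodic convolution of the current and past 23 hourly radiant heat gains with 24 radiant time factors. A minimal sketch with an invented, geometrically decaying radiant time series (not ASHRAE tabulated values, and without the paper's intermittency correction):

```python
def rts_cooling_loads(gains, rtf):
    """Hourly cooling loads from radiant heat gains via radiant time factors.
    gains: 24 hourly radiant gains, assumed periodic over the design day.
    rtf: 24 radiant time factors summing to 1."""
    assert abs(sum(rtf) - 1.0) < 1e-9
    loads = []
    for h in range(24):
        loads.append(sum(rtf[i] * gains[(h - i) % 24] for i in range(24)))
    return loads

# illustrative decaying radiant time series (hypothetical, not ASHRAE data)
raw = [0.8 ** i for i in range(24)]
rtf = [r / sum(raw) for r in raw]
gains = [0.0] * 8 + [100.0] * 10 + [0.0] * 6   # gains during 08:00-18:00
loads = rts_cooling_loads(gains, rtf)
```

Because the factors sum to 1, the daily load energy equals the daily gain energy, while the peak load is smeared below the peak gain and part of the load appears after the gains stop; it is this delayed, stored heat that the paper's additional intermittent-operation term accounts for.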
Steultjens, M.P.M.; Dekker, J.; Baar, M.E. van; Oostendorp, R.A.B.; Bijlsma, J.W.J.
1999-01-01
Objective: To establish the internal consistency and validity of an observational method for assessing disability in mobility in patients with osteoarthritis (OA). Methods: Data were obtained from 198 patients with OA of the hip or knee. Results of the observational method were compared with results
Consistent method of truncating the electron self-energy in nonperturbative QED
International Nuclear Information System (INIS)
Rembiesa, P.
1986-01-01
A nonperturbative method of solving the Dyson-Schwinger equations for the fermion propagator is considered. The solution satisfies the Ward-Takahashi identity, allows multiplicative regularization, and exhibits a physical-mass pole
Continual integration method in the polaron model
International Nuclear Information System (INIS)
Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.
1981-01-01
The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of path integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system has been formulated. The polaron state has been investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, have been considered. The thermodynamics of the polaron system has been investigated; namely, high-temperature expansions for the mean energy and the effective polaron mass have been studied [ru
Directory of Open Access Journals (Sweden)
Ching-Lin Hsiao
Full Text Available Advances in biotechnology have resulted in large-scale studies of DNA methylation. A differentially methylated region (DMR) is a genomic region with multiple adjacent CpG sites that exhibit different methylation statuses among multiple samples. Many so-called "supervised" methods have been established to identify DMRs between two or more comparison groups. Methods for the identification of DMRs without reference to phenotypic information are, however, less well studied. We propose an alternative "unsupervised" approach, in which DMRs in the studied samples are identified with consideration of the natural dependence structure of methylation measurements between neighboring probes on tiling arrays. Through a simulation study, we investigated the effect of dependence between neighboring probes on DMR determination, and found that many spurious signals are produced if the methylation data are analyzed independently across probes. In contrast, our newly proposed method successfully corrects for this effect, with a well-controlled false positive rate and comparable sensitivity. By applying it to two real datasets, we demonstrate that our method can provide a global picture of methylation variation in the studied samples. R source code implementing the proposed method is freely available at http://www.csjfann.ibms.sinica.edu.tw/eag/programlist/ICDMR/ICDMR.html.
Directory of Open Access Journals (Sweden)
S. Ceccherini
2007-01-01
Full Text Available The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem and regularization methods are frequently used to improve its stability. Recently a new method, that provides a good compromise between precision and vertical resolution, was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements with its implementation in the operational retrieval code of the satellite limb-emission measurements of the MIPAS instrument and its performances are quantitatively analyzed. The adopted regularization improves the stability of the retrieval providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.
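The role of regularization in such ill-conditioned retrievals can be shown on a toy problem. This is only a schematic of Tikhonov regularization on a nearly singular 2x2 forward model, not the analytic parameter-selection rule or the MIPAS retrieval itself:

```python
def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det)

def tikhonov(A, y, lam):
    """x = argmin ||A x - y||^2 + lam ||x||^2, via the normal equations."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    Aty = [sum(A[k][i] * y[k] for k in range(2)) for i in range(2)]
    M = [[AtA[0][0] + lam, AtA[0][1]], [AtA[1][0], AtA[1][1] + lam]]
    return solve2(M, Aty)

A = [[1.0, 1.0], [1.0, 1.001]]         # nearly singular forward model
x_true = (1.0, 1.0)
# exact data A @ x_true = [2.0, 2.001]; perturb with small measurement noise
y_noisy = [2.0 + 1e-3, 2.001 - 1e-3]
x_ls = tikhonov(A, y_noisy, 0.0)       # unregularized: noise wildly amplified
x_reg = tikhonov(A, y_noisy, 1e-3)     # regularized: stable, slightly biased
```

The unregularized solution is thrown far from x_true by noise at the 1e-3 level, while the regularized one stays close; choosing lam is exactly the precision-versus-resolution compromise the abstract describes.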
DEFF Research Database (Denmark)
Gavnholt, Jeppe; Olsen, Thomas; Engelund, Mads
2008-01-01
is a density-functional method closely resembling standard density-functional theory (DFT), the only difference being that in Delta SCF one or more electrons are placed in higher-lying Kohn-Sham orbitals, instead of placing all electrons in the lowest possible orbitals as one does when calculating the ground-state energy within standard DFT. We extend the Delta SCF method by allowing excited electrons to occupy orbitals which are linear combinations of Kohn-Sham orbitals. With this extra freedom it is possible to place charge locally on adsorbed molecules in the calculations, such that resonance energies can be estimated, which is not possible in traditional Delta SCF because of very delocalized Kohn-Sham orbitals. The method is applied to N2, CO, and NO adsorbed on different metallic surfaces and compared to ordinary Delta SCF without our modification, spatially constrained DFT, and inverse...
Huan, L N; Tejani, A M; Egan, G
2014-10-01
An increasing amount of recently published literature has implicated outcome reporting bias (ORB) as a major contributor to skewing data in both randomized controlled trials and systematic reviews; however, little is known about the current methods in place to detect ORB. This study aims to gain insight into the detection and management of ORB by biomedical journals. This was a cross-sectional analysis involving standardized questions via email or telephone with the top 30 biomedical journals (2012) ranked by impact factor. The Cochrane Database of Systematic Reviews was excluded leaving 29 journals in the sample. Of 29 journals, 24 (83%) responded to our initial inquiry of which 14 (58%) answered our questions and 10 (42%) declined participation. Five (36%) of the responding journals indicated they had a specific method to detect ORB, whereas 9 (64%) did not have a specific method in place. The prevalence of ORB in the review process seemed to differ with 4 (29%) journals indicating ORB was found commonly, whereas 7 (50%) indicated ORB was uncommon or never detected by their journal previously. The majority (n = 10/14, 72%) of journals were unwilling to report or make discrepancies found in manuscripts available to the public. Although the minority, there were some journals (n = 4/14, 29%) which described thorough methods to detect ORB. Many journals seemed to lack a method with which to detect ORB and its estimated prevalence was much lower than that reported in literature suggesting inadequate detection. There exists a potential for overestimation of treatment effects of interventions and unclear risks. Fortunately, there are journals within this sample which appear to utilize comprehensive methods for detection of ORB, but overall, the data suggest improvements at the biomedical journal level for detecting and minimizing the effect of this bias are needed. © 2014 John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Didong, M.
1976-01-01
The extended generator-coordinate method is discussed, and a procedure is given for the solution of the Hill-Wheeler equation. The HFB theory, the particle-number and angular-momentum projections necessary for symmetry, and the modified surface delta interaction are discussed. The described procedures are used to calculate properties of 72 Ge, 70 Zn and 74 Ge. (BJ) [de
Improvements of the integral transport theory method
International Nuclear Information System (INIS)
Kavenoky, A.; Lam-Hime, M.; Stankovski, Z.
1979-01-01
Integral transport theory is widely used in practical reactor design calculations; however, it is computationally time-consuming for two-dimensional calculations of large media. In the first part of this report a new treatment is presented; it is based on the Galerkin method: inside each region the total flux is expanded over a three-component basis. Numerical comparison shows that this method can considerably reduce the computing time. The second part of this report is devoted to homogenization theory: a straightforward calculation of the fundamental mode for a heterogeneous cell is presented. First a general presentation of the problem is given, then it is simplified to plane geometry and numerical results are presented
Collaborative teaching of an integrated methods course
Directory of Open Access Journals (Sweden)
George Zhou
2011-03-01
Full Text Available With an increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered a stage on which to demonstrate or model ways of collaborating. To respond to this call, three authors team-taught an integrated methods course at an urban public university in the city of New York. Following a qualitative research design, this study explored both instructors' and pre-service teachers' experiences with this course. Study findings indicate that collaborative teaching of an integrated methods course is feasible and beneficial to both instructors and pre-service teachers. For the instructors, collaborative teaching was a reciprocal learning process in which they were engaged in thinking about teaching in a broader and more innovative way. For the pre-service teachers, the collaborative course not only helped them understand how three different subjects could be related to each other, but also provided opportunities for them to actually see how collaboration could take place in teaching. Their understanding of collaborative teaching was enhanced after the course.
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability theory, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating these multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. Using the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons of the proposed method with the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
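The A/B construction above can be sketched in one dimension with a heavily simplified stand-in: both "networks" share fixed tanh units and only the linear output coefficients are trained (by least squares rather than backpropagation), and the integrand cos(x) is an arbitrary illustrative choice, not a reliability performance function. Network B uses tanh activations, so its exact derivative network A uses sech^2 terms; fitting A to the integrand makes B an antiderivative:

```python
import math

M, W = 10, 3.0                              # number of units, fixed width
centers = [i / (M - 1) for i in range(M)]   # unit centers spread over [0, 1]

def net_B(x, c):        # network B: candidate "original" (antiderivative)
    return sum(cj * math.tanh(W * (x - t)) for cj, t in zip(c, centers))

def net_A(x, c):        # network A = dB/dx, to be fitted to the integrand
    return sum(cj * W * (1 - math.tanh(W * (x - t)) ** 2)
               for cj, t in zip(c, centers))

def gauss_solve(M_, v):
    """Solve a small dense linear system by Gaussian elimination."""
    n = len(M_)
    A = [row[:] + [v[i]] for i, row in enumerate(M_)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n + 1):
                A[r][cc] -= f * A[col][cc]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

f = math.cos                               # illustrative integrand
xs = [i / 49 for i in range(50)]           # training grid on [0, 1]
Phi = [[W * (1 - math.tanh(W * (x - t)) ** 2) for t in centers] for x in xs]
# train A via ridge-regularized normal equations: (Phi^T Phi + lam I) c = Phi^T f
lam = 1e-6
G = [[sum(Phi[k][i] * Phi[k][j] for k in range(50)) + (lam if i == j else 0.0)
      for j in range(M)] for i in range(M)]
rhs = [sum(Phi[k][i] * f(xs[k]) for k in range(50)) for i in range(M)]
c = gauss_solve(G, rhs)
estimate = net_B(1.0, c) - net_B(0.0, c)   # approximates the integral of cos on [0, 1]
```

Evaluating B at the integration limits then yields the integral of the learned integrand with no quadrature at all, which is the idea the paper extends to multiple integrals.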
Khalil, Shahid Akbar; Zamir, Roshan; Ahmad, Nisar
2014-01-01
Stevia rebaudiana (Bert.) is an emerging sugar alternative and anti-diabetic plant in Pakistan, where, as a newly introduced crop, its appropriate propagation practices are not yet well known. The main objective of the present study was to establish feasible propagation methods for healthy biomass production. In the present study, seed germination, stem cuttings and micropropagation were investigated for higher productivity. Fresh seeds showed better germination (25.51-40%) but lost viability after a few days of storage. In o...
Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios
Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M
2004-01-01
The fortune and the risk of a business venture depend on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behavior rather than on debatable assumptions about models and parameters. Simultaneous dep...
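The core of the resampling idea is simple to sketch: draw future periods at random, with replacement, from the observed history. This is a plain i.i.d. bootstrap on invented return figures; the paper's treatment of simultaneous dependence across variables is deliberately omitted:

```python
import math
import random

def bootstrap_scenarios(history, horizon, n_scenarios, seed=0):
    """Generate future return paths by resampling past returns with replacement."""
    rng = random.Random(seed)
    return [[rng.choice(history) for _ in range(horizon)]
            for _ in range(n_scenarios)]

# hypothetical annual equity returns
history = [0.12, -0.08, 0.05, 0.21, -0.15, 0.07, 0.03, 0.10]
scenarios = bootstrap_scenarios(history, horizon=5, n_scenarios=1000)
# cumulative growth of 1 unit invested, under each scenario
terminal = [math.prod(1 + r for r in path) for path in scenarios]
```

The spread of the terminal values gives a non-parametric picture of planning risk, "directly relying on past market behavior" in the sense of the abstract.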
An Integrated Method of Supply Chains Vulnerability Assessment
Directory of Open Access Journals (Sweden)
Jiaguo Liu
2016-01-01
Full Text Available Supply chain vulnerability identification and evaluation are extremely important to mitigate the supply chain risk. We present an integrated method to assess the supply chain vulnerability. The potential failure mode of the supply chain vulnerability is analyzed through the SCOR model. Combining the fuzzy theory and the gray theory, the correlation degree of each vulnerability indicator can be calculated and the target improvements can be carried out. In order to verify the effectiveness of the proposed method, we use Kendall’s tau coefficient to measure the effect of different methods. The result shows that the presented method has the highest consistency in the assessment compared with the other two methods.
International Nuclear Information System (INIS)
Lee, Jay Min; Yang, Dong-Seok
2007-01-01
Inverse problem solving computation was performed to obtain the PDF (pair distribution function) from simulated EXAFS data generated with FEFF. For a realistic comparison with experimental data, we chose a model of the first sub-shell Mn-O pair showing the Jahn-Teller distortion in crystalline LaMnO3. To restore the Fourier-filtering signal distortion involved in the first sub-shell information isolated from higher-shell contents, the relevant distortion-matching function was computed initially from the proximity model, and iteratively from the prior guess during consecutive regularization computations. Adaptive computation of the EXAFS background correction remains an open algorithm-development issue, but our preliminary test was performed under a simulated background correction that perfectly excludes the higher-shell interference. In our numerical results, the efficient convergence of the iterative solution indicates a self-consistent tendency: a true PDF solution is confirmed as the counterpart of genuine chi-data, provided that the background correction function is iteratively solved using an extended algorithm of MEPP (Matched EXAFS PDF Projection) under development
Methods in Entrepreneurship Education Research: A Review and Integrative Framework
DEFF Research Database (Denmark)
Blenker, Per; Trolle Elmholdt, Stine; Frederiksen, Signe Hedeboe
2014-01-01
is fragmented both conceptually and methodologically. Findings suggest that the methods applied in entrepreneurship education research cluster in two groups: 1. quantitative studies of the extent and effect of entrepreneurship education, and 2. qualitative single case studies of different courses and programmes....... It integrates qualitative and quantitative techniques, the use of research teams consisting of insiders (teachers studying their own teaching) and outsiders (research collaborators studying the education) as well as multiple types of data. To gain both in-depth and analytically generalizable studies...... a variety of helpful methods, explore the potential relation between insiders and outsiders in the research process, and discuss how different types of data can be combined. The integrated framework urges researchers to extend investments in methodological efforts and to enhance the in-depth understanding...
Nonaka, Andrew; Day, Marcus S.; Bell, John B.
2018-01-01
We present a numerical approach for low Mach number combustion that conserves both mass and energy while remaining on the equation of state to a desired tolerance. We present both unconfined and confined cases, where in the latter the ambient pressure changes over time. Our overall scheme is a projection method for the velocity coupled to a multi-implicit spectral deferred corrections (SDC) approach to integrate the mass and energy equations. The iterative nature of SDC methods allows us to incorporate a series of pressure discrepancy corrections naturally that lead to additional mass and energy influx/outflux in each finite volume cell in order to satisfy the equation of state. The method is second order, and satisfies the equation of state to a desired tolerance with increasing iterations. Motivated by experimental results, we test our algorithm on hydrogen flames with detailed kinetics. We examine the morphology of thermodiffusively unstable cylindrical premixed flames in high-pressure environments for confined and unconfined cases. We also demonstrate that our algorithm maintains the equation of state for premixed methane flames and non-premixed dimethyl ether jet flames.
Steultjens, M. P.; Dekker, J.; van Baar, M. E.; Oostendorp, R. A.; Bijlsma, J. W.
1999-01-01
To establish the internal consistency and validity of an observational method for assessing disability in mobility in patients with osteoarthritis (OA). Data were obtained from 198 patients with OA of the hip or knee. Results of the observational method were compared with results of self-report
A design method for two-layer beams consisting of normal and fibered high strength concrete
International Nuclear Information System (INIS)
Iskhakov, I.; Ribakov, Y.
2007-01-01
Two-layer fibered concrete beams can be analyzed using conventional methods for composite elements. The compressed zone of such a beam section is made of high strength concrete (HSC), and the tensile zone of normal strength concrete (NSC). The problems related to this type of beam are revealed and studied. An appropriate depth of each layer is prescribed. Compatibility conditions between the HSC and NSC layers are found, based on the equality of shear deformations at the layer border in the section with the maximal depth of the compression zone. For the first time, a rigorous definition of HSC is given using a comparative analysis of the deformability and strength characteristics of different concrete classes. According to this definition, HSC has no descending branch in the stress-strain diagram, the stress-strain function has a minimal exponent, the ductility parameter is minimal, and the concrete tensile strength remains constant with an increase in concrete compression strength. The application fields of two-layer concrete beams under different static schemes and load conditions are identified. The main disadvantage of HSC is known to be its low ductility. In order to overcome this problem, fibers are added to the HSC layer. The influence of different fiber volume ratios on structural ductility is discussed. An upper limit of the required fiber volume ratio is found, based on the compatibility equation between the transverse tensile deformations of the concrete and the deformations of the fibers
Tsuyuki, Kiyomi; Gipson, Jessica D; Barbosa, Regina Maria; Urada, Lianne A; Morisky, Donald E
2017-12-12
Syndemic Zika virus, HIV and unintended pregnancy call for an urgent understanding of dual method (condoms with another modern non-barrier contraceptive) and consistent condom use. Multinomial and logistic regression analysis using data from the Pesquisa Nacional de Demografia e Saúde da Criança e da Mulher (PNDS), a nationally representative household survey of reproductive-aged women in Brazil, identified the socio-demographic, fertility and relationship context correlates of exclusive non-barrier contraception, dual method use and condom use consistency. Among women in marital and civil unions, half reported dual protection (30% condoms, 20% dual methods). In adjusted models, condom use was associated with older age and living in the northern region of Brazil or in urban areas, whereas dual method use (versus condom use) was associated with younger age, living in the southern region of Brazil, living in non-urban areas and relationship age homogamy. Among condom users, consistent condom use was associated with reporting Afro-religion or other religion, not wanting (more) children and using condoms only (versus dual methods). Findings highlight that integrated STI prevention and family planning services should target young married/in union women, couples not wanting (more) children and heterogamous relationships to increase dual method use and consistent condom use.
Parallel Jacobi EVD Methods on Integrated Circuits
Directory of Open Access Journals (Sweden)
Chi-Chia Sun
2014-01-01
Full Text Available Design strategies for parallel iterative algorithms are presented. In order to further study different tradeoff strategies in design criteria for integrated circuits, a 10 × 10 Jacobi Brent-Luk-EVD array with the simplified μ-CORDIC processor is used as an example. The experimental results show that using the μ-CORDIC processor is beneficial for the design criteria, as it yields a smaller area, faster overall computation time, and lower energy consumption than the regular CORDIC processor. It is worth noting that the proposed parallel EVD method can be applied to real-time and low-power array signal processing algorithms performing beamforming or DOA estimation.
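As background for the abstract above, the cyclic Jacobi EVD iteration that the Brent-Luk array parallelizes can be sketched in software. This is a generic textbook version operating on a symmetric matrix, not the μ-CORDIC hardware scheme; each (p, q) rotation here corresponds to one processor in the systolic array.

```python
import numpy as np

def jacobi_evd(A, sweeps=10):
    """Cyclic Jacobi eigenvalue iteration for a symmetric matrix A."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                          # accumulated rotations (eigenvectors)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle chosen to annihilate the off-diagonal A[p, q].
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.sort(np.diag(A)), V

rng = np.random.default_rng(2)
M = rng.normal(size=(10, 10))
S = (M + M.T) / 2                          # symmetric test matrix
eigs, _ = jacobi_evd(S)
print(np.allclose(eigs, np.linalg.eigvalsh(S)))
```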
Khalil, Shahid Akbar; Zamir, Roshan; Ahmad, Nisar
2014-01-01
Stevia rebaudiana (Bert.) is an emerging sugar alternative and anti-diabetic plant in Pakistan, yet the most suitable time and method of propagation have not been well established. The main objective of the present study was therefore to establish feasible propagation methods for healthy biomass production. In the present study, seed germination, stem cuttings and micropropagation were investigated for higher productivity. Fresh seeds showed better germination (25.51–40%) but lost viability after a few days of storage. In order to improve the germination percentage, seeds were irradiated with 2.5, 5.0, 7.5 and 10 Gy gamma doses, but gamma irradiation did not produce any significant change in seed germination. A great variation in survival of stem cuttings was observed in each month of 2012; October and November were found to be the most suitable months for stem cutting survival (60%). In order to enhance survival, stem cuttings were also dipped in different plant growth regulator (PGR) solutions. Only indole butyric acid (IBA; 1000 ppm) treated cuttings showed a higher survival (33%) than the control (11.1%). Furthermore, a simple and feasible indirect regeneration system was established from leaf explants. The best callus induction (84.6%) was observed on MS medium augmented with 6-benzyladenine (BA) and 2,4-dichlorophenoxyacetic acid (2,4-D; 2.0 mg l−1). For the first time, we obtained the highest number of shoots (106) on a medium containing BA (1.5 mg l−1) and gibberellic acid (GA3; 0.5 mg l−1). Plantlets were successfully acclimatized in plastic pots. The current results favour micropropagation (85%) over seed germination (25.51–40%) and stem cutting (60%). PMID:25473365
Directory of Open Access Journals (Sweden)
V. S. Zarubin
2015-01-01
Full Text Available The rational use of composites as structural materials carrying thermal and mechanical loads is to a large extent determined by their thermoelastic properties. The review presented here of works devoted to the analysis of the thermoelastic characteristics of composites shows that estimating these characteristics is an important problem. Among the thermoelastic properties of a composite, its temperature coefficient of linear expansion occupies an important place. Along with fiber composites, dispersion-hardened composites are widely used in engineering; in these, the inclusions are particles of high-strength and high-modulus materials, including nanostructured elements. Typically, the dispersed particles have similar dimensions in all directions, which allows the particle shape to be approximated, in a first approximation, by a sphere. In this article, for a composite with isotropic spherical inclusions of a variety of different materials, the self-consistent method is used to derive design formulas relating the temperature coefficient of linear expansion to the volume concentration of the inclusions and their thermoelastic characteristics, as well as to the thermoelastic properties of the matrix of the composite. A feature of the method is that it self-consistently accounts for the thermomechanical interaction of a single inclusion or matrix particle with a homogeneous isotropic medium having the desired temperature coefficient of linear expansion. Averaging over the volume of the composite the perturbations of strain and stress arising in the inclusions and matrix particles from this interaction makes it possible to obtain such calculation formulas. To validate the calculated temperature coefficient of linear expansion of composites of this type, two-sided estimates are used, based on the dual variational formulation of the linear thermoelasticity problem in an inhomogeneous solid, which contains two alternative functionals (of the Lagrange and Castigliano type)
Numerov iteration method for second order integral-differential equation
International Nuclear Information System (INIS)
Zeng Fanan; Zhang Jiaju; Zhao Xuan
1987-01-01
In this paper, a Numerov iterative method for second order integral-differential equations and systems of equations is constructed. Numerical examples show that this method is better than the direct method (Gauss elimination) in both CPU time and memory requirements. It is therefore an efficient method for solving integral-differential equations in nuclear physics
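The Numerov recurrence for equations of the form y'' = f(x)·y, which underlies such solvers, can be sketched as follows. The test problem (y'' = −y, exact solution sin x) and grid are chosen only because the answer is known; the paper's integral-differential setting adds an integral term on top of this scheme.

```python
import numpy as np

def numerov(f, y0, y1, x):
    """Numerov integration of y'' = f(x) * y on the uniform grid x.

    Uses the three-term recurrence with O(h^6) local truncation error:
    y[n+1]*(1 - h^2 f[n+1]/12) = 2 y[n]*(1 + 5 h^2 f[n]/12)
                                 - y[n-1]*(1 - h^2 f[n-1]/12)
    """
    h2 = (x[1] - x[0]) ** 2
    fx = f(x)
    y = np.empty_like(x)
    y[0], y[1] = y0, y1
    for n in range(1, len(x) - 1):
        y[n + 1] = ((2.0 * (1.0 + 5.0 * h2 * fx[n] / 12.0) * y[n]
                     - (1.0 - h2 * fx[n - 1] / 12.0) * y[n - 1])
                    / (1.0 - h2 * fx[n + 1] / 12.0))
    return y

x = np.linspace(0.0, np.pi, 201)
y = numerov(lambda x: -np.ones_like(x), 0.0, np.sin(x[1]), x)
print(np.max(np.abs(y - np.sin(x))))   # small; Numerov is O(h^4) globally
```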
An Integrated Method for Airfoil Optimization
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal
METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS
Directory of Open Access Journals (Sweden)
A. Lasher
2013-09-01
example, this research proved the sustainability of the proposed integrated optimization parameters of transport systems. This approach could be applied not only to MTS but also to other transport systems. Originality. The basis of the complex optimization of transport presented here is a new system of universal scientific methods and approaches that ensure high accuracy and authenticity of calculations when simulating transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases of complex transport optimization makes it possible to create a scientific tool that ensures automated simulation and calculation of the technical and economic structure and operating technology of different transport objects, including their infrastructure.
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods
Energy Technology Data Exchange (ETDEWEB)
Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-03-25
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.
International Nuclear Information System (INIS)
Fort, E.; Darrouzet, M.; Derrien, H.; Hammer, P.; Martin-Deidier, L.
1979-01-01
In this evaluation both integral and microscopic data are considered as reference data. Calculations are performed with SLBW formalism in the resolved resonance region and statistical formalism elsewhere. Neutron penetrabilities are obtained from coupled channel calculations, considering 241 Am as a symmetric rotationnal nucleus. For fission the agreement is excellent between evaluated and integral data and is confirmed by the most recent microscopic measurements. High values for capture cross-sections are supported by integral measurements
National Research Council Canada - National Science Library
Collier, Craig
2005-01-01
This SBIR report maintains that reliable pretest predictions and efficient certification are suffering from inconsistent structural integrity that is prevalent throughout a project's design maturity...
Directory of Open Access Journals (Sweden)
Seiya Nishiyama
2009-01-01
Full Text Available The maximally-decoupled method has been considered as a theory that applies a basic idea of an integrability condition to certain multiply parametrized symmetries. The method is regarded as a mathematical tool to describe a symmetry of a collective submanifold in which a canonicity condition makes the collective variables form an orthogonal coordinate system. For this aim we adopt a concept of curvature unfamiliar in the conventional time-dependent (TD) self-consistent field (SCF) theory. Our basic idea lies in the introduction of a sort of Lagrange manner, familiar from fluid dynamics, to describe a collective coordinate system. This manner enables us to take a one-form which is linearly composed of a TD SCF Hamiltonian and infinitesimal generators induced by collective variable differentials of a canonical transformation on a group. The integrability condition of the system reads C = 0, where C is the curvature. Our method is constructed so as to manifest the structure of the group under consideration. To go beyond the maximally-decoupled method, we have aimed to construct an SCF theory, i.e., a υ- (external parameter-) dependent Hartree-Fock (HF) theory. Toward such an ultimate goal, the υ-HF theory has been reconstructed on an affine Kac-Moody algebra along the lines of soliton theory, using infinite-dimensional fermions. An infinite-dimensional fermion operator is introduced through a Laurent expansion of finite-dimensional fermion operators with respect to degrees of freedom of the fermions related to a υ-dependent potential with a Υ-periodicity. A bilinear equation for the υ-HF theory has been transcribed onto the corresponding τ-function using the regular representation of the group and the Schur polynomials. The υ-HF SCF theory on an infinite-dimensional Fock space F∞ leads to a dynamics on an infinite-dimensional Grassmannian Gr∞ and may describe more precisely such a dynamics on the group manifold. A finite-dimensional Grassmannian is identified with a Gr
International Nuclear Information System (INIS)
Freitag, Joerg; Kosuge, Hitoshi; Schmelzer, Juergen P.; Kato, Satoru
2015-01-01
Highlights: • We use a new, simple static cell vapor phase manual sampling method (SCVMS) for VLE (x, y, T) measurement. • The method is applied to non-azeotropic, asymmetric and two-liquid phase forming azeotropic binaries. • The method is approved by a data consistency test, i.e., a plot of the polarity exclusion factor vs. pressure. • The consistency test reveals that with the new SCVMS method accurate VLE near ambient temperature can be measured. • Moreover, the consistency test approves that the effect of air in the SCVMS system is negligible. - Abstract: A new static cell vapor phase manual sampling (SCVMS) method is used for the simple measurement of constant temperature x, y (vapor + liquid) equilibria (VLE). The method was applied to the VLE measurements of the (methanol + water) binary at T/K = (283.2, 298.2, 308.2 and 322.9), asymmetric (acetone + 1-butanol) binary at T/K = (283.2, 295.2, 308.2 and 324.2) and two-liquid phase forming azeotropic (water + 1-butanol) binary at T/K = (283.2 and 298.2). The accuracy of the experimental data was approved by a data consistency test, that is, an empirical plot of the polarity exclusion factor, β, vs. the system pressure, P. The SCVMS data are accurate, because the VLE data converge to the same lnβ vs. lnP straight line determined from conventional distillation-still method and a headspace gas chromatography method
International Nuclear Information System (INIS)
Kobayasi, Masato; Matsuyanagi, Kenichi; Nakatsukasa, Takashi; Matsuo, Masayuki
2003-01-01
The adiabatic self-consistent collective coordinate method is applied to an exactly solvable multi-O(4) model that is designed to describe nuclear shape coexistence phenomena. The collective mass and dynamics of large amplitude collective motion in this model system are analyzed, and it is shown that the method yields a faithful description of tunneling motion through a barrier between the prolate and oblate local minima in the collective potential. The emergence of the doublet pattern is clearly described. (author)
DEFF Research Database (Denmark)
Troldborg, Niels; Sørensen, Niels N.; Réthoré, Pierre-Elouan
2015-01-01
This paper describes a consistent algorithm for eliminating the numerical wiggles appearing when solving the finite volume discretized Navier-Stokes equations with discrete body forces in a collocated grid arrangement. The proposed method is a modification of the Rhie-Chow algorithm where the for...
Boundary integral methods for unsaturated flow
International Nuclear Information System (INIS)
Martinez, M.J.; McTigue, D.F.
1990-01-01
Many large simulations may be required to assess the performance of Yucca Mountain as a possible site for the nation's first high level nuclear waste repository. A boundary integral equation method (BIEM) is described for the numerical analysis of quasilinear steady unsaturated flow in homogeneous material. The applicability of the exponential model for the dependence of hydraulic conductivity on pressure head is discussed briefly; this constitutive assumption is at the heart of the quasilinear transformation. Materials which display a wide distribution in pore size are described reasonably well by the exponential. For materials with a narrow range in pore size, the exponential is suitable over more limited ranges in pressure head. The numerical implementation of the BIEM is used to investigate the infiltration from a strip source to a water table. The net infiltration of moisture into a finite-depth layer is well described by results for a semi-infinite layer if αD > 4, where α is the sorptive number and D is the depth to the water table. The distribution of moisture exhibits a similar dependence on αD. 11 refs., 4 figs.
Integral Equation Methods for Electromagnetic and Elastic Waves
Chew, Weng; Hu, Bin
2008-01-01
Integral Equation Methods for Electromagnetic and Elastic Waves is an outgrowth of several years of work. There have been no recent books on integral equation methods. There are books written on integral equations, but either they have been around for a while, or they were written by mathematicians. Much of the knowledge in integral equation methods still resides in journal papers. With this book, important relevant knowledge for integral equations are consolidated in one place and researchers need only read the pertinent chapters in this book to gain important knowledge needed for integral eq
Analytic methods to generate integrable mappings
Indian Academy of Sciences (India)
essential integrability features of an integrable differential equation is a .... With this in mind we first write x3(t) as a cubic polynomial in (xn−1,xn,xn+1) and then ..... coefficients, the quadratic equation in xn+N has real and distinct roots which in ...
Directory of Open Access Journals (Sweden)
Zhongqiang Xiong
2018-01-01
Full Text Available In this work, to avoid the difficulty of application caused by irregular filler shapes in experiments, the self-consistent and differential self-consistent methods were combined to obtain a decoupled equation. The combined method yields a tensor γ, independent of filler content, that provides an important connection between high and low filler contents. On one hand, this constant parameter can be calculated by Eshelby’s inclusion theory or the Mori–Tanaka method to predict effective properties of composites consistent with its hypotheses. On the other hand, the parameter can be calculated from a few experimental results to estimate the effective properties of prepared composites at other filler contents. In addition, an evaluation index σ′f of the interaction strength between matrix and fillers is proposed based on experiments. In the experiments, a hyper-dispersant was synthesized to prepare well-dispersed polypropylene/calcium carbonate (PP/CaCO3) composites with up to 70 wt % filler content, at a dosage of only 5 wt % of the CaCO3 content. Based on several verifications, it is hoped that the combined self-consistent method is valid for other two-phase composites in experiments, following the same application procedure as in this work.
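For orientation, the Mori–Tanaka estimate mentioned in the abstract can be sketched for the effective bulk modulus of a matrix containing spherical inclusions; this is the standard textbook form, not the authors' combined scheme, and the moduli below are illustrative values, not the paper's PP/CaCO3 data.

```python
import numpy as np

def mori_tanaka_bulk(Km, Gm, Ki, c):
    """Mori-Tanaka effective bulk modulus: matrix (bulk Km, shear Gm)
    with volume fraction c of spherical inclusions of bulk modulus Ki."""
    return Km + c * (Ki - Km) / (1.0 + (1.0 - c) * (Ki - Km) / (Km + 4.0 * Gm / 3.0))

# Illustrative (made-up) moduli in GPa: soft polymer matrix, stiff mineral filler.
Km, Gm, Ki = 4.0, 1.5, 70.0
c = np.linspace(0.0, 1.0, 6)
Keff = mori_tanaka_bulk(Km, Gm, Ki, c)
print(Keff)     # rises monotonically from Km (c = 0) to Ki (c = 1)
```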
The reduced basis method for the electric field integral equation
International Nuclear Information System (INIS)
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-01-01
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
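The two-step offline/online structure described above can be sketched on a toy affinely parametrized system, a stand-in for the discretized EFIE in which the matrices and parameter range are invented and μ plays the role of the wavenumber:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy parametrized system A(mu) x = b with affine dependence A(mu) = A0 + mu*A1.
n = 200
A0 = (np.diag(np.full(n, 2.0))
      + np.diag(np.full(n - 1, -1.0), 1)
      + np.diag(np.full(n - 1, -1.0), -1))
A1 = np.diag(np.linspace(0.1, 1.0, n))
b = rng.normal(size=n)

# Offline stage (expensive, done once): snapshot solves compressed by SVD
# into a reduced basis V.
mus_train = np.linspace(0.0, 5.0, 20)
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in mus_train])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :8]                          # 8 basis vectors suffice for this toy problem

# Online stage (cheap, per query): solve only an 8x8 projected system.  For
# brevity we project the assembled matrix; a real RBM precomputes V.T@A0@V and
# V.T@A1@V offline so the online cost is independent of n.
mu = 2.37
Ar = V.T @ (A0 + mu * A1) @ V
xr = V @ np.linalg.solve(Ar, V.T @ b)

x = np.linalg.solve(A0 + mu * A1, b)  # full solve, for error comparison only
rel_err = np.linalg.norm(xr - x) / np.linalg.norm(x)
print(rel_err)                        # small relative error at the new parameter
```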
DFTB3: Extension of the self-consistent-charge density-functional tight-binding method (SCC-DFTB).
Gaus, Michael; Cui, Qiang; Elstner, Marcus
2012-04-10
The self-consistent-charge density-functional tight-binding method (SCC-DFTB) is an approximate quantum chemical method derived from density functional theory (DFT) based on a second-order expansion of the DFT total energy around a reference density. In the present study we combine earlier extensions and improve them consistently with, first, an improved Coulomb interaction between atomic partial charges, and second, the complete third-order expansion of the DFT total energy. These modifications lead us to the next generation of the DFTB methodology called DFTB3, which substantially improves the description of charged systems containing elements C, H, N, O, and P, especially regarding hydrogen binding energies and proton affinities. As a result, DFTB3 is particularly applicable to biomolecular systems. Remaining challenges and possible solutions are also briefly discussed.
International Nuclear Information System (INIS)
Wood, R.T.; Knee, H.E.; Mullens, J.A.; Munro, J.K. Jr.; Swail, B.K.; Tapp, P.A.
1993-01-01
The increasing use of computer technology in the US nuclear power industry has greatly expanded the capability to obtain, analyze, and present data about the plant to station personnel. Data concerning a power plant's design, configuration, operational and maintenance histories, and current status, and the information that can be derived from them, provide the link between the plant and plant staff. It is through this information bridge that operations, maintenance and engineering personnel understand and manage plant performance. However, it is necessary to transform the vast quantity of data available from various computer systems and across communications networks into clear, concise, and coherent information. In addition, it is important to organize this information into a consolidated, structured form within an integrated environment so that various users throughout the plant have ready access at their local station to knowledge necessary for their tasks. Thus, integrated workstations are needed to provide the required information and proper software tools, in a manner that can be easily understood and used, to the proper users throughout the plant. An effort is underway at the Oak Ridge National Laboratory to address this need by developing Integrated Workstation functional requirements and implementing a limited-scale prototype demonstration. The Integrated Workstation requirements will define a flexible, expandable computer environment that permits a tailored implementation of workstation capabilities and facilitates future upgrades to add enhanced applications. The functionality to be supported by the integrated workstation and the inherent capabilities to be provided by the workstation environment will be described. In addition, general technology areas which are to be addressed in the Integrated Workstation functional requirements will be discussed.
Directory of Open Access Journals (Sweden)
Jiang Ying
2017-01-01
Full Text Available In this work, we study the (2+1)-D Broer-Kaup equation. The composite periodic breather wave, the exact composite kink breather wave and the solitary wave solutions are obtained by using the coupled degradation technique and the consistent Riccati expansion method. These results may help us to investigate some complex dynamical behaviors and the interaction between composite non-linear waves in high-dimensional models.
Self-consistent DFT+U method for real-space time-dependent density functional theory calculations
Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel
2017-12-01
We implemented various DFT+U schemes, including the Agapito, Curtarolo, and Buongiorno Nardelli functional (ACBN0) self-consistent density-functional version of the DFT+U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006] within the massively parallel real-space time-dependent density functional theory (TDDFT) code octopus. We further extended the method to the case of the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistent calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework to address the nonequilibrium properties of correlated systems.
International Nuclear Information System (INIS)
Ceylan, C; Heide, U A van der; Bol, G H; Lagendijk, J J W; Kotte, A N T J
2005-01-01
Registration of different imaging modalities such as CT, MRI, functional MRI (fMRI), positron (PET) and single photon (SPECT) emission tomography is used in many clinical applications. Determining the quality of an automatic registration procedure is challenging because no gold standard is available against which to evaluate the registration. In this note we present a method, called the 'multiple sub-volume registration' (MSR) method, for assessing the consistency of a rigid registration. This is done by registering sub-images of one data set on the other data set, performing a crude non-rigid registration. By analysing the deviations (local deformations) of the sub-volume registrations from the full registration we get a measure of the consistency of the rigid registration. Registration of 15 data sets which include CT, MR and PET images for brain, head and neck, cervix, prostate and lung was performed utilizing a rigid body registration with normalized mutual information as the similarity measure. The resulting registrations were classified as good or bad by visual inspection. The resulting registrations were also classified using our MSR method. The results of our MSR method agree with the classification obtained from visual inspection for all cases (p < 0.02 based on ANOVA of the good and bad groups). The proposed method is independent of the registration algorithm and similarity measure. It can be used for multi-modality image data sets and different anatomic sites of the patient. (note)
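The consistency measure at the heart of MSR, comparing each sub-volume's rigid transform against the full-volume transform, can be illustrated with a toy computation; the transforms, sub-volume centres, and thresholds below are made-up illustrations, not the paper's actual registration pipeline (which registers sub-images with normalized mutual information).

```python
import numpy as np

# Toy MSR-style consistency check: evaluate how far each sub-volume's rigid
# transform deviates from the full-volume transform at the sub-volume centre.
def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deviation(R_full, t_full, sub_transforms, centres):
    devs = [np.linalg.norm((R @ p + t) - (R_full @ p + t_full))
            for (R, t), p in zip(sub_transforms, centres)]
    return float(np.mean(devs))

R_full, t_full = rotation_z(0.10), np.array([2.0, -1.0, 0.5])
centres = [np.array(p, float) for p in [(40, 40, 20), (120, 40, 20), (80, 120, 40)]]

# sub-volume registrations that agree well with the full registration
good = [(rotation_z(0.10 + 1e-3), t_full + 1e-2) for _ in centres]
# sub-volume registrations that disagree (suggesting local deformation)
bad = [(rotation_z(0.10 + 0.05), t_full + 2.0) for _ in centres]

d_good = deviation(R_full, t_full, good, centres)
d_bad = deviation(R_full, t_full, bad, centres)
```

A small mean deviation suggests a self-consistent rigid registration; large local deviations flag regions where a rigid model breaks down.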
International Nuclear Information System (INIS)
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work that has been performed to develop an integrated Monte Carlo-Deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle Transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray-tracing tool for the MOC, the verification of the ray-tracing indexing scheme that was developed to represent the MCNP geometry in the MOC, and the verification of the prototype 2-D MOC flux solver. (author)
International Nuclear Information System (INIS)
Wang, C.S.; Freeman, A.J.
1979-01-01
We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in one-eighth of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations.
Integrated circuit and method of arbitration in a network on an integrated circuit.
2011-01-01
The invention relates to an integrated circuit and to a method of arbitration in a network on an integrated circuit. According to the invention, a method of arbitration in a network on an integrated circuit is provided, the network comprising a router unit, the router unit comprising a first input
Integrals of Frullani type and the method of brackets
Directory of Open Access Journals (Sweden)
Bravo Sergio
2017-01-01
Full Text Available The method of brackets is a collection of heuristic rules, some of which have been made rigorous, that provide a flexible, direct method for the evaluation of definite integrals. The present work uses this method to establish classical formulas due to Frullani which provide the values of a specific family of integrals. Some generalizations are established.
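The Frullani formula in question states that, for suitable f with finite limits f(0) and f(∞), ∫₀^∞ (f(ax) − f(bx))/x dx = (f(0) − f(∞)) ln(b/a). A quick numerical check (independent of the method of brackets; the truncation bounds are ours):

```python
import numpy as np

# Frullani's theorem: ∫_0^∞ (f(a x) - f(b x)) / x dx = (f(0) - f(inf)) * ln(b / a).
# Check with f(x) = exp(-x), so f(0) = 1 and f(inf) = 0.
a, b = 2.0, 5.0
x = np.linspace(1e-9, 60.0, 2_000_001)     # truncating [0, inf) is harmless here
y = (np.exp(-a * x) - np.exp(-b * x)) / x  # integrand; its x -> 0 limit is b - a
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule
closed_form = float(np.log(b / a))
```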
Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
Weißenberger, Barbara E.; Angelkort, Hendrik
2009-01-01
To provide accounting information for management control purposes, two fundamental options exist: (a) The financial records can be used as a database for management accounting (integrated accounting system design), or (b) the management accounting system used by controllers can be based upon a so-called third set of books besides the financial and tax accounting records. Whereas the latter approach had been typical for firms in German-speaking countries until the 1980s, since then an increasi...
Accurate Electromagnetic Modeling Methods for Integrated Circuits
Sheng, Z.
2010-01-01
The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on
Adaptive integral equation methods in transport theory
International Nuclear Information System (INIS)
Kelley, C.T.
1992-01-01
In this paper, an adaptive multilevel algorithm for integral equations is described that has been developed with the Chandrasekhar H equation and its generalizations in mind. The algorithm maintains good performance when the Frechet derivative of the nonlinear map is singular at the solution, as happens in radiative transfer with conservative scattering and in critical neutron transport. Numerical examples that demonstrate the algorithm's effectiveness are presented
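For isotropic scattering with single-scattering albedo c, the Chandrasekhar H equation can be written H(μ) = [1 − (c/2) μ ∫₀¹ H(ν)/(μ+ν) dν]⁻¹. The sketch below solves it by plain Picard (fixed-point) iteration on a Gauss-Legendre grid; this is a minimal illustration, not the paper's adaptive multilevel algorithm, and the albedo and grid size are arbitrary choices.

```python
import numpy as np

# Chandrasekhar H-equation, isotropic scattering, albedo c < 1:
#   H(mu) = 1 / (1 - (c/2) * mu * ∫_0^1 H(nu) / (mu + nu) dnu)
c = 0.9
nodes, weights = np.polynomial.legendre.leggauss(40)
mu = 0.5 * (nodes + 1.0)                  # quadrature points mapped to (0, 1)
w = 0.5 * weights

H = np.ones_like(mu)
for _ in range(2000):                     # Picard iteration
    integral = ((H * w) / (mu[:, None] + mu[None, :])).sum(axis=1)
    H_new = 1.0 / (1.0 - 0.5 * c * mu * integral)
    if np.max(np.abs(H_new - H)) < 1e-13:
        H = H_new
        break
    H = H_new

# classical moment identity, satisfied exactly by the discrete solution:
#   (c/2) ∫_0^1 H(mu) dmu = 1 - sqrt(1 - c)
moment = 0.5 * c * float(np.sum(w * H))
```

The zeroth-moment identity provides a convenient self-check on the converged solution without knowing H in closed form.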
A symplectic integration method for elastic filaments
Ladd, Tony; Misra, Gaurav
2009-03-01
Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
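The central point, discretize the Hamiltonian and then apply an explicit symplectic integrator, can be illustrated for a stretching-only bead-spring chain (bending and torsion omitted for brevity; all parameters are illustrative, not those of the paper). Velocity Verlet then shows the bounded energy error characteristic of symplectic schemes.

```python
import numpy as np

# Bead-spring chain (stretching springs only) integrated with velocity Verlet.
n, k, m, r0, dt, steps = 10, 100.0, 1.0, 1.0, 1e-3, 20_000

def forces(x):
    d = x[1:] - x[:-1]
    L = np.linalg.norm(d, axis=1, keepdims=True)
    f_bond = k * (L - r0) * d / L          # tension along each bond
    f = np.zeros_like(x)
    f[:-1] += f_bond                       # pulls bead i toward i+1 when stretched
    f[1:] -= f_bond
    return f

def energy(x, v):
    L = np.linalg.norm(x[1:] - x[:-1], axis=1)
    return 0.5 * m * (v ** 2).sum() + 0.5 * k * ((L - r0) ** 2).sum()

x = np.zeros((n, 3))
x[:, 0] = r0 * np.arange(n)                # straight filament, unstretched bonds
v = 0.1 * np.random.default_rng(1).standard_normal((n, 3))

E0 = energy(x, v)
f = forces(x)
for _ in range(steps):                     # velocity Verlet (kick-drift-kick)
    v += 0.5 * dt * f / m
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f / m
drift = abs(energy(x, v) - E0) / E0
```

A non-symplectic scheme of the same order (e.g. explicit Euler) would show secular energy growth under the same conditions; here the relative energy error stays bounded.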
Banker, J.G.; Anderson, R.C.
1975-10-21
A method and apparatus are provided for preparing a composite structure consisting of filamentary material within a metal matrix. The method is practiced by the steps of confining the metal for forming the matrix in a first chamber, heating the confined metal to a temperature adequate to effect melting thereof, introducing a stream of inert gas into the chamber for pressurizing the atmosphere in the chamber to a pressure greater than atmospheric pressure, confining the filamentary material in a second chamber, heating the confined filamentary material to a temperature less than the melting temperature of the metal, evacuating the second chamber to provide an atmosphere therein at a pressure, placing the second chamber in registry with the first chamber to provide for the forced flow of the molten metal into the second chamber to effect infiltration of the filamentary material with the molten metal, and thereafter cooling the metal infiltrated-filamentary material to form said composite structure.
Integral Method of Boundary Characteristics: Neumann Condition
Kot, V. A.
2018-05-01
A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems on heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions using this algorithm are presented. The solutions obtained with it can be considered exact, because their errors comprise hundredths to ten-thousandths of a percent over a wide range of the problem parameters.
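The abstract does not reproduce the algorithm itself, but the flavor of integral methods for the Neumann problem can be shown with the classical heat-balance integral (Goodman's method, not the boundary-characteristics scheme above): assume a quadratic profile, enforce the integrated heat equation, and a closed-form approximation follows. The material values below are arbitrary illustrations.

```python
import math

# Goodman's heat-balance integral: semi-infinite solid, constant surface flux q
# (Neumann condition), quadratic profile T(x,t) = q*delta/(2k) * (1 - x/delta)^2.
# The integral balance d/dt ∫_0^delta T dx = alpha*q/k gives delta = sqrt(6*alpha*t).
alpha, kcond, q, t = 1.0e-5, 1.0, 1000.0, 10.0   # illustrative values

delta = math.sqrt(6.0 * alpha * t)                        # penetration depth
T_surf_hbim = q * delta / (2.0 * kcond)                   # approximate surface temperature
T_surf_exact = 2.0 * q * math.sqrt(alpha * t / math.pi) / kcond  # similarity solution

rel_err = abs(T_surf_hbim - T_surf_exact) / T_surf_exact
```

For this simple variant the surface-temperature error is around 8-9 percent; the point of more refined integral-characteristic schemes is to drive such errors down by orders of magnitude.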
Nahar, S. N.
2003-01-01
Most astrophysical plasmas entail a balance between ionization and recombination. We present new results from a unified method for self-consistent and ab initio calculations for the inverse processes of photoionization and (e + ion) recombination. The treatment for (e + ion) recombination subsumes the non-resonant radiative recombination and the resonant dielectronic recombination processes in a unified scheme (S.N. Nahar and A.K. Pradhan, Phys. Rev. A 49, 1816 (1994); H.L. Zhang, S.N. Nahar, and A.K. Pradhan, J. Phys. B 32, 1459 (1999)). Calculations are carried out using the R-matrix method in the close coupling approximation using an identical wavefunction expansion for both processes to ensure self-consistency. The results for photoionization and recombination cross sections may also be compared with state-of-the-art experiments on synchrotron radiation sources for photoionization, and on heavy ion storage rings for recombination. The new experiments display heretofore unprecedented detail in terms of resonances and background cross sections and thereby calibrate the theoretical data precisely. We find a level of agreement between theory and experiment at about 10% for not only the ground state but also the metastable states. The recent experiments therefore verify the estimated accuracy of the vast amount of photoionization data computed under the OP, IP and related works. The present work also reports photoionization cross sections including relativistic effects in the Breit-Pauli R-matrix (BPRM) approximation. Detailed features in the calculated cross sections exhibit the missing resonances due to fine structure. Self-consistent datasets for photoionization and recombination have so far been computed for approximately 45 atoms and ions. These are being reported in a continuing series of publications in Astrophysical J. Supplements (e.g. references below).
These data will also be available from the electronic database TIPTOPBASE (http://heasarc.gsfc.nasa.gov)
Selective Integration in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Lars; Andersen, Søren; Damkilde, Lars
2009-01-01
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.
Numerical method of singular problems on singular integrals
International Nuclear Information System (INIS)
Zhao Huaiguo; Mou Zongze
1992-02-01
As a first part of the numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations with singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can be dealt with easily by this method.
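One standard device for such integrals, singularity subtraction for a Cauchy principal value, illustrates the general idea (this is a textbook technique, not necessarily the quadrature formulas derived in the paper):

```python
import math
import numpy as np

# Cauchy principal value by singularity subtraction:
#   PV ∫_a^b f(x)/(x - x0) dx
#     = ∫_a^b (f(x) - f(x0))/(x - x0) dx + f(x0) * ln((b - x0)/(x0 - a)),
# where the first integrand is bounded, so ordinary quadrature applies.
def pv_integral(f, a, b, x0, n=200_000):
    h = (b - a) / n
    x = a + (np.arange(n) + 0.5) * h       # midpoint grid (avoids x0 in the examples)
    regular = (f(x) - f(x0)) / (x - x0)    # bounded, limit f'(x0) at x -> x0
    return h * regular.sum() + f(x0) * math.log((b - x0) / (x0 - a))

# check: PV ∫_{-1}^{1} x/(x - 0) dx = ∫ 1 dx = 2
val = pv_integral(lambda x: x, -1.0, 1.0, 0.0)
```

The same subtraction idea underlies practical evaluations of the plasma dispersion function, whose real part is a principal-value integral of this type.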
Energy Technology Data Exchange (ETDEWEB)
Wang, Hanyu; Wang, Xu; Yu, Junsheng, E-mail: luzhiyun@scu.edu.cn, E-mail: jsyu@uestc.edu.cn [State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054 (China); Zhou, Jie; Lu, Zhiyun, E-mail: luzhiyun@scu.edu.cn, E-mail: jsyu@uestc.edu.cn [College of Chemistry, Sichuan University, Chengdu 610064 (China)
2014-08-11
A high performance organic integrated device (OID) with ultraviolet photodetective and electroluminescent (EL) properties was fabricated by using a charge-transfer-featured naphthalimide derivative of 6-(3,5-bis-[9-(4-t-butylphenyl)-9H-carbazol-3-yl]-phenoxy)-2-(4-t-butylphenyl)-benzo[de]isoquinoline-1,3-dione (CzPhONI) as the active layer. The results showed that the OID had a high detectivity of 1.5 × 10^11 Jones at −3 V under UV-350 nm illumination with an intensity of 0.6 mW/cm^2, and yielded an exciplex EL light emission with a maximum brightness of 1437 cd/m^2. Based on the energy band diagram, both the charge transfer feature of CzPhONI and the matched energy level alignment were responsible for the dual ultraviolet photodetective and EL functions of the OID.
Energy Technology Data Exchange (ETDEWEB)
Das, Sanjoy Kumar, E-mail: sanjoydasju@gmail.com; Khanam, Jasmina; Nanda, Arunabha
2016-12-01
In the present investigation, a simplex lattice mixture design was applied for formulation development and optimization of a controlled release dosage form of ketoprofen microspheres consisting of polymers such as ethylcellulose and Eudragit® RL 100, formed by the oil-in-oil emulsion solvent evaporation method. The investigation was carried out to observe the effects of polymer amount, stirring speed and emulsifier concentration (% w/w) on percentage yield, average particle size, drug entrapment efficiency and in vitro drug release in 8 h from the microspheres. Analysis of variance (ANOVA) was used to estimate the significance of the models. Numerical optimization was carried out based on the desirability function approach. The optimized formulation (KTF-O) showed a close match between actual and predicted responses with a desirability factor of 0.811. No adverse reaction between drug and polymers was observed on the basis of Fourier transform infrared (FTIR) spectroscopy and differential scanning calorimetry (DSC) analysis. Scanning electron microscopy (SEM) was carried out to show the discreteness of the microspheres (149.2 ± 1.25 μm) and their surface conditions during pre- and post-dissolution operations. The drug release pattern from KTF-O was best explained by the Korsmeyer-Peppas and Higuchi models. The batch of optimized microspheres was found to have maximum entrapment (~ 90%), minimum loss (~ 10%) and prolonged drug release for 8 h (91.25%), which may be considered favourable criteria for a controlled release dosage form. - Graphical abstract: Optimization of preparation method for ketoprofen-loaded microspheres consisting of polymeric blends using simplex lattice mixture design. - Highlights: • Simplex lattice design was used to optimize ketoprofen-loaded microspheres. • Polymeric blend (ethylcellulose and Eudragit® RL 100) was used. • Microspheres were prepared by the oil-in-oil emulsion solvent evaporation method. • Optimized formulation depicted favourable
Directory of Open Access Journals (Sweden)
Georgia Doxani
2015-10-01
Full Text Available The Sentinel missions have been designed to support the operational services of the Copernicus program, ensuring long-term availability of data for a wide range of spectral, spatial and temporal resolutions. In particular, Sentinel-2 (S-2) data with improved high spatial resolution and higher revisit frequency (five days with the pair of satellites in operation) will play a fundamental role in recording land cover types and monitoring land cover changes at regular intervals. Nevertheless, cloud coverage usually hinders the time series availability and consequently the continuous land surface monitoring. In an attempt to alleviate this limitation, the synergistic use of instruments with different features is investigated, aiming at the future synergy of the S-2 MultiSpectral Instrument (MSI) and the Sentinel-3 (S-3) Ocean and Land Colour Instrument (OLCI). To that end, an unmixing model is proposed with the intention of integrating the benefits of the two Sentinel missions, when both are in orbit, in one composite image. The main goal is to fill the data gaps in the S-2 record, based on the more frequent information of the S-3 time series. The proposed fusion model has been applied on MODIS (MOD09GA L2G) and SPOT4 (Take 5) data and the experimental results have demonstrated that the approach has high potential. However, the different acquisition characteristics of the sensors, i.e. illumination and viewing geometry, should be taken into consideration and bidirectional effects correction has to be performed in order to reduce noise in the reflectance time series.
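The core of an unmixing model of this kind is a fraction-weighted linear mixture: each coarse pixel's reflectance is modeled as the mixture of unknown per-class reflectances, recovered by least squares over many coarse pixels. The sketch below is a toy version under that assumption; class counts, fractions, and noise level are invented for illustration and the real model must additionally handle geometry and bidirectional effects, as the abstract notes.

```python
import numpy as np

# Toy linear unmixing: y_coarse = F @ r_class + noise, solve for r_class.
rng = np.random.default_rng(2)
n_coarse, n_classes = 500, 4
F = rng.dirichlet(np.ones(n_classes), size=n_coarse)    # land-cover fractions per coarse pixel
r_true = np.array([0.05, 0.12, 0.30, 0.45])             # per-class reflectances (unknown in practice)
y = F @ r_true + 0.001 * rng.standard_normal(n_coarse)  # observed coarse reflectance + noise

r_est, *_ = np.linalg.lstsq(F, y, rcond=None)           # recovered class reflectances
```

The recovered class reflectances can then be assigned back to the fine-resolution class map to fill cloud gaps in the high-resolution record.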
Momentum integral network method for thermal-hydraulic transient analysis
International Nuclear Information System (INIS)
Van Tuyle, G.J.
1983-01-01
A new momentum integral network method has been developed, and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determined nodal flows and enthalpies, accounting for fluid compression and thermal expansion
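The segment-averaged momentum balance behind this kind of method can be sketched in a few lines: integrating the one-dimensional momentum equation over the linked nodes of a segment gives an ODE for the segment mass flow. The coefficients below are made up for illustration; this sketches the Meyer-type segment idea, not MINET itself.

```python
import math

# Momentum integral over a one-dimensional flow segment:
#   (sum_i L_i / A_i) * dW/dt = P_in - P_out - R * W * |W|
# where W is the segment mass flow rate and R lumps the friction losses.
L_over_A = [2.0, 3.0, 1.5]          # nodal inertias L_i / A_i (illustrative)
I = sum(L_over_A)                   # segment inertia
R = 4.0                             # lumped friction coefficient (illustrative)
P_in, P_out = 150.0, 100.0          # boundary pressures (illustrative)

W, dt = 0.0, 1.0e-3
for _ in range(200_000):            # explicit Euler march to steady state
    W += dt * (P_in - P_out - R * W * abs(W)) / I

W_steady = math.sqrt((P_in - P_out) / R)   # analytic steady state: dP = R * W^2
```

Nodal mass and energy conservation (not shown) then determine the flows and enthalpies within the segment, as the abstract describes.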
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm^-1. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross-validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
International Nuclear Information System (INIS)
Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.
2015-01-01
This study describes variational energy expressions and analytical excited state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solvent models are examined. Enforcing a variational ground-state energy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited state energy expressions are then provided for the linear response and vertical excitation models and analytical gradients are formulated. Using semiempirical model chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performing microcanonical excited state molecular dynamics with p-nitroaniline.
Ding, Kun; Chan, C. T.
2018-04-01
The calculation of optical force density distribution inside a material is challenging at the nanoscale, where quantum and nonlocal effects emerge and macroscopic parameters such as permittivity become ill-defined. We demonstrate that the microscopic optical force density of nanoplasmonic systems can be defined and calculated using the microscopic fields generated using a self-consistent hydrodynamics model that includes quantum, nonlocal, and retardation effects. We demonstrate this technique by calculating the microscopic optical force density distributions and the optical binding force induced by external light on nanoplasmonic dimers. This approach works even in the limit when the nanoparticles are close enough to each other so that electron tunneling occurs, a regime in which classical electromagnetic approach fails completely. We discover that an uneven distribution of optical force density can lead to a light-induced spinning torque acting on individual particles. The hydrodynamics method offers us an accurate and efficient approach to study optomechanical behavior for plasmonic systems at the nanoscale.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...
International Nuclear Information System (INIS)
Baechle, R.-D.; Hehn, G.; Pfister, G.; Perlini, G.; Matthes, W.
1984-01-01
Single material benchmark experiments are designed to check neutron and gamma cross-sections of importance for deep penetration problems. At various penetration depths a large number of activation detectors and spectrometers are placed to measure the radiation field as completely as possible. The large amount of data measured in benchmark experiments can be evaluated best by the global detector concept applied to nuclear data adjustment. A new iteration procedure is presented for the adjustment of a large number of multigroup cross sections, which has now been implemented in the modular adjustment code ADJUST-EUR. A theoretical test problem has been devised to check the total program system with high precision. The method and code will be applied to validate the new European Data Files (JEF and EFF) in progress. (Auth.)
Integrated management of thesis using clustering method
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
Thesis is one of the major requirements for a student pursuing a bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and the appraisal process by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time to sit together in a seminar room in order to examine the thesis. Therefore, the seminar scheduling process should be the top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus including students, staff, and lecturers demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicting with their timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing main facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.
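The abstract does not name the specific clustering algorithm; as a hedged illustration, a minimal k-means over invented thesis features (the feature names and data are ours, not the paper's) could look like:

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm) with deterministic farthest-point init.
def kmeans(X, k, iters=100):
    centers = [X[0]]
    for _ in range(k - 1):                 # farthest-point seeding
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # assign each point to nearest center
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# e.g. cluster thesis records by two illustrative features
# (progress score, days since last consultation), here as two synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.2, size=(30, 2)),
               rng.normal([5.0, 5.0], 0.2, size=(30, 2))])
labels, centers = kmeans(X, 2)
```

Grouping records this way (e.g. "on track" vs. "stalled") is one plausible way clustering could feed the scheduling and monitoring facilities described above.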
IMP: Integrated method for power analysis
Energy Technology Data Exchange (ETDEWEB)
1989-03-01
An integrated, easy to use, economical package of microcomputer programs has been developed which can be used by small hydro developers to evaluate potential sites for small scale hydroelectric plants in British Columbia. The programs enable evaluation of sites located far from the nearest stream gauging station, for which streamflow data are not available. For each of the province's 6 hydrologic regions, a streamflow record for one small watershed is provided in the data base. The program can then be used to generate synthetic streamflow records and to compare results obtained by the modelling procedure with the actual data. The program can also be used to explore the significance of modelling parameters and to develop a detailed appreciation for the accuracy which can be obtained under various circumstances. The components of the program are an atmospheric model of precipitation; a watershed model that will generate a continuous series of streamflow data, based on information from the atmospheric model; a flood frequency analysis system that uses site-specific topographic data plus information from the atmospheric model to generate a flood frequency curve; a hydroelectric power simulation program which determines daily energy output for a run-of-river or reservoir storage site based on selected generation facilities and the time series generated in the watershed model; and a graphic analysis package that provides direct visualization of data and modelling results. This report contains a description of the programs, a user guide, the theory behind the model, the modelling methodology, and results from a workshop that reviewed the program package. 32 refs., 16 figs., 18 tabs.
Deterministic methods to solve the integral transport equation in neutronics
International Nuclear Information System (INIS)
Warin, X.
1993-11-01
We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is above all used to compute solutions in 2D heterogeneous assemblies. Three kinds of methods are described: - the collision probability method; - the interface current method; - the current coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs
International Nuclear Information System (INIS)
Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian
2015-01-01
An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r⁻² instead of r⁻¹. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure.
Directory of Open Access Journals (Sweden)
Yuanbin Yu
2016-01-01
This paper presents a new method for battery degradation estimation using a power-energy (PE) function in a battery/ultracapacitor hybrid energy storage system (HESS); an integrated optimization concerning both parameter matching and control of the HESS has been carried out as well. A semiactive topology of HESS with an electric double-layer capacitor (EDLC) coupled directly to the DC-link is adopted for a hybrid electric city bus (HECB). To establish the quantitative relationship between system parameters and battery service life, the data from a 37-minute driving cycle were first collected and decomposed into discharging/charging fragments, and then an optimal control strategy, intended to make maximal use of the available EDLC energy, is presented to split the power between battery and EDLC. Furthermore, based on a battery degradation model, the conversion of the power demand by the PE function and PE matrix is applied to evaluate the relationship between the energy available in the HESS and the service life of the battery pack. Thus, following an approach that decouples parameter matching and optimal control of the HESS, the process of battery degradation and its service-life estimation for the HESS has been summarized.
Quadratic algebras in the noncommutative integration method of wave equation
International Nuclear Information System (INIS)
Varaksin, O.L.
1995-01-01
The paper deals with the investigation of applications of the method of noncommutative integration of linear differential equations by partial derivatives. Nontrivial example was taken for integration of three-dimensions wave equation with the use of non-Abelian quadratic algebras
New method for calculation of integral characteristics of thermal plumes
DEFF Research Database (Denmark)
Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor
2008-01-01
A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low velocity thermoanemometers. The method includes a procedure for calculation...... of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person. The method was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting...
INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES
Directory of Open Access Journals (Sweden)
H. Shen
2012-08-01
Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
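The MAP-plus-gradient-descent idea can be sketched with a quadratic toy problem: each observation k is modelled as y_k ≈ A_k x (one full-resolution noisy view, one clean down-sampled view), and gradient descent minimizes the summed residuals plus a small quadratic prior. The matrices, data values, and regularization weight below are invented toy values, not the paper's observation models:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def map_fuse(obs, lam=1e-3, lr=0.1, iters=2000):
    """Gradient descent on the quadratic MAP objective
       J(x) = sum_k ||y_k - A_k x||^2 + lam * ||x||^2."""
    n = len(obs[0][0][0])          # number of unknowns (columns of each A_k)
    x = [0.0] * n
    for _ in range(iters):
        grad = [2 * lam * xi for xi in x]          # prior term
        for A, y in obs:
            r = [ai - yi for ai, yi in zip(matvec(A, x), y)]   # A x - y
            for g_i, t in enumerate(matvec(transpose(A), r)):
                grad[g_i] += 2 * t                 # data-fit term
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# Toy fusion: a full-resolution noisy view and an exact 2x down-sampled view
A1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
y1 = [1.1, 1.9, 3.2, 3.8]                  # direct observation
A2 = [[0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5]]  # spatial degradation model
y2 = [1.5, 3.5]                            # low-resolution observation
x_hat = map_fuse([(A1, y1), (A2, y2)])
```

Because the two toy observations are mutually consistent, the fused estimate essentially reproduces the full-resolution signal; with inconsistent data the MAP solution would balance the two residual terms instead.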
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very...... significant source of design errors. A wide range of interface specifications are possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks....
Alternative containment integrity test methods, an overview of possible techniques
International Nuclear Information System (INIS)
Spletzer, B.L.
1986-01-01
A study is being conducted to develop and analyze alternative methods for testing of containment integrity. The study is focused on techniques for continuously monitoring containment integrity to provide rapid detection of existing leaks, thus providing greater certainty of the integrity of the containment at any time. The study is also intended to develop techniques applicable to the currently required Type A integrated leakage rate tests. A brief discussion of the range of alternative methods currently being considered is presented. The methods include applicability to all major containment types, operating and shutdown plant conditions, and quantitative and qualitative leakage measurements. The techniques are analyzed in accordance with the current state of knowledge of each method. The bulk of the techniques discussed are in the conceptual stage, have not been tested in actual plant conditions, and are presented here as a possible future direction for evaluating containment integrity. Of the methods considered, no single method provides optimum performance for all containment types. Several methods are limited in the types of containment for which they are applicable. The results of the study to date indicate that techniques for continuous monitoring of containment integrity exist for many plants and may be implemented at modest cost
International Nuclear Information System (INIS)
Edgar, S.B.
1990-01-01
The structures of the N.P. and G.H.P formalisms are reviewed in order to understand and demonstrate the important role played by the commutator equations in the associated integration procedures. Particular attention is focused on how the commutator equations are to be satisfied, or checked for consistency. It is shown that Held's integration method will only guarantee genuine solutions of Einstein's equations when all the commutator equations are correctly and completely satisfied. (authors)
Two pricing methods for solving an integrated commercial fishery ...
African Journals Online (AJOL)
a model (Hasan and Raffensperger, 2006) to solve this problem: the integrated ... planning and labour allocation for that processing firm, but did not consider any fleet- .... the DBONP method actually finds such price information, and uses it.
Critical Analysis of Methods for Integrating Economic and Environmental Indicators
Huguet Ferran, Pau; Heijungs, Reinout; Vogtländer, Joost G.
2018-01-01
The application of environmental strategies requires scoring and evaluation methods that provide an integrated vision of the economic and environmental performance of systems. The vector optimisation, ratio and weighted addition of indicators are the three most prevalent techniques for addressing
A simple flow-concentration modelling method for integrating water ...
African Journals Online (AJOL)
A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...
APPLICATION OF BOUNDARY INTEGRAL EQUATION METHOD FOR THERMOELASTICITY PROBLEMS
Directory of Open Access Journals (Sweden)
Vorona Yu.V.
2015-12-01
The Boundary Integral Equation Method is used to solve analytically the problems of coupled thermoelastic spherical wave propagation. The resulting mathematical expressions coincide with the solutions obtained in a conventional manner.
New Approaches to Aluminum Integral Foam Production with Casting Methods
Directory of Open Access Journals (Sweden)
Ahmet Güner
2015-08-01
Integral foam has been used in the production of polymer materials for a long time. Metal integral foam casting systems are obtained by transferring and adapting polymer injection technology. Metal integral foam produced by casting has a solid skin at the surface and a foam core. Producing near-net shape reduces production expenses. Insurance companies nowadays want the automotive industry to use metallic foam parts because of their higher impact energy absorption properties. In this paper, manufacturing processes of aluminum integral foam with casting methods will be discussed.
Tau method approximation of the Hubbell rectangular source integral
International Nuclear Information System (INIS)
Kalla, S.L.; Khajah, H.G.
2000-01-01
The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫₀^b (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
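A direct numerical quadrature of the integral as transcribed above is easy to sketch and is the sort of reference value a Tau-method expansion would be checked against. This is a plain composite-Simpson benchmark under that reading of the integrand, not the Tau method itself:

```python
import math

def hubbell_I(a, b, n=2000):
    """Composite Simpson's rule for
       I(a,b) = ∫_0^b (1/sqrt(1+x^2)) * arctan(a / sqrt(1+x^2)) dx
    — a direct quadrature benchmark, not the Tau-method expansion."""
    if n % 2:
        n += 1                      # Simpson needs an even interval count
    h = b / n

    def f(x):
        s = math.sqrt(1.0 + x * x)
        return math.atan(a / s) / s

    total = f(0.0) + f(b)
    total += 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(2 * i * h) for i in range(1, n // 2))
    return total * h / 3.0

val = hubbell_I(1.0, 1.0)
```

Because the integrand is smooth on [0, b], the Simpson value converges rapidly in n and can serve as the "numerically integrated" comparison the abstract mentions.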
Assessing Backwards Integration as a Method of KBO Family Finding
Benfell, Nathan; Ragozzine, Darin
2018-04-01
The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.
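The reason backwards integration can recover earlier orbital states at all is that symplectic integrators such as leapfrog are time-reversible: running the same scheme with a negated time step retraces the trajectory. The two-body setup below (one test particle around a unit-mass central body) is a minimal stand-in for illustration, not the Haumea-family simulations described here:

```python
def accel(pos):
    # Point-mass gravity with GM = 1 centred at the origin
    x, y = pos
    r3 = (x * x + y * y) ** 1.5
    return (-x / r3, -y / r3)

def leapfrog(pos, vel, dt, steps):
    """Kick-drift-kick leapfrog. The scheme is time-reversible, so
    integrating with -dt retraces the forward trajectory."""
    ax, ay = accel(pos)
    for _ in range(steps):
        vx = vel[0] + 0.5 * dt * ax          # half kick
        vy = vel[1] + 0.5 * dt * ay
        pos = (pos[0] + dt * vx, pos[1] + dt * vy)   # drift
        ax, ay = accel(pos)
        vel = (vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)  # half kick
    return pos, vel

p0, v0 = (1.0, 0.0), (0.0, 1.0)                  # circular orbit, GM = 1
p1, v1 = leapfrog(p0, v0, dt=0.01, steps=5000)   # forward ~8 orbits
p2, v2 = leapfrog(p1, v1, dt=-0.01, steps=5000)  # back to t = 0
```

In exact arithmetic the backward pass reproduces the initial state exactly; in floating point only roundoff accumulates, which is why simulation parameters (step size, included perturbers) rather than the integrator itself dominate the error budget in long Kuiper-belt integrations.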
Explicit integration of extremely stiff reaction networks: partial equilibrium methods
International Nuclear Information System (INIS)
Guidry, M W; Hix, W R; Billings, J J
2013-01-01
In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
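The explicit asymptotic update that these papers build on can be sketched in a few lines: for a species obeying dy/dt = F⁺ − k·y, the asymptotic approximation replaces the unstable explicit-Euler step with an algebraically stabilized one. The production term and destruction rate below are hypothetical toy values, not a real reaction network:

```python
def asymptotic_step(y, f_plus, k, dt):
    """Explicit asymptotic update for dy/dt = F+ - k*y.
    Rearranging the backward-Euler form gives an explicit expression
    that stays stable even when dt >> 1/k."""
    return (y + dt * f_plus) / (1.0 + dt * k)

# Hypothetical stiff toy system: production F+ = 100, destruction rate k = 1e6,
# integrated with a step 1000x larger than the 1/k relaxation timescale.
y, f_plus, k, dt = 0.0, 100.0, 1.0e6, 1.0e-3
for _ in range(50):
    y = asymptotic_step(y, f_plus, k, dt)
# y relaxes to the equilibrium F+/k = 1e-4 without the explicit-Euler blow-up
```

An ordinary explicit Euler step y + dt·(F⁺ − k·y) would oscillate and diverge at this step size (|1 − dt·k| = 999); the asymptotic form instead contracts toward equilibrium, which is the behaviour the partial equilibrium method extends to coupled reaction groups.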
Approximation of the exponential integral (well function) using sampling methods
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
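A sampling-based estimate of the well function is easy to sketch in one dimension. Writing E1(u) = ∫_u^∞ e^(−t)/t dt and substituting t = u + s with s ~ Exp(1) gives E1(u) = e^(−u)·E[1/(u+s)], and Latin hypercube sampling in 1-D amounts to placing one inverse-CDF draw in each of n equal-probability strata. This is one standard construction consistent with the abstract's LHS idea, not the paper's exact estimator; sample size and seed are arbitrary:

```python
import math, random

def exp1_lhs(u, n=20000, seed=1):
    """Latin hypercube (stratified) estimate of the exponential integral
       E1(u) = ∫_u^∞ e^{-t}/t dt = e^{-u} * E[1/(u + s)],  s ~ Exp(1)."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        v = (i + rng.random()) / n      # one uniform draw per stratum of [0,1)
        s = -math.log1p(-v)             # inverse-CDF transform to Exp(1)
        total += 1.0 / (u + s)
    return math.exp(-u) * total / n

estimate = exp1_lhs(1.0)   # true E1(1) ≈ 0.219384
```

Because stratification removes most of the between-stratum variance, the estimate converges far faster than plain Monte Carlo, mirroring the convergence-rate comparison reported in the abstract.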
Directory of Open Access Journals (Sweden)
Orkun Oztürk
BACKGROUND: Predicting the type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage site in protein molecules and determining its specificity is an important task which has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially for HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (like the shortage of available training data and the high dimensionality of the feature space) turn this task into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed in order to increase the accuracy of the classification model. In addition, for several classification problems characterized by having few samples and many features, selecting the most relevant features is a major factor for increasing classification accuracy. RESULTS: We propose for HIV-1 data a consistency-based feature selection approach in conjunction with recursive feature elimination of support vector machines (SVMs). We used various classifiers for evaluating the results obtained from the feature selection process. We further demonstrated the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied on HIV-1 data, and we evaluated the reported results based on attributes which have been selected from different combinations. CONCLUSION: Applying feature selection on training data before carrying out the classification task seems to be a reasonable data-mining process when working with types of data similar to HIV-1. On HIV-1 data, some feature selection or extraction operations in conjunction with different classifiers have been tested and noteworthy outcomes have been reported. These facts motivate the work presented in this paper. SOFTWARE AVAILABILITY: The software is available at http
The nuclear N-body problem and the effective interaction in self-consistent mean-field methods
International Nuclear Information System (INIS)
Duguet, Thomas
2002-01-01
This work deals with two aspects of mean-field-type methods extensively used in low-energy nuclear structure. The first study is at the mean-field level. The link between the wave function describing an even-even nucleus and its odd-even neighbor is revisited. To get a coherent description as a function of the pairing intensity in the system, the utility of formalizing this link through a two-step process is demonstrated. This two-step process allows one to identify the role played by different channels of the force when a nucleon is added to the system. In particular, perturbative formulae evaluating the contribution of time-odd components of the functional to the nucleon separation energy are derived for zero and realistic pairing intensities. Self-consistent calculations validate the developed scheme as well as the derived perturbative formulae. This first study concludes with an extended analysis of the odd-even mass staggering in nuclei. The new scheme allows the identification of the contributions to this observable coming from different channels of the force. The necessity of a better understanding of time-odd terms in order to decide which odd-even mass formula extracts the pairing gap most properly is identified. These terms being nowadays more or less out of control, extended studies are needed to refine the fit of a pairing force through the comparison of theoretical and experimental odd-even mass differences. The second study deals with beyond-mean-field methods taking care of the correlations associated with large-amplitude oscillations in nuclei. Their effects are usually incorporated through the GCM or the projected mean-field method. We derive, for the first time, a perturbation theory motivating such variational calculations from a diagrammatic point of view. Resumming two-body correlations in the energy expansion, we obtain an effective interaction removing the hard-core problem in the context of configuration-mixing calculations. Proceeding to a
International Nuclear Information System (INIS)
Robin, Caroline
2014-01-01
This thesis project takes part in the development of the multiparticle-multihole configuration mixing method aiming to describe the structure of atomic nuclei. Based on a double variational principle, this approach allows one to determine the expansion coefficients of the wave function and the single-particle states at the same time. In this work we apply for the first time the fully self-consistent formalism of the mp-mh method to the description of a few p- and sd-shell nuclei, using the D1S Gogny interaction. A first study of the 12C nucleus is performed in order to test the doubly iterative convergence procedure when different types of truncation criteria are applied to select the many-body configurations included in the wave function. A detailed analysis of the effect caused by the orbital optimization is conducted. In particular, its impact on the one-body density and on the fragmentation of the ground-state wave function is analyzed. A systematic study of sd-shell nuclei is then performed. A careful analysis of the correlation content of the ground state is first conducted, and observable quantities such as binding and separation energies, as well as charge radii, are calculated and compared to experimental data. Satisfactory results are found. Spectroscopic properties are also studied. Excitation energies of low-lying states are found in very good agreement with experiment, and the study of magnetic dipole features is also satisfactory. Calculations of electric quadrupole properties, and in particular transition probabilities B(E2), however, reveal a clear lack of collectivity of the wave function, due to the reduced valence space used to select the many-body configurations. Although the renormalization of orbitals leads to an important fragmentation of the ground-state wave function, only little effect is observed on B(E2) probabilities. A tentative explanation is given. Finally, the structure description of nuclei provided by the multiparticle
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.
Iterative algorithm for the volume integral method for magnetostatics problems
International Nuclear Information System (INIS)
Pasciak, J.E.
1980-11-01
Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparison of iterative algorithms with the elimination method of GFUN3D shows that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double-layer dipole magnet are given. Error estimates for the linearized problem are also derived
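The practical point of the abstract — a well-conditioned symmetric system is cheap to solve iteratively, using only matrix-vector products and a few work vectors — can be illustrated with a plain conjugate-gradient sketch. The small SPD matrix below is an invented stand-in, not a discretized volume-integral operator from GFUN3D:

```python
def cg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients for a symmetric positive-definite system A x = b
    (A stored dense as lists of lists). Unlike elimination, only
    matrix-vector products and O(n) vectors are needed."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD test system standing in for a linearized Galerkin matrix
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cg(A, b)
```

For a well-conditioned matrix the iteration count is modest and independent of whether the matrix is ever formed explicitly, which is the source of the time and memory advantage over elimination reported above.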
Zeng, Irene Sui Lan; Lumley, Thomas
2018-01-01
Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inference and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and the use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
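The two ingredients — a Bayesian posterior over failure modes given observed symptoms, and choosing the next test by how much it is expected to sharpen that posterior — can be sketched with a naive-Bayes toy. The failure modes, symptoms, priors, and likelihoods below are invented numbers for illustration; the paper's Bayesian network is richer than this independence-assuming sketch:

```python
import math

# Hypothetical transformer failure modes with prior probabilities
priors = {"overheating": 0.5, "partial_discharge": 0.3, "winding_fault": 0.2}
# P(symptom present | failure mode) — illustrative numbers only
likelihood = {
    "high_C2H4": {"overheating": 0.9, "partial_discharge": 0.2, "winding_fault": 0.3},
    "high_H2":   {"overheating": 0.3, "partial_discharge": 0.9, "winding_fault": 0.4},
}

def posterior(evidence):
    """P(mode | evidence), evidence = {symptom: True/False}, assuming
    symptoms are conditionally independent given the failure mode."""
    post = dict(priors)
    for sym, present in evidence.items():
        for m in post:
            p = likelihood[sym][m]
            post[m] *= p if present else (1.0 - p)
    z = sum(post.values())
    return {m: v / z for m, v in post.items()}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def best_next_test(evidence, candidates):
    """Multistep evidence selection: pick the unobserved symptom whose
    outcome minimises the expected posterior entropy."""
    post = posterior(evidence)
    def expected_H(sym):
        p_yes = sum(likelihood[sym][m] * post[m] for m in post)
        h = 0.0
        for outcome, w in ((True, p_yes), (False, 1.0 - p_yes)):
            if w > 0:
                h += w * entropy(posterior({**evidence, sym: outcome}))
        return h
    return min(candidates, key=expected_H)

post = posterior({"high_C2H4": True})
nxt = best_next_test({"high_C2H4": True}, ["high_H2"])
```

Observing one symptom shifts the posterior toward the mode that best explains it, and the selection step then ranks the remaining tests by expected information gain rather than running them all.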
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
Metriplectic Gyrokinetics and Discretization Methods for the Landau Collision Integral
Hirvijoki, Eero; Burby, Joshua W.; Kraus, Michael
2017-10-01
We present two important results for the kinetic theory and numerical simulation of warm plasmas: 1) We provide a metriplectic formulation of collisional electrostatic gyrokinetics that is fully consistent with the First and Second Laws of Thermodynamics. 2) We provide a metriplectic temporal and velocity-space discretization for the particle phase-space Landau collision integral that satisfies the conservation of energy, momentum, and particle densities to machine precision, as well as guarantees the existence of a numerical H-theorem. The properties are demonstrated algebraically. These two results have important implications: 1) Numerical methods addressing the Vlasov-Maxwell-Landau system of equations, or its reduced gyrokinetic versions, should start from a metriplectic formulation to preserve the fundamental physical principles also at the discrete level. 2) The plasma physics community should search for a metriplectic reduction theory that would serve a similar purpose as the existing Lagrangian and Hamiltonian reduction theories do in gyrokinetics. The discovery of the metriplectic formulation of collisional electrostatic gyrokinetics is strong evidence in favor of such a theory and, if uncovered, the theory would be invaluable in constructing reduced plasma models. Supported by U.S. DOE Contract Nos. DE-AC02-09-CH11466 (EH) and DE-AC05-06OR23100 (JWB) and by European Union's Horizon 2020 research and innovation Grant No. 708124 (MK).
Computation of rectangular source integral by rational parameter polynomial method
International Nuclear Information System (INIS)
Prabha, Hem
2001-01-01
Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to H(a,b), is solved by the rational parameter polynomial method, and H(a,b) is then computed from I(a,b). In this method the integral I(a,b) is expressed as a polynomial in a rational parameter: whereas a function f(x) is generally expressed in terms of x, here it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared with the earlier approach. Results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with values obtained by numerically integrating the integral. Accuracy can be increased either by raising the degree of the polynomial or by subdividing the range of integration. The results for H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
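The rational-parameter idea can be sketched numerically: instead of expanding a function in powers of x, expand it in powers of t = x/(1+x), which maps the whole half-line onto [0, 1) so that one polynomial stays accurate over a wide range of x. The snippet below is an illustration only; it fits arctan as a stand-in for the paper's I(a,b), not the actual source integral.

```python
import numpy as np

# Fit a degree-6 polynomial in the rational parameter t = x/(1+x).
# arctan is a hypothetical stand-in for the bounded integral of the paper.
def fit_rational_parameter(f, degree=6, x_max=20.0, n=400):
    x = np.linspace(0.0, x_max, n)
    t = x / (1.0 + x)                    # rational parameter in [0, 1)
    return np.polyfit(t, f(x), degree)

def eval_rational(coeffs, x):
    return np.polyval(coeffs, x / (1.0 + x))

coeffs = fit_rational_parameter(np.arctan)

# Accuracy holds between fit nodes and out to large x.
x_test = np.array([0.3, 2.0, 15.0])
err = np.abs(eval_rational(coeffs, x_test) - np.arctan(x_test))
print(err.max())
```

A plain degree-6 expansion in x would be useless at x = 15; the change of variable is what buys the wide range.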
An integration weighting method to evaluate extremum coordinates
International Nuclear Information System (INIS)
Ilyushchenko, V.I.
1990-01-01
The numerical version of the Laplace asymptotics has been used to evaluate the coordinates of extrema of multivariate continuous and discontinuous test functions. The computer experiments performed demonstrate the high efficiency of the proposed integration method. The saturating dependence of the extremum coordinates on parameters such as the number of integration subregions and the exponent K, which (theoretically) tends to infinity, has been studied in detail; the limitand is a ratio of two Laplace integrals with the objective function raised to the power K. The method is an integral equivalent of the method of weighted means. As opposed to the standard optimization methods of zeroth, first and second order, the proposed method can also be successfully applied to optimize discontinuous objective functions. The integration method is applicable in cases where conventional techniques fail due to poor analytical properties of the objective functions near extremal points, and it is efficient in searching for both local and global extrema of multimodal objective functions. 12 refs.; 4 tabs
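As we read it, the method estimates an extremum coordinate as a ratio of two Laplace integrals, x* ≈ ∫ x f(x)^K dx / ∫ f(x)^K dx, which for large exponent K concentrates at the maximizer even when f is discontinuous. A minimal one-dimensional sketch (the test function is our illustrative choice, not from the paper):

```python
import numpy as np

# Weighted-mean estimate of the argmax via exponentiated Laplace integrals:
# for large K the weight f(x)**K concentrates near the maximizer of f.
def argmax_by_laplace(f, a, b, K=200, n=20001):
    x = np.linspace(a, b, n)
    w = f(x) ** K                        # exponentiated objective as weight
    return float((x * w).sum() / w.sum())  # weighted mean on a uniform grid

# A discontinuous test function whose supremum is approached at x = 0.6.
f = lambda x: np.where(x < 0.6, 1.0 + x, 0.5)
x_star = argmax_by_laplace(f, 0.0, 1.0)
print(x_star)   # close to 0.6
```

No derivatives are needed, which is exactly why the approach survives discontinuities that defeat first- and second-order optimizers.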
DEFF Research Database (Denmark)
Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten
2016-01-01
The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize … flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension … brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables.
Higher-Order Integral Equation Methods in Computational Electromagnetics
DEFF Research Database (Denmark)
Jørgensen, Erik; Meincke, Peter
Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...
Two pricing methods for solving an integrated commercial fishery ...
African Journals Online (AJOL)
In this paper, we develop two novel pricing methods for solving an integer program. We demonstrate the methods by solving an integrated commercial fishery planning model (IFPM). In this problem, a fishery manager must schedule fishing trawlers (determine when and where the trawlers should go fishing, and when the ...
Method for integrating a train of fast, nanosecond wide pulses
International Nuclear Information System (INIS)
Rose, C.R.
1987-01-01
This paper describes a method used to integrate a train of fast, nanosecond-wide pulses. The pulses come from current transformers in an RF LINAC beamline. Because they are ac signals and have no dc component, true mathematical integration would yield zero over the pulse-train period, or an equally erroneous value because of a dc baseline shift. The circuit used to integrate the pulse train first stretches the pulses to 35 ns FWHM. The signals are then fed into a high-speed precision rectifier, which restores a true dc baseline for the following stage, a fast gated integrator. The rectifier is linear over 55 dB in excess of 25 MHz, and the gated integrator is linear over a 60 dB range with input pulse widths as short as 16 ns. The assembled system is linear over 30 dB with a 6 MHz input signal
A study of compositional verification based IMA integration method
Huang, Hui; Zhang, Guoquan; Xu, Wanmeng
2018-03-01
The rapid development of avionics systems is driving the application of integrated modular avionics (IMA). While IMA improves avionics system integration, it also increases the complexity of system test, so the IMA test method needs to be simplified. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, failures in an IMA system are difficult to isolate; the critical problem for IMA system verification is therefore how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system, but completely testing a large, highly integrated avionics system is hard to achieve. This paper therefore applies compositional-verification theory to IMA system test, reducing the number of test processes, improving efficiency, and consequently lowering the cost of IMA system integration.
INTEGRATED SENSOR EVALUATION CIRCUIT AND METHOD FOR OPERATING SAID CIRCUIT
Krüger, Jens; Gausa, Dominik
2015-01-01
WO15090426A1 Sensor evaluation device and method for operating said device Integrated sensor evaluation circuit for evaluating a sensor signal (14) received from a sensor (12), having a first connection (28a) for connection to the sensor and a second connection (28b) for connection to the sensor. The integrated sensor evaluation circuit comprises a configuration data memory (16) for storing configuration data which describe signal properties of a plurality of sensor control signals (26a-c). T...
Mathematical methods linear algebra normed spaces distributions integration
Korevaar, Jacob
1968-01-01
Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions. The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector
User's guide to Monte Carlo methods for evaluating path integrals
Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan
2018-04-01
We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
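A minimal version of the kind of computation the guide describes, Metropolis sampling of the lattice path integral for the quantum harmonic oscillator (m = ω = 1), might look as follows; parameter values are illustrative, and the autocorrelation analysis and over-relaxation discussed above are omitted:

```python
import numpy as np

# Lattice path-integral Monte Carlo for the harmonic oscillator.
# Periodic Euclidean lattice of N time slices with spacing a; a Metropolis
# sweep proposes a local update of each slice in turn.
rng = np.random.default_rng(0)
N, a, delta = 64, 0.5, 1.0          # slices, lattice spacing, proposal width
x = np.zeros(N)                     # cold start

def action_diff(x, i, xi_new):
    # Change in the Euclidean action from updating site i only.
    xl, xr = x[(i - 1) % N], x[(i + 1) % N]
    s_old = (xr - x[i])**2 / (2*a) + (x[i] - xl)**2 / (2*a) + a * x[i]**2 / 2
    s_new = (xr - xi_new)**2 / (2*a) + (xi_new - xl)**2 / (2*a) + a * xi_new**2 / 2
    return s_new - s_old

def sweep(x):
    for i in range(N):
        prop = x[i] + rng.uniform(-delta, delta)
        if rng.random() < np.exp(-action_diff(x, i, prop)):
            x[i] = prop

for _ in range(500):                # thermalization
    sweep(x)
x2 = []
for _ in range(2000):               # measurement sweeps
    sweep(x)
    x2.append(np.mean(x**2))
print(np.mean(x2))                  # near the continuum value 1/2
```

In a production run one would bin the measurements or estimate the integrated autocorrelation time before quoting an error bar, which is the guide's central point.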
An integral nodal variational method for multigroup criticality calculations
International Nuclear Information System (INIS)
Lewis, E.E.; Tsoulfanidis, N.
2003-01-01
An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cultural adaptation and translation of measures: an integrated method.
Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen
2010-04-01
Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.
Dependent failure analysis research for the US NRC Risk Methods Integration and Evaluation Program
International Nuclear Information System (INIS)
Bohn, M.P.; Stack, D.W.; Campbell, D.J.; Rooney, J.J.; Rasmuson, D.M.
1985-01-01
The Risk Methods Integration and Evaluation Program (RMIEP), which is being performed for the Nuclear Regulatory Commission by Sandia National Laboratories, has the goals of developing new risk assessment methods and integrating the new and existing methods in a uniform procedure for performing an in-depth probabilistic risk assessment (PRA) with consistent levels of analysis for internal, external, and dependent failure scenarios. An important part of RMIEP is the recognition of the crucial importance of dependent common cause failures (CCFs) and the pressing need to develop effective methods for analyzing CCFs as part of a PRA. The NRC-sponsored Integrated Dependent Failure Methodology Program at Sandia is addressing this need. This paper presents a preliminary approach for analyzing CCFs as part of a PRA. A nine-step procedure for efficiently screening and analyzing dependent failure scenarios is presented, and each step is discussed
Mullany, Britta; Barlow, Allison; Neault, Nicole; Billy, Trudy; Hastings, Ranelda; Coho-Mescal, Valerie; Lorenzo, Sherilyn; Walkup, John T.
2013-01-01
Computer-assisted interviewing techniques have increasingly been used in program and research settings to improve data collection quality and efficiency. Little is known, however, regarding the use of such techniques with American Indian (AI) adolescents in collecting sensitive information. This brief compares the consistency of AI adolescent…
Chafouleas, Sandra M.; Riley-Tillman, T. Chris; Sassu, Kari A.; LaFrance, Mary J.; Patwa, Shamim S.
2007-01-01
In this study, the consistency of on-task data collected across raters using either a Daily Behavior Report Card (DBRC) or systematic direct observation was examined to begin to understand the decision reliability of using DBRCs to monitor student behavior. Results suggested very similar conclusions might be drawn when visually examining data…
Modeling of the 3RS tau protein with self-consistent field method and Monte Carlo simulation
Leermakers, F.A.M.; Jho, Y.S.; Zhulina, E.B.
2010-01-01
Using a model with amino acid resolution of the 196 aa N-terminus of the 3RS tau protein, we performed both a Monte Carlo study and a complementary self-consistent field (SCF) analysis to obtain detailed information on conformational properties of these moieties near a charged plane (mimicking the
The 3D Lagrangian Integral Method
DEFF Research Database (Denmark)
Rasmussen, Henrik Koblitz
2003-01-01
… These are processes such as thermo-forming, gas-assisted injection moulding and all kinds of simultaneous multi-component polymer processing operations. In all polymer processing operations free surfaces (or interfaces) are present, and the dynamics of these surfaces are of interest. In the "3D Lagrangian Integral Method" to simulate viscoelastic flow, the governing equations are solved for the particle positions (Lagrangian kinematics). Therefore, the transient motion of surfaces can be followed in a particularly simple fashion even in 3D viscoelastic flow. The "3D Lagrangian Integral Method" is described …
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2014-01-01
We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. … The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account compressive non-linearities other than the logarithmic one usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non…
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The research purpose of this paper is to show the limitations of the existing radiometric normalization approaches and their disadvantages for change detection of artificial objects by comparing the existing approaches, on the basis of which a preprocessing approach to radiometric consistency, based on wavelet transform and a spatial low-pass filter, has been devised. This approach first separates the high-frequency information and low-frequency information by wavelet transform. Then, relative radiometric consistency processing based on a low-pass filter is conducted on the low-frequency parts. After processing, an inverse wavelet transform is conducted to obtain the result image. The experimental results show that this approach can substantially reduce the influence on change detection of linear or nonlinear radiometric differences in multi-temporal images.
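The pipeline described above (wavelet split, radiometric adjustment of the low-frequency band, inverse transform) can be sketched in one dimension with a hand-rolled Haar transform. This is our simplified reading, not the authors' implementation: a linear gain/offset fit stands in for their low-pass-filter-based consistency processing, and the signals are synthetic.

```python
import numpy as np

# Single-level Haar analysis/synthesis (input length must be even).
def haar_fwd(s):
    e, o = s[0::2], s[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)   # approx, detail

def haar_inv(a, d):
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def normalize(target, reference):
    at, dt = haar_fwd(target)
    ar, _ = haar_fwd(reference)
    # Adjust only the low-frequency band toward the reference; the
    # high-frequency details (edges, small objects) are kept intact.
    g, b = np.polyfit(at, ar, 1)
    return haar_inv(g * at + b, dt)

ref = np.sin(np.linspace(0, 6, 256))
tgt = 1.8 * ref + 0.4                      # purely radiometric difference
out = normalize(tgt, ref)
print(np.abs(out - ref).max())             # small residual
```

Because the detail band is left untouched, a genuine structural change between the two images would survive the normalization, which is the motivation stated in the abstract.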
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some
Directory of Open Access Journals (Sweden)
Jacob W. Malcom
2016-07-01
Managers of large, complex wildlife conservation programs need information on the conservation status of each of many species to help strategically allocate limited resources. Oversimplifying status data, however, runs the risk of missing information essential to strategic allocation. Conservation status consists of two components, the status of threats a species faces and the species' demographic status. Neither component alone is sufficient to characterize conservation status. Here we present a simple key for scoring threat and demographic changes for species using detailed information provided in free-form textual descriptions of conservation status. This key is easy to use (simple), captures the two components of conservation status without the cost of more detailed measures (sufficient), and can be applied by different personnel to any taxon (consistent). To evaluate the key's utility, we performed two analyses. First, we scored the threat and demographic status of 37 species recently recommended for reclassification under the Endangered Species Act (ESA) and 15 control species, then compared our scores to two metrics used for decision-making and reports to Congress. Second, we scored the threat and demographic status of all non-plant ESA-listed species from Florida (54 spp.), and evaluated scoring repeatability for a subset of those. While the metrics reported by the U.S. Fish and Wildlife Service (FWS) are often consistent with our scores in the first analysis, the results highlight two problems with the oversimplified metrics. First, we show that both metrics can mask underlying demographic declines or threat increases; for example, ∼40% of species not recommended for reclassification had changes in threats or demography. Second, we show that neither metric is consistent with either threats or demography alone, but conflates the two. The second analysis illustrates how the scoring key can be applied to a substantial set of species to
A numerical integration-based yield estimation method for integrated circuits
International Nuclear Information System (INIS)
Liang Tao; Jia Xinzhang
2011-01-01
A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
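The Box-Cox step of the method can be illustrated with a toy example: skewed "simulated performance" data is transformed toward normality, and the yield (the probability that the performance meets its spec) follows from the fitted normal CDF at the transformed spec limit. This sketch omits the OA-MLHS sampling and the acceptability-region integration of the paper; the distribution and spec value are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
perf = rng.lognormal(mean=0.0, sigma=0.4, size=5000)   # skewed performance data
spec = 2.0                                             # accept if perf < spec

# Box-Cox transformation (BCT): map data to near-normality, then fit a normal.
y, lam = stats.boxcox(perf)
mu, sd = y.mean(), y.std(ddof=1)
spec_t = stats.boxcox(np.array([spec]), lmbda=lam)[0]  # spec in transformed space
yield_est = stats.norm.cdf(spec_t, mu, sd)

yield_emp = (perf < spec).mean()                       # Monte Carlo reference
print(yield_est, yield_emp)                            # should agree closely
```

The point of the transformation is that the normal-theory estimate needs far fewer samples than a raw Monte Carlo pass/fail count to reach the same variance, which is what makes the approach attractive for yield optimization loops.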
A numerical integration-based yield estimation method for integrated circuits
Energy Technology Data Exchange (ETDEWEB)
Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)
2011-04-15
Energy Technology Data Exchange (ETDEWEB)
Webster, Mort David [MIT]
2015-03-10
This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.
Application of Stochastic Sensitivity Analysis to Integrated Force Method
Directory of Open Access Journals (Sweden)
X. F. Wei
2012-01-01
As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimate of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program since the models of stochastic finite element and stochastic design sensitivity are almost identical.
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
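The convolution-integral idea, representing the numerical substructure's response as the discrete convolution of its impulse-response function with the force history (Duhamel's integral), can be checked on a damped single-degree-of-freedom system under a step load, for which the closed-form response is known. This is a generic sketch, not the paper's real-time implementation; all parameter values are made up.

```python
import numpy as np

# Damped SDOF system: m*x'' + c*x' + k*x = F(t), with zeta = c / (2*m*wn).
m, k, zeta = 1.0, 100.0, 0.05
wn = np.sqrt(k / m)
wd = wn * np.sqrt(1 - zeta**2)
dt, n = 0.001, 5000
t = np.arange(n) * dt

# Unit impulse response and a unit step force history.
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)
F = np.full(n, 1.0)

# Duhamel integral as a discrete convolution (rectangle rule).
x = np.convolve(F, h)[:n] * dt

# Closed-form step response for comparison.
x_exact = (1 / k) * (1 - np.exp(-zeta * wn * t)
                     * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t)))
print(np.abs(x - x_exact).max())       # discretization error only
```

Because the convolution uses only the precomputed impulse response, the per-step cost is independent of the internal complexity of the numerical model, which is the property the paper exploits for real-time hybrid simulation.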
International Nuclear Information System (INIS)
Nishiyama, Seiya; Providencia, Joao da; Komatsu, Takao
2007-01-01
To go beyond perturbative methods in terms of the variables of collective motion, using infinite-dimensional fermions, we have aimed to construct a self-consistent-field (SCF) theory, i.e., a time-dependent Hartree-Fock theory on associative affine Kac-Moody algebras along the lines of soliton theory. In this paper, toward such an ultimate goal, we reconstruct a theoretical frame for a υ-dependent (υ an external parameter) SCF method to describe more precisely the dynamics on the infinite-dimensional fermion Fock space. An infinite-dimensional fermion operator is introduced through a Laurent expansion of finite-dimensional fermion operators with respect to degrees of freedom of the fermions related to a υ-dependent and Υ-periodic potential. As an illustration, we derive explicit expressions for the Laurent coefficients of soliton solutions for sl(n) and for su(n) on the infinite-dimensional Grassmannian. The associative affine Kac-Moody algebras play a crucial role in determining the dynamics on the infinite-dimensional fermion Fock space
Integral methods in science and engineering theoretical and practical aspects
Constanda, C; Rollins, D
2006-01-01
Presents a series of analytic and numerical methods of solution constructed for important problems arising in science and engineering, based on the powerful operation of integration. This volume is meant for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students.
An approximation method for nonlinear integral equations of Hammerstein type
International Nuclear Information System (INIS)
Chidume, C.E.; Moore, C.
1989-05-01
The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
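A sketch of the fixed-point (Picard) iteration for a Hammerstein-type equation u(x) = f(x) + ∫₀¹ k(x,y) g(u(y)) dy, discretized on a uniform grid. The kernel and nonlinearity below are illustrative choices, small and Lipschitz enough for a contraction, so successive corrections shrink at least geometrically, as the abstract states.

```python
import numpy as np

n = 201
x = np.linspace(0, 1, n)
w = np.full(n, 1.0 / n)                             # simple quadrature weights
K = 0.3 * np.exp(-np.abs(x[:, None] - x[None, :]))  # kernel k(x, y), kept small
f = np.cos(x)
g = np.sin                                          # Lipschitz nonlinearity

# Picard iteration: u_{m+1} = f + K g(u_m); the map is a contraction here,
# so the step sizes decay like a geometric progression.
u = np.zeros(n)
steps = []
for _ in range(30):
    u_new = f + K @ (g(u) * w)
    steps.append(np.abs(u_new - u).max())
    u = u_new

resid = np.abs(u - (f + K @ (g(u) * w))).max()
print(resid)             # tiny residual: the discrete equation is satisfied
```

The explicit error estimate in the paper corresponds to bounding steps[m] by C·q^m for a contraction constant q < 1; the printed residual confirms convergence to the discrete fixed point.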
The philosophy and method of integrative humanism and religious ...
African Journals Online (AJOL)
This paper titled “Philosophy and Method of Integrative Humanism and Religious Crises in Nigeria: Picking the Essentials”, acknowledges the damaging effects of religious bigotry, fanaticism and creed differences on the social, political and economic development of the country. The need for the cessation of religious ...
An Integrated Approach to Research Methods and Capstone
Postic, Robert; McCandless, Ray; Stewart, Beth
2014-01-01
In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…
Confluent education: an integrative method for nursing (continuing) education.
Francke, A.L.; Erkens, T.
1994-01-01
Confluent education is presented as a method to bridge the gap between cognitive and affective learning. Attention is focused on three main characteristics of confluent education: (a) the integration of four overlapping domains in a learning process (readiness, the cognitive domain, the affective
On the solution of high order stable time integration methods
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.
2013-01-01
Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108
Educational integrating projects as a method of interactive learning
Directory of Open Access Journals (Sweden)
Иван Николаевич Куринин
2013-12-01
The article describes a method of interactive learning based on educational integrating projects. Some examples of content of such projects for the disciplines related to the study of information and Internet technologies and their application in management are presented.
Integrating Expressive Methods in a Relational-Psychotherapy
Directory of Open Access Journals (Sweden)
Richard G. Erskine
2011-06-01
Therapeutic Involvement is an integral part of all effective psychotherapy. This article is written to illustrate the concept of Therapeutic Involvement in working within a therapeutic relationship (within the transference) and with active expressive and experiential methods to resolve traumatic experiences, relational disturbances and life-shaping decisions.
The integral equation method applied to eddy currents
International Nuclear Information System (INIS)
Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.
1976-04-01
An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)
A practical implementation of the higher-order transverse-integrated nodal diffusion method
International Nuclear Information System (INIS)
Prinsloo, Rian H.; Tomašević, Djordje I.; Moraal, Harm
2014-01-01
Highlights: • A practical higher-order nodal method is developed for diffusion calculations. • The method resolves the issue of the transverse leakage approximation. • The method achieves much superior accuracy as compared to standard nodal methods. • The calculational cost is only about 50% greater than standard nodal methods. • The method is packaged in a module for connection to existing nodal codes. - Abstract: Transverse-integrated nodal diffusion methods currently represent the standard in full core neutronic simulation. The primary shortcoming of this approach is the utilization of the quadratic transverse leakage approximation. This approach, although proven to work well for typical LWR problems, is not consistent with the formulation of nodal methods and can cause accuracy and convergence problems. In this work, an improved, consistent quadratic leakage approximation is formulated, which derives from the class of higher-order nodal methods developed some years ago. Further, a number of iteration schemes are developed around this consistent quadratic leakage approximation which yields accurate node average results in much improved calculational times. The most promising of these iteration schemes results from utilizing the consistent leakage approximation as a correction method to the standard quadratic leakage approximation. Numerical results are demonstrated on a set of benchmark problems and further applied to a realistic reactor problem, particularly the SAFARI-1 reactor, operating at Necsa, South Africa. The final optimal solution strategy is packaged into a standalone module which may simply be coupled to existing nodal diffusion codes
Integral equation models for image restoration: high accuracy methods and fast algorithms
International Nuclear Information System (INIS)
Lu, Yao; Shen, Lixin; Xu, Yuesheng
2010-01-01
Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
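The regularization step described in this abstract can be sketched in a few lines. The toy below is our own 1-D construction, not the paper's out-of-focus model or its Lavrentiev/multiscale machinery: a Gaussian blur operator discretized by the midpoint rule, inverted with plain Tikhonov regularization via the normal equations.

```python
import numpy as np

def gaussian_blur_matrix(n, sigma=0.05):
    """Midpoint-rule discretization of a Gaussian convolution kernel on [0, 1]."""
    t = (np.arange(n) + 0.5) / n
    K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma**2))
    return K / (n * sigma * np.sqrt(2.0 * np.pi))

def tikhonov_restore(K, g, alpha):
    """Minimize ||K f - g||^2 + alpha ||f||^2 via the normal equations."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

n = 200
t = (np.arange(n) + 0.5) / n
f_true = np.sin(2.0 * np.pi * t)                 # toy 1-D "image"
K = gaussian_blur_matrix(n)
g = K @ f_true + 1e-4 * np.random.default_rng(0).normal(size=n)  # blur + noise
f_rec = tikhonov_restore(K, g, alpha=1e-3)
```

Without the `alpha` term the first-kind problem is severely ill-conditioned; the regularized solve recovers the signal to within a few percent.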
System and method for integrating hazard-based decision making tools and processes
Hodgin, C Reed [Westminster, CO
2012-03-20
A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.
Bartholomew, Theodore T; Lockard, Allison J
2018-06-13
Mixed methods can foster depth and breadth in psychological research. However, their use remains under development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and how each used mixed methods. Key features of mixed methods designs and these common patterns were identified: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.
Zhu, Ying; Herbert, John M.
2018-01-01
The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
DEFF Research Database (Denmark)
Karimi, Yaser; Oraee, Hashem; Guerrero, Josep M.
2017-01-01
This paper proposes a new decentralized power management and load sharing method for a photovoltaic based, hybrid single/three-phase islanded microgrid consisting of various PV units, battery units and hybrid PV/battery units. The proposed method is not limited to the systems with separate PV...... in different load, PV generation and battery conditions is validated experimentally in a microgrid lab prototype consisting of one three-phase unit and two single-phase units....
Dhage Iteration Method for Generalized Quadratic Functional Integral Equations
Directory of Open Access Journals (Sweden)
Bapurao C. Dhage
2015-01-01
In this paper we prove the existence as well as approximations of the solutions for a certain nonlinear generalized quadratic functional integral equation. An algorithm for the solutions is developed, and it is shown that the sequence of successive approximations starting at a lower or upper solution converges monotonically to the solutions of the related quadratic functional integral equation under suitable mixed hybrid conditions. Our main result relies on the Dhage iteration method embodied in a recent hybrid fixed point theorem of Dhage (2014) in partially ordered normed linear spaces. An example is also provided to illustrate the abstract theory developed in the paper.
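A numerical sketch of the successive-approximation scheme: we iterate a quadratic functional integral equation of the classic form x(t) = f(t, x(t)) (q(t) + ∫₀ᵗ g(s, x(s)) ds) on a grid. The data f, g, q below are our illustrative choices; the paper's hybrid conditions (monotonicity from a lower/upper solution) are not verified here, only the convergence of the iteration.

```python
import numpy as np

def dhage_iteration(f, g, q, t, x0, tol=1e-10, max_iter=200):
    """Successive approximations for the quadratic functional integral equation
        x(t) = f(t, x(t)) * ( q(t) + int_0^t g(s, x(s)) ds ),
    discretized on the grid t with the trapezoidal rule."""
    x = x0.copy()
    for _ in range(max_iter):
        integrand = g(t, x)
        integral = np.concatenate(([0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        x_new = f(t, x) * (q(t) + integral)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative data (our choices, not from the paper):
t = np.linspace(0.0, 1.0, 201)
f = lambda t, x: 1.0 + 0.1 * np.tanh(x)   # bounded, increasing nonlinearity
g = lambda s, x: 0.2 * np.cos(x)
q = lambda t: 0.5 + 0.1 * t
x = dhage_iteration(f, g, q, t, x0=np.zeros_like(t))  # start from a lower function
```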
Entropic sampling in the path integral Monte Carlo method
International Nuclear Information System (INIS)
Vorontsov-Velyaminov, P N; Lyubartsev, A P
2003-01-01
We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
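The Wang-Landau flat-histogram idea at the core of this abstract is easy to demonstrate on a classical toy, leaving aside the path-integral representation: below we estimate the density of states of non-interacting up/down spins, where the exact answer is a binomial coefficient. The system, the pass length and the flatness criterion are our illustrative choices.

```python
import numpy as np

def wang_landau(n_spins=8, steps_per_pass=4000, lnf_final=1e-5, flat=0.8, seed=1):
    """Wang-Landau estimate of ln g(E) for n_spins independent up/down spins,
    with E = number of up spins (the exact g(E) is the binomial coefficient)."""
    rng = np.random.default_rng(seed)
    spins = np.zeros(n_spins, dtype=int)
    E = 0
    lng = np.zeros(n_spins + 1)       # running estimate of ln g(E)
    hist = np.zeros(n_spins + 1)      # visit histogram for the flatness check
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(steps_per_pass):
            i = rng.integers(n_spins)
            E_new = E + (1 - 2 * spins[i])               # a flip changes E by +/-1
            if np.log(rng.random()) < lng[E] - lng[E_new]:   # accept w.p. g(E)/g(E')
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf                                # update at the current state
            hist[E] += 1
        if hist.min() > flat * hist.mean():              # histogram flat enough?
            lnf *= 0.5                                   # refine modification factor
            hist[:] = 0
    return lng - lng[0]                                  # normalize so g(0) = 1

lng = wang_landau()
```

As in the paper, a single run yields the whole density of states, from which canonical averages at any temperature follow by reweighting.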
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
Nuclear methods - an integral part of the NBS certification program
International Nuclear Information System (INIS)
Gills, T.E.
1984-01-01
Within the past twenty years, new techniques and methods have emerged in response to new technologies that depend on the performance of high-purity, well-characterized materials. The National Bureau of Standards, through its Standard Reference Materials (SRMs) Program, provides standards in the form of many of these materials to ensure accuracy and the compatibility of measurements throughout the US and the world. These SRMs are developed using state-of-the-art methods and procedures for both preparation and analysis. Nuclear methods, in particular activation analysis, constitute an integral part of that analysis process
Harden, Angela; Thomas, James; Cargo, Margaret; Harris, Janet; Pantoja, Tomas; Flemming, Kate; Booth, Andrew; Garside, Ruth; Hannes, Karin; Noyes, Jane
2018-05-01
The Cochrane Qualitative and Implementation Methods Group develops and publishes guidance on the synthesis of qualitative and mixed-method evidence from process evaluations. Despite a proliferation of methods for the synthesis of qualitative research, less attention has focused on how to integrate these syntheses within intervention effectiveness reviews. In this article, we report updated guidance from the group on approaches, methods, and tools, which can be used to integrate the findings from quantitative studies evaluating intervention effectiveness with those from qualitative studies and process evaluations. We draw on conceptual analyses of mixed methods systematic review designs and the range of methods and tools that have been used in published reviews that have successfully integrated different types of evidence. We outline five key methods and tools as devices for integration which vary in terms of the levels at which integration takes place; the specialist skills and expertise required within the review team; and their appropriateness in the context of limited evidence. In situations where the requirement is the integration of qualitative and process evidence within intervention effectiveness reviews, we recommend the use of a sequential approach. Here, evidence from each tradition is synthesized separately using methods consistent with each tradition before integration takes place using a common framework. Reviews which integrate qualitative and process evaluation evidence alongside quantitative evidence on intervention effectiveness in a systematic way are rare. This guidance aims to support review teams to achieve integration and we encourage further development through reflection and formal testing. Copyright © 2017 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Koch, K.R.
1985-01-01
A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations
Field Method for Integrating the First Order Differential Equation
Institute of Scientific and Technical Information of China (English)
JIA Li-qun; ZHENG Shi-wang; ZHANG Yao-yu
2007-01-01
An important modern method in analytical mechanics for finding integrals, called the field method, is applied to the solution of a first-order differential equation. First, by introducing an intermediate variable, a more complicated first-order differential equation is expressed as two simpler first-order differential equations; the field method of analytical mechanics is then introduced to solve these two equations. The conclusion shows that the field method of analytical mechanics can be fully exploited to find the solutions of a first-order differential equation, thus providing a new method of solution for such equations.
Computing thermal Wigner densities with the phase integration method
International Nuclear Information System (INIS)
Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.
2014-01-01
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Energy Technology Data Exchange (ETDEWEB)
Prinn, Ronald [MIT; Webster, Mort [MIT
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
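Designing a small number of scenarios that "usefully span the joint uncertainty space" is commonly done with stratified sampling. The sketch below draws a Latin hypercube over two parameter PDFs; the distributions and parameter names (`aeei`, `gdp_growth`) are our illustrative stand-ins, not the study's fitted PDFs.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n, inv_cdfs, seed=0):
    """Draw n scenarios spanning the joint parameter space: for each parameter,
    one stratified-uniform draw per equal-probability bin (a Latin hypercube),
    mapped through that parameter's inverse CDF."""
    rng = np.random.default_rng(seed)
    out = {}
    for name, inv_cdf in inv_cdfs.items():
        u = (rng.permutation(n) + rng.random(n)) / n    # one draw per bin
        out[name] = np.array([inv_cdf(v) for v in u])
    return out

# Illustrative parameter distributions (ours, not the study's fitted PDFs):
aeei = NormalDist(mu=1.0, sigma=0.3)   # autonomous energy efficiency improvement, %/yr
gdp = NormalDist(mu=2.5, sigma=0.8)    # GDP growth rate, %/yr
scenarios = latin_hypercube(8, {"aeei": aeei.inv_cdf, "gdp_growth": gdp.inv_cdf})
```

Each of the 8 scenarios lands in a distinct probability octile of every marginal, so even a handful of scenarios covers the tails as well as the center of the joint distribution.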
System integrational and migrational concepts and methods within healthcare
DEFF Research Database (Denmark)
Endsleff, F; Loubjerg, P
1997-01-01
In this paper an overview and comparison of the basic concepts and methods behind different system integrational implementations is given, including the DHE, which is based on the coming Healthcare Information Systems Architecture pre-standard HISA, developed by CEN TC251. This standard and the DHE...... (Distributed Healthcare Environment) not only provides highly relevant standards, but also provides an efficient and well structured platform for Healthcare IT Systems....
A geometrical method towards first integrals for dynamical systems
International Nuclear Information System (INIS)
Labrunie, S.; Conte, R.
1996-01-01
We develop a method, based on Darboux's and Liouville's works, to find first integrals and/or invariant manifolds for a physically relevant class of dynamical systems, without making any assumption on these elements' forms. We apply it to three dynamical systems: Lotka–Volterra, Lorenz and Rikitake. © 1996 American Institute of Physics
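What "first integral" means here can be checked numerically on the first of the three systems mentioned. The snippet below is not the paper's Darboux-style derivation; it simply verifies that the classical first integral of the Lotka-Volterra system stays constant along an orbit integrated with RK4 (parameter values are our illustrative choices).

```python
import numpy as np

def lv_rhs(z, a=1.0, b=0.5, c=0.3, d=0.8):
    """Lotka-Volterra vector field: x' = x (a - b y), y' = y (c x - d)."""
    x, y = z
    return np.array([x * (a - b * y), y * (c * x - d)])

def first_integral(z, a=1.0, b=0.5, c=0.3, d=0.8):
    """V = c x - d ln x + b y - a ln y is constant along every orbit."""
    x, y = z
    return c * x - d * np.log(x) + b * y - a * np.log(y)

def rk4_step(f, z, h):
    k1 = f(z)
    k2 = f(z + 0.5 * h * k1)
    k3 = f(z + 0.5 * h * k2)
    k4 = f(z + h * k3)
    return z + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

z = np.array([2.0, 1.0])      # initial populations
v0 = first_integral(z)
for _ in range(5000):         # integrate to t = 50
    z = rk4_step(lv_rhs, z, 0.01)
drift = abs(first_integral(z) - v0)
```

A direct computation confirms dV/dt = (c - d/x) x (a - b y) + (b - a/y) y (c x - d) = 0, so the only drift observed is the integrator's truncation error.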
Towards risk-based structural integrity methods for PWRs
International Nuclear Information System (INIS)
Chapman, O.J.V.; Lloyd, R.B.
1992-01-01
This paper describes the development of risk-based structural integrity assurance methods and their application to Pressurized Water Reactor (PWR) plant. In-service inspection is introduced as a way of reducing the failure probability of high risk sites and the latter are identified using reliability analysis; the extent and interval of inspection can also be optimized. The methodology is illustrated by reference to the aspect of reliability of weldments in PWR systems. (author)
INTEGRATED APPLICATION OF OPTICAL DIAGNOSTIC METHODS IN ULCERATIVE COLITIS
Directory of Open Access Journals (Sweden)
E. V. Velikanov
2013-01-01
Our results suggest that the combined use of optical coherence tomography (OCT) and fluorescence diagnosis helps to refine the nature and boundaries of the pathological process in the tissue of the colon in ulcerative colitis. Studies have shown that integrated optical diagnostics allows lesions to be differentiated in accordance with their histology and a decision to be made on the need for a biopsy and its site. This method is most appropriate in diagnostically difficult cases.
Li, Xiaofan; Nie, Qing
2009-07-01
Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time-step. To apply this method to a periodic (in the axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
International Nuclear Information System (INIS)
Filippov, A.V.; Dyatko, N.A.; Pal', A.F.; Starostin, A.N.
2003-01-01
A model of dust grain charging is constructed using the method of moments. The dust grain charging process in a weakly ionized helium plasma produced by a 100-keV electron beam at atmospheric pressure is studied theoretically. In simulations, the beam current density was varied from 1 to 10⁶ μA/cm². It is shown that, in a He plasma, dust grains of radius 5 μm and larger perturb the electron temperature only slightly, although the reduced electric field near the grain reaches 8 Td, the beam current density being 10⁶ μA/cm². It is found that, at distances from the grain that are up to several tens or hundreds of times larger than its radius, the electron and ion densities are lower than their equilibrium values. Conditions are determined under which the charging process may be described by a model with constant electron transport coefficients. The dust grain charge is shown to be weakly affected by secondary electron emission. In a beam-produced helium plasma, the dust grain potential calculated in the drift-diffusion model is shown to be close to that calculated in the orbit motion limited model. It is found that, in the vicinity of a body perturbing the plasma, there may be no quasineutral plasma presheath with an ambipolar diffusion of charged particles. The conditions for the onset of this presheath in a beam-produced plasma are determined
Evaluation of the filtered leapfrog-trapezoidal time integration method
International Nuclear Information System (INIS)
Roache, P.J.; Dietrich, D.E.
1988-01-01
An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed
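The flavor of such a weighted leapfrog-trapezoidal scheme can be sketched on the same 1-D constant-coefficient advection equation. The coefficients below (weight `w`, filter strength `eps`, the forward-Euler start-up) are our illustrative choices; the published FLT/FLTW formulations may differ in detail.

```python
import numpy as np

def advect_fltw(u0, c, dx, dt, nsteps, w=0.5, eps=0.05):
    """Weighted leapfrog-trapezoidal stepping for u_t + c u_x = 0 on a periodic
    grid, with a Robert-Asselin time filter. w and eps are illustrative; the
    published FLT/FLTW coefficients may differ."""
    def ddx(u):
        return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)   # centered difference
    u_prev = u0.copy()
    u = u0 - c * dt * ddx(u0)                  # start-up step: forward Euler
    for _ in range(nsteps - 1):
        u_lf = u_prev - 2.0 * c * dt * ddx(u)             # leapfrog predictor
        u_tr = u - 0.5 * c * dt * (ddx(u) + ddx(u_lf))    # trapezoidal corrector
        u_new = w * u_lf + (1.0 - w) * u_tr               # weighted combination
        u_prev = u + eps * (u_new - 2.0 * u + u_prev)     # Asselin-filtered u^n
        u = u_new
    return u

n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
c, dx, dt = 1.0, 1.0 / n, 0.5 / n              # Courant number 0.5
u = advect_fltw(u0, c, dx, dt, nsteps=2 * n)   # advect for one full period (t = 1)
```

After one full period the exact solution returns to the initial profile, so the remaining discrepancy measures the scheme's combined dispersion and filter damping.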
Investigation of Optimal Integrated Circuit Raster Image Vectorization Method
Directory of Open Access Journals (Sweden)
Leonas Jasevičius
2011-03-01
Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for extracting lines from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to be utilized to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. Article in Lithuanian.
Numerical method for solving integral equations of neutron transport. II
International Nuclear Information System (INIS)
Loyalka, S.K.; Tsai, R.W.
1975-01-01
In a recent paper it was pointed out that the weakly singular integral equations of neutron transport can be quite conveniently solved by a method based on subtraction of singularity. This previous paper was devoted entirely to the consideration of simple one-dimensional isotropic-scattering and one-group problems. The present paper presents interesting extensions of the previous work in that, in addition to a typical two-group anisotropic-scattering albedo problem in the slab geometry, the method is also applied to an isotropic-scattering problem in the x-y geometry. These results are compared with discrete S_N (ANISN or TWOTRAN-II) results, and for the problems considered here, the proposed method is found to be quite effective. Thus, the method appears to hold considerable potential for future applications. (auth)
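The subtraction-of-singularity device itself is generic and easy to demonstrate outside neutron transport. The sketch below (our own 1-D model kernel, not a transport equation) solves phi(x) = f(x) + lam ∫₀¹ |x-y|^(-1/2) phi(y) dy by splitting the integral into a regular part ∫ K (phi(y) - phi(x)) dy, treated with the midpoint rule, plus phi(x) times the analytically known weight w(x) = ∫₀¹ |x-y|^(-1/2) dy.

```python
import numpy as np

def solve_weakly_singular(f, lam, n):
    """Nystrom solution of  phi(x) = f(x) + lam * int_0^1 |x-y|^(-1/2) phi(y) dy
    by subtraction of the singularity: the regularized integrand
    K(x,y)(phi(y) - phi(x)) is integrated with the midpoint rule (zero diagonal),
    and the subtracted term uses w(x) = int_0^1 |x-y|^(-1/2) dy
    = 2 (sqrt(x) + sqrt(1-x)) in closed form."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    off = x[:, None] - x[None, :]
    mask = ~np.eye(n, dtype=bool)
    K = np.zeros((n, n))
    K[mask] = 1.0 / np.sqrt(np.abs(off[mask]))
    w = 2.0 * (np.sqrt(x) + np.sqrt(1.0 - x))
    A = lam * h * K
    np.fill_diagonal(A, lam * (w - h * K.sum(axis=1)))   # subtracted-singularity row
    return x, np.linalg.solve(np.eye(n) - A, f(x))

# Manufactured solution phi(x) = 1 + x; the forcing f is computed analytically:
lam = 0.1
I = lambda x: (2.0 * (1.0 + x) * (np.sqrt(x) + np.sqrt(1.0 - x))
               + (2.0 / 3.0) * ((1.0 - x) ** 1.5 - x ** 1.5))
f = lambda x: (1.0 + x) - lam * I(x)
x, phi = solve_weakly_singular(f, lam, n=400)
```

Because the singular weight is integrated exactly, the low-order quadrature on the remaining smooth factor already gives near-machine-level accuracy for this linear manufactured solution.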
Lugovtsova, Y. D.; Soldatov, A. I.
2016-01-01
Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen. The methods are low strain pile integrity testing, multichannel pile integrity testing and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, the results from it are used as a reference for the other two methods.
A simple reliability block diagram method for safety integrity verification
International Nuclear Information System (INIS)
Guo Haitao; Yang Xianhui
2007-01-01
IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. PFD_avg must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFD_avg calculations for its examples, it is difficult for reliability or safety engineers to follow when using the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFD_avg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFD_avg. The calculated results for various voted groups are compared with those in IEC 61508-6 and in Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41]. An interesting outcome emerges from the comparison: although differences in the MDTs of voted groups exist between IEC 61508-6 and this paper, the PFD_avg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, showing its similarity to the method in IEC 61508-6
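The MDT-first structure of such a calculation can be sketched in a stripped-down form. The version below keeps only dangerous-undetected failures and omits the diagnostic-coverage, repair-time and common-cause terms of the full IEC 61508-6 equations, so it is a pedagogical simplification, not the standard's (or the paper's) formulas.

```python
def pfd_avg(arch, lam_du, T):
    """Average probability of failure on demand for simple voted groups via
    mean down times: channel MDT t_CE = T/2, group MDT t_GE = T/3 for a
    two-channel coincidence. Dangerous-undetected failures only; diagnostics,
    repair and common-cause terms of IEC 61508-6 are deliberately omitted."""
    t_ce = T / 2.0   # a channel is down, on average, half the proof-test interval
    t_ge = T / 3.0   # mean down time of a two-channel coincidence
    if arch == "1oo1":
        return lam_du * t_ce
    if arch == "1oo2":   # both channels must be down to fail the function
        return 2.0 * lam_du**2 * t_ce * t_ge
    if arch == "2oo3":   # any two of the three channels down (3 pairs)
        return 6.0 * lam_du**2 * t_ce * t_ge
    raise ValueError(arch)

lam_du = 2e-6   # dangerous undetected failure rate per hour (illustrative)
T = 8760.0      # one-year proof-test interval, hours
results = {a: pfd_avg(a, lam_du, T) for a in ("1oo1", "1oo2", "2oo3")}
```

Even in this reduced form the qualitative ranking matches expectation: redundancy (1oo2, 2oo3) lowers PFD_avg by orders of magnitude relative to a single channel.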
Numerical Simulation of Antennas with Improved Integral Equation Method
International Nuclear Information System (INIS)
Ma Ji; Fang Guang-You; Lu Wei
2015-01-01
Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids with different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in a sparse form and to accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system reaches a stable state. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)
Li, Rundong; Li, Yanlong; Yang, Tianhua; Wang, Lei; Wang, Weiyun
2015-05-30
Evaluations of technologies for heavy metal control mainly examine the residual and leaching rates of a single heavy metal, so the evaluation methods developed so far lack coordination and uniqueness and are therefore unsuitable for evaluating hazard control effects. An overall pollution toxicity index (OPTI) was established in this paper, and based on this index, an integrated evaluation method for heavy metal pollution control was established. Application of this method to the melting and sintering of fly ash revealed the following results: the integrated control efficiency of the melting process was higher in all instances than that of the sintering process; the lowest integrated control efficiency of melting was 56.2%, and the highest integrated control efficiency of sintering was 46.6%. Using the same technology, higher integrated control efficiencies were all achieved at lower temperatures and with shorter times. This study demonstrated the unification and consistency of the method. Copyright © 2015 Elsevier B.V. All rights reserved.
Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco
2013-07-01
In the frame of a process aiming at harmonizing National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of the tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In the case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameter at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample
A comparison of non-integrating reprogramming methods
Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q
2015-01-01
Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882
Improved parallel solution techniques for the integral transport matrix method
Energy Technology Data Exchange (ETDEWEB)
Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)
2011-07-01
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method (ITMM) operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because the sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
Application of multiple timestep integration method in SSC
International Nuclear Information System (INIS)
Guppy, J.G.
1979-01-01
The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing that the MTS reduces computer running time by as much as a factor of five compared with a single timestep scheme.
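The core of a multiple-timestep scheme is sub-cycling: the fast subsystem takes several small steps inside each large step of the slow subsystem. The sketch below is a generic illustration of that idea (plain forward Euler on two weakly coupled decays, not the SSC partitioning itself):

```python
import math

def step_coupled(x_slow, x_fast, dt_slow, n_sub, f_slow, f_fast):
    """One macro step: the slow subsystem advances once with dt_slow, while
    the fast subsystem is sub-cycled n_sub times with dt_fast = dt_slow/n_sub,
    holding the slow state frozen during the sub-cycle."""
    x_slow_new = x_slow + dt_slow * f_slow(x_slow, x_fast)
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):
        x_fast = x_fast + dt_fast * f_fast(x_fast, x_slow)
    return x_slow_new, x_fast

# Two processes with time constants differing by a factor of 100
f_slow = lambda s, q: -0.1 * s     # slow process
f_fast = lambda q, s: -10.0 * q    # fast process
s, q = 1.0, 1.0
for _ in range(10):                # integrate to t = 1.0 with dt_slow = 0.1
    s, q = step_coupled(s, q, 0.1, 20, f_slow, f_fast)
```

The slow variable is advanced with only 10 function evaluations instead of the 200 a single (fast) timestep would require, which is the source of the reported speed-up.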
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
Partially integrable systems in multi-dimensions by a variant of the dressing method: part 1
International Nuclear Information System (INIS)
Zenchuk, A I; Santini, P M
2006-01-01
In this paper we construct nonlinear partial differential equations in more than three independent variables, possessing a manifold of analytic solutions with high, but not full, dimensionality. For this reason we call them 'partially integrable'. Such a construction is achieved using a suitable modification of the classical dressing scheme, which consists in assuming that the kernel of the basic integral operator of the dressing formalism is nontrivial. This new hypothesis leads to the construction of (1) a linear system of compatible spectral problems for the solution U(λ; x) of the integral equation in three independent variables each (while the usual dressing method generates spectral problems in one or two dimensions); (2) a system of nonlinear partial differential equations in n dimensions (n > 3), possessing a manifold of analytic solutions of dimension (n - 2), which includes one largely arbitrary relation among the fields. These nonlinear equations can also contain an arbitrary forcing.
Canonical integration and analysis of periodic maps using non-standard analysis and Lie methods
Energy Technology Data Exchange (ETDEWEB)
Forest, E.; Berz, M.
1988-06-01
We describe a method and a way of thinking which is ideally suited for the study of systems represented by canonical integrators. Starting from the continuous description provided by the Hamiltonian, we replace it with a succession of preferably canonical maps. The power series representation of these maps can be extracted with a computer implementation of the tools of Non-Standard Analysis and analyzed by the same tools. For a nearly integrable system, we can define a Floquet ring in a way consistent with our needs. Using the finite time maps, the Floquet ring is defined only at the locations s_i where one perturbs or observes the phase space. At most the total number of locations is equal to the total number of steps of our integrator. We can also produce pseudo-Hamiltonians which describe the motion induced by these maps. 15 refs., 1 fig.
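The idea of replacing a Hamiltonian flow by a succession of canonical maps can be sketched with the simplest example: a symplectic-Euler "kick-drift" map for the pendulum, H = p²/2 - cos(q). This is only an illustrative stand-in for the Lie-method machinery of the abstract, but it shows the hallmark of a canonical map: the energy error stays bounded over long iteration instead of drifting.

```python
import math

def kick_drift(q, p, dt):
    # One canonical (symplectic Euler) map for H = p^2/2 - cos(q):
    # a momentum kick from the potential, followed by a position drift.
    p = p - dt * math.sin(q)
    q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * p * p - math.cos(q)

q, p = 1.0, 0.0
h0 = energy(q, p)
drift = 0.0
for _ in range(5000):
    q, p = kick_drift(q, p, 0.05)
    drift = max(drift, abs(energy(q, p) - h0))
```

Composing many such finite-time maps is exactly the "succession of preferably canonical maps" picture: the phase space is only ever sampled at the map's step locations.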
Method for deposition of a conductor in integrated circuits
Creighton, J. Randall; Dominguez, Frank; Johnson, A. Wayne; Omstead, Thomas R.
1997-01-01
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, the people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model the integration requirement and transform it into an integration configuration. In addition, an integration case in a radiology scenario was used to verify the method.
Directory of Open Access Journals (Sweden)
Harshal Sabane
2015-01-01
Objective In recent years, there has been a gradual but definitive shift in medical schools all over the globe to promote a more integrated way of teaching. Integration of medical disciplines promotes a holistic understanding of the medical curriculum in the students. This helps them better understand and appreciate the importance and role of each medical subject. Method The study was conducted among the 5th-year pre-clinical students. The questionnaire consisted of 4 questions on the level of integration, 5 questions on various aspects of the assessment and some questions which tested the level of awareness of the integrated method. Result Out of a total of 72 students present on the day of data collection, 65 participated in the study, giving a response rate of 90.27%. After primary data cleansing, 4 questionnaires had to be omitted. Most of the students responded "good" or "very good" to the questions on integration and its attributes. Only 27 (44%) were aware of an integrated curriculum being taught in other medical schools in the Gulf. Similar findings were observed for the assessment-related questions. A reduction in the number of block exams is unpopular among the students, and only 6% agreed to 3, 4 or 5 non-summative block assessments. Opinion regarding the usefulness of integrated teaching for the IFOM-based OMSB entrance examination was mixed, with a greater variance in the responses. 43% of students indicated that they would like to spend more time with PDCI. Conclusion The students of our institution seem to have a favourable opinion of the integrated system of teaching. Satisfaction with the conduct of examinations and its related variables was found to be high. A reduction in the number of block exams, however, is unpopular among the target group, and they would appreciate a greater time allocation for the subjects of PDCI and Pharmacology.
Life cycle integrated thermoeconomic assessment method for energy conversion systems
International Nuclear Information System (INIS)
Kanbur, Baris Burak; Xiang, Liming; Dubey, Swapnil; Choo, Fook Hoong; Duan, Fei
2017-01-01
Highlights: • A new LCA-integrated thermoeconomic approach is presented. • The new unit fuel cost is found 4.8 times higher than the classic method. • The newly defined parameter increased the sustainability index by 67.1%. • The case studies are performed for countries with different CO2 prices. - Abstract: Life cycle assessment (LCA) based thermoeconomic modelling has been applied to the evaluation of energy conversion systems since it provides more comprehensive and applicable assessment criteria. This study proposes an improved thermoeconomic method, named life cycle integrated thermoeconomic assessment (LCiTA), which combines the LCA-based enviroeconomic parameters in the production steps of the system components and fuel with the conventional thermoeconomic method for energy conversion systems. A micro-cogeneration system is investigated and analyzed with the LCiTA method; the comparative studies show that the unit cost of fuel using the LCiTA method is 3.8 times higher than with the conventional thermoeconomic model. It is also realized that the enviroeconomic parameters during the operation of the system components do not have significant impacts on the system streams, since the exergetic parameters are dominant in the thermoeconomic calculations. Moreover, the improved sustainability index is found roughly 67.2% higher than the previously defined sustainability index, suggesting that the enviroeconomic and thermoeconomic parameters decrease the impact of the exergy destruction in the sustainability index definition. To find feasible operation conditions for the micro-cogeneration system, different assessment strategies are presented. Furthermore, a case study for Singapore is conducted to assess the impact of forecasted carbon dioxide prices on the thermoeconomic performance of the micro-cogeneration system.
DEFF Research Database (Denmark)
Djomo, Sylvestre Njakou; Knudsen, Marie Trydeman; Andersen, Mikael Skou
2017-01-01
There is an ongoing debate regarding the influence of the source location of pollution on the fate of pollutants and their subsequent impacts. Several methods have been developed to derive site-dependent characterization factors (CFs) for use in life-cycle assessment (LCA). Consistent, precise, a...
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models in a way where human interaction is kept to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Electre III method in assessment of variants of integrated urban public transport system in Cracow
Directory of Open Access Journals (Sweden)
Katarzyna SOLECKA
2014-12-01
There are many methods currently used for the assessment of urban public transport system development and operation, e.g. economic analysis, mostly Cost-Benefit Analysis (CBA) and Cost-Effectiveness Analysis (CEA), hybrid methods, measurement methods (surveys, e.g. among passengers, and measurements of traffic volume, vehicle capacity, etc.), and multicriteria decision aiding methods (multicriteria analysis). The main aim of multicriteria analysis is the choice of the most desirable solution from among alternative variants according to different criteria which are difficult to compare against one another. There are several multicriteria methods for the assessment of urban public transport system development and operation, e.g. AHP, ANP, Electre, Promethee and Oreste. The paper presents an application of one of the most popular variant ranking methods, the Electre III method. The algorithm of the Electre III method is presented in detail and then its application to the assessment of variants of urban public transport system integration in Cracow is shown. The final ranking of eight variants of integration of the urban public transport system in Cracow (from the best to the worst variant) was drawn up with the application of the Electre III method. For assessment purposes, 10 criteria were adopted: economic, technical, environmental and social; they form a consistent criteria family. The problem was analyzed taking into account different points of view: city authorities, public transport operators, city units responsible for transport management, passengers and other users. Separate models of preferences for all stakeholders were created.
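A core ingredient of Electre III is the pairwise concordance index, which measures the weighted support for the claim "variant a is at least as good as variant b" using indifference (q) and preference (p) thresholds. The sketch below is an illustrative fragment only (criteria assumed maximized, thresholds and data invented, and it omits Electre III's discordance/credibility and distillation steps):

```python
def concordance(perf_a, perf_b, weights, q, p):
    """Overall concordance c(a,b): weighted support for 'a outranks b'.
    q and p are the indifference and preference thresholds; per criterion,
    support falls linearly from 1 to 0 as b's advantage grows from q to p."""
    total = 0.0
    for ga, gb, w in zip(perf_a, perf_b, weights):
        d = gb - ga                       # advantage of b over a
        if d <= q:
            cj = 1.0                      # a at least as good: full support
        elif d >= p:
            cj = 0.0                      # b strictly preferred: no support
        else:
            cj = (p - d) / (p - q)        # partial support in between
        total += w * cj
    return total

# Two variants scored on two equally weighted criteria (invented numbers)
a, b = [10.0, 10.0], [10.0, 12.0]
w = [0.5, 0.5]
c_ab = concordance(a, b, w, q=1.0, p=3.0)
c_ba = concordance(b, a, w, q=1.0, p=3.0)
```

Here b's 2-point advantage on the second criterion lies between the thresholds, so c(a,b) gets only partial credit on that criterion while c(b,a) is full.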
Integrating Multiple Teaching Methods into a General Chemistry Classroom
Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella
1998-02-01
In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.
Integral equation methods for vesicle electrohydrodynamics in three dimensions
Veerapaneni, Shravan
2016-12-01
In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
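The role of the semi-implicit time-stepping mentioned above is to tame numerical stiffness: the stiff (here, linear) part of the right-hand side is treated implicitly, the rest explicitly, so the step size is not limited by the stiff rate. A minimal scalar analogue (not the vesicle equations themselves; k and the forcing are invented stand-ins for the stiff membrane-force term):

```python
def semi_implicit_step(u, dt, k, forcing):
    # Treat the stiff linear term -k*u implicitly and the forcing explicitly:
    #   u_new = u + dt * (-k * u_new + forcing)   =>   solve for u_new
    return (u + dt * forcing) / (1.0 + dt * k)

k = 1000.0   # stiff coefficient; explicit Euler would need dt < 2/k = 0.002
dt = 0.1     # far beyond the explicit stability limit, yet stable here
u = 1.0
for _ in range(50):
    u = semi_implicit_step(u, dt, k, forcing=1.0)
```

The iteration converges smoothly to the steady state 1/k even though dt·k = 100, which is the practical payoff of the semi-implicit treatment.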
Integrated Phoneme Subspace Method for Speech Feature Extraction
Directory of Open Access Journals (Sweden)
Park Hyunsin
2009-01-01
Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
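The projection step underlying the PCA variant can be sketched in a few lines of NumPy: estimate a subspace from training vectors via SVD of the centered data, then project each observed vector onto the leading components. This is generic PCA on synthetic data, not the paper's phoneme-subspace construction:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 synthetic "feature vectors" in 4 dims with one dominant variance direction
base = rng.normal(size=(500, 1)) @ np.array([[3.0, 1.0, 0.5, 0.1]])
X = base + 0.1 * rng.normal(size=(500, 4))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
Z = Xc @ Vt[:k].T          # project each vector onto the top-k subspace
X_rec = Z @ Vt[:k]         # reconstruction from the subspace
rel_err = np.linalg.norm(Xc - X_rec) / np.linalg.norm(Xc)
```

Because the synthetic data is nearly rank-one, two components reconstruct it almost exactly; in the paper's setting the same projection compresses filter-bank features onto the integrated phoneme subspace.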
Accelerometer method and apparatus for integral display and control functions
Bozeman, Richard J., Jr.
1992-06-01
Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
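The integrate-and-compare chain described above (acceleration → velocity → trip-point check) can be sketched numerically. This is an illustrative signal-processing fragment with an invented constant test signal and threshold, not the patented circuitry:

```python
import numpy as np

def accel_to_velocity(accel, dt):
    """Trapezoidal integration of a sampled acceleration signal to velocity,
    mimicking the integrate-then-display chain of the accelerometer package."""
    v = np.zeros_like(accel)
    v[1:] = np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)
    return v

dt = 0.001
t = np.arange(0.0, 1.0, dt)
accel = np.full_like(t, 2.0)        # constant 2 m/s^2 test signal
vel = accel_to_velocity(accel, dt)

trip_point = 1.5                    # alert threshold on the velocity signal
alert = bool(vel.max() > trip_point)
```

For a constant signal the trapezoid rule is exact, so the computed velocity matches v(t) = 2t and the trip point fires once the level is exceeded.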
Acoustic 3D modeling by the method of integral equations
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
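The FFT-accelerated matrix-vector product works because a translation-invariant kernel on a regular grid yields a (block-)Toeplitz matrix, whose action is a convolution. The simplest 1-D instance, a circulant matrix, is shown below; the dense product is kept only to verify the O(n log n) version (illustrative fragment, not the paper's layered-medium kernels):

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C @ x where C[i, j] = c[(i - j) % n]: the product is a circular
    convolution, so it is diagonalized by the DFT and costs O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(1)
n = 64
c = rng.normal(size=n)              # first column of the circulant matrix
x = rng.normal(size=n)

# Dense O(n^2) reference for verification only
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
y_fft = circulant_matvec(c, x)
y_dense = C @ x
```

For Toeplitz (rather than circulant) matrices, the same trick applies after zero-padded embedding into a circulant of twice the size, which is the standard device behind IE solvers of this kind.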
Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations
Directory of Open Access Journals (Sweden)
Han Guo
2012-01-01
The hierarchical (H-) matrices method is a general mathematical framework providing a highly compact representation and efficient numerical arithmetic. When applied in integral-equation (IE) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; therefore, both the CPU time and the memory requirement are reduced significantly. Its kernel-independent feature also makes it suitable for any kind of integral equation. To solve an H-matrices system, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available along with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic applications, and a group of parallel fast direct solvers is developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional "single level" preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
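The compression that makes H-matrices compact can be demonstrated on a single admissible block: the interaction between two well-separated point clusters is numerically low-rank. The sketch below uses a truncated SVD on an invented 1/|x - y| kernel block (real H-matrix codes use cheaper factorizations such as ACA, and a full block hierarchy):

```python
import numpy as np

# Kernel block between two well-separated 1-D point clusters; such
# admissible blocks are the ones an H-matrix stores in low-rank form.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(3.0, 4.0, 50)
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, S, Vt = np.linalg.svd(A)
k = 8                                     # retained rank
A_k = (U[:, :k] * S[:k]) @ Vt[:k]         # rank-k approximation of the block

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
storage_ratio = (2 * 50 * k) / (50 * 50)  # low-rank factors vs dense storage
```

A small rank already reproduces the block to high accuracy while storing only the two thin factors, which is where both the memory and CPU savings come from.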
A method for establishing integrity in software-based systems
International Nuclear Information System (INIS)
Staple, B.D.; Berg, R.S.; Dalton, L.J.
1997-01-01
In this paper, the authors present a digital system requirements specification method that has demonstrated a potential for improving the completeness of requirements while reducing ambiguity. It assists with making proper digital system design decisions, including the defense against specific digital system failure modes. It also helps define the technical rationale for all of the component and interface requirements. This approach is a procedural method that abstracts key features which are expanded in a partitioning that identifies and characterizes hazards and safety system function requirements. The key system features are subjected to a hierarchy that progressively defines their detailed characteristics and components. This process produces a set of requirements specifications for the system and all of its components. Based on application to nuclear power plants, the approach described here uses two ordered domains: plant safety followed by safety system integrity. Plant safety refers to those systems defined to meet the safety goals for the protection of the public. Safety system integrity refers to systems defined to ensure that the system can meet the safety goals. Within each domain, a systematic process is used to identify hazards and define the corresponding means of defense and mitigation. In both domains, the approach and structure are focused on the completeness of information and on eliminating ambiguities in the generation of safety system requirements that will achieve the plant safety goals.
DEFF Research Database (Denmark)
Karimi, Yaser; Guerrero, Josep M.; Oraee, Hashem
2016-01-01
This paper proposes a new decentralized power management and load sharing method for a photovoltaic based, hybrid single/three-phase islanded microgrid consisting of various PV units, battery units and hybrid PV/battery units. The proposed method takes into account the available PV power...... and battery conditions of the units to share the load among them and power flow among different phases is performed automatically through three-phase units. Modified active power-frequency droop functions are used according to operating states of each unit and the frequency level is used as trigger...... for switching between the states. Efficacy of the proposed method in different load, PV generation and battery conditions is validated experimentally in a microgrid lab prototype consisted of one three-phase unit and two single-phase units....
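The active power-frequency droop mentioned above has a simple steady-state consequence: all units settle at a common frequency, so the load splits inversely to the droop gains. The sketch below computes that steady state for the basic (unmodified) droop law f = f0 - k·P; the gains and load values are invented, and the paper's state-dependent modifications are not modeled:

```python
def droop_dispatch(f0, droop_gains, p_load):
    """Steady-state P-f droop sharing: each unit follows f = f0 - k_i * P_i
    and all units reach a common frequency f, so P_i = (f0 - f) / k_i and
    the common f follows from summing the unit powers to the total load."""
    inv = [1.0 / k for k in droop_gains]
    f = f0 - p_load / sum(inv)
    powers = [(f0 - f) / k for k in droop_gains]
    return f, powers

# Two units: the stiffer droop (smaller gain) picks up more of the load
f, (p1, p2) = droop_dispatch(f0=50.0, droop_gains=[0.01, 0.02], p_load=300.0)
```

With gains 0.01 and 0.02 Hz/kW the first unit carries twice the power of the second, and the common frequency sag also serves as the shared signal that the paper uses to trigger state switching.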
Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA
2006-12-19
A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
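The "get" methods that derive a missing attribute via a transformation can be sketched as a tiny generated-class analogue. Everything here is hypothetical (class name, attributes and the derivation rule are invented for illustration; the patent's generated code is not public in this record):

```python
class GeneRecord:
    """Sketch of a translation-library class: the getter derives a missing
    attribute from others via a transformation and caches the result.
    Class and attribute names are hypothetical, not from the patent."""

    def __init__(self, start=None, end=None, length=None):
        self._values = {"start": start, "end": end, "length": length}

    def set(self, name, value):
        self._values[name] = value

    def get(self, name):
        v = self._values[name]
        if v is None and name == "length":
            s, e = self._values["start"], self._values["end"]
            if s is not None and e is not None:
                v = e - s                 # transformation deriving the value
                self._values[name] = v    # cache the derived attribute
        return v

rec = GeneRecord(start=100, end=250)      # 'length' not supplied by the source
derived = rec.get("length")               # derived on access
```

This mirrors the described behavior: a warehouse attribute missing from one data source is filled in consistently by a declared transformation rather than left null.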
An integrated approach for facilities planning by ELECTRE method
Elbishari, E. M. Y.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul
2018-01-01
Facility planning is concerned with the design, layout and accommodation of people, machines and the activities of a system. Most researchers investigate the production area layout and the related facilities. However, few of them investigate the relationship between the production space and the service departments. The aim of this research is to integrate different approaches in order to evaluate, analyse and select the best facilities planning method able to explain the relationship between the production area and other supporting departments and its effect on human effort. To achieve the objective of this research, two different approaches have been integrated: Apple's layout procedure, as one of the effective tools in planning factories, and the ELECTRE method, as one of the multi-criteria decision making (MCDM) methods, to minimize the risk of poor facilities planning. Dalia Industries was selected as a case study to implement our integration; the factory was divided into two main areas: the whole facility (layout A) and the manufacturing area (layout B). This article is concerned with the manufacturing area layout (layout B). After analysing the data gathered, the manufacturing area was divided into 10 activities. The alternatives were compared on five factors: inter-department satisfaction level, total distance travelled by workers, total distance travelled by the product, total travel time for the workers, and total travel time for the product. Three different layout alternatives were developed in addition to the original layouts. Apple's layout procedure was used to study and evaluate the different alternative layouts; the study and evaluation of the layouts was done by calculating scores for each of the factors. After obtaining the scores from evaluating the layouts, the ELECTRE method was used to compare the proposed alternatives with each other and with
Integrated project delivery methods for energy renovation of social housing
Directory of Open Access Journals (Sweden)
Tadeo Baldiri Salcedo Rahola
2015-11-01
renting them. As such, SHOs are used to dealing with renovations on a professional basis. The limited financial capacity of SHOs to realise energy renovations magnifies the importance of improving process performance in order to get the best possible outcomes. In the last 30 years numerous authors have addressed the need to improve the performance of traditional construction processes via alternative project delivery methods. However, very little is known about the specifics of renovation processes for social housing, the feasibility of applying innovative construction management methods, and the consequences for the process, for the role of all the actors involved and for the results of the projects. The aim of this study is to provide an insight into the project delivery methods available to SHOs when they are undertaking energy renovation projects and to evaluate how these methods could facilitate the achievement of a higher process performance. The main research question is: How can Social Housing Organisations improve the performance of energy renovation processes using more integrated project delivery methods? The idea of a PhD thesis about social housing renovation processes originated from the participation of TU Delft as research partner in the Intelligent Energy Europe project SHELTER, which was carried out between 2010 and 2013. The aim of the SHELTER project was to promote and facilitate the use of new models of cooperation, inspired by integrated design, for the energy renovation of social housing. The SHELTER project was a joint effort between six social housing organisations (Arte Genova, Italy; Black Country Housing Group, United Kingdom; Bulgarian Housing Association, Bulgaria; Dynacité, France; Logirep, France; and Société Wallonne du Logement, Belgium), three European professional federations based in Brussels (Architects Council of Europe, Cecodhas Housing Europe and European Builders Confederation) and one research partner (Delft University of
Zhang, Yujing; Sun, Guoxiang; Hou, Zhifei; Yan, Bo; Zhang, Jing
2017-12-01
A novel averagely linear-quantified fingerprint method was proposed and successfully applied to monitor the quality consistency of alkaloids in powdered poppy capsule extractive. Averagely linear-quantified fingerprint method provided accurate qualitative and quantitative similarities for chromatographic fingerprints of Chinese herbal medicines. The stability and operability of the averagely linear-quantified fingerprint method were verified by the parameter r. The average linear qualitative similarity SL (improved based on conventional qualitative "Similarity") was used as a qualitative criterion in the averagely linear-quantified fingerprint method, and the average linear quantitative similarity PL was introduced as a quantitative one. PL was able to identify the difference in the content of all the chemical components. In addition, PL was found to be highly correlated to the contents of two alkaloid compounds (morphine and codeine). A simple flow injection analysis was developed for the determination of antioxidant capacity in Chinese herbal medicines, which was based on the scavenging of 2,2-diphenyl-1-picrylhydrazyl radical by antioxidants. The fingerprint-efficacy relationship linking chromatographic fingerprints and antioxidant activities was investigated utilizing the orthogonal projection to latent structures method, which provided important pharmacodynamic information for Chinese herbal medicines quality control. In summary, quantitative fingerprinting based on the averagely linear-quantified fingerprint method can be applied for monitoring the quality consistency of Chinese herbal medicines, and the constructed orthogonal projection to latent structures model is particularly suitable for investigating the fingerprint-efficacy relationship. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
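The abstract does not give the exact formulas for SL and PL, but the qualitative-versus-quantitative distinction it draws can be illustrated generically: a shape measure that scores two chromatograms as identical regardless of concentration, alongside a content measure that reflects amounts. This is a sketch of the general idea, not the published averagely linear-quantified formulas; the peak-area vectors are invented.

```python
import numpy as np

def qualitative_similarity(sample, reference):
    """Shape-only similarity between two fingerprint peak-area vectors
    (cosine similarity): insensitive to overall concentration."""
    s = np.asarray(sample, float)
    r = np.asarray(reference, float)
    return float(s @ r / (np.linalg.norm(s) * np.linalg.norm(r)))

def quantitative_similarity(sample, reference):
    """Ratio of total peak content in the sample to the reference,
    capturing amount differences that the shape measure ignores."""
    return float(np.sum(sample) / np.sum(reference))

ref = [100.0, 50.0, 25.0]     # reference fingerprint (peak areas)
half = [50.0, 25.0, 12.5]     # same profile at half the content
print(qualitative_similarity(half, ref), quantitative_similarity(half, ref))
```

A sample with the same profile at half strength scores about 1.0 on the shape measure but 0.5 on the content measure, which is why a quantitative criterion such as PL is needed to detect content differences that a conventional "Similarity" misses.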
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 These product consistency test methods A and B evaluate the chemical durability of homogeneous glasses, phase separated glasses, devitrified glasses, glass ceramics, and/or multiphase glass ceramic waste forms hereafter collectively referred to as “glass waste forms” by measuring the concentrations of the chemical species released to a test solution. 1.1.1 Test Method A is a seven-day chemical durability test performed at 90 ± 2°C in a leachant of ASTM-Type I water. The test method is static and conducted in stainless steel vessels. Test Method A can specifically be used to evaluate whether the chemical durability and elemental release characteristics of nuclear, hazardous, and mixed glass waste forms have been consistently controlled during production. This test method is applicable to radioactive and simulated glass waste forms as defined above. 1.1.2 Test Method B is a durability test that allows testing at various test durations, test temperatures, mesh size, mass of sample, leachant volume, a...
Phase-integral method allowing nearlying transition points
Fröman, Nanny
1996-01-01
The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...
Integrating financial theory and methods in electricity resource planning
Energy Technology Data Exchange (ETDEWEB)
Felder, F.A. [Economics Resource Group, Cambridge, MA (United States)]
1996-02-01
Decision makers throughout the world are introducing risk and market forces in the electric power industry to lower costs and improve services. Incentive-based regulation (IBR), which replaces cost-of-service ratemaking with an approach that divorces costs from revenues, exposes the utility to the risk of profits or losses depending on its performance. Regulators are also allowing for competition within the industry, most notably in the wholesale market and possibly in the retail market. Two financial approaches that incorporate risk in resource planning are evaluated: risk-adjusted discount rates (RADR) and options theory (OT). These two complementary approaches are an improvement over the standard present value revenue requirement (PVRR). However, each method has some important limitations. By correctly using RADR and OT and understanding their limitations, decision makers can improve their ability to value risk properly in power plant projects and integrated resource plans. (Author)
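The RADR idea named in the abstract can be sketched in a few lines: riskier cash flows are discounted at a higher rate, lowering their present value relative to a riskless valuation. The cash flows, rates, and risk premium below are invented for illustration; a full PVRR comparison would also model costs and regulatory treatment.

```python
def present_value(cash_flows, rate):
    """Discount a stream of end-of-year cash flows at the given annual rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical plant: five years of identical net revenues (arbitrary units).
flows = [100.0] * 5
pv_riskless = present_value(flows, 0.05)   # base (low-risk) discount rate
pv_radr = present_value(flows, 0.10)       # risk-adjusted: +5% risk premium
print(round(pv_riskless, 1), round(pv_radr, 1))   # 432.9 379.1
```

The risk premium shrinks the project's value, so under RADR a risky plant must promise higher revenues to compete with a low-risk alternative of equal expected cash flow.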
Apparatus and method for defect testing of integrated circuits
Cole, Jr., Edward I.; Soden, Jerry M.
2000-01-01
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V.sub.DD, to an IC under test and measures a transient voltage component, V.sub.DDT, signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V.sub.DDT signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V.sub.DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Integrated airfoil and blade design method for large wind turbines
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong
2013-01-01
This paper presents an integrated method for designing airfoil families of large wind turbine blades. For a given rotor diameter and tip speed ratio, the optimal airfoils are designed based on the local speed ratios. To achieve high power performance at low cost, the airfoils are designed with an objective of high Cp and small chord length. When the airfoils are obtained, the optimum flow angle and rotor solidity are calculated, which forms the basic input to the blade design. The new airfoils are designed based on the previous in-house airfoil family which was optimized at a Reynolds number of 3 million. A novel shape perturbation function is introduced to optimize the geometry on the existing airfoils and thus simplify the design procedure. The viscous/inviscid code XFOIL is used as the aerodynamic tool for airfoil optimization, where the Reynolds number is set at 16 million with a free
Integrated airfoil and blade design method for large wind turbines
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2014-01-01
This paper presents an integrated method for designing airfoil families of large wind turbine blades. For a given rotor diameter and a tip speed ratio, optimal airfoils are designed based on the local speed ratios. To achieve a high power performance at low cost, the airfoils are designed with the objectives of high Cp and small chord length. When the airfoils are obtained, the optimum flow angle and rotor solidity are calculated which forms the basic input to the blade design. The new airfoils are designed based on a previous in-house designed airfoil family which was optimized at a Reynolds number of 3 million. A novel shape perturbation function is introduced to optimize the geometry based on the existing airfoils which simplifies the design procedure. The viscous/inviscid interactive code XFOIL is used as the aerodynamic tool for airfoil optimization at a Reynolds number of 16 million...
Directory of Open Access Journals (Sweden)
Merce Bernardo
2011-09-01
Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze whether the methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to, at least, ISO 9001 and ISO 14001. Seven groups of these organizations, using different combinations of methods, have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm a relationship between the method used and the integration level achieved.
Optimized negative dimensional integration method (NDIM) and multiloop Feynman diagram calculation
International Nuclear Information System (INIS)
Gonzalez, Ivan; Schmidt, Ivan
2007-01-01
We present an improved form of the integration technique known as NDIM (negative-dimensional integration method), which is a powerful tool in the analytical evaluation of Feynman diagrams. Using this technique we study a φ³+φ⁴ theory in D=4-2ε dimensions, considering generic topologies of L loops and E independent external momenta, and where the propagator powers are arbitrary. The method transforms the Schwinger parametric integral associated with the diagram into a multiple series expansion, whose main characteristic is that the argument contains several Kronecker deltas which appear naturally in the application of the method, and which we call the diagram presolution. The optimization we present here consists in a procedure that minimizes the series multiplicity, through appropriate factorizations in the multinomials that appear in the parametric integral, and which maximizes the number of Kronecker deltas that are generated in the process. The solutions are presented in terms of generalized hypergeometric functions, obtained once the Kronecker deltas have been used in the series. Although the technique is general, we apply it to cases in which there are 2 or 3 different energy scales (masses or kinematic variables associated with the external momenta), obtaining solutions in terms of a finite sum of generalized hypergeometric series in one and two variables, respectively, each of them expressible as ratios between the different energy scales that characterize the topology. The main result is a method capable of solving Feynman integrals, expressing the solutions as hypergeometric series of multiplicity (n-1), where n is the number of energy scales present in the diagram.
International Nuclear Information System (INIS)
Hashimoto, Y.; Marumori, T.; Sakata, F.
1987-01-01
With the purpose of clarifying characteristic difference of the optimum collective submanifolds in nonresonant and resonant cases, we develop an improved method of solving the basic equations of the self-consistent collective-coordinate (SCC) method for large-amplitude collective motion. It is shown that, in the resonant cases, there inevitably arise essential coupling terms which break the maximal-decoupling property of the collective motion, and we have to extend the optimum collective submanifold so as to properly treat the degrees of freedom which bring about the resonances
Energy Technology Data Exchange (ETDEWEB)
Xiao Sanshui; He Sailing
2002-12-01
An FDTD numerical method for computing the off-plane band structure of a two-dimensional photonic crystal consisting of nearly free-electron metals is presented. The method requires only a two-dimensional discretization mesh for a given off-plane wave number k_z, although the off-plane propagation is a three-dimensional problem. The off-plane band structures of a square lattice of metallic rods in air are studied with the high-frequency metallic model, and a complete band gap for some nonzero off-plane wave number k_z is found.
Kovalenko, Andriy; Gusarov, Sergey
2018-01-31
In this work, we address different aspects of self-consistent field coupling of computational chemistry methods at different time and length scales in modern materials and biomolecular science. A multiscale methods framework yields dramatically improved accuracy, efficiency, and applicability by coupling models and methods on different scales. This field benefits many areas of research and applications by providing fundamental understanding and predictions. It could also play a particular role in commercialization by guiding new developments and by allowing quick evaluation of prospective research projects. We employ the molecular theory of solvation, which allows us to accurately introduce the effect of the environment on complex nano-, macro-, and biomolecular systems. The uniqueness of this method is that it can be naturally coupled with the whole range of computational chemistry approaches, including QM, MM, and coarse graining.
Capacitors for Integrated Circuits Produced by Means of a Double Implantation Method
International Nuclear Information System (INIS)
Zukowski, P.; Partyka, J.; Wegierek, P.
1998-01-01
The paper presents a description of a method to produce capacitors in integrated circuits that consists in implanting weakly doped silicon with the same impurity, then subjecting it to annealing (producing the inner plate), and implanting it again with ions of neutral elements to produce the dielectric layer. Results of testing capacitors produced that way are also presented. A unit capacity of C_u = 4.5 nF/mm² at tg δ = 0.01 has been obtained. The authors are of the opinion that the interesting problem of discontinuous variations of dielectric losses and capacities, considered as functions of temperature, must be viewed as an open problem. (author)
Directory of Open Access Journals (Sweden)
Gargon Elizabeth
2010-02-01
Background: Key stakeholders regard generic utility instruments as suitable tools to inform health technology assessment decision-making regarding allocation of resources across competing interventions. These instruments require a 'descriptor', a 'valuation' and a 'perspective' of the economic evaluation. There are various approaches that can be taken for each of these, offering a potential lack of consistency between instruments (a basic requirement for comparisons across diseases). The 'reference method' has been proposed as a way to address the limitations of the Quality-Adjusted Life Year (QALY). However, the degree to which generic measures can assess patients' specific experiences with their disease would remain unresolved. This has been neglected in the discussions on methods development, and its impact on the QALY values obtained and the resulting cost-per-QALY estimate underestimated. This study explored the content of utility instruments relevant to type 2 diabetes and Alzheimer's disease (AD) as examples, and the role of qualitative research in informing the trade-off between content coverage and consistency. Method: A literature review was performed to identify qualitative and quantitative studies regarding patients' experiences with type 2 diabetes or AD, and associated treatments. Conceptual models for each indication were developed. Generic and disease-specific instruments were mapped to the conceptual models. Results: Findings showed that published descriptions of relevant concepts important to patients with type 2 diabetes or AD are available for consideration in deciding on the most comprehensive approach to utility assessment. While the 15-dimensional health-related quality of life measure (15D) seemed the most comprehensive measure for both diseases, the Health Utilities Index 3 (HUI 3) seemed to have the least coverage for type 2 diabetes and the EuroQol-5 Dimensions (EQ-5D) for AD. Furthermore, some of the utility instruments
Methods of assessing total doses integrated across pathways
International Nuclear Information System (INIS)
Grzechnik, M.; Camplin, W.; Clyne, F.; Allott, R.; Webbe-Wood, D.
2006-01-01
Calculated doses for comparison with limits resulting from discharges into the environment should be summed across all relevant pathways and food groups to ensure adequate protection. The current methodology for assessments used in the Radioactivity in Food and the Environment (RIFE) reports separates doses from pathways related to liquid discharges of radioactivity to the environment from those due to gaseous releases. Surveys of local inhabitants' food consumption and occupancy rates are conducted in the vicinity of nuclear sites. Information has been recorded in an integrated way, such that the data for each individual are recorded for all pathways of interest. These can include consumption of foods such as fish, crustaceans, molluscs, fruit and vegetables, milk and meats. Occupancy times over beach sediments and time spent in close proximity to the site are also recorded for inclusion of external and inhalation radiation dose pathways. The integrated habits survey data may be combined with monitored environmental radionuclide concentrations to calculate total dose. The criteria for successful adoption of a method for this calculation were: Reproducibility (can others easily use the approach and reassess doses?); Rigour and realism (how good is the match with reality?); Transparency (a measure of the ease with which others can understand how the calculations are performed and what they mean); Homogeneity (is the group receiving the dose relatively homogeneous with respect to age, diet and those aspects that affect the dose received?). Five methods of total dose calculation were compared and ranked according to their suitability. Each method was labelled (A to E) and given a short, relevant name for identification. The methods are described below: A) Individual: doses to individuals are calculated and critical group selection is dependent on the dose received. B) Individual Plus: as in A, but consumption and occupancy rates for high-dose individuals are used to derive rates for application in
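Method A as described above — compute each surveyed individual's total dose across all recorded pathways, then select the critical group by dose — can be sketched as follows. The radionuclide concentrations, dose coefficients, and habits data are invented placeholders, not RIFE values.

```python
def individual_dose(habits, conc, coeff, external_rate):
    """Total annual dose (Sv) for one surveyed individual, summed across
    ingestion pathways plus external exposure over beach sediments."""
    dose = habits.get("occupancy_h", 0.0) * external_rate   # h/y x Sv/h
    for food, intake_kg in habits.get("intake", {}).items():
        # intake (kg/y) x concentration (Bq/kg) x dose coefficient (Sv/Bq)
        dose += intake_kg * conc[food] * coeff[food]
    return dose

conc = {"fish": 20.0, "molluscs": 50.0}        # Bq/kg, hypothetical
coeff = {"fish": 1.3e-8, "molluscs": 1.3e-8}   # Sv/Bq, hypothetical
people = [
    {"intake": {"fish": 40.0, "molluscs": 10.0}, "occupancy_h": 200.0},
    {"intake": {"fish": 5.0}},
]
doses = [individual_dose(p, conc, coeff, external_rate=1e-7) for p in people]
critical = max(doses)   # method A: highest-dosed individuals form the group
print(critical)
```

Because the habits data are integrated per individual, each person's ingestion, external and inhalation pathways are summed before any group selection, which is the point of the total-dose approach.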
Comparison of Multi-Criteria Decision Support Methods for Integrated Rehabilitation Prioritization
Directory of Open Access Journals (Sweden)
Franz Tscheikner-Gratl
2017-01-01
The decisions taken in rehabilitation planning for urban water networks will have a long-lasting impact on the functionality and quality of future services provided by urban infrastructure. These decisions can be assisted by different approaches, ranging from linear depreciation for estimating the economic value of the network, through using a deterioration model to assess the probability of failure or the technical service life, to sophisticated multi-criteria decision support systems. Subsequently, the aim of this paper is to compare five available multi-criteria decision-making (MCDM) methods (ELECTRE, AHP, WSM, TOPSIS, and PROMETHEE) for application in an integrated rehabilitation management scheme for a real-world case study and to analyze them with respect to their suitability for use in integrated asset management of water systems. The results of the different methods are not equal. This occurs because the chosen score scales, weights and the resulting distributions of the scores within the criteria do not have the same impact on all the methods. Independently of the method used, the decision maker must be familiar with its strengths and weaknesses. Therefore, in some cases, it would be rational to use one of the simplest methods. However, to check for consistency and increase the reliability of the results, the application of several methods is encouraged.
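Of the five MCDM methods compared, the weighted sum model (WSM) is the simplest, and it illustrates why score scales and weights matter: normalisation choices feed directly into the ranking. The criteria scores and weights below are invented, and the max-normalisation is one common convention among several.

```python
import numpy as np

def wsm_rank(scores, weights):
    """Weighted sum model: normalise each benefit-type criterion column
    to [0, 1] by its column maximum, then rank by weighted total."""
    s = np.asarray(scores, float)
    norm = s / s.max(axis=0)               # simple linear normalisation
    totals = norm @ np.asarray(weights, float)
    return totals, np.argsort(-totals)     # best alternative first

# Rows: rehabilitation alternatives; columns: three hypothetical criteria.
scores = [[7, 4, 9],
          [8, 6, 5],
          [5, 9, 6]]
weights = [0.5, 0.3, 0.2]                  # must sum to 1
totals, order = wsm_rank(scores, weights)
print(order)                               # alternative indices, best first
```

Swapping the normalisation (for instance min-max instead of division by the column maximum) can reorder the alternatives, which is precisely the scale-sensitivity the paper observes when the five methods disagree.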
Integrated method for the measurement of trace nitrogenous atmospheric bases
Directory of Open Access Journals (Sweden)
D. Key
2011-12-01
Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes, and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which are then derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, and convenient to implement, and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.
Stress estimation in reservoirs using an integrated inverse method
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
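The study couples CMA-ES to a 3D finite element forward model; both are replaced by toy stand-ins here so the inverse-problem structure stays visible: propose boundary-condition parameters, run the forward model, and score the misfit against wellbore data. The depth profile, stress values, and the random-restart search (standing in for CMA-ES) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wellbore data: depth (m) -> minimum horizontal stress (MPa),
# e.g. deduced from leak-off tests.
depths = np.array([500.0, 1000.0, 1500.0, 2000.0])
sigma_obs = np.array([8.0, 15.5, 23.0, 30.5])

def forward(params, z):
    """Stand-in forward model: boundary condition linear in depth.
    (The study evaluates a 3D finite element model instead.)"""
    a, b = params
    return a + b * z

def misfit(params):
    return float(np.sum((forward(params, depths) - sigma_obs) ** 2))

# Random-restart search standing in for CMA-ES.
best, best_f = None, np.inf
for _ in range(20000):
    cand = rng.uniform([-5.0, 0.0], [5.0, 0.03])   # bounds on (a, b)
    f = misfit(cand)
    if f < best_f:
        best, best_f = cand, f
print(best, best_f)   # best misfit approaches 0 near a = 0.5, b = 0.015
```

CMA-ES is used in the paper for the same black-box role: it only needs misfit evaluations, never gradients of the finite element model.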
International Nuclear Information System (INIS)
Flocard, Hubert.
1975-01-01
Using the same effective interaction, depending on only 6 parameters, a large number of nuclear properties are calculated and the results compared with experiment. Total binding energies of all nuclei of the chart are reproduced within 5 MeV. It is shown that the remaining discrepancy is consistent with the increase of total binding energy that can be expected from the further inclusion of collective-motion correlations. The monopole, quadrupole and hexadecapole parts of the charge densities are also reproduced with good accuracy. The deformation energy curves of many nuclei ranging from carbon to superheavy elements are calculated, and the different features of these curves are discussed. It should be noted that the fission barriers of actinide nuclei have been obtained, and the results exhibit the well-known two-bump shape. In addition, the fusion energy curve of two ¹⁶O nuclei merging into one ³²S nucleus has been computed. Results concerning monopole, dipole and quadrupole giant resonances of light nuclei, obtained within the framework of the generator coordinate method, are also presented. The calculated positions of these resonances agree well with presently available data. [fr]
The Integral Method, a new approach to quantify bactericidal activity.
Gottardi, Waldemar; Pfleiderer, Jörg; Nagl, Markus
2015-08-01
The bactericidal activity (BA) of antimicrobial agents is generally derived from the results of killing assays. A reliable quantitative characterization, and particularly a comparison, of these substances, however, is impossible with this information. We here propose a new method that takes into account the course of the complete killing curve for assaying BA and that allows a clear-cut quantitative comparison of antimicrobial agents with only one number. The new Integral Method, based on the reciprocal area below the killing curve, reliably calculates an average BA [log10 CFU/min] and, by implementation of the agent's concentration C, the average specific bactericidal activity SBA = BA/C [log10 CFU/min/mM]. Based on experimental killing data, the pertaining BA and SBA values of exemplary active halogen compounds were established, allowing quantitative assertions. N-chlorotaurine (NCT), chloramine T (CAT), monochloramine (NH2Cl), and iodine (I2) showed extremely diverging SBA values of 0.0020±0.0005, 1.11±0.15, 3.49±0.22, and 291±137 log10 CFU/min/mM, respectively, against Staphylococcus aureus. This immediately demonstrates an approximately 550-fold stronger activity of CAT, 1730-fold of NH2Cl, and 150,000-fold of I2 compared to NCT. The inferred quantitative assertions and conclusions prove the new method suitable for characterizing bactericidal activity. Its application comprises the effect of defined agents on various bacteria, the consequence of temperature shifts, the influence of varying drug structure, dose-effect relationships, ranking of isosteric agents, comparison of competing commercial antimicrobial formulations, and the effect of additives. Copyright © 2015 Elsevier B.V. All rights reserved.
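The reciprocal-area construction can be sketched under one plausible reading (the published formula may differ in detail): with D the total log10 reduction and A the area between the killing curve and the detection limit, take BA = D²/(2A), which reduces exactly to the slope D/T for a straight-line killing curve; SBA is then BA divided by the agent concentration. The killing curve below is invented.

```python
import numpy as np

def bactericidal_activity(t_min, log_cfu, detection_limit=0.0):
    """Average bactericidal activity [log10 CFU/min] from a killing curve,
    via the area below the curve (trapezoidal rule): BA = D^2 / (2*A).
    For a linear killing curve this is exactly the slope D/T."""
    t = np.asarray(t_min, float)
    n = np.asarray(log_cfu, float) - detection_limit
    area = float(np.sum((n[:-1] + n[1:]) * np.diff(t)) / 2.0)  # log10 CFU * min
    d = float(n[0] - n[-1])           # total log10 reduction D
    return d * d / (2.0 * area)

# Linear curve: 6 log10 reduction in 30 min -> BA = 0.2 log10 CFU/min.
ba = bactericidal_activity([0, 10, 20, 30], [6.0, 4.0, 2.0, 0.0])
sba = ba / 0.5                        # concentration 0.5 mM -> SBA [.../mM]
print(ba, sba)                        # 0.2 0.4
```

A steeply dropping curve encloses a small area and hence yields a large BA, which matches the intuition behind "reciprocal area below the killing curve": fast killing scores high even when two agents reach the same endpoint.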
Directory of Open Access Journals (Sweden)
Dr. Fee-Alexandra HAASE
2012-11-01
In this article we will apply a method of proof for conceptual consistency over a long historical range, taking the example of rhetoric and persuasion. We will analyze the linguistic features evidently present for this concept within three linguistic areas: the Indo-European languages, the Semitic languages, and the Afro-Asiatic languages. We have chosen the case of the concept 'rhetoric'/'persuasion' as a paradigm for this study. With the phenomenon of 'linguistic dispersion' we can explain the development of language as undirected, but with linguistic consistency across the borders of language families. We will prove that the Semitic and Indo-European languages are related. As a consequence, the strict differentiation between the Semitic and the Indo-European language families is outdated, following the research positions of Starostin. In contrast to this, we will propose a theory of cultural exchange between the two language families.
Energy Technology Data Exchange (ETDEWEB)
Orth, J. [ABB AG, Mannheim (Germany). Geschaeftsbereich Power Generation
2007-07-01
Today's power plants are highly automated. All subsystems of large thermal power plants can be controlled from a central control room. The electrical systems are an important part. In future the new standard IEC 61850 will improve the integration of electrical systems into automation of power plants supporting the reduction of operation and maintenance cost. (orig.)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced force is used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
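A minimal sketch of the "iterative" scheme named above — implicit Newmark (average acceleration) with a fixed number of Newton iterations per step — for an SDOF equation m·u″ + c·u′ + fs(u) = p(t). In an actual hybrid test, fs(u) would be measured from the physical substructure and the residual r would be the unbalanced force tracked in the paper; here fs is a numerical function and all parameter values are illustrative.

```python
import numpy as np

def newmark_fixed_iter(m, c, fs, kt, p, u0, v0, dt, n_steps,
                       n_iter=5, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a fixed number of
    Newton iterations per step for m*u'' + c*u' + fs(u) = p(t)."""
    u, v = u0, v0
    a = (p(0.0) - c * v - fs(u)) / m
    us = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        u_new = u                                   # displacement predictor
        for _ in range(n_iter):                     # fixed iteration count
            a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                     - (0.5 / beta - 1.0) * a)
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            r = p(t) - m * a_new - c * v_new - fs(u_new)   # unbalanced force
            k_eff = kt(u_new) + gamma * c / (beta * dt) + m / (beta * dt**2)
            u_new += r / k_eff                      # Newton correction
        # update the state from the final corrected displacement
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        us.append(u)
    return np.array(us)

# Linear check: undamped free vibration, natural period T = 1 s, u(0) = 1.
k = 4.0 * np.pi**2
us = newmark_fixed_iter(m=1.0, c=0.0, fs=lambda u: k * u, kt=lambda u: k,
                        p=lambda t: 0.0, u0=1.0, v0=0.0, dt=0.01, n_steps=100)
print(us[-1])   # close to 1.0 after one full period
```

Capping the iteration count keeps each step's duration predictable, which matters when a physical actuator is in the loop; the residual left after the last iteration is exactly the unbalanced force used as an error index in the study.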
Energy Technology Data Exchange (ETDEWEB)
Batista, Enrique R [Los Alamos National Laboratory; Newcomer, Micharel B [YALE UNIV; Raggin, Christina M [YALE UNIV; Gascon, Jose A [YALE UNIV; Loria, J Patrick [YALE UNIV; Batista, Victor S [YALE UNIV
2008-01-01
This paper generalizes the MoD-QM/MM hybrid method, developed for ab initio computations of protein electrostatic potentials [Gascón, J.A.; Leung, S.S.F.; Batista, E.R.; Batista, V.S. J. Chem. Theory Comput. 2006, 2, 175-186], as a practical algorithm for structural refinement of extended systems. The computational protocol involves a space-domain decomposition scheme for the formal fragmentation of extended systems into smaller, partially overlapping, molecular domains and the iterative self-consistent energy minimization of the constituent domains by relaxation of their geometry and electronic structure. The method accounts for mutual polarization of the molecular domains, modeled as quantum-mechanical (QM) layers embedded in the otherwise classical molecular-mechanics (MM) environment according to QM/MM hybrid methods. The method is applied to the description of benchmark model systems that allow for direct comparisons with full QM calculations, and subsequently applied to the structural characterization of the DNA Oxytricha nova guanine quadruplex (G4). The resulting MoD-QM/MM structural model of the DNA G4 is compared to recently reported high-resolution X-ray diffraction and NMR models, and partially validated by direct comparisons between {sup 1}H NMR chemical shifts, which are highly sensitive to hydrogen-bonding and stacking interactions, and the corresponding theoretical values obtained at the density functional theory (DFT) QM/MM (BH&H/6-31G*:Amber) level in conjunction with the gauge-independent atomic orbital (GIAO) method for the ab initio self-consistent field (SCF) calculation of NMR chemical shifts.
Boundary integral method for torsion of composite shafts
International Nuclear Information System (INIS)
Chou, S.I.; Mohr, J.A.
1987-01-01
The Saint-Venant torsion problem for homogeneous shafts with simply or multiply connected regions has received a great deal of attention in the past. However, because of the mathematical difficulties inherent in the problem, very few problems of torsion of shafts with composite cross sections have been solved analytically. Muskhelishvili (1963) studied the torsion problem for shafts with cross sections having several solid inclusions surrounded by an elastic material. The problems of a circular shaft reinforced by a non-concentric round inclusion and of a rectangular shaft composed of two rectangular parts made of different materials were solved. In this paper, a boundary integral equation method is developed which can be used to solve problems more complex than those considered by Katsikadelis et al. A square shaft with two dissimilar rectangular parts and a square shaft with a square inclusion are solved, and the results are compared with those given in the reference cited above. Finally, a square shaft composed of two rectangular parts with a circular inclusion is solved. (orig./GL)
Integration of rock typing methods for carbonate reservoir characterization
International Nuclear Information System (INIS)
Aliakbardoust, E; Rahimpour-Bonab, H
2013-01-01
Reservoir rock typing is the most important part of all reservoir modelling. For integrated reservoir rock typing, static and dynamic properties need to be combined, but sometimes these two are incompatible. The failure is due to misunderstanding of the crucial parameters that control the dynamic behaviour of the reservoir rock, and thus selecting inappropriate methods for defining static rock types. In this study, rock types were defined by combining the SCAL data with the rock properties, particularly rock fabric and pore types. First, air-displacing-water capillary pressure curves were classified because they are representative of fluid saturation and behaviour under capillary forces. Next, the most important rock properties which control fluid flow and saturation behaviour (rock fabric and pore types) were combined with the defined classes. Corresponding petrophysical properties were also attributed to reservoir rock types and, eventually, the defined rock types were compared with relative permeability curves. This study focused on demonstrating the importance of the pore system, specifically pore types, in fluid saturation and entrapment in the reservoir rock. The most common tests in static rock typing, such as electrofacies analysis and porosity–permeability correlation, were carried out and the results indicate that these are not appropriate approaches for reservoir rock typing in carbonate reservoirs with a complicated pore system. (paper)
High-integrity software, computation and the scientific method
International Nuclear Information System (INIS)
Hatton, L.
2012-01-01
Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)
High temperature spectral emissivity measurement using integral blackbody method
Pan, Yijie; Dong, Wei; Lin, Hong; Yuan, Zundong; Bloembergen, Pieter
2016-10-01
Spectral emissivity is a critical thermophysical material property for heat design and radiation thermometry. A prototype instrument based upon an integral blackbody method was developed to measure a material's spectral emissivity above 1000 °C. The system was implemented with an optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer, and an in-house designed synchronization circuit. A sample was placed in a crucible at the bottom of the blackbody furnace, by which the sample and the tube formed a simulated blackbody with an effective total emissivity greater than 0.985. During the measurement, the sample was pushed to the end opening of the tube by a graphite rod actuated through a pneumatic cylinder. A linear pyrometer was used to monitor the brightness temperature of the sample surface throughout the measurement. The corresponding opto-converted voltage signal was recorded by a digital multimeter. A physical model was proposed to numerically evaluate the temperature drop along the process. The tube was discretized as several isothermal cylindrical rings, and the temperature profile of the tube was measured. View factors between the sample and the rings were calculated and updated along the whole pushing process. The actual surface temperature of the sample at the end opening was thus obtained. Taking advantage of the measured voltage profile and the calculated true temperature, the spectral emissivity at this temperature was calculated.
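The final reduction step described above can be sketched as follows: assuming the pyrometer signal has been converted to spectral radiance and the view-factor model has supplied the true surface temperature, the spectral emissivity is simply the ratio of the measured radiance to the Planck blackbody radiance at that temperature. The wavelength and temperature values below are illustrative, not the paper's.

```python
import math

H  = 6.62607015e-34   # Planck constant, J*s
C  = 2.99792458e8     # speed of light in vacuum, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(lam, T):
    """Blackbody spectral radiance [W * sr^-1 * m^-3] at wavelength lam [m]
    and temperature T [K], from Planck's law."""
    return 2.0 * H * C**2 / (lam**5 * (math.exp(H * C / (lam * KB * T)) - 1.0))

def spectral_emissivity(measured_radiance, lam, true_temperature):
    """Emissivity as the ratio of measured sample radiance to the blackbody
    radiance at the (view-factor corrected) true surface temperature."""
    return measured_radiance / planck_radiance(lam, true_temperature)

# sanity check: a perfect blackbody must return emissivity 1
lam, T = 650e-9, 1300.0            # illustrative pyrometer wavelength and temperature
eps_bb = spectral_emissivity(planck_radiance(lam, T), lam, T)
```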
Methods for assessing NPP containment pressure boundary integrity
International Nuclear Information System (INIS)
Naus, D.J.; Ellingwood, B.R.; Graves, H.L.
2004-01-01
Research is being conducted to address aging of the containment pressure boundary in light-water reactor plants. Objectives of this research are to (1) understand the significant factors relating to corrosion occurrence, efficacy of inspection, and structural capacity reduction of steel containments and of liners of concrete containments; (2) provide the U.S. Nuclear Regulatory Commission (USNRC) reviewers a means of establishing current structural capacity margins or estimating future residual structural capacity margins for steel containments and concrete containments as limited by liner integrity; and (3) provide recommendations, as appropriate, on information to be requested of licensees for guidance that could be utilized by USNRC reviewers in assessing the seriousness of reported incidents of containment degradation. Activities include development of a degradation assessment methodology; reviews of techniques and methods for inspection and repair of containment metallic pressure boundaries; evaluation of candidate techniques for inspection of inaccessible regions of containment metallic pressure boundaries; establishment of a methodology for reliability-based condition assessments of steel containments and liners; and fragility assessments of steel containments with localized corrosion.
Directory of Open Access Journals (Sweden)
Lanping Yang
Full Text Available In this paper, microemulsion electrokinetic chromatography (MEEKC) fingerprints combined with quantification were successfully developed to monitor the holistic quality consistency of Ixeris sonchifolia (Bge.) Hance Injection (ISHI). ISHI is a Chinese traditional patent medicine used for its anti-inflammatory and hemostatic effects. The effects of five crucial experimental variables on MEEKC were optimized by central composite design. Under the optimized conditions, the MEEKC fingerprints of 28 ISHIs were developed. Quantitative determination of seven marker compounds was performed simultaneously; the 28 batches of samples from two manufacturers were then clearly divided into two clusters by principal component analysis. In fingerprint assessments, a systematic quantitative fingerprint method was established for the holistic quality consistency evaluation of ISHI from qualitative and quantitative perspectives, by which the qualities of the 28 samples were well differentiated. In addition, the fingerprint-efficacy relationship between the fingerprints and the antioxidant activities was established utilizing orthogonal projection to latent structures, which provided important medicinal efficacy information for quality control. The present study offers a powerful and holistic approach to evaluating the quality consistency of herbal medicines and their preparations.
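One common way to score the holistic consistency of chromatographic fingerprints (a sketch of the general idea, not necessarily the exact metric used in this paper) is the correlation of each batch fingerprint against a reference fingerprint formed as the point-wise mean of all batches. The peak-area vectors below are invented for illustration.

```python
import math

def correlation(x, y):
    """Pearson correlation between two fingerprints, i.e. vectors of
    peak areas sampled at matched (aligned) retention times."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# hypothetical peak-area fingerprints for three batches of one product
batches = [
    [1.0, 4.1, 2.9, 8.2, 0.9],
    [1.1, 3.9, 3.1, 7.8, 1.0],
    [0.9, 4.0, 3.0, 8.0, 1.1],
]
# reference fingerprint = point-wise mean over all batches
reference = [sum(col) / len(col) for col in zip(*batches)]
similarities = [correlation(b, reference) for b in batches]
```

Batches whose similarity falls below a chosen threshold (e.g. 0.95) would be flagged as inconsistent; quantitative marker-compound contents then refine this qualitative screen.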
A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes
Becker, Helen; Spinato, Giulia; Maréchal, François
2011-01-01
The aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriate integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutant emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utility models based on their technology limit...
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we reported the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute (solute water and formaldehyde) systems and a simple SN2 reaction (Cl- + CH3Cl → ClCH3 + Cl-) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
International Nuclear Information System (INIS)
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-01-01
In this study, we reported the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (multi-center molecular Ornstein–Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute (solute water and formaldehyde) systems and a simple SN2 reaction (Cl⁻ + CH₃Cl → ClCH₃ + Cl⁻) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
Rosenzweig, Cynthia E.; Jones, James W.; Hatfield, Jerry; Antle, John; Ruane, Alex; Boote, Ken; Thorburn, Peter; Valdivia, Roberto; Porter, Cheryl; Janssen, Sander;
2015-01-01
The purpose of this handbook is to describe recommended methods for a trans-disciplinary, systems-based approach for regional-scale (local to national scale) integrated assessment of agricultural systems under future climate, bio-physical and socio-economic conditions. An earlier version of this Handbook was developed and used by several AgMIP Regional Research Teams (RRTs) in Sub-Saharan Africa (SSA) and South Asia (SA)(AgMIP handbook version 4.2, www.agmip.org/regional-integrated-assessments-handbook/). In contrast to the earlier version, which was written specifically to guide a consistent set of integrated assessments across SSA and SA, this version is intended to be more generic such that the methods can be applied to any region globally. These assessments are the regional manifestation of research activities described by AgMIP in its online protocols document (available at www.agmip.org). AgMIP Protocols were created to guide climate, crop modeling, economics, and information technology components of its projects.
Structural Consistency, Consistency, and Sequential Rationality.
Kreps, David M; Ramey, Garey
1987-01-01
Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...
Methods for the integral assessment of energy-related problems
International Nuclear Information System (INIS)
Hirschberg, S.; Suter, P.
1995-01-01
This paper presents a number of methods for a comprehensive assessment of energy systems, discusses their merits and limitations, and provides some example results. The areas addressed include environmental impacts, risks and economic aspects. Three-step Life Cycle Analysis (LCA) has been used to analyse environmental impacts. Transparent and consistent inventories were developed for electricity generation (nine fuel cycles) and for heating systems. The results, which include gaseous and liquid emissions as well as non-energetic resources such as land depreciation, cover average, currently operating systems in the UCPTE network and in Switzerland. Examples of comparisons of heating systems and electricity generation systems, with respect to their contributions to such impact classes as the greenhouse effect, acidification and photosmog, are provided. Major gaps exist with respect to the assessment of the severe accident potential of the different energy systems. When analysing the objective risks due to severe accidents, two approaches are employed, i.e. direct use of past experience and applications of Probabilistic Safety Assessment (PSA). Progress with respect to extended knowledge about accidents that occurred in the past and in the context of uses of PSA for external cost calculations is reported. Limitations of historical data and modelling issues are discussed along with the role of risk aversion and current attempts to account for it. (author) 10 figs., 1 tab
International Nuclear Information System (INIS)
Ravindra, M.K.; Banon, H.
1992-07-01
In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.
Diagrammatical methods within the path integral representation for quantum systems
International Nuclear Information System (INIS)
Alastuey, A
2014-01-01
The path integral representation has been successfully applied to the study of equilibrium properties of quantum systems for a long time. In particular, such a representation allowed Ginibre to prove the convergence of the low-fugacity expansions for systems with short-range interactions. First, I will show that the crucial trick underlying Ginibre's proof is the introduction of an equivalent classical system made with loops. Within the Feynman-Kac formula for the density matrix, such loops naturally emerge by collecting together the paths followed by particles exchanged in a given cyclic permutation. Two loops interact via an average of two-body genuine interactions between particles belonging to different loops, while the interactions between particles inside a given loop are accounted for in a loop fugacity. It turns out that the grand-partition function of the genuine quantum system exactly reduces to its classical counterpart for the gas of loops. The corresponding so-called magic formula can be combined with standard Mayer diagrammatics for the classical gas of loops. This provides low-density representations for the quantum correlations or thermodynamical functions, which are quite useful when collective effects must be taken into account properly. Indeed, resummations and/or reorganizations of Mayer graphs can be performed by exploiting their remarkable topological and combinatorial properties, while statistical weights and bonds are purely c-numbers. The interest of that method will be illustrated through a brief description of its application to two long-standing problems, namely recombination in Coulomb systems and condensation in the interacting Bose gas.
Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L
2012-05-07
The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
Trijsburg, Laura; Geelen, Anouk; Hollman, Peter Ch; Hulshof, Paul Jm; Feskens, Edith Jm; Van't Veer, Pieter; Boshuizen, Hendriek C; de Vries, Jeanne Hm
2017-03-01
As misreporting, mostly under-reporting, of dietary intake is a generally known problem in nutritional research, we aimed to analyse the association between selected determinants and the extent of misreporting by the duplicate portion method (DP), 24 h recall (24hR) and FFQ by linear regression analysis, using the biomarker values as unbiased estimates. For each individual, two DP, two 24hR, two FFQ and two 24 h urinary biomarkers were collected within 1·5 years. Also, for sixty-nine individuals one or two doubly labelled water measurements were obtained. The associations of basic determinants (BMI, gender, age and level of education) with misreporting of energy, protein and K intake by the DP, 24hR and FFQ were evaluated using linear regression analysis. Additionally, associations between other determinants, such as physical activity and smoking habits, and misreporting were investigated. The study was conducted in the Netherlands among 197 individuals aged 20-70 years. Higher BMI was associated with under-reporting of dietary intake assessed by the different dietary assessment methods for energy, protein and K, except for K by the DP. Men tended to under-report protein by the DP, FFQ and 24hR, and persons of older age under-reported K, but only by the 24hR and FFQ. When adjusted for the basic determinants, the other determinants did not show a consistent association with misreporting of energy or nutrients by the different dietary assessment methods. As BMI was the only consistent determinant of misreporting, we conclude that BMI should always be taken into account when assessing and correcting dietary intake.
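The core analysis above can be sketched as an ordinary least-squares regression of misreporting (reported intake minus biomarker-based intake) on a determinant such as BMI. The data below are hypothetical and only illustrate the sign convention: a negative slope means that heavier participants under-report more.

```python
def ols_fit(x, y):
    """Simple least-squares fit of y = a + b*x via the normal equations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical data: misreporting = reported - biomarker-based energy intake (MJ/d);
# more negative values indicate stronger under-reporting
bmi       = [21.0, 23.5, 26.0, 28.5, 31.0, 33.5]
misreport = [ 0.1, -0.9, -2.2, -3.4, -4.3, -5.6]

intercept, slope = ols_fit(bmi, misreport)
```

In the study itself such regressions were run per method (DP, 24hR, FFQ) and per nutrient, with the urinary biomarkers and doubly labelled water providing the unbiased reference intakes.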
Liu, Meilin; Bagci, Hakan
2011-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results
Integrating Evidence Within and Across Evidence Streams Using Qualitative Methods
There is high demand in environmental health for adoption of a structured process that evaluates and integrates evidence while making decisions transparent. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework holds promise to address this deman...
Kot, V. A.
2017-11-01
The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of primary importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.
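As a concrete instance of the heat-balance integral idea surveyed here, the classical quadratic-profile solution for a semi-infinite solid subjected to a step change in surface temperature gives a closed-form penetration depth δ(t) = √(12αt); comparing its surface heat flux with the exact erf solution shows the characteristic few-percent error of low-order integral methods. The numerical values below are illustrative.

```python
import math

def hbi_penetration_depth(alpha, t):
    """Heat-balance-integral penetration depth for the quadratic profile
    theta = theta_s * (1 - x/delta)**2 in a semi-infinite solid after a step
    change of surface temperature: the heat balance gives delta = sqrt(12*alpha*t)."""
    return math.sqrt(12.0 * alpha * t)

def hbi_surface_flux(kcond, theta_s, alpha, t):
    """Approximate surface heat flux from the quadratic profile: q = 2*k*theta_s/delta."""
    return 2.0 * kcond * theta_s / hbi_penetration_depth(alpha, t)

def exact_surface_flux(kcond, theta_s, alpha, t):
    """Exact surface heat flux from the erf solution: q = k*theta_s/sqrt(pi*alpha*t)."""
    return kcond * theta_s / math.sqrt(math.pi * alpha * t)

# illustrative parameters: alpha [m^2/s], k [W/m/K], surface temperature rise [K]
alpha, kcond, theta_s, t = 1e-5, 1.0, 100.0, 10.0
q_exact = exact_surface_flux(kcond, theta_s, alpha, t)
rel_err = abs(hbi_surface_flux(kcond, theta_s, alpha, t) - q_exact) / q_exact
```

The relative flux error is the constant √(π/3) − 1 ≈ 2.3 % for all times, which is the kind of accuracy figure the error-norm comparisons in this survey quantify for each method.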
Investigation of experimental methods for EMP transient disturbance effects on integrated circuits (ICs)
International Nuclear Information System (INIS)
Li Xiaowei
2004-01-01
The study of transient disturbance characteristics of integrated circuits (ICs) must start from their coupling paths. Through cable (antenna) coupling, an EMP is converted into a pulsed current/voltage that impacts the I/O ports of the integrated circuit via the cable. Considering the construction features of weapon systems, the EMP effect on the ICs inside such a system is analyzed. Current-injection experimental methods for studying EMP effects on ICs are investigated, and several experimental methods are given. (authors)
Madsen, J. A.; Allen, D. E.; Donham, R. S.; Fifield, S. J.; Shipman, H. L.; Ford, D. J.; Dagher, Z. R.
2004-12-01
With funding from the National Science Foundation, we have designed an integrated science content and methods course for sophomore-level elementary teacher education (ETE) majors. This course, the Science Semester, is a 15-credit sequence that consists of three science content courses (Earth, Life, and Physical Science) and a science teaching methods course. The goal of this integrated science and education methods curriculum is to foster holistic understandings of science and pedagogy that future elementary teachers need to effectively use inquiry-based approaches in teaching science in their classrooms. During the Science Semester, traditional subject matter boundaries are crossed to stress shared themes that teachers must understand to teach standards-based elementary science. Exemplary approaches that support both learning science and learning how to teach science are used. In the science courses, students work collaboratively on multidisciplinary problem-based learning (PBL) activities that place science concepts in authentic contexts and build learning skills. In the methods course, students critically explore the theory and practice of elementary science teaching, drawing on their shared experiences of inquiry learning in the science courses. An earth system science approach is ideally adapted for the integrated, inquiry-based learning that takes place during the Science Semester. The PBL investigations that are the hallmark of the Science Semester provide the backdrop through which fundamental earth system interactions can be studied. For example in the PBL investigation that focuses on energy, the carbon cycle is examined as it relates to fossil fuels. In another PBL investigation centered on kids, cancer, and the environment, the hydrologic cycle with emphasis on surface runoff and ground water contamination is studied. In a PBL investigation that has students learning about the Delaware Bay ecosystem through the story of the horseshoe crab and the biome
Method of mechanical quadratures for solving singular integral equations of various types
Sahakyan, A. V.; Amirjanyan, H. A.
2018-04-01
The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.
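The Cauchy-kernel quadrature rule described above can be sketched with the classical Gauss–Chebyshev formula (of the type often attributed to Erdogan and Gupta): Chebyshev nodes of the first kind serve as integration points, and the zeros of U_{n-1} serve as collocation points, where the rule is exact for polynomial densities. The check below uses the known spectral identity for the Chebyshev polynomial T_2.

```python
import math

def cauchy_quadrature(g, x, n):
    """Gauss-Chebyshev mechanical quadrature for the Cauchy principal-value
    integral (1/pi) * PV int_{-1}^{1} g(t) / (sqrt(1 - t^2) * (t - x)) dt.
    The rule is valid when x is a collocation point cos(m*pi/n), m = 1..n-1,
    i.e. a zero of the Chebyshev polynomial U_{n-1}."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return sum(g(t) / (t - x) for t in nodes) / n

# spectral identity check for g = T_2(t) = 2t^2 - 1:
# (1/pi) * PV int T_2(t) / (sqrt(1 - t^2)(t - x)) dt = U_1(x) = 2x
n = 8
x = math.cos(3 * math.pi / n)                    # collocation point (m = 3)
approx = cauchy_quadrature(lambda t: 2 * t * t - 1, x, n)
```

In the full method, enforcing the equation at all n − 1 collocation points turns the singular integral equation into the linear algebraic system mentioned in the abstract, with the unknowns being the density values at the n integration nodes.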
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)
2014-11-21
We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order, as well as their spin-component scaled variants.
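Determining a frequency scaling factor of the kind tabulated in such work reduces to a one-parameter least-squares fit: the factor c minimizing Σᵢ (c·νᵢ(calc) − νᵢ(ref))² is c = Σ ν(calc)·ν(ref) / Σ ν(calc)². A minimal sketch with invented frequencies (not the paper's data):

```python
def frequency_scaling_factor(calc, ref):
    """Least-squares scaling factor c minimizing sum_i (c*calc_i - ref_i)^2,
    i.e. c = sum(calc*ref) / sum(calc^2)."""
    return sum(c * r for c, r in zip(calc, ref)) / sum(c * c for c in calc)

# hypothetical harmonic frequencies (cm^-1) and reference fundamentals;
# here the toy reference is a uniform 4% reduction of the harmonic values
calc = [3100.0, 1650.0, 1200.0, 980.0]
ref  = [c * 0.96 for c in calc]

factor = frequency_scaling_factor(calc, ref)
```

With real data the reference fundamentals are not a uniform rescaling of the harmonic values, so the residual deviations after scaling are the second quantity such studies report.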
Energy Technology Data Exchange (ETDEWEB)
Keating, Kristina [Rutgers Univ., Newark, NJ (United States). Dept. of Earth and Environmental Sciences; Slater, Lee [Rutgers Univ., Newark, NJ (United States). Dept. of Earth and Environmental Sciences; Ntarlagiannis, Dimitris [Rutgers Univ., Newark, NJ (United States). Dept. of Earth and Environmental Sciences; Williams, Kenneth H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division
2015-02-24
This document contains the final report for the project "Integrated Geophysical Measurements for Bioremediation Monitoring: Combining Spectral Induced Polarization, Nuclear Magnetic Resonance and Magnetic Methods" (DE-SC0007049). Executive Summary: Our research aimed to develop borehole measurement techniques capable of monitoring subsurface processes, such as changes in pore geometry and iron/sulfur geochemistry, associated with remediation of heavy metals and radionuclides. Previous work has demonstrated that the geophysical method of spectral induced polarization (SIP) can be used to assess subsurface contaminant remediation; however, SIP signals can be generated from multiple sources, limiting their interpretation value. Integrating multiple geophysical methods, such as nuclear magnetic resonance (NMR) and magnetic susceptibility (MS), with SIP could reduce the ambiguity of interpretation that might result from a single method. Our research effort entails combining measurements from these methods, each sensitive to different mineral forms and/or mineral-fluid interfaces, providing better constraints on changes in subsurface biogeochemical processes and pore geometries and significantly improving our understanding of processes impacting contaminant remediation. The Rifle Integrated Field Research Challenge (IFRC) site was used as a test location for our measurements. The Rifle IFRC site is located at a former uranium ore-processing facility in Rifle, Colorado. Leachate from spent mill tailings has resulted in residual uranium contamination of both groundwater and sediments within the local aquifer. Studies at the site include an ongoing acetate amendment strategy, in which native microbial populations are stimulated by the introduction of carbon intended to alter redox conditions and immobilize uranium. To test the geophysical methods in the field, NMR and MS logging measurements were collected before, during, and after acetate amendment. Next, laboratory NMR, MS, and SIP measurements
Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg
2017-04-01
In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, a large number of parameters, topographic settings and boundary conditions need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameter identification are essential not only to improve model performance, but also to identify the most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and three stations (one for each land cover) provide eddy covariance fluxes. The aims of this work are: (I) to present an approach for improving the calibration of plot-scale soil moisture and evapotranspiration (ET); (II) to identify the most sensitive parameters and relevant factors controlling temporal and spatial differences among sites; (III) to identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain
This report was prepared by the Global Change Research Program (GCRP) in the National Center for Environmental Assessment (NCEA) of the Office of Research and Development (ORD) at the U.S. Environmental Protection Agency (EPA). This draft report is a description of the methods u...
Characterization methods of integrated optics for mid-infrared interferometry
Labadie, Lucas; Kern, Pierre Y.; Schanen-Duport, Isabelle; Broquin, Jean-Emmanuel
2004-10-01
This article deals with one of the important instrumentation challenges of the stellar interferometry mission IRSI-Darwin of the European Space Agency: the necessity of a reliable and high-performance system for beam combination has highlighted the advantages of an integrated optics solution, which is already in use for ground-based interferometry in the near infrared. Integrated optics also provides interesting features in terms of filtering, which is a main issue for the deep null to be reached by Darwin. However, Darwin will operate in the mid-infrared range from 4 microns to 20 microns, where no integrated optics functions are available off the shelf. This requires extending the integrated optics concept and the underlying technology into this spectral range. This work has started with the IODA project (Integrated Optics for Darwin) under ESA contract and aims to provide a first component for interferometry. This paper presents the guidelines of the characterization work implemented to test and validate the performance of a component at each step of the development phase. We also present an example of a characterization experiment used within the frame of this work, its theoretical approach and some results.
Integrate life-cycle assessment and risk analysis results, not methods.
Linkov, Igor; Trump, Benjamin D; Wender, Ben A; Seager, Thomas P; Kennedy, Alan J; Keisler, Jeffrey M
2017-08-04
Two analytic perspectives on environmental assessment dominate environmental policy and decision-making: risk analysis (RA) and life-cycle assessment (LCA). RA focuses on management of a toxicological hazard in a specific exposure scenario, while LCA seeks a holistic estimation of impacts of thousands of substances across multiple media, including non-toxicological and non-chemically deleterious effects. While recommendations to integrate the two approaches have remained a consistent feature of environmental scholarship for at least 15 years, the current perception is that progress is slow largely because of practical obstacles, such as a lack of data, rather than insurmountable theoretical difficulties. Nonetheless, the emergence of nanotechnology presents a serious challenge to both perspectives. Because the pace of nanomaterial innovation far outstrips acquisition of environmentally relevant data, it is now clear that a further integration of RA and LCA based on dataset completion will remain futile. In fact, the two approaches are suited for different purposes and answer different questions. A more pragmatic approach to providing better guidance to decision-makers is to apply the two methods in parallel, integrating only after obtaining separate results.
Griffiths, Robert B.
2001-11-01
Quantum mechanics is one of the most fundamental yet difficult subjects in physics. Nonrelativistic quantum theory is presented here in a clear and systematic fashion, integrating Born's probabilistic interpretation with Schrödinger dynamics. Basic quantum principles are illustrated with simple examples requiring no mathematics beyond linear algebra and elementary probability theory. The quantum measurement process is consistently analyzed using fundamental quantum principles without referring to measurement. These same principles are used to resolve several of the paradoxes that have long perplexed physicists, including the double slit and Schrödinger's cat. The consistent histories formalism used here was first introduced by the author, and extended by M. Gell-Mann, J. Hartle and R. Omnès. Essential for researchers yet accessible to advanced undergraduate students in physics, chemistry, mathematics, and computer science, this book is supplementary to standard textbooks. It will also be of interest to physicists and philosophers working on the foundations of quantum mechanics. Comprehensive account Written by one of the main figures in the field Paperback edition of successful work on philosophy of quantum mechanics
Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi
2017-07-21
In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations, applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, a sampling 100-times longer than that of conventional QM-based samplings. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty. The calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia
2008-01-01
There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…
Piloting a method to evaluate the implementation of integrated water ...
African Journals Online (AJOL)
Journal Home > Vol 41, No 5 (2015). A methodology with a set of principles, change areas and measures was developed as a performance assessment tool. ... Keywords: Integrated water resource management, Inkomati River Basin, South Africa, Swaziland ...
A joint classification method to integrate scientific and social networks
Neshati, Mahmood; Asgari, Ehsaneddin; Hiemstra, Djoerd; Beigy, Hamid
In this paper, we address the problem of scientific-social network integration to find a matching relationship between members of these networks. Utilizing several name similarity patterns and contextual properties of these networks, we design a focused crawler to find highly probable matching pairs,
Fringe integral equation method for a truncated grounded dielectric slab
DEFF Research Database (Denmark)
Jørgensen, Erik; Maci, S.; Toccafondi, A.
2001-01-01
The problem of scattering by a semi-infinite grounded dielectric slab illuminated by an arbitrary incident TMz polarized electric field is studied by solving a new set of “fringe” integral equations (F-IEs), whose functional unknowns are physically associated with the wave diffraction processes...
Fourier path-integral Monte Carlo methods: Partial averaging
International Nuclear Information System (INIS)
Doll, J.D.; Coalson, R.D.; Freeman, D.L.
1985-01-01
Monte Carlo Fourier path-integral techniques are explored. It is shown that fluctuation renormalization techniques provide an effective means for treating the effects of high-order Fourier contributions. The resulting formalism is rapidly convergent, is computationally convenient, and has potentially useful variational aspects
Writing Integrative Reviews of the Literature: Methods and Purposes
Torraco, Richard J.
2016-01-01
This article discusses the integrative review of the literature as a distinctive form of research that uses existing literature to create new knowledge. As an expansion and update of a previously published article on this topic, it acknowledges the growth and appeal of this form of research to scholars, it identifies the main components of the…
Integral reactor system and method for fuel cells
Fernandes, Neil Edward; Brown, Michael S; Cheekatamarla, Praveen; Deng, Thomas; Dimitrakopoulos, James; Litka, Anthony F
2013-11-19
A reactor system is integrated internally within an anode-side cavity of a fuel cell. The reactor system is configured to convert hydrocarbons to smaller species while mitigating the formation of solid carbon. The reactor system may incorporate one or more of a pre-reforming section, an anode exhaust gas recirculation device, and a reforming section.
Numerical integration methods and layout improvements in the context of dynamic RNA visualization.
Shabash, Boris; Wiese, Kay C
2017-05-30
RNA visualization software tools have traditionally presented a static visualization of RNA molecules, with limited ability for users to interact with the resulting image once it is complete. Only a few tools have allowed for dynamic structures. One such tool is jViz.RNA. Currently, jViz.RNA employs a unique method for the creation of the RNA molecule layout by mapping the RNA nucleotides into vertices in a graph, which we call the detailed graph, and then utilizes a Newtonian-mechanics-inspired system of forces to calculate a layout for the RNA molecule. The work presented here focuses on improvements to jViz.RNA that allow the drawing of RNA secondary structures according to common drawing conventions, as well as dramatic run-time performance improvements. This is done first by presenting an alternative method for mapping the RNA molecule into a graph, which we call the compressed graph, and then by employing advanced numerical integration methods for the compressed graph representation. Comparing the compressed graph and detailed graph implementations, we find that the compressed graph produces results more consistent with RNA drawing conventions. However, we also find that employing the compressed graph method requires a more sophisticated initial layout to produce visualizations that would require minimal user interference. Comparing the two numerical integration methods demonstrates the higher stability of the Backward Euler method, and its resulting ability to handle much larger time steps, a high-priority feature for any software that entails user interaction. The work in this manuscript demonstrates the advantages of compressed graphs over detailed ones, as well as the advantages of employing the Backward Euler method over the Forward Euler method. These improvements produce more stable as well as visually aesthetic representations of RNA secondary structures. The results presented demonstrate that both the compressed graph representation, as well as the Backward
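The stability gap between the two integrators can be seen on a toy system. The sketch below (a hypothetical one-dimensional unit spring, not jViz.RNA's actual force model) contrasts explicit Forward Euler, which injects energy each step, with implicit Backward Euler, which is solvable in closed form here and dissipates energy:

```python
# Hypothetical 1-D unit spring (k = m = 1): compare explicit (Forward Euler)
# and implicit (Backward Euler) updates at the same, deliberately large step.
h, steps = 0.1, 1000

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x

# Forward Euler: uses only the current state; multiplies the energy by
# (1 + h^2) every step, so the trajectory spirals outward.
x, v = 1.0, 0.0
for _ in range(steps):
    x, v = x + h * v, v - h * x
e_forward = energy(x, v)

# Backward Euler: implicit update x' = x + h*v', v' = v - h*x', solved in
# closed form for the linear spring; unconditionally stable (dissipative).
x, v = 1.0, 0.0
for _ in range(steps):
    v = (v - h * x) / (1.0 + h * h)
    x = x + h * v
e_backward = energy(x, v)

print(e_forward, e_backward)  # forward blows up, backward decays
```

At the same step size the explicit update diverges while the implicit one stays bounded, which is why an implicit method can tolerate the large time steps interactive layout requires.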
International Nuclear Information System (INIS)
Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.
2007-01-01
The new integration method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. Therefore an exact correlation of angiographic and morphological pathological findings is possible and leads to a better understanding of OCT and FLA. Patients with occult findings in FLA were the group which profited most. For occult leakages, DIM could provide additional information, such as serous detachment of the retinal pigment epithelium (RPE) in a topography. So far it was unclear whether the same localization in the lesion was examined by FLA and OCT, especially when different staff were performing and interpreting the examination. Using DIM this problem could be solved with objective markers. This technique is a requirement for follow-up examinations by OCT. Using DIM for an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to examine the identical scan position at follow-up. Therefore, for follow-up in clinical studies it is mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de
Energy Technology Data Exchange (ETDEWEB)
Wang, Yan; Han, Bingqian; Chen, Nan; Deng, Dongyang; Guan, Hongtao [Department of Materials Science and Engineering, Yunnan University, 650091, Kunming (China); Wang, Yude, E-mail: ydwang@ynu.edu.cn [Department of Materials Science and Engineering, Yunnan University, 650091, Kunming (China); Yunnan Province Key Lab of Micro-Nano Materials and Technology, Yunnan University, 650091, Kunming (China)
2016-08-15
MnO{sub 2} hollow microspheres consisting of nanoribbons were successfully fabricated via a facile hydrothermal method with SiO{sub 2} sphere templates. The crystal structure, morphology and microwave absorption properties in the X and Ku bands of the as-synthesized samples were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM) and a vector network analyzer. The results show that the three-dimensional (3D) hollow microspheres are assembled from ultra-thin and narrow one-dimensional (1D) nanoribbons. A rational process for the formation of the hollow microspheres is proposed. The 3D MnO{sub 2} hollow microspheres possess improved dielectric and magnetic properties compared with the 1D nanoribbons prepared by the same procedures in the absence of SiO{sub 2} hard templates, which are closely related to their special nanostructures. The MnO{sub 2} microspheres also show much better microwave absorption properties in the X (8–12 GHz) and Ku (12–18 GHz) bands than the 1D MnO{sub 2} nanoribbons. A minimum reflection loss of −40 dB for the hollow microspheres can be observed at 14.2 GHz, and the bandwidth with reflection loss below −10 dB is 3.5 GHz at a thickness of only 4 mm. The possible mechanism for the enhanced microwave absorption properties is also discussed. - Graphical abstract: MnO{sub 2} hollow microspheres composed of nanoribbons show excellent microwave absorption properties in the X and Ku bands. - Highlights: • MnO{sub 2} hollow microspheres consisting of MnO{sub 2} nanoribbons were successfully prepared. • MnO{sub 2} hollow microspheres possess good microwave absorption performance. • The excellent microwave absorption properties are in the X and Ku microwave bands. • Electromagnetic impedance matching contributes greatly to the absorption properties.
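Reflection-loss figures like the −40 dB quoted above are conventionally computed from single-layer transmission-line theory for a metal-backed absorber. The sketch below uses that standard formula with purely illustrative complex permittivity and permeability values (the paper's measured material parameters are not reproduced here):

```python
import cmath
import math

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Reflection loss (dB) of a single metal-backed absorber layer from
    transmission-line theory, with impedance normalized to free space:
    Z_in = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r))."""
    c = 2.998e8  # speed of light, m/s
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m / c * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Hypothetical lossy material at 14.2 GHz with a 4 mm layer (illustrative):
rl = reflection_loss_db(eps_r=6.0 - 2.5j, mu_r=1.1 - 0.3j,
                        f_hz=14.2e9, d_m=0.004)
print(rl)  # negative dB: some incident power is absorbed
```

Sweeping frequency and thickness with measured ε and μ is how plots of minimum reflection loss and −10 dB bandwidth, like those discussed in the abstract, are generated.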
Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol
2011-01-01
A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032
Harada, Y.; Wessel, P.; Sterling, A.; Kroenke, L.
2002-12-01
Inter-hotspot motion within the Pacific plate is one of the most controversial issues in recent geophysical studies. However, it is a fact that many geophysical and geological data including ages and positions of seamount chains in the Pacific plate can largely be explained by a simple model of absolute motion derived from assumptions of rigid plates and fixed hotspots. Therefore we take the stand that if a model of plate motion can explain the ages and positions of Pacific hotspot tracks, inter-hotspot motion would not be justified. On the other hand, if any discrepancies between the model and observations are found, the inter-hotspot motion may then be estimated from these discrepancies. To make an accurate model of the absolute motion of the Pacific plate, we combined two different approaches: the polygonal finite rotation method (PFRM) by Harada and Hamano (2000) and the hot-spotting technique developed by Wessel and Kroenke (1997). The PFRM can determine accurate positions of finite rotation poles for the Pacific plate if the present positions of hotspots are known. On the other hand, the hot-spotting technique can predict present positions of hotspots if the absolute plate motion is given. Therefore we can undertake iterative calculations using the two methods. This hybrid method enables us to determine accurate finite rotation poles for the Pacific plate solely from geometry of Hawaii, Louisville and Easter(Crough)-Line hotspot tracks from around 70 Ma to present. Information of ages can be independently assigned to the model after the poles and rotation angles are determined. We did not detect any inter-hotspot motion from the geometry of these Pacific hotspot tracks using this method. The Ar-Ar ages of Pacific seamounts including new age data of ODP Leg 197 are used to test the newly determined model of the Pacific plate motion. The ages of Hawaii, Louisville, Easter(Crough)-Line, and Cobb hotspot tracks are quite consistent with each other from 70 Ma to
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large electromagnetic pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper
An Integration of Geophysical Methods to Explore Buried Structures on the Bench and in the Field
Booterbaugh, A. P.; Lachhab, A.
2011-12-01
In this study, an integration of geophysical methods and devices was implemented on the bench and in the field to accurately identify buried structures. Electrical resistivity and ground penetrating radar methods, including both a fabricated electrical resistivity apparatus and a commercial electrical resistivity device, were used. The primary goal of the study was to test the accuracy and reliability of the apparatus, which costs a fraction of the price of a commercially sold resistivity instrument. The apparatus consists of four electrodes, two multimeters, a 12-volt battery, a DC to AC inverter and wires. Using this apparatus, an electrical current is injected into earth material through the outer electrodes and the potential voltage is measured across the inner electrodes using a multimeter. The recorded potential and the intensity of the current can then be used to calculate the apparent resistivity of a given material. In this study the Wenner array, which consists of four equally spaced electrodes, was used due to its higher accuracy and greater resolution when investigating lateral variations of resistivity at shallow depths. In addition, the apparatus was used with an electrical resistivity device and a ground penetrating radar unit to explore the buried building foundation of Gustavus Adolphus Hall, located on the Susquehanna University campus, Selinsgrove, PA. The apparatus successfully produced consistent results at the bench level, revealing the location of small bricks buried under soil material. In the summer of 2010, seventeen electrical resistivity transects were conducted at the Gustavus Adolphus site and revealed remnants of the foundation. In the summer of 2011, a ground penetrating radar survey and an electrical resistivity tomography survey were conducted to further explore the site. Together these methods identified the location of the foundation and proved that the apparatus is a reliable tool for regular use on the bench
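For the Wenner array described above, the apparent resistivity follows directly from the measured potential and the injected current as ρa = 2πaV/I. A minimal sketch, with illustrative numbers rather than the survey's actual readings:

```python
import math

def wenner_apparent_resistivity(a, V, I):
    """Apparent resistivity (ohm*m) of a homogeneous half-space for a
    Wenner array with equal electrode spacing a (m), potential difference
    V (volts) across the inner electrodes, and injected current I (amps):
    rho_a = 2 * pi * a * V / I."""
    return 2.0 * math.pi * a * V / I

# Illustrative: 1 m spacing, 0.5 V measured, 100 mA injected.
rho_a = wenner_apparent_resistivity(a=1.0, V=0.5, I=0.1)
print(round(rho_a, 2))  # 31.42 ohm*m
```

Profiling along a transect at fixed spacing, as done here, maps lateral resistivity contrasts; a buried masonry foundation appears as an anomaly against the background soil values.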
Local defect correction for boundary integral equation methods
Kakuba, G.; Anthonissen, M.J.H.
2014-01-01
The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume
Local defect correction for boundary integral equation methods
Kakuba, G.; Anthonissen, M.J.H.
2013-01-01
This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element
Developments of integrated laser crystals by a direct bonding method
International Nuclear Information System (INIS)
Sugiyama, Akira; Fukuyama, Hiroyasu; Katsumata, Masaki; Tanaka, Mitsuhiro; Okada, Yukikatu
2003-01-01
Laser crystal integration using a neodymium-doped yttrium vanadate (orthovanadate) laser crystal and non-doped yttrium vanadate crystals that function as cold fingers has been demonstrated. A newly developed dry etching process was adopted in the preparation for contact of the mechanically polished surfaces. In the heat treatment process, temperature optimization was essential to avoid precipitation of vanadic acid caused by the thermo-chemical reaction in the vacuum furnace. The bonded crystal was studied via its optical characteristics, magnified inspection, and laser output performance when pumped by a CW laser diode. From these experiments, it was clear that the integrated Nd:YVO 4 laser crystal, securing well-improved thermal conductivity, can increase the laser output power to nearly twice that of the conventional single crystal, which cracked under high-power laser pumping of 10 W due to its intrinsically poor thermal conductivity. (author)
International Nuclear Information System (INIS)
Gray, S.K.; Noid, D.W.; Sumpter, B.G.
1994-01-01
We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique
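A minimal example of the technique (not the authors' polyethylene code): the velocity Verlet scheme, a standard explicit symplectic integrator, keeps the energy error of a unit harmonic oscillator bounded over a long trajectory instead of letting it drift, which is the property that makes such integrators attractive for large-scale MD:

```python
# Velocity Verlet on a unit harmonic oscillator (k = m = 1), an
# illustrative Hamiltonian system. Being symplectic, the scheme's energy
# error oscillates but stays bounded over very long trajectories.
h, steps = 0.05, 20000
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * x * x   # initial total energy
a = -x                           # force/mass for the unit spring
max_err = 0.0
for _ in range(steps):
    x += v * h + 0.5 * a * h * h  # position update
    a_new = -x                    # force at the new position
    v += 0.5 * (a + a_new) * h    # velocity update with averaged force
    a = a_new
    e = 0.5 * v * v + 0.5 * x * x
    max_err = max(max_err, abs(e - e0) / e0)
print(max_err)  # bounded, well under 1%, over 20,000 steps
```

A nonsymplectic explicit method of the same cost, by contrast, shows secular energy drift on this problem, which is the comparison the abstract draws for the polyethylene chain.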
Pesticides and public health: integrated methods of mosquito management.
Rose, R. I.
2001-01-01
Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strate...
Application of dematel method in integrated framework of corporate governance
Klozíková, Jana; Dočkalíková, Iveta
2015-01-01
Corporate governance emerged in recent decades and can be considered a new field of science. Even the most famous companies have failed from one day to the next. Their failures and scandals had a significant impact on local and international communities. Finding a new, effective framework for assessing the level of corporate governance can help ensure that similar negative events are not repeated. The new approach in corporate governance, an integrated framework created for corporate governance, is one ...
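The DEMATEL computation itself is compact: a direct-influence matrix gathered from expert ratings is normalized and then summed over all direct and indirect interaction paths via a matrix inverse. A sketch with a hypothetical 4×4 matrix of illustrative ratings (not data from the paper):

```python
import numpy as np

# Hypothetical direct-influence matrix A among four governance factors,
# expert-rated on a 0-4 scale (values are illustrative only).
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Normalize so the geometric series of indirect influences converges.
s = 1.0 / max(A.sum(axis=1).max(), A.sum(axis=0).max())
X = s * A                                  # normalized direct-influence matrix
# Total-relation matrix: T = X + X^2 + X^3 + ... = X (I - X)^{-1}.
T = X @ np.linalg.inv(np.eye(len(A)) - X)

R = T.sum(axis=1)    # influence each factor exerts
C = T.sum(axis=0)    # influence each factor receives
prominence = R + C   # overall importance of a factor
relation = R - C     # net cause (> 0) or net effect (< 0)
print(prominence, relation)
```

Ranking factors by prominence and splitting them into cause and effect groups by the sign of R − C is how DEMATEL structures an integrated governance framework.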
Methods for integrating a functional component into a microfluidic device
Simmons, Blake; Domeier, Linda; Woo, Noble; Shepodd, Timothy; Renzi, Ronald F.
2014-08-19
Injection molding is used to form microfluidic devices with integrated functional components. One or more functional components are placed in a mold cavity, which is then closed. Molten thermoplastic resin is injected into the mold and then cooled, thereby forming a solid substrate including the functional component(s). The solid substrate including the functional component(s) is then bonded to a second substrate, which may include microchannels or other features.
Overlay improvement methods with diffraction based overlay and integrated metrology
Nam, Young-Sun; Kim, Sunny; Shin, Ju Hee; Choi, Young Sin; Yun, Sang Ho; Kim, Young Hoon; Shin, Si Woo; Kong, Jeong Heung; Kang, Young Seog; Ha, Hun Hwan
2015-03-01
To meet the new requirement of securing more overlay margin, not only does optical overlay measurement face technical limitations in representing cell-pattern behavior, but larger measurement samples are also inevitable for minimizing statistical errors and better estimating circumstances within a lot. For these reasons, diffraction-based overlay (DBO) and integrated metrology (IM) are proposed in this paper as new approaches for overlay enhancement.
International Nuclear Information System (INIS)
Dejneko, A.O.
2011-01-01
Based on an analysis of existing models, methods and means of acquiring knowledge, a base method of automated knowledge acquisition has been chosen. On the basis of this method, a new approach to integrating information acquired from knowledge sources of different typologies has been proposed, and the concept of distributed knowledge acquisition with the aim of computerized formation of the most complete and consistent models of problem areas has been introduced. An original algorithm for distributed knowledge acquisition from databases, based on the construction of binary decision trees, has been developed [ru
International Nuclear Information System (INIS)
Sturgeon, R.E.; Chakrabarti, C.L.; Maines, I.S.; Bertels, P.C.
1975-01-01
Oscilloscopic traces of transient atomic absorption signals generated during continuous heating of a Carbon Rod Atomizer model 63 show features which are characteristic of the element being atomized. This research was undertaken to determine the significance and usefulness of the two analytically significant parameters, absorbance maximum and integrated absorbance. For measuring integrated absorbance, an electronic integrating control unit consisting of a timing circuit, a lock-in amplifier, and a digital voltmeter, which functions as a direct absorbance × second readout, has been designed, developed, and successfully tested. Oscilloscopic and recorder traces of the absorbance maximum and a digital display of the integrated absorbance are obtained simultaneously. For the elements studied, Cd, Zn, Cu, Al, Sn, Mo, and V, the detection limits and the precision obtained are practically identical for both methods of measurement. The sensitivities of the integration method are about the same as, or less than, those obtained by the peak-height method, whereas the calibration curves of the former are generally linear over wider ranges of concentrations. (U.S.)
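The two readouts compared here, absorbance maximum and integrated absorbance, can be mimicked numerically on a synthetic transient (a hypothetical Gaussian pulse standing in for an atomizer trace, not actual instrument data):

```python
import numpy as np

# Synthetic transient: a hypothetical Gaussian absorbance pulse,
# peak 0.8 A at t = 1 s with a 0.15 s width parameter.
t = np.linspace(0.0, 2.0, 2001)                      # time, s
absorbance = 0.8 * np.exp(-((t - 1.0) / 0.15) ** 2)  # dimensionless A

peak_height = absorbance.max()  # the absorbance-maximum readout
# Integrated absorbance (A*s), the quantity the electronic integrating
# unit reports, computed here by the trapezoidal rule:
integrated = np.sum(0.5 * (absorbance[1:] + absorbance[:-1]) * np.diff(t))
print(peak_height, integrated)
```

The peak height tracks the instantaneous maximum while the time integral averages over the whole transient, which is why calibration curves built on the integrated signal can stay linear over wider concentration ranges.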
Directory of Open Access Journals (Sweden)
Nederhof Esther
2012-07-01
Conclusions: First, extensive recruitment effort at the first assessment wave of a prospective population based cohort study has long lasting positive effects. Second, characteristics of hard-to-recruit responders are largely consistent across age groups and survey methods.
2012-01-01
Conclusions: First, extensive recruitment effort at the first assessment wave of a prospective population based cohort study has long lasting positive effects. Second, characteristics of hard-to-recruit responders are largely consistent across age groups and survey methods. PMID:22747967
Evaluation of time integration methods for transient response analysis of nonlinear structures
International Nuclear Information System (INIS)
Park, K.C.
1975-01-01
Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations of an integrator, show that the interaction between temporal step size and nonlinearities of structural systems has a pronounced effect on both accuracy and stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem, in order to: 1) demonstrate that it eliminates the present costly process of evaluating time integrator for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly-stable methods for application to nonlinear structures. Extension of the methodology for examination of the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)
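The step-size/stability interaction described above can be seen already on a scalar stiff model equation; a sketch contrasting an explicit integrator with an implicit (stiffly stable) one (the test equation y' = -λy and the chosen step size are illustrative, not the paper's model problem):

```python
# Forward (explicit) vs backward (implicit) Euler on the stiff test
# equation y' = -lam * y, y(0) = 1.  Forward Euler is stable only for
# dt < 2/lam; backward Euler is unconditionally (A-)stable.

def forward_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)   # explicit update
    return y

def backward_euler(lam, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)  # implicit update, solvable in closed form here
    return y

lam, dt, steps = 100.0, 0.05, 40   # dt = 0.05 > 2/lam = 0.02: explicit limit violated
explicit = forward_euler(lam, dt, steps)
implicit = backward_euler(lam, dt, steps)
# The exact solution decays toward zero; the explicit result blows up
# while the implicit one remains bounded and decaying.
```

For a nonlinear structure the effective λ depends on the current state, which is exactly why the abstract stresses local stability analysis rather than a single fixed stability limit.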
Integrated geophysical-geochemical methods for archaeological prospecting
Persson, Kjell
2005-01-01
A great number of field measurements with different methods and instruments were conducted in attempts to develop a method for an optimal combination of various geochemical and geophysical methods in archaeological prospecting. The research presented in this thesis focuses on a study of how different anthropogenic changes in the ground can be detected by geochemical and geophysical mapping and how the results can be presented. A six-year pilot project, Svealand in Vendel and Viking periods (S...
Towards thermodynamical consistency of quasiparticle picture
International Nuclear Information System (INIS)
Biro, T.S.; Shanenko, A.A.; Toneev, V.D.; Research Inst. for Particle and Nuclear Physics, Hungarian Academy of Sciences, Budapest
2003-01-01
The purpose of the present article is to call attention to a realistic quasiparticle-based description of quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamical consistency conditions is given. This representation allows one to review critically and systemize available phenomenological approaches to the deconfinement problem with respect to their thermodynamical consistency. Particular attention is paid to the development of a method for treating string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential, which can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on QCD thermodynamics.
Toward thermodynamic consistency of quasiparticle picture
International Nuclear Information System (INIS)
Biro, T.S.; Toneev, V.D.; Shanenko, A.A.
2003-01-01
The purpose of the present article is to call attention to some realistic quasiparticle-based description of quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamic consistency conditions is given. This representation allows one to review critically and systemize available phenomenological approaches to the deconfinement problem with respect to their thermodynamic consistency. Particular attention is paid to the development of a method for treating the string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential that can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on the QCD thermodynamics
Measurements of the absorption resonance integrals by the reactor oscillator method
International Nuclear Information System (INIS)
Markovic, V.; Kocic, A.
1965-12-01
Experimental values of the resonance integrals for silver vary significantly from author to author, which is why this sample was chosen for the resonance integral (RI) measurements. On the other hand, nuclear fuel (for example, natural uranium) still represents an interesting subject for research in reactor physics; measurements on natural uranium were done as a function of S/M. Measurements were performed with the amplitude reactor oscillator ROB-1/5, with a precision of 0.5%-2% depending on the operating conditions of the oscillator. The measurements were carried out at the heavy water reactor RB with 2% enriched uranium fuel.
Assembly and method for testing the integrity of stuffing tubes
Energy Technology Data Exchange (ETDEWEB)
Morrison, E.F.
1996-12-31
A stuffing tube integrity checking assembly includes first and second annular seals, with each seal adapted to be positioned about a stuffing tube penetration component. An annular inflation bladder is provided, the bladder having a slot extending longitudinally along it and including a separator for sealing the slot. A first valve is in fluid communication with the bladder for introducing pressurized fluid into the space defined by the bladder when mounted about the tube. First and second releasable clamps are provided. Each clamp assembly is positioned about the bladder for securing the bladder to one of the seals, thereby establishing a fluid-tight chamber about the tube.
The Integration Method of Ceramic Arts in the Product Design
Shuxin, Wang
2018-03-01
As one of the four ancient civilizations, China invented the firing technology of ceramics, which has made a great contribution to the progress and development of human society. Even in modern life, technological development still needs ceramics, and a large number of artists active in the field of contemporary art take ceramics as their carrier. Ceramics can be seen everywhere in our daily life; this paper mainly discusses the means of integrating ceramic art into product design.
Two new solutions to the third-order symplectic integration method
International Nuclear Information System (INIS)
Iwatsu, Reima
2009-01-01
Two new solutions are obtained for the symplecticity conditions of the explicit third-order partitioned Runge-Kutta time integration method. One of them has a larger stability limit and better dispersion properties than Ruth's method.
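For reference, Ruth's classic third-order coefficients, the baseline the two new solutions are compared against, can be stated and exercised directly; a sketch on the harmonic oscillator (the kick/drift pairing convention is an assumption, and the paper's new coefficient sets are not reproduced here):

```python
# Ruth's third-order coefficients in one common convention
# (sum(C) = sum(D) = 1); kick/drift orderings differ in the literature,
# but any such composition of shear maps is symplectic.
C = (1.0, -2.0 / 3.0, 2.0 / 3.0)          # drift (position) coefficients
D = (-1.0 / 24.0, 3.0 / 4.0, 7.0 / 24.0)  # kick (momentum) coefficients

def ruth3_step(q, p, dt, force):
    """One step of an explicit partitioned Runge-Kutta (drift/kick) update."""
    for c, d in zip(C, D):
        q += c * dt * p          # drift substep
        p += d * dt * force(q)   # kick substep
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2 with force(q) = -q.
q, p, dt = 1.0, 0.0, 0.1
for _ in range(10000):
    q, p = ruth3_step(q, p, dt, lambda x: -x)
energy = 0.5 * (p * p + q * q)
# Being symplectic, the scheme keeps the energy error bounded over long
# runs instead of drifting secularly.
```

The two new coefficient triples from the paper would drop into `C` and `D` unchanged; stability limit and dispersion are then compared by sweeping `dt`.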
International Nuclear Information System (INIS)
Hong, Ser Gi; Kim, Kang-Seog
2011-01-01
This paper describes iteration methods that use resonance integral tables to estimate the effective resonance cross sections in heterogeneous transport lattice calculations. These methods were devised to avoid the effort of converting resonance integral tables into the subgroup data used in the physical subgroup method. Since they use the resonance integral tables directly rather than subgroup data, they do not incur the error introduced by converting resonance integrals into subgroup data. The effective resonance cross sections are estimated iteratively for each resonance nuclide through heterogeneous fixed-source calculations over the whole problem domain to obtain the background cross sections. These methods have been implemented in the transport lattice code KARMA, which uses the method of characteristics (MOC) to solve the transport equation. The computational results show that these iteration methods are quite promising for practical transport lattice calculations.
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
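The predictor-corrector idea behind such schemes can be illustrated with classical low-order coefficients; a sketch using a 2-step Adams-Bashforth predictor with a trapezoidal corrector (the paper's numerically constructed high-order coefficients are not reproduced here):

```python
# Predict-evaluate-correct (PECE) sketch for y' = f(t, y): AB2 predictor,
# trapezoidal corrector.  The first step is bootstrapped with Heun's
# method since the 2-step predictor needs one history value.

def pece(f, y0, t0, dt, steps):
    t, y = t0, y0
    f_prev = f(t, y)
    y_star = y + dt * f_prev                          # Heun bootstrap: predict
    y = y + 0.5 * dt * (f_prev + f(t + dt, y_star))   # ...and correct
    t += dt
    for _ in range(steps - 1):
        f_now = f(t, y)
        y_pred = y + dt * (1.5 * f_now - 0.5 * f_prev)  # AB2 predictor
        y = y + 0.5 * dt * (f_now + f(t + dt, y_pred))  # trapezoidal corrector
        f_prev = f_now
        t += dt
    return y

# Decay test problem y' = -y, y(0) = 1, integrated to t = 1.
y_num = pece(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
# exact value: exp(-1) ~ 0.3678794
```

The paper's contribution is, in effect, replacing the fixed AB/AM coefficients above with numerically optimized ones, which is what buys the larger stable time step relative to RK4.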
Numerical method for solving linear Fredholm fuzzy integral equations of the second kind
Energy Technology Data Exchange (ETDEWEB)
Abbasbandy, S. [Department of Mathematics, Imam Khomeini International University, P.O. Box 288, Ghazvin 34194 (Iran, Islamic Republic of)]. E-mail: saeid@abbasbandy.com; Babolian, E. [Faculty of Mathematical Sciences and Computer Engineering, Teacher Training University, Tehran 15618 (Iran, Islamic Republic of); Alavi, M. [Department of Mathematics, Arak Branch, Islamic Azad University, Arak 38135 (Iran, Islamic Republic of)
2007-01-15
In this paper we use the parametric form of a fuzzy number and convert a linear fuzzy Fredholm integral equation into two linear systems of integral equations of the second kind in the crisp case. A numerical method such as the Nyström method can then be used to find an approximate solution of each system and hence obtain an approximation for the fuzzy solution of the linear fuzzy Fredholm integral equation of the second kind. The proposed method is illustrated by solving some numerical examples.
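The crisp-case solve that the abstract relies on can be sketched concretely; a Nyström discretization with the trapezoidal rule for u(x) = f(x) + λ∫₀¹ K(x,t)u(t)dt (the kernel K(x,t) = xt is an illustrative choice with exact solution u(x) = 3x/2; the fuzzy parametric splitting itself is not shown):

```python
def nystrom_solve(f, K, lam, n=50, a=0.0, b=1.0):
    """Nystrom method for u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt:
    collocate at trapezoidal nodes and solve the resulting linear system."""
    h = (b - a) / (n - 1)
    t = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0          # trapezoidal weights
    # Assemble (I - lam * W * K) u = f at the nodes.
    A = [[(1.0 if i == j else 0.0) - lam * w[j] * K(t[i], t[j])
          for j in range(n)] for i in range(n)]
    rhs = [f(x) for x in t]
    for col in range(n):            # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            rhs[r] -= m * rhs[col]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = (rhs[r] - s) / A[r][r]
    return t, u

# Example: K(x,t) = x*t, f(x) = x, lam = 1  =>  exact solution u(x) = 1.5*x
nodes, u = nystrom_solve(lambda x: x, lambda x, t: x * t, 1.0)
```

In the fuzzy setting, the same routine would be called twice, once for the lower and once for the upper parametric branch of the fuzzy right-hand side.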
Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations
Directory of Open Access Journals (Sweden)
Bahman Ghazanfari
2013-08-01
Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve systems of Fredholm integral equations, and its effectiveness is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).
Property Valuation: Integration of Methods and Determination of Depreciation
Tempelmans Plat, H.; Verhaegh, M.
2000-01-01
Property valuation up to now is a global guess. On the one hand we have the Investment Method which regards a property as just a sum of money, on the other hand we have the Contractor's Method which is based on the actual new construction costs of the building and the actual value of the land. Both
Integrals of random fields treated by the model correction factor method
DEFF Research Database (Denmark)
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...
Nonlinear Fredholm Integral Equation of the Second Kind with Quadrature Methods
Directory of Open Access Journals (Sweden)
M. Jafari Emamzadeh
2010-06-01
Full Text Available In this paper, a numerical method for solving the nonlinear Fredholm integral equation is presented. We approximate the solution of this equation by quadrature methods and thereby solve the nonlinear Fredholm integral equation more accurately. Several examples are given at the end of this paper.
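A concrete instance of the quadrature approach: discretize the integral with the trapezoidal rule and iterate the resulting nonlinear system to a fixed point (the kernel, nonlinearity, and iteration count below are illustrative assumptions with known exact solution u(x) = x, not the paper's examples; the iteration is assumed contractive, as it is here):

```python
def solve_nonlinear_fredholm(f, K, g, n=51, a=0.0, b=1.0, iters=40):
    """Fixed-point iteration for u(x) = f(x) + int_a^b K(x,t) g(u(t)) dt,
    with the integral replaced by a composite trapezoidal rule."""
    h = (b - a) / (n - 1)
    t = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0   # trapezoidal weights
    u = [f(x) for x in t]    # initial guess u0 = f
    for _ in range(iters):
        u = [f(t[i]) + sum(w[j] * K(t[i], t[j]) * g(u[j]) for j in range(n))
             for i in range(n)]
    return t, u

# u(x) = 0.75 x + int_0^1 x t u(t)^2 dt  has the exact solution u(x) = x.
t, u = solve_nonlinear_fredholm(lambda x: 0.75 * x,
                                lambda x, s: x * s,   # kernel K(x,t) = x t
                                lambda v: v * v)      # nonlinearity g(u) = u^2
```

For non-contractive problems the inner iteration would be replaced by Newton's method on the discretized system, but the quadrature step is identical.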
Directory of Open Access Journals (Sweden)
Mohammad Almousa
2013-01-01
Full Text Available The aim of this study is to present the use of a semi-analytical method, the optimal homotopy asymptotic method (OHAM), for solving linear Fredholm integral equations of the first kind. Three examples are discussed to show the ability of the method to solve linear Fredholm integral equations of the first kind. The results indicate that the method is very effective and simple.
Systems and methods for integrating ion mobility and ion trap mass spectrometers
Ibrahim, Yehia M.; Garimella, Sandilya; Prost, Spencer A.
2018-04-10
Described herein are examples of systems and methods for integrating IMS and MS systems. In certain examples, systems and methods for decoding double multiplexed data are described. The systems and methods can also perform multiple refining procedures in order to minimize the demultiplexing artifacts. The systems and methods can be used, for example, for the analysis of proteomic and petroleum samples, where the integration of IMS and high mass resolution are used for accurate assignment of molecular formulae.
Directory of Open Access Journals (Sweden)
Nuraini Sari
2015-12-01
Full Text Available The purpose of this study is to evaluate how Starbucks Corporation uses transfer pricing to minimize its tax bill, and whether Indonesia's domestic rules could handle a case like Starbucks UK's if it happened in Indonesia. Three steps were conducted in this study. First, using information provided by UK Her Majesty's Revenue and Customs (HMRC) and other related articles, the methods used by Starbucks UK to minimize its tax bill were identified. Second, the viewpoint of the Organisation for Economic Co-operation and Development (OECD) regarding the Starbucks Corporation case was examined. Third, how Indonesia's transfer pricing rules would work if Starbucks UK's case happened in Indonesia was analyzed. The results showed that three inter-company transactions helped Starbucks UK minimize its tax bill: coffee costs, royalties on intangible property, and interest on inter-company loans. Through a study of the OECD's BEPS action plans, it is recommended to improve the OECD Model Tax Convention, including Indonesia's domestic tax rules, in order to produce fair and transparent judgments on transfer pricing. This study concluded that under the current tax rules, although UK HMRC has been disadvantaged by the transfer pricing practices of many multinational companies, it still cannot prove that those practices are inconsistent with the arm's length principle. Therefore, current international tax rules need to be improved.
Franz, Anke; Worrell, Marcia; Vögele, Claus
2013-01-01
In recent years, combining quantitative and qualitative research methods in the same study has become increasingly acceptable in both applied and academic psychological research. However, a difficulty for many mixed methods researchers is how to integrate findings consistently. The value of using a coherent framework throughout the research…
Nederhof, Esther; Jörg, Frederike; Raven, Dennis; Veenstra, René; Verhulst, Frank C; Ormel, Johan; Oldehinkel, Albertine J
2012-07-02
Conclusions: First, extensive recruitment effort at the first assessment wave of a prospective population based cohort study has long lasting positive effects. Second, characteristics of hard-to-recruit responders are largely consistent across age groups and survey methods.
Energy Technology Data Exchange (ETDEWEB)
Uslar, Mathias; Beenken, Petra; Beer, Sebastian [OFFIS, Oldenburg (Germany)
2009-07-01
The ongoing integration of distributed energy resources into the existing power grid has led to both increased communication costs and an increased need for interoperability between the involved actors. In this context, standardized and ontology-based data models help to reduce integration costs in heterogeneous system landscapes. Using ontology-based security profiles, such models can be extended with metadata containing information about security measures for energy-related data in need of protection. By this approach, we achieve both a unified data model and a unified security level. (orig.)
The general 2-D moments via integral transform method for acoustic radiation and scattering
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions, and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute-force evaluation are presented. [Work sponsored by ONR and the NSWCCD ILIR Board.]
Integrative health care method based on combined complementary ...
African Journals Online (AJOL)
Background: There are various models of health care, such as the ... sociological, economic, systemic of Neuman, cognitive medicine or ecological, ayurvedic, ... 2013, with a comprehensive approach in 64 patients using the clinical method.
On beam propagation methods for modelling in integrated optics
Hoekstra, Hugo
1997-01-01
In this paper the main features of the Fourier transform and finite difference beam propagation methods are summarized. Limitations and improvements, related to the paraxial approximation, finite differencing and tilted structures are discussed.
Extending product modeling methods for integrated product development
DEFF Research Database (Denmark)
Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný
2013-01-01
Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products. Updated design requirements then have to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM and PVM methods, in the presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix...
Simulation methods of nuclear electromagnetic pulse effects in integrated circuits
International Nuclear Information System (INIS)
Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen
2013-01-01
This paper first introduces methods to compute the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP), including the finite-difference time-domain (FDTD) and transmission line matrix (TLM) methods. It then discusses the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs. Finally, combining the methods for computing the TL response, a new method for simulating a transmission line in an IC illuminated by NEMP is put forward. (authors)
Systems and methods for switched-inductor integrated voltage regulators
Shepard, Kenneth L.; Sturcken, Noah Andrew
2017-12-12
Power controller includes an output terminal having an output voltage, at least one clock generator to generate a plurality of clock signals and a plurality of hardware phases. Each hardware phase is coupled to the at least one clock generator and the output terminal and includes a comparator. Each hardware phase is configured to receive a corresponding one of the plurality of clock signals and a reference voltage, combine the corresponding clock signal and the reference voltage to produce a reference input, generate a feedback voltage based on the output voltage, compare the reference input and the feedback voltage using the comparator and provide a comparator output to the output terminal, whereby the comparator output determines a duty cycle of the power controller. An integrated circuit including the power controller is also provided.
Challenges and promises of integrating knowledge engineering and qualitative methods
Lundberg, C. Gustav; Holm, Gunilla
Our goal is to expose some of the close ties that exist between knowledge engineering (KE) and qualitative methodology (QM). Many key concepts of qualitative research, for example meaning, commonsense, understanding, and everyday life, overlap with central research concerns in artificial intelligence. These shared interests constitute a largely unexplored avenue for interdisciplinary cooperation. We compare and take some steps toward integrating two historically diverse methodologies by exploring the commonalities of KE and QM both from a substantive and a methodological/technical perspective. In the second part of this essay, we address knowledge acquisition problems and procedures. Knowledge acquisition within KE has been based primarily on cognitive psychology/science foundations, whereas knowledge acquisition within QM has a broader foundation in phenomenology, symbolic interactionism, and ethnomethodology. Our discussion and examples are interdisciplinary in nature. We do not suggest that there is a clash between the KE and QM frameworks, but rather that the lack of communication potentially may limit each framework's future development.
Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.
2017-05-01
An integrated geophysical investigation was performed at S dam located at Dadu basin in China to assess the condition of the dam curtain. The key methodology of the integrated technique used was flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify the internal erosion which had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, water injection test, and rock quality designation.
A sub-structure method for multidimensional integral transport calculations
International Nuclear Information System (INIS)
Kavenoky, A.; Stankovski, Z.
1983-03-01
A new method has been developed for fine-structure burn-up calculations of very heterogeneous, large-size media. It is a generalization of the well-known surface-source method, allowing the coupling of actual two-dimensional heterogeneous assemblies, called sub-structures. The method has been applied to a rectangular medium, divided into sub-structures, containing rectangular and/or cylindrical fuel, moderator and structure elements. The sub-structures are divided into homogeneous zones. A zone-wise flux expansion is used to formulate a direct collision probability problem within each zone (linear or flat flux expansion in the rectangular zones, flat flux in the others). The coupling of the sub-structures is performed by making extra assumptions on the currents entering and leaving the interfaces. The accuracies and computing times achieved are illustrated by numerical results on two benchmark problems
A Method for Improving the Integrity of Peer Review.
Dadkhah, Mehdi; Kahani, Mohsen; Borchardt, Glenn
2017-08-15
Peer review is the most important aspect of reputable journals. Without it, we would be unsure about whether the material published was as valid and reliable as is possible. However, with the advent of the Internet, scientific literature has now become subject to a relatively new phenomenon: fake peer reviews. Some dishonest researchers have been manipulating the peer review process to publish what are often inferior papers. There are even papers that explain how to do it. This paper discusses one of those methods and how editors can defeat it by using a special review ID. This method is easy to understand and can be added to current peer review systems easily.
Enhancing user experience design with an integrated storytelling method
Peng, Qiong; Matterns, Jean Bernard; Marcus, A.
2016-01-01
Storytelling has been known as a service design method and been used broadly not only in service design but also in the context of user experience design. However, practitioners cannot yet fully appreciate the benefits of storytelling, and often confuse storytelling with storyboarding and scenarios.
Microfluidic devices and methods for integrated flow cytometry
Srivastava, Nimisha [Goleta, CA; Singh, Anup K [Danville, CA
2011-08-16
Microfluidic devices and methods for flow cytometry are described. In described examples, various sample handling and preparation steps may be carried out within a same microfluidic device as flow cytometry steps. A combination of imaging and flow cytometry is described. In some examples, spiral microchannels serve as incubation chambers. Examples of automated sample handling and flow cytometry are described.
Integration of educational methods and physical settings: Design ...
African Journals Online (AJOL)
... setting without having an architectural background. The theoretical framework of the research allows designers to consider key features and users' possible activities in High/ Scope settings and shape their designs accordingly. Keywords: daily activity; design; High/Scope education; interior space; teaching method ...
Integrating Research Skills Training into Non--Research Methods Courses
Woolf, Jules
2014-01-01
Research skills are a valued commodity by industry and university administrators. Despite the importance placed on these skills students typically dislike taking research method courses where these skills are learned. However, training in research skills does not necessarily have to be confined to these courses. In this study participants at a…
Effect of integration of cultural, botanical, and chemical methods of ...
African Journals Online (AJOL)
A field experiment was conducted from November 2011 to June 2013 to evaluate the effects of botanical, cultural, and chemical methods on termite colony survival, crop and wooden damage, and other biological activities in Ghimbi district of western Ethiopia. The termite mounds were dug and the following treatments were ...
14th International Conference on Integral Methods in Science and Engineering
Riva, Matteo; Lamberti, Pier; Musolino, Paolo
2017-01-01
This contributed volume contains a collection of articles on the most recent advances in integral methods. The first of two volumes, this work focuses on the construction of theoretical integral methods. Written by internationally recognized researchers, the chapters in this book are based on talks given at the Fourteenth International Conference on Integral Methods in Science and Engineering, held July 25-29, 2016, in Padova, Italy. A broad range of topics is addressed, such as: • Integral equations • Homogenization • Duality methods • Optimal design • Conformal techniques This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines, and to other professionals who use integration as an essential tool in their work.
International Nuclear Information System (INIS)
Ramani, D.T.
1977-01-01
Knowledge of the structural integrity of a reactor vessel under thermal shock effects is related to the safety and operational requirements in assessing the adequacy and flawless functioning of nuclear power systems. Following a loss-of-coolant accident (LOCA), the integrity of the reactor vessel under the sudden thermal shock induced by actuation of the emergency core cooling system (ECCS) must be maintained to ensure safe and orderly shutdown of the reactor and its components. The paper encompasses the criteria underlying a fracture mechanics method of analysis to evaluate the structural integrity of a typical 950 MWe PWR vessel subjected to very drastic changes in thermal and mechanical stress levels in the reactor vessel wall. The main object of this investigation therefore consists in assessing the capability of a PWR vessel to withstand the most critical thermal shock without impairing its ability to conserve vital coolant owing to probable crack propagation. (Auth.)
Enrichment Assay Methods Development for the Integrated Cylinder Verification System
International Nuclear Information System (INIS)
Smith, Leon E.; Misner, Alex C.; Hatchell, Brian K.; Curtis, Michael M.
2009-01-01
International Atomic Energy Agency (IAEA) inspectors currently perform periodic inspections at uranium enrichment plants to verify UF6 cylinder enrichment declarations. Measurements are typically performed with handheld high-resolution sensors on a sampling of cylinders taken to be representative of the facility's entire product-cylinder inventory. Pacific Northwest National Laboratory (PNNL) is developing a concept to automate the verification of enrichment plant cylinders to enable 100 percent product-cylinder verification and potentially, mass-balance calculations on the facility as a whole (by also measuring feed and tails cylinders). The Integrated Cylinder Verification System (ICVS) could be located at key measurement points to positively identify each cylinder, measure its mass and enrichment, store the collected data in a secure database, and maintain continuity of knowledge on measured cylinders until IAEA inspector arrival. The three main objectives of this FY09 project are summarized here and described in more detail in the report: (1) Develop a preliminary design for a prototype NDA system, (2) Refine PNNL's MCNP models of the NDA system, and (3) Procure and test key pulse-processing components. Progress against these tasks to date, and next steps, are discussed.
Enrichment Assay Methods Development for the Integrated Cylinder Verification System
Energy Technology Data Exchange (ETDEWEB)
Smith, Leon E.; Misner, Alex C.; Hatchell, Brian K.; Curtis, Michael M.
2009-10-22
International Atomic Energy Agency (IAEA) inspectors currently perform periodic inspections at uranium enrichment plants to verify UF6 cylinder enrichment declarations. Measurements are typically performed with handheld high-resolution sensors on a sampling of cylinders taken to be representative of the facility's entire product-cylinder inventory. Pacific Northwest National Laboratory (PNNL) is developing a concept to automate the verification of enrichment plant cylinders to enable 100 percent product-cylinder verification and potentially, mass-balance calculations on the facility as a whole (by also measuring feed and tails cylinders). The Integrated Cylinder Verification System (ICVS) could be located at key measurement points to positively identify each cylinder, measure its mass and enrichment, store the collected data in a secure database, and maintain continuity of knowledge on measured cylinders until IAEA inspector arrival. The three main objectives of this FY09 project are summarized here and described in more detail in the report: (1) Develop a preliminary design for a prototype NDA system, (2) Refine PNNL's MCNP models of the NDA system, and (3) Procure and test key pulse-processing components. Progress against these tasks to date, and next steps, are discussed.
[Integrated intensive treatment of tinnitus: method and initial results].
Mazurek, B; Georgiewa, P; Seydel, C; Haupt, H; Scherer, H; Klapp, B F; Reisshauer, A
2005-07-01
In recent years, no major advances have been made in understanding the mechanisms underlying the development of tinnitus. Hence, present therapeutic strategies aim at decoupling the subconscious from the perception of tinnitus. Drawing on lessons from existing tinnitus retraining and desensitisation therapies, a new integrated day-hospital treatment strategy lasting 7-14 days has been developed at the Charité Hospital and is presented in this paper. The strategy, which treats tinnitus close to the patient's home, is designed for patients who feel that tinnitus impairs their perception and performance and who show evidence of mental and physical strain. In view of the etiologically heterogeneous and multiple events connected with tinnitus, the approach also reflects the fact that somatic and psychosocial factors are equally involved. Therapy should therefore aim at diagnosing and therapeutically influencing those psychosocial factors that degrade the hearing impression to such an extent that the affected persons suffer from strain. The first results of therapy-dependent changes in 46 patients suffering from chronic tinnitus are presented. The data were evaluated before and after 7 days of treatment and 6 months after the end of treatment. Immediately after the treatment, the scores of both the tinnitus questionnaire (Goebel and Hiller) and its subscales improved significantly. These results were maintained during the 6-month post-treatment period and even improved further.
Monari, Antonio; Rivail, Jean-Louis; Assfeld, Xavier
2013-02-19
Molecular mechanics methods can efficiently compute the macroscopic properties of a large molecular system but cannot represent the electronic changes that occur during a chemical reaction or an electronic transition. Quantum mechanical methods can accurately simulate these processes, but they require considerably greater computational resources. Because electronic changes typically occur in a limited part of the system, such as the solute in a molecular solution or the substrate within the active site of enzymatic reactions, researchers can limit the quantum computation to this part of the system. Researchers take into account the influence of the surroundings by embedding this quantum computation into a calculation of the whole system described at the molecular mechanical level, a strategy known as the mixed quantum mechanics/molecular mechanics (QM/MM) approach. The accuracy of this embedding varies according to the types of interactions included, whether they are purely mechanical or classically electrostatic. This embedding can also introduce the induced polarization of the surroundings. The difficulty in QM/MM calculations comes from the splitting of the system into two parts, which requires severing the chemical bonds that link the quantum mechanical subsystem to the classical subsystem. Typically, researchers replace the quantoclassical atoms, those at the boundary between the subsystems, with a monovalent link atom. For example, researchers might add a hydrogen atom when a C-C bond is cut. This Account describes another approach, the Local Self Consistent Field (LSCF), which was developed in our laboratory. LSCF links the quantum mechanical portion of the molecule to the classical portion using a strictly localized bond orbital extracted from a small model molecule for each bond. In this scenario, the quantoclassical atom has an apparent nuclear charge of +1. To achieve correct bond lengths and force constants, we must take into account the inner shell of
Neutron imaging integrated circuit and method for detecting neutrons
Nagarkar, Vivek V.; More, Mitali J.
2017-12-05
The present disclosure provides a neutron imaging detector and a method for detecting neutrons. In one example, a method includes providing a neutron imaging detector comprising a plurality of memory cells and a conversion layer on the memory cells, setting one or more of the memory cells to a first charge state, positioning the neutron imaging detector in a neutron environment for a predetermined time period, and measuring a charge-state change at one of the plurality of memory cells from the first charge state to a second charge state lower than the first, where the charge-state change indicates detection of a neutron at that memory cell.
Standard Test Methods for Determining Mechanical Integrity of Photovoltaic Modules
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 These test methods cover procedures for determining the ability of photovoltaic modules to withstand the mechanical loads, stresses and deflections used to simulate, on an accelerated basis, high wind conditions, heavy snow and ice accumulation, and non-planar installation effects. 1.1.1 A static load test to 2400 Pa is used to simulate wind loads on both module surfaces. 1.1.2 A static load test to 5400 Pa is used to simulate heavy snow and ice accumulation on the module front surface. 1.1.3 A twist test is used to simulate the non-planar mounting of a photovoltaic module by subjecting it to a twist angle of 1.2°. 1.1.4 A cyclic load test of 10 000 cycles duration and peak loading to 1440 Pa is used to simulate dynamic wind or other flexural loading. Such loading might occur during shipment or after installation at a particular location. 1.2 These test methods define photovoltaic test specimens and mounting methods, and specify parameters that must be recorded and reported. 1.3 Any individual mech...
Integration of equations of parabolic type by the method of nets
Saul'yev, V. K.; Stark, M.; Ulam, S.
1964-01-01
International Series of Monographs in Pure and Applied Mathematics, Volume 54: Integration of Equations of Parabolic Type by the Method of Nets deals with solving parabolic partial differential equations using the method of nets. The first part of this volume focuses on the construction of net equations, with emphasis on the stability and accuracy of the approximating net equations. The method of nets or method of finite differences (used to define the corresponding numerical method in ordinary differential equations) is one of many different approximate methods of integration of partial diff
VALUE - Validating and Integrating Downscaling Methods for Climate Change Research
Maraun, Douglas; Widmann, Martin; Benestad, Rasmus; Kotlarski, Sven; Huth, Radan; Hertig, Elke; Wibig, Joanna; Gutierrez, Jose
2013-04-01
Our understanding of global climate change is mainly based on General Circulation Models (GCMs) with a relatively coarse resolution. Since climate change impacts are mainly experienced on regional scales, high-resolution climate change scenarios need to be derived from GCM simulations by downscaling. Several projects have been carried out over the last years to validate the performance of statistical and dynamical downscaling, yet several aspects have not been systematically addressed: variability on sub-daily, decadal and longer time-scales, extreme events, spatial variability and inter-variable relationships. Different downscaling approaches such as dynamical downscaling, statistical downscaling and bias correction approaches have not been systematically compared. Furthermore, collaboration between different communities, in particular regional climate modellers, statistical downscalers and statisticians has been limited. To address these gaps, the EU Cooperation in Science and Technology (COST) action VALUE (www.value-cost.eu) has been brought to life. VALUE is a research network with participants from currently 23 European countries running from 2012 to 2015. Its main aim is to systematically validate and develop downscaling methods for climate change research in order to improve regional climate change scenarios for use in climate impact studies. Inspired by the co-design idea of the international research initiative "Future Earth", stakeholders of climate change information have been involved in the definition of research questions to be addressed and are actively participating in the network. The key idea of VALUE is to identify the relevant weather and climate characteristics required as input for a wide range of impact models and to define an open framework to systematically validate these characteristics. Based on a range of benchmark data sets, in principle every downscaling method can be validated and compared with competing methods. The results of
Integration of OpenMC methods into MAMMOTH and Serpent
Energy Technology Data Exchange (ETDEWEB)
Kerby, Leslie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Idaho State Univ., Idaho Falls, ID (United States); DeHart, Mark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tumulak, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States); Univ. of Michigan, Ann Arbor, MI (United States)
2016-09-01
OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
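As a rough illustration of the functional-expansion-tally (FET) idea mentioned above (a sketch of the general technique, not OpenMC's or Serpent's actual implementation), a sampled spatial distribution can be compressed into a few Legendre coefficients instead of a binned histogram; the sampling setup and variable names here are assumptions for the demonstration:

```python
import random

def legendre(n, x):
    # Evaluate the Legendre polynomial P_n(x) via the Bonnet recurrence.
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def fet_coefficients(samples, order):
    # For X drawn from a density f on [-1, 1], E[P_n(X)] = integral of f*P_n,
    # so the expansion coefficients are c_n = (2n+1)/2 * mean(P_n(x_i)).
    n_samples = len(samples)
    return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / n_samples
            for n in range(order + 1)]

random.seed(0)
# Toy "tally": particle positions drawn from a flat distribution on [-1, 1].
samples = [random.uniform(-1.0, 1.0) for _ in range(20000)]
coeffs = fet_coefficients(samples, 3)
# For the uniform density 1/2, c_0 = 0.5 and the higher coefficients vanish
# up to statistical noise, so a few numbers capture the whole shape.
```

Passing such coefficient vectors, rather than fine-grained mesh tallies, is what makes FETs attractive for multiphysics data exchange.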
Improving protein function prediction methods with integrated literature data
Directory of Open Access Journals (Sweden)
Gabow Aaron P
2008-04-01
Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance in the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably-sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10% which yield the best trade off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder
Integration of nanomechanical sensors on CMOS by nanopatterning methods
Arcamone, Julien
2007-01-01
Available from TDX. Title taken from the digitized cover. This thesis was carried out mainly at the Centro Nacional de Microelectrónica in Barcelona (CNM-IMB) of the CSIC, and partly at the Institut des Nanotechnologies de Lyon (France) of the CNRS. Dr. Francesc Pérez-Murano co-supervised the thesis at the CNM, and Prof. Georges Brémond at the INL. This work is part of the European project NaPa ('Emerging Nanopatterning Methods'), whose objective is to develop tech...
A systematic and efficient method to compute multi-loop master integrals
Directory of Open Access Journals (Sweden)
Xiao Liu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
A systematic and efficient method to compute multi-loop master integrals
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
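The core idea of the differential-equations approach can be shown on a toy one-parameter integral rather than a genuine multi-loop family (the specific integral and step count below are assumptions for illustration): differentiate under the integral sign to get an ODE in an external parameter, fix the constant at a point where the integral is elementary, and integrate the ODE numerically.

```python
import math

def rhs(s, _i):
    # For I(s) = integral_0^1 dx/(x+s), differentiating under the
    # integral sign gives dI/ds = -integral_0^1 dx/(x+s)^2
    #                           = 1/(1+s) - 1/s.
    return 1.0 / (1.0 + s) - 1.0 / s

def rk4(f, s0, i0, s1, steps):
    # Classical fourth-order Runge-Kutta integration of dI/ds = f(s, I).
    h = (s1 - s0) / steps
    s, i = s0, i0
    for _ in range(steps):
        k1 = f(s, i)
        k2 = f(s + h / 2, i + h * k1 / 2)
        k3 = f(s + h / 2, i + h * k2 / 2)
        k4 = f(s + h, i + h * k3)
        i += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        s += h
    return i

# "Almost trivial" boundary condition: at s = 1 the integral is I(1) = ln 2.
approx = rk4(rhs, 1.0, math.log(2.0), 0.5, 500)
exact = math.log(3.0)  # closed form: I(s) = ln((1+s)/s), so I(1/2) = ln 3
```

The payoff in the multi-loop setting is the same: once the ODE system is known, any kinematic point is reached by numerical integration from one cheap boundary point.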
Fibonacci-regularization method for solving Cauchy integral equations of the first kind
Directory of Open Access Journals (Sweden)
Mohammad Ali Fariborzi Araghi
2017-09-01
In this paper, a novel scheme is proposed to solve the first-kind Cauchy integral equation over a finite interval. For this purpose, the regularization method is considered. Then, the collocation method with Fibonacci base functions is applied to solve the resulting second-kind singular integral equation. Also, the error estimate of the proposed scheme is discussed. Finally, some sample Cauchy integral equations stemming from the theory of airfoils in fluid mechanics are presented and solved to illustrate the importance and applicability of the given algorithm. The tables in the examples show the efficiency of the method.
Experimental determination of dynamic fracture toughness by J integral method
International Nuclear Information System (INIS)
Marandel, B.; Phelippeau, G.; Sanz, G.
1982-01-01
Fracture toughness tests are conducted on fatigue precracked compact tension specimens (1T-CT) loaded at K rates of about 2 × 10⁴ MPa·√m/s on a servo-hydraulic machine using a damped set-up. A high frequency alternating current system (10 kHz) is used for the detection of subcritical crack growth during loading. The analog signals from the clip gage, load cell, ram travel and potential drop system are fed into a magnetic tape recorder, filtered and converted to digital data. Load-time and load-displacement-potential curves are plotted and analysed automatically by two different methods, according to the fracture mode: in the lower part of the transition curve, Ksub(ID) is calculated from the maximum load at failure in the linear elastic range (ASTM E399); in the transition and upper shelf regions, Ksub(JD) is calculated from Jsub(ID) at initiation of ductile crack growth in the elastic plastic range. The experimental method described here is applied, as an example, to the study of a low-alloy, medium strength pressure vessel steel (A 508 Cl.3). A comparison is established between the toughness transition curves obtained under quasi-static (Ksub(Jc)) and dynamic (Ksub(JD)) conditions. (author)
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Directory of Open Access Journals (Sweden)
Valentina V. Tarasova
2016-12-01
Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
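A fractional-order derivative of the kind the abstract invokes can be approximated numerically with the Grünwald-Letnikov formula; the sketch below is a generic illustration of that formula (the step size and test function are assumptions, not the paper's own computations):

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    # Gruenwald-Letnikov approximation of the order-alpha derivative at t:
    #   D^a f(t) ~ h^(-a) * sum_k w_k * f(t - k*h),
    # with w_0 = 1 and the recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k).
    n = int(t / h)
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        acc += w * f(t - k * h)
    return acc / h ** alpha

# Check against the closed form D^{1/2} t = 2*sqrt(t/pi) at t = 1.
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 / math.sqrt(math.pi)
```

The weighted sum over the whole history f(t - kh) is exactly the "memory" effect the article exploits: unlike a first-order derivative, the fractional derivative at t depends on all earlier values of the factor.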
Visualizing Volume to Help Students Understand the Disk Method on Calculus Integral Course
Tasman, F.; Ahmad, D.
2018-04-01
Much research has shown that students have difficulty in understanding the concepts of integral calculus. This research therefore designs a classroom activity, developed with the design-research method, to assist students in understanding the integral concept, especially in calculating the volume of solids of revolution using the disk method. To support student development in understanding integral concepts, the activity uses a realistic mathematics education approach integrating GeoGebra software. First-year university students taking a calculus course (approximately 30 people) were chosen to implement the classroom activity that had been designed. The results of the retrospective analysis show that visualizing the volume of solids of revolution using GeoGebra software can assist students in understanding the disk method as one way of calculating the volume of a solid of revolution.
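The disk method the activity teaches is easy to state computationally: rotating y = f(x) about the x-axis gives V = π ∫ f(x)² dx, i.e. a stack of thin disks of radius f(x). A minimal numerical sketch (the sample function and grid size are illustrative choices, not part of the study):

```python
import math

def disk_volume(f, a, b, n=100000):
    # Disk method: V = pi * integral of f(x)^2 over [a, b],
    # approximated by a midpoint Riemann sum of n thin disks.
    dx = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

# Rotating y = x^2 on [0, 1] about the x-axis:
# exact volume is pi * integral of x^4 dx = pi/5.
vol = disk_volume(lambda x: x * x, 0.0, 1.0)
```

Seeing the sum of disk slices converge to the exact π/5 mirrors what the GeoGebra visualization conveys geometrically.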
Integration of Small-Diameter Wood Harvesting in Early Thinnings using the Two pile Cutting Method
Energy Technology Data Exchange (ETDEWEB)
Kaerhae, Kalle (Metsaeteho Oy, P.O. Box 101, FI-00171 Helsinki (Finland))
2008-10-15
Metsaeteho Oy studied the integrated harvesting of industrial roundwood (pulpwood) and energy wood based on a two-pile cutting method, i.e. pulpwood and energy wood fractions are stacked into two separate piles when cutting a first-thinning stand. The productivity and cost levels of the integrated, two-pile cutting method were determined, and the harvesting costs of the two-pile method were compared with those of conventional separate wood harvesting methods. In the time study, when the size of removal was 50 dm3, the productivity in conventional whole-tree cutting was 6% higher than in integrated cutting. With a stem size of 100 dm3, the productivity in whole-tree cutting was 7% higher than in integrated cutting. The results indicated, however, that integrated harvesting based on the two-pile method enables harvesting costs to be decreased to below the current cost level of separate pulpwood harvesting in first thinning stands. The greatest cost-saving potential lies in small-sized first thinnings. The results showed that, when integrated wood harvesting based on the two-pile method is applied, the removals of both energy wood and pulpwood should be more than 15-20 m3/ha at the harvesting sites in order to achieve economically viable integrated procurement
Integral methods for shallow free-surface flows with separation
DEFF Research Database (Denmark)
Watanabe, S.; Putkaradze, V.; Bohr, Tomas
2003-01-01
eddy and separated flow. Assuming a variable radial velocity profile as in Karman-Pohlhausen's method, we obtain a system of two ordinary differential equations for stationary states that can smoothly go through the jump. Solutions of the system are in good agreement with experiments. For the flow down an inclined plane we take a similar approach and derive a simple model in which the velocity profile is not restricted to a parabolic or self-similar form. Two types of solutions with large surface distortions are found: solitary, kink-like propagating fronts, obtained when the flow rate is suddenly changed, and stationary jumps, obtained, for instance, behind a sluice gate. We then include time dependence in the model to study the stability of these waves. This allows us to distinguish between sub- and supercritical flows by calculating dispersion relations for wavelengths of the order of the width of the layer.
Digital integrated protection system: Quantitative methods for dependability evaluation
International Nuclear Information System (INIS)
Krotoff, H.; Benski, C.
1986-01-01
The inclusion of programmed digital techniques in the SPIN system provides the user with the capability of performing sophisticated processing operations. However, it makes the quantitative evaluation of the overall failure probabilities somewhat more intricate, because: a single component may be involved in several functions; self-tests may readily be incorporated for the purpose of monitoring the dependable operation of the equipment at all times. This paper describes the methods implemented by MERLIN GERIN for the purpose of evaluating: the probabilities for the protective actions not to be initiated (dangerous failures); the probabilities for such protective actions to be initiated accidentally. Although this paper is focused on the programmed portion of the SPIN (UAIP), it also deals with the evaluation performed within the scope of studies that do not exclusively cover the UAIPs.
Methods for examining data quality in healthcare integrated data repositories.
Huser, Vojtech; Kahn, Michael G; Brown, Jeffrey S; Gouripeddi, Ramkiran
2018-01-01
This paper summarizes the content of a workshop focused on data quality. The first speaker (VH) described the data quality infrastructure and data quality evaluation methods currently in place within the Observational Health Data Sciences and Informatics (OHDSI) consortium. The speaker described in detail a data quality tool called Achilles Heel and the latest developments for extending this tool. Interim results of an ongoing data quality study within the OHDSI consortium were also presented. The second speaker (MK) described lessons learned and new data quality checks developed by the PEDSnet pediatric research network. The last two speakers (JB, RG) described tools developed by the Sentinel Initiative and the University of Utah's service-oriented framework. Throughout, and in a closing discussion, the workshop considered how data quality assessment can be advanced by combining the best features of each network.
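Checks of the kind these tools run are typically simple rules applied at scale. As a toy example in the spirit of Achilles Heel (not its actual rule set; the field names and threshold are invented for illustration), a completeness check flags fields whose missing-value rate exceeds a threshold:

```python
def null_rate_report(records, threshold=0.2):
    # Flag fields whose missing-value rate across a batch of records
    # exceeds the given threshold. None and "" count as missing.
    fields = {key for rec in records for key in rec}
    report = {}
    for field in sorted(fields):
        missing = sum(1 for rec in records if rec.get(field) in (None, ""))
        rate = missing / len(records)
        report[field] = {"missing_rate": rate, "flagged": rate > threshold}
    return report

# Hypothetical patient rows with an incomplete birth_year column.
rows = [
    {"person_id": 1, "birth_year": 1980, "gender": "F"},
    {"person_id": 2, "birth_year": None, "gender": "M"},
    {"person_id": 3, "birth_year": None, "gender": ""},
]
report = null_rate_report(rows)
```

Production frameworks layer many such rules (conformance, plausibility, temporal consistency) and report them across an entire repository rather than one table.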
Directory of Open Access Journals (Sweden)
Yurii M. Streliaiev
2016-06-01
The three-dimensional quasistatic contact problem of two linearly elastic bodies' interaction, with Coulomb friction taken into account, is considered. The boundary conditions of the problem have been simplified by a modification of Coulomb's law of friction. This modification is based on introducing a delay in the normal contact tractions that bound the tangential contact tractions in the expressions of Coulomb's law of friction. Under this statement the problem reduces to a sequence of similar systems of nonlinear integral equations describing the bodies' interaction at each step of loading. A method for the approximate solution of the integral-equation system corresponding to each step of loading is applied. This method consists of regularization of the system, discretization of the regularized system, and application of an iterative process for solving the discretized system. A numerical solution of a contact problem of an elastic sphere interacting with an elastic half-space under an increasing and subsequently decreasing normal compressive force has been obtained.
International Nuclear Information System (INIS)
Ken, Soléakhéna; Cassol, Emmanuelle; Delannes, Martine; Celsis, Pierre; Cohen-Jonathan, Elizabeth Moyal; Laprie, Anne; Vieillevigne, Laure; Franceries, Xavier; Simon, Luc; Supper, Caroline; Lotterie, Jean-Albert; Filleron, Thomas; Lubrano, Vincent; Berry, Isabelle
2013-01-01
To integrate 3D MR spectroscopy imaging (MRSI) in the treatment planning system (TPS) for glioblastoma dose painting to guide simultaneous integrated boost (SIB) in intensity-modulated radiation therapy (IMRT). For sixteen glioblastoma patients, we have simulated three types of dosimetry plans: one conventional plan of 60-Gy in 3D conformal radiotherapy (3D-CRT), one 60-Gy plan in IMRT and one 72-Gy plan in SIB-IMRT. All sixteen MRSI metabolic maps were integrated into the TPS, using normalization with color-space conversion and threshold-based segmentation. The fusion between the metabolic maps and the planning CT scans was assessed. Dosimetry comparisons were performed between the different plans of 60-Gy 3D-CRT, 60-Gy IMRT and 72-Gy SIB-IMRT; the last plan targeted MRSI abnormalities and contrast enhancement (CE). Fusion assessment was performed for 160 transformations. It resulted in maximum differences <1.00 mm for translation parameters and ≤1.15° for rotation. Dosimetry plans of 72-Gy SIB-IMRT and 60-Gy IMRT showed a significantly decreased maximum dose to the brainstem (44.00 and 44.30 vs. 57.01 Gy) and decreased high dose-volumes to normal brain (19 and 20 vs. 23% and 7 and 7 vs. 12%) compared to 60-Gy 3D-CRT (p < 0.05). Delivering standard doses to the conventional target and higher doses to new target volumes characterized by MRSI and CE is now possible and does not increase dose to organs at risk. MRSI and CE abnormalities are now integrated for glioblastoma SIB-IMRT, concomitant with temozolomide, in an ongoing multi-institutional phase-III clinical trial. Our method of MR spectroscopy map integration into the TPS is robust and reliable; integration into neuronavigation systems with this method could also improve glioblastoma resection or guide biopsies.
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325
A boundary integral method for two-dimensional (non)-Newtonian drops in slow viscous flow
Toose, E.M.; Geurts, B.J.; Kuerten, J.G.M.
1995-01-01
A boundary integral method for the simulation of the time-dependent deformation of Newtonian or non-Newtonian drops suspended in a Newtonian fluid is developed. The boundary integral formulation for Stokes flow is used and the non-Newtonian stress is treated as a source term which yields an extra
The Integrated Multi-Level Bilingual Teaching of "Social Research Methods"
Zhu, Yanhan; Ye, Jian
2012-01-01
"Social Research Methods," as a methodology course, combines theories and practices closely. Based on the synergy theory, this paper tries to establish an integrated multi-level bilingual teaching mode. Starting from the transformation of teaching concepts, we should integrate interactions, experiences, and researches together and focus…
Systematization of simplified J-integral evaluation method for flaw evaluation at high temperature
International Nuclear Information System (INIS)
Miura, Naoki; Takahashi, Yukio; Nakayama, Yasunari; Shimakawa, Takashi
2000-01-01
J-integral is an effective inelastic fracture parameter for the flaw evaluation of cracked components at high temperature. The evaluation of J-integral for an arbitrary crack configuration and an arbitrary loading condition can be generally accomplished by detailed numerical analysis such as finite element analysis, however, it is time-consuming and requires a high degree of expertise for its implementation. Therefore, it is important to develop simplified J-integral estimation techniques from the viewpoint of industrial requirements. In this study, a simplified J-integral evaluation method is proposed to estimate two types of J-integral parameters. One is the fatigue J-integral range to describe fatigue crack propagation behavior, and the other is the creep J-integral to describe creep crack propagation behavior. This paper presents the systematization of the simplified J-integral evaluation method incorporated with the reference stress method and the concept of elastic follow-up, and proposes a comprehensive evaluation procedure. The verification of the proposed method is presented in Part II of this paper. (author)
Directory of Open Access Journals (Sweden)
Hendrik Pratama
2017-12-01
The purpose of this research was to determine the effect of integrating WhatsApp Messenger into the Group Investigation (GI) method on learning achievement. The research used an experimental method with a control-group pretest-posttest design. The sampling procedure used purposive sampling, with 17 students as a control group and 17 students as an experimental group; the sample consisted of students in the Electrical Engineering Education Study Program. The experimental group used the GI method integrated with WhatsApp Messenger, while the control group used the lecture method without social media integration. Data were collected through observation, documentation, interviews, questionnaires, and tests, and a t-test was used to compare the control and experimental groups' learning outcomes at an alpha level of 0.05. The results showed differences between the experimental group and the control group: the learning outcomes of the experimental group were higher than those of the control group. The learning was designed with the stages of starting, grouping, planning, presenting, organizing, investigating, evaluating, and ending. Integration of WhatsApp with the group investigation method fostered positive communication between students and lecturer; discussion proceeded well, students' knowledge could emerge within the group, and information spread evenly and quickly.
Integrated Parasite Management for Livestock - Alternative control methods
Directory of Open Access Journals (Sweden)
Souvik Paul1
Full Text Available Internal parasites are considered by some to be one of the most economically important constraints in raising livestock. The growing concern about the resistance of internal parasites to all classes of dewormers has caused people to look for alternatives. As dewormers lose their effectiveness, the livestock community fears increasing economic losses from worms. There is no one thing that can be given or done to replace chemical dewormers; it will take a combination of extremely good management techniques and possibly some alternative therapies. It is not wise to think that one can simply stop deworming animals with chemical dewormers. It is something one will need to change gradually, observing and testing animals and soil in order to monitor progress. Alternative parasite control is an area that is receiving a lot of interest and attention, and programs and research will continue in the pursuit of parasite control using alternative and more management-intensive methods. [Veterinary World 2010; 3(9): 431-435]
Novel bed integrated ventilation method for hospital patient rooms
DEFF Research Database (Denmark)
Bivolarova, Mariya Petrova; Melikov, Arsen Krikor; Kokora, Monika
2014-01-01
This study presents a novel method for advanced ventilation of hospital wards leading to improved air quality at reduced ventilation rate. The idea is to evacuate the bio-effluents generated by the patient's body by local exhaust before they spread in the room. This concept was realized by using a mattress having a suction opening from which bio-effluents generated by the human body are exhausted. Experiments were conducted in a full-scale two-bed hospital room mock-up, 4.7 x 5.3 x 2.6 m3 (W x L x H). Only one of the patients' beds was equipped with the ventilated mattress. The room was air conditioned via a mixing total volume ventilation system supplying air through a ceiling-mounted diffuser. All experiments were performed at a room air temperature of 23ºC. A thermal manikin was used to simulate a polluting patient on the bed equipped with the ventilated mattress. Two heated dummies were used...
An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction
Directory of Open Access Journals (Sweden)
Yong Zhu
2015-01-01
Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filtering and waveform-correction properties of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals, and merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes are combined into a feature vector; the characteristic components implicit in the vibration signal are then accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is effectively suppressed: the large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method is better at removing noise while retaining useful feature information, and shows higher accuracy.
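The frequency-domain integration that the abstract contrasts with time-domain integration can be sketched as follows; this is a generic omega-division FFT integrator, not the authors' feature-extraction method, and the test signal is arbitrary.

```python
import numpy as np

def integrate_fft(accel, fs):
    """Frequency-domain integration: divide the spectrum by j*omega and
    zero the DC bin (the step responsible for low-frequency error)."""
    n = len(accel)
    spec = np.fft.rfft(accel)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    spec[1:] /= 1j * omega[1:]
    spec[0] = 0.0                                # discard the DC component
    return np.fft.irfft(spec, n)

# synthetic test: acceleration d/dt[sin(2*pi*f0*t)] sampled over whole cycles
fs, f0 = 1000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
accel = 2.0 * np.pi * f0 * np.cos(2.0 * np.pi * f0 * t)
vel = integrate_fft(accel, fs)                   # should recover sin(2*pi*f0*t)
```

With a window containing an integer number of cycles the reconstruction is essentially exact; for real signals, windowing and the treatment of near-DC bins are needed, which is exactly where the low-frequency error discussed above arises.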
International Nuclear Information System (INIS)
Park, Jai Hak
2009-01-01
The SGBEM (Symmetric Galerkin Boundary Element Method)-FEM alternating method has been proposed by Nikishkov, Park and Atluri. In the proposed method, arbitrarily shaped three-dimensional crack problems can be solved by alternating between the crack solution in an infinite body and the finite element solution without a crack. In a previous study, the SGBEM-FEM alternating method was extended to solve elastic-plastic crack problems and to obtain elastic-plastic stress fields. For the elastic-plastic analysis, the algorithm developed by Nikishkov et al. is used after modification; in the algorithm, the initial stress method is used to obtain elastic-plastic stress and strain fields. In this paper, elastic-plastic J integrals for three-dimensional cracks are obtained using the method. For that purpose, accurate values of displacement gradients and stresses are necessary on an integration path. In order to improve the accuracy of stress near crack surfaces, coordinate transformation and partitioning of the integration domain are used. The coordinate transformation produces a transformation Jacobian, which cancels the singularity of the integrand. Using the developed program, simple three-dimensional crack problems are solved, and elastic and elastic-plastic J integrals are obtained. The obtained J integrals are compared with, and found to be close to, values from a handbook solution.
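The singularity-cancelling coordinate transformation mentioned above can be illustrated on a model integrand. This sketch (not the SGBEM-FEM code itself) integrates g(r)/sqrt(r) after the substitution r = t**2, whose Jacobian 2t cancels the 1/sqrt(r) singularity so that ordinary Gauss-Legendre quadrature applies.

```python
import numpy as np

def integrate_singular(g, n=40):
    """Compute the integral of g(r)/sqrt(r) over [0, 1] via r = t**2.
    The Jacobian dr = 2t dt cancels the 1/sqrt(r) singularity, leaving
    the smooth integrand 2*g(t**2) on [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (t + 1.0)                         # map to [0, 1]
    w = 0.5 * w
    return float(np.sum(w * 2.0 * g(t ** 2)))

# sanity checks: integral of r**-0.5 is 2, integral of r**0.5 is 2/3
```

Without the substitution, Gauss quadrature converges slowly near the singular endpoint; after it, convergence is spectral for smooth g.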
L. Stefani; L. Toncelli; M. Gianassi; P. Manetti; V. Di Tante; M.R. Vono; A. Moretti; B. Cappelli; G. Pedrizzetti; G. Galanti
2007-01-01
Abstract Background Myocardial contractility can be investigated using longitudinal peak strain. It can be calculated using the Doppler-derived TDI method and the non-Doppler method based on tissue tracking on B-mode images. Both are validated and show good reproducibility, but no comparative analysis of their results has yet been conducted. This study analyzes the results obtained from the basal segments of the ventricular chambers in a group of athletes. Methods 30 regularly-trained athlete...
Mixed Element Formulation for the Finite Element-Boundary Integral Method
National Research Council Canada - National Science Library
Meese, J; Kempel, L. C; Schneider, S. W
2006-01-01
A mixed element approach using right hexahedral elements and right prism elements for the finite element-boundary integral method is presented and discussed for the study of planar cavity-backed antennas...
Ensuring the integrity of information resources based on methods of two-symbol structural data encoding
Directory of Open Access Journals (Sweden)
О.К. Юдін
2009-01-01
Full Text Available Methods are developed for estimating the noise immunity and correcting structural code constructions against distortion during data transmission in information and communication systems and networks, with a view to ensuring the integrity of the information resource.
A new integral method for solving the point reactor neutron kinetics equations
International Nuclear Information System (INIS)
Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian
2009-01-01
A numerical integral method that efficiently provides the solution of the point kinetics equations by using the better basis function (BBF) for the approximation of the neutron density in one time step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by the fully implicit formulation. The procedure is tested by using a variety of reactivity functions, including step reactivity insertion, ramp input and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real time forecasting for power reactors in order to prevent reactivity accidents.
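The fully implicit treatment that overcomes the stiffness can be sketched for one delayed-neutron group; this is a plain backward Euler step, not the better-basis-function method itself, and the kinetics parameters are hypothetical.

```python
import numpy as np

BETA, LAMBDA_GEN, LAM = 0.0065, 1e-4, 0.08   # hypothetical kinetics parameters

def implicit_step(n, C, rho, dt=1e-3):
    """One fully implicit (backward Euler) step of one-delayed-group point
    kinetics: solve (I - dt*A) x_new = x_old, which handles the stiffness."""
    A = np.array([[(rho - BETA) / LAMBDA_GEN, LAM],
                  [BETA / LAMBDA_GEN, -LAM]])
    x = np.linalg.solve(np.eye(2) - dt * A, np.array([n, C]))
    return x[0], x[1]

# step reactivity insertion of 0.1*beta starting from equilibrium
n, C = 1.0, BETA / (LAMBDA_GEN * LAM)        # equilibrium precursor level
for _ in range(1000):                        # integrate 1 s
    n, C = implicit_step(n, C, rho=0.1 * BETA)
# n exhibits the prompt jump to about beta/(beta - rho) ~ 1.11, then a slow rise
```

An explicit scheme at this step size would be unstable because the prompt time constant is of order Lambda/beta; the implicit solve removes that restriction, which is the point made in the abstract.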
International Nuclear Information System (INIS)
Lawrence, R.D.; Dorning, J.J.
1980-01-01
A coarse-mesh discrete nodal integral transport theory method has been developed for the efficient numerical solution of multidimensional transport problems of interest in reactor physics and shielding applications. The method, which is the discrete transport theory analogue and logical extension of the nodal Green's function method previously developed for multidimensional neutron diffusion problems, utilizes the same transverse integration procedure to reduce the multidimensional equations to coupled one-dimensional equations. This is followed by the conversion of the differential equations to local, one-dimensional, in-node integral equations by integrating back along neutron flight paths. One-dimensional and two-dimensional transport theory test problems have been systematically studied to verify the superior computational efficiency of the new method
Introduction to functional and path integral methods in quantum field theory
International Nuclear Information System (INIS)
Strathdee, J.
1991-11-01
The following aspects concerning the use of functional and path integral methods in quantum field theory are discussed: generating functionals and the effective action, perturbation series, Yang-Mills theory and BRST symmetry. 10 refs, 3 figs
Application of heat-balance integral method to conjugate thermal explosion
Directory of Open Access Journals (Sweden)
Novozhilov Vasily
2009-01-01
Full Text Available Conjugate thermal explosion is an extension of the classical theory, proposed and studied recently by the author. The paper reports an application of the heat-balance integral method to developing phase portraits for systems undergoing conjugate thermal explosion. The heat-balance integral method is used as an averaging method, reducing the partial differential equation problem to a set of first-order ordinary differential equations. The reduced problem allows a natural interpretation in an appropriately chosen phase space. It is shown that, with the help of the heat-balance integral technique, the conjugate thermal explosion problem can be described to good accuracy by a set of non-linear first-order differential equations involving the complex error function. Phase trajectories are presented for typical regimes emerging in conjugate thermal explosion. Use of the heat-balance integral as a spatial averaging method allows an efficient description of the system evolution to be developed.
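As a reminder of how the heat-balance integral works as a spatial averaging device, the classic textbook case (Goodman's quadratic profile for suddenly heated semi-infinite conduction, not the conjugate-explosion problem itself) already shows its typical accuracy:

```python
import math

# Goodman's assumed profile T/Ts = (1 - x/delta)**2 for a semi-infinite solid
# whose surface is suddenly raised to Ts. Integrating the heat equation over
# the penetration depth gives (Ts/3) d(delta)/dt = 2*alpha*Ts/delta, hence
#   delta(t) = sqrt(12*alpha*t)
# and a predicted surface heat flux q_HBI = 2*k*Ts/delta = k*Ts/sqrt(3*alpha*t),
# versus the exact similarity solution q_exact = k*Ts/sqrt(pi*alpha*t).
flux_ratio = math.sqrt(math.pi / 3.0)        # q_HBI / q_exact
error_pct = (flux_ratio - 1.0) * 100.0       # about 2.3 % high
```

Reducing the PDE to an ODE for delta(t) at the cost of a few percent in the surface flux is exactly the trade the abstract describes, there applied to obtain a phase-space system instead.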
International Nuclear Information System (INIS)
Killingbeck, J.
1979-01-01
By using the methods of perturbation theory it is possible to construct simple formulae for the numerical integration of the Schroedinger equation, and also to calculate expectation values solely by means of simple eigenvalue calculations. (Auth.)
A Case Study of a Mixed Methods Study Engaged in Integrated Data Analysis
Schiazza, Daniela Marie
2013-01-01
The nascent field of mixed methods research has yet to develop a cohesive framework of guidelines and procedures for mixed methods data analysis (Greene, 2008). To support the field's development of analytical frameworks, this case study reflects on the development and implementation of a mixed methods study engaged in integrated data analysis.…
International Nuclear Information System (INIS)
Biazar, J.; Eslami, M.; Aminikhah, H.
2009-01-01
In this article, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the first kind. Some non-linear examples are prepared to illustrate the efficiency and simplicity of the method. Applying the method to linear systems is so straightforward that no example is needed.
International Nuclear Information System (INIS)
Biazar, J.; Ghazvini, H.
2009-01-01
In this paper, He's homotopy perturbation method is applied to solve systems of Volterra integral equations of the second kind. Some examples are presented to illustrate the ability of the method for linear and non-linear systems of this kind. The results reveal that the method is very effective and simple.
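For linear problems, the homotopy perturbation series reduces to classical successive approximations, which are easy to sketch numerically; the discretization and test kernel below are illustrative choices, not taken from the paper.

```python
import numpy as np

def volterra_second_kind(f, K, x, iters=25):
    """Successive approximations u_{m+1}(x) = f(x) + int_0^x K(x,t) u_m(t) dt
    for a Volterra equation of the second kind, with the integral
    discretized by the trapezoidal rule on the uniform grid x."""
    h = x[1] - x[0]
    u = f(x).astype(float)
    for _ in range(iters):
        new = f(x).astype(float)
        for i in range(1, len(x)):
            vals = K(x[i], x[:i + 1]) * u[:i + 1]
            new[i] += h * (np.sum(vals) - 0.5 * (vals[0] + vals[-1]))
        u = new
    return u

# test problem: u(x) = x - int_0^x (x - t) u(t) dt, whose solution is sin(x)
x = np.linspace(0.0, 1.0, 201)
u = volterra_second_kind(lambda s: s, lambda xi, t: -(xi - t), x)
```

The iteration converges factorially fast on a bounded interval; the remaining error is the O(h^2) trapezoidal discretization error.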
Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations
B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)
2017-01-01
textabstractIn this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
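The second-order advantage of BDF2 over backward Euler can be demonstrated on the scalar test equation y' = -y, used here as an illustrative stand-in for the two-fluid model:

```python
import math

def backward_euler(h, T=1.0):
    """Implicit Euler for y' = -y, y(0) = 1: first-order accurate."""
    y = 1.0
    for _ in range(round(T / h)):
        y /= 1.0 + h
    return y

def bdf2(h, T=1.0):
    """BDF2 for y' = -y: second-order accurate; the first step is
    bootstrapped with one backward Euler step."""
    y_prev = 1.0
    y = y_prev / (1.0 + h)
    for _ in range(round(T / h) - 1):
        y, y_prev = (4.0 * y - y_prev) / (3.0 + 2.0 * h), y
    return y

exact = math.exp(-1.0)
# halving h roughly halves the backward Euler error but cuts the BDF2 error ~4x
```

Both schemes are implicit and A-stable, so the gain in accuracy comes at essentially no extra stability cost, which is the argument for preferring BDF2.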
Bitcoin Meets Strong Consistency
Decker, Christian; Seidel, Jochen; Wattenhofer, Roger
2014-01-01
The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...
Directory of Open Access Journals (Sweden)
Keshavarzi Z.
2016-02-01
Full Text Available Aims: Different education methods play crucial roles in improving education quality and students’ satisfaction. In recent years, medical education has changed considerably through new education methods. The aim of this study was to compare medical students’ satisfaction with traditional and integrated methods of teaching the physiology course. Instrument and Methods: In this descriptive analytical study, fifty 4th-semester medical students of Bojnourd University of Medical Sciences were studied in 2015. The subjects were selected based on availability. Data were collected by two researcher-made questionnaires whose validity and reliability were confirmed. Questionnaire 1 was completed by the students after the renal and endocrinology topics were presented via the traditional and integrated methods; Questionnaire 2 was completed only after the course was presented via the integrated method. Data were analyzed with SPSS 16 software using the dependent t-test. Findings: The mean satisfaction score for the traditional method (24.80±3.48) was higher than for the integrated method (22.30±4.03; p<0.0001). In the integrated method, most of the students agreed or completely agreed with telling stories from daily life (76%), the seating arrangement in the classroom (48%), attributing cell roles to the students (60%), showing movies and animations (76%), using models (84%), and using real animal parts (72%) during teaching, as well as expressing clinical items to enhance learning motivation (76%). Conclusion: The students’ favorable satisfaction with the traditional lecture method for understanding the issues, together with their acceptance of new and active methods of learning, shows the effectiveness and efficiency of the traditional method and the need to enhance it with integrated methods.
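The dependent (paired) t-test used above is simple to reproduce; the sketch below uses hypothetical satisfaction scores, not the study's data.

```python
import math

def paired_t(x, y):
    """Dependent (paired-samples) t statistic and degrees of freedom for two
    equal-length lists of scores from the same subjects."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of diffs
    return mean / math.sqrt(var / n), n - 1

# hypothetical satisfaction scores for the same five students under two methods
t_value, dof = paired_t([25, 23, 26, 22, 24], [22, 21, 23, 20, 23])
# compare |t_value| against the critical t for dof degrees of freedom at alpha 0.05
```

Pairing each student's two scores removes between-student variability, which is why the dependent test, rather than an independent-samples test, is appropriate for this within-subjects design.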
Crisp, Victoria; Novakovic, Nadezda
2009-01-01
Maintaining standards over time is a much debated topic in the context of national examinations in the UK. This study used a pilot method to compare the demands, over time, of two examination units testing administration. The method involved 15 experts revising a framework of demand types and making paired comparisons of examinations from…
Consistent classical supergravity theories
International Nuclear Information System (INIS)
Muller, M.
1989-01-01
This book offers a presentation of both conformal and Poincare supergravity. The consistent four-dimensional supergravity theories are classified. The formulae needed for further modelling are included
International Nuclear Information System (INIS)
Marder, L.I.; Myzin, A.I.
1993-01-01
A methodical approach is given to justifying the efficiency of the integration process within the Unified electric power system, together with the selection of a rational areal structure and concentration of power-generating capacities. The formation of an economic functional according to alternative scenarios, including cost components that take account of regional interests, is considered. A method for estimating and distributing the effect of integrated electric power production in power systems under new economic conditions is proposed.
Directory of Open Access Journals (Sweden)
Opeshko Nataliya Sergiivna
2017-06-01
Full Text Available The essence of the concept of «financing the enterprise» is defined. A system of indicators for measuring the effectiveness of managing enterprise financing is developed, and the internal structure of linkages in the scorecard is investigated. A method of integral evaluation of enterprise financing management is developed, and the usefulness of the developed method is demonstrated on the basis of conducted experiments.
Li, Xiaofan; Nie, Qing
2009-01-01
Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratu...
International Nuclear Information System (INIS)
Trowbridge, C.W.
1976-06-01
Various integral equation methods are described. For magnetostatic problems three formulations are considered in detail, (a) the direct solution method for the magnetisation distribution in permeable materials, (b) a method based on a scalar potential and (c) the use of an integral equation derived from Green's Theorem, i.e. the so-called Boundary Integral Method (BIM). In the case of (a) results are given for two-and three-dimensional non-linear problems with comparisons against measurement. For methods (b) and (c) which both lead to a more economic use of the computer than (a) some preliminary results are given for simple cases. For eddy current problems various methods are discussed and some results are given from a computer program based on a vector potential formulation. (author)
Energy Technology Data Exchange (ETDEWEB)
Trowbridge, C W
1976-06-01
Various integral equation methods are described. For magnetostatic problems three formulations are considered in detail, (a) the direct solution method for the magnetisation distribution in permeable materials, (b) a method based on a scalar potential, and (c) the use of an integral equation derived from Green's Theorem, i.e. the so-called Boundary Integral Method (BIM). In the case of (a) results are given for two-and three-dimensional non-linear problems with comparisons against measurement. For methods (b) and (c), which both lead to a more economical use of the computer than (a), some preliminary results are given for simple cases. For eddy current problems various methods are discussed and some results are given from a computer program based on a vector potential formulation.
Yoon, Seung-Yil; Sagi, Hemi; Goldhammer, Craig; Li, Lei
2012-01-01
Container closure integrity (CCI) is a critical factor to ensure that product sterility is maintained over its entire shelf life. Assuring the CCI during container closure (C/C) system qualification, routine manufacturing and stability is important. FDA guidance also encourages industry to develop a CCI physical testing method in lieu of sterility testing in a stability program. A mass extraction system has been developed to check CCI for a variety of container closure systems such as vials, syringes, and cartridges. Various types of defects (e.g., glass micropipette, laser drill, wire) were created and used to demonstrate a detection limit. Leakage, detected as mass flow in this study, changes as a function of defect length and diameter. Therefore, the morphology of defects has been examined in detail with fluid theories. This study demonstrated that a mass extraction system was able to distinguish between intact samples and samples with 2 μm defects reliably when the defect was exposed to air, water, placebo, or drug product (3 mg/mL concentration) solution. Also, it has been verified that the method was robust, and capable of determining the acceptance limit using 3σ for syringes and 6σ for vials. Sterile products must maintain their sterility over their entire shelf life. Container closure systems such as those found in syringes and vials provide a seal between rubber and glass containers. This seal must be ensured to maintain product sterility. A mass extraction system has been developed to check container closure integrity for a variety of container closure systems such as vials, syringes, and cartridges. In order to demonstrate the method's capability, various types of defects (e.g., glass micropipette, laser drill, wire) were created in syringes and vials and were tested. This study demonstrated that a mass extraction system was able to distinguish between intact samples and samples with 2 μm defects reliably when the defect was exposed to air, water
Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop
Sali, Andrej; Berman, Helen M.; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K.; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M.J.J.; Chiu, Wah; Dal Peraro, Matteo; Di Maio, Frank; Ferrin, Thomas E.; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L.; Meiler, Jens; Marti-Renom, Marc A.; Montelione, Gaetano T.; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J.; Saibil, Helen; Schröder, Gunnar F.; Schwieters, Charles D.; Seidel, Claus A. M.; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L.; Velankar, Sameer; Westbrook, John D.
2016-01-01
Summary Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? PMID:26095030
An integration time adaptive control method for atmospheric composition detection of occultation
Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin
2018-01-01
When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the magnitude of the sunlight intensity passing through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. In this method, taking the distribution of gray values in the image as the reference variable and applying the concept of speed integral PID control, the integration time adaptive control problem of high-frequency imaging is solved. Large dynamic range automatic control of integration time during the occultation can thus be achieved.
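One possible reading of the "speed integral PID" idea is a velocity-form (incremental) PI update of the integration time driven by the mean gray value; the sketch below is a toy model with hypothetical gains and a linear brightness plant, not the authors' implementation.

```python
class ExposureController:
    """Velocity-form (incremental) PI controller for camera integration time.
    Gains are hypothetical, tuned for the toy plant below."""
    def __init__(self, setpoint=128.0, kp=1.5e-5, ki=2.5e-5):
        self.setpoint, self.kp, self.ki = setpoint, kp, ki
        self.prev_error = 0.0

    def update(self, t_int, gray_mean):
        error = self.setpoint - gray_mean        # positive when image too dark
        t_int += self.kp * (error - self.prev_error) + self.ki * error
        self.prev_error = error
        return min(max(t_int, 1e-5), 0.1)        # clamp to sensor limits (s)

# toy plant: mean gray value proportional to illumination x integration time
ctrl = ExposureController()
t_int, illum = 0.001, 20000.0
for _ in range(200):
    gray = min(255.0, illum * t_int)             # saturating sensor response
    t_int = ctrl.update(t_int, gray)
# gray settles at the 128-count setpoint while illumination is constant here
```

The velocity form updates the control increment rather than the absolute value, which avoids integral windup when the output is clamped, a useful property when illumination swings by three orders of magnitude.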
13th International Conference on Integral Methods in Science and Engineering
Kirsch, Andreas
2015-01-01
This contributed volume contains a collection of articles on state-of-the-art developments on the construction of theoretical integral techniques and their application to specific problems in science and engineering. Written by internationally recognized researchers, the chapters in this book are based on talks given at the Thirteenth International Conference on Integral Methods in Science and Engineering, held July 21–25, 2014, in Karlsruhe, Germany. A broad range of topics is addressed, from problems of existence and uniqueness for singular integral equations on domain boundaries to numerical integration via finite and boundary elements, conservation laws, hybrid methods, and other quadrature-related approaches. This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines and other professionals for whom integration is an essential tool.
Directory of Open Access Journals (Sweden)
Martin Strandberg-Larsen
2009-02-01
Full Text Available Background: Integrated healthcare delivery is a policy goal of healthcare systems. There is no consensus on how to measure the concept, which makes it difficult to monitor progress. Purpose: To identify the different types of methods used to measure integrated healthcare delivery, with emphasis on structural, cultural and process aspects. Methods: Medline/Pubmed, EMBASE, Web of Science, Cochrane Library, WHOLIS, and conventional internet search engines were systematically searched for methods to measure integrated healthcare delivery (published up to April 2008). Results: Twenty-four published scientific papers and documents met the inclusion criteria. In these 24 references we identified 24 different measurement methods; however, 5 methods shared a theoretical framework. The methods can be categorized according to type of data source: (a) questionnaire survey data, (b) automated register data, or (c) mixed data sources. The variety of concepts measured reflects the significant conceptual diversity within the field, and most methods lack information regarding validity and reliability. Conclusion: Several methods have been developed to measure integrated healthcare delivery; 24 methods are available and some are highly developed. The objective at hand governs which method is best used. Criteria for sound measures are suggested, and further developments should be based on an explicit conceptual framework and focus on simplifying and validating existing methods.
Time-consistent and market-consistent evaluations
Pelsser, A.; Stadje, M.A.
2014-01-01
We consider evaluation methods for payoffs with an inherent financial risk as encountered for instance for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent to market prices typically involves combining actuarial techniques with methods from
Preparation of CuIn1-xGaxS2 (x = 0.5) flowers consisting of nanoflakes via a solvothermal method
International Nuclear Information System (INIS)
Liang Xiaojuan; Zhong Jiasong; Yang Fan; Hua Wei; Jin Huaidong; Liu Haitao; Sun Juncai; Xiang Weidong
2011-01-01
Highlights: → We report for the first time a small-biomolecule-assisted route using L-cysteine as sulfur source and complexing agent to synthesize CuIn0.5Ga0.5S2 crystals. → Possible mechanisms leading to CuIn0.5Ga0.5S2 flowers consisting of nanoflakes are proposed. → In addition, the morphology, structure, and phase composition of the as-prepared CuIn0.5Ga0.5S2 products were investigated in detail by XRD, FESEM, EDS, XPS, TEM (HRTEM) and SAED. - Abstract: CuIn1-xGaxS2 (x = 0.5) flowers consisting of nanoflakes were successfully prepared by a biomolecule-assisted solvothermal route at 220 °C for 10 h, employing copper chloride, gallium chloride, indium chloride and L-cysteine as precursors. The biomolecule L-cysteine, acting as sulfur source, was found to play a very important role in the formation of the final product. The diameter of the CuIn0.5Ga0.5S2 flowers was 1-2 μm, and the thickness of the flakes was about 15 nm. The obtained products were characterized by X-ray diffraction (XRD), energy dispersion spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), field-emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED), and UV-vis absorption spectroscopy. The influences of the reaction temperature, reaction time, sulfur source and the molar ratio of Cu to L-cysteine (reactants) on the formation of the target compound were investigated. The formation mechanism of the CuIn0.5Ga0.5S2 flowers consisting of flakes is discussed.
International Nuclear Information System (INIS)
Sanchez, Richard.
1980-11-01
This work is divided into two parts: the first part (note CEA-N-2165) deals with the solution of complex two-dimensional transport problems; the second treats the critically mixed methods of resolution. These methods are applied to one-dimensional geometries with highly anisotropic scattering. In order to simplify the set of integral equations provided by the integral transport equation, the integro-differential equation is used to obtain relations that allow the number of integral equations to be solved to be reduced; a general mathematical and numerical study is presented [fr]
A novel prototyping method for die-level monolithic integration of MEMS above-IC
International Nuclear Information System (INIS)
Cicek, Paul-Vahe; Zhang, Qing; Saha, Tanmoy; Mahdavi, Sareh; Allidina, Karim; Gamal, Mourad El; Nabki, Frederic
2013-01-01
This work presents a convenient and versatile prototyping method for integrating surface-micromachined microelectromechanical systems (MEMS) directly above IC electronics, at the die level. Such localized implementation helps reduce development costs associated with the acquisition of full-sized semiconductor wafers. To demonstrate the validity of this method, variants of an IC-compatible surface-micromachining MEMS process are used to build different MEMS devices above a commercial transimpedance amplifier chip. Subsequent functional assessments for both the electronics and the MEMS indicate that the integration is successful, validating the prototyping methodology presented in this work, as well as the suitability of the selected MEMS technology for above-IC integration. (paper)
Hybrid Finite Element and Volume Integral Methods for Scattering Using Parametric Geometry
DEFF Research Database (Denmark)
Volakis, John L.; Sertel, Kubilay; Jørgensen, Erik
2004-01-01
In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accuracy ... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom. Three orders of magnitude CPU reduction is demonstrated for such applications.
Medical Student Research: An Integrated Mixed-Methods Systematic Review and Meta-Analysis.
Directory of Open Access Journals (Sweden)
Mohamed Amgad
Full Text Available Despite the rapidly declining number of physician-investigators, there is so far no consistent structure within medical education for involving medical students in research. To conduct an integrated mixed-methods systematic review and meta-analysis of published studies about medical students' participation in research, and to evaluate the evidence in order to guide policy decision-making regarding this issue. We followed the PRISMA statement guidelines during the preparation of this review and meta-analysis. We searched various databases as well as the bibliographies of the included studies between March 2012 and September 2013. We identified all relevant quantitative and qualitative studies assessing the effect of medical student participation in research, without restrictions regarding study design or publication date. Prespecified outcome-specific quality criteria were used to judge the admission of each quantitative outcome into the meta-analysis. Initial screening of titles and abstracts resulted in the retrieval of 256 articles for full-text assessment. Eventually, 79 articles were included in our study, including eight qualitative studies. An integrated approach was used to combine quantitative and qualitative studies into a single synthesis. Once all included studies were identified, a data-driven thematic analysis was performed. Medical student participation in research is associated with improved short- and long-term scientific productivity, more informed career choices, and improved knowledge about, interest in, and attitudes towards research. Financial worries, gender, having a higher degree (MSc or PhD) before matriculation, and perceived competitiveness of the residency of choice are among the factors that affect the engagement of medical students in research and/or their scientific productivity. Intercalated BSc degrees, mandatory graduation theses and curricular research components may help in standardizing research education during
Consistency of orthodox gravity
Energy Technology Data Exchange (ETDEWEB)
Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)
1997-01-01
A recent proposal for quantizing gravity is investigated for self-consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion to unify couplings is suggested, by invoking an application of the argument to more complex systems.
Quasiparticles and thermodynamical consistency
International Nuclear Information System (INIS)
Shanenko, A.A.; Biro, T.S.; Toneev, V.D.
2003-01-01
A brief and simple introduction into the problem of thermodynamical consistency is given. The thermodynamical consistency relations, which should be taken into account when constructing a quasiparticle model, are found in a general manner from the finite-temperature extension of the Hellmann-Feynman theorem. Restrictions following from these relations are illustrated by simple physical examples. (author)
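The relation invoked here is the standard finite-temperature Hellmann-Feynman theorem. In our notation (free energy F, partition function Z, units with the Boltzmann constant set to 1, and a generic model parameter λ):

```latex
F(\lambda) \;=\; -\,T \ln Z(\lambda),
\qquad
Z(\lambda) \;=\; \operatorname{Tr}\, e^{-H(\lambda)/T},
\qquad
\frac{\partial F}{\partial \lambda}
\;=\;
\Bigl\langle \frac{\partial H(\lambda)}{\partial \lambda} \Bigr\rangle_{\lambda}.
```

Consistency conditions for quasiparticle models arise when λ is itself a temperature- or density-dependent effective quantity (e.g. an effective mass), so that naive differentiation of F and the statistical average must agree.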
Directory of Open Access Journals (Sweden)
Legenchyk S.F.
2017-08-01
Full Text Available The essence of integrated reporting is described. Preconditions for and problems with introducing integrated reporting are presented. The information requests of integrated reporting users are described. The necessity of reflecting all types of capital, namely natural, social, human and intellectual, in the integrated reporting is revealed. The main advantages of compiling integrated accounts for the enterprise are established: a broader perspective on the activity; the improvement of accounting policy as a result of integrating the principles of sustainable development into the activity; and increased trust of workers and consumers in the safety of technological processes and products for the environment. For wider introduction of integrated reporting it is necessary to develop methodological provisions for accounting in accordance with the principles of sustainable development to ensure the reliability of the indices obtained; it is also necessary to select the optimal list of indices that can meet the information needs of all users, in particular investors, the state, auditors, society, creditors, consumers, employees, management personnel, academics, and the media. The main tasks and principles of compilation of the integrated reporting are presented. Orientation to the future, materiality, demand, integrity, reliability, completeness, periodicity, consistency, timeliness, interpretation, and comparability are suggested as the principles of integrated reporting. The necessity of an external audit to verify the data of the integrated reporting is highlighted. The algorithm of profit distribution depending on the chosen strategy of enterprise development is proposed. When directing net profits to the development of production, the authors identify the following three strategies: insufficient upgrades, upgrades at the level of wear, and the advanced renewal of non-current assets. It
Analysis of Conflict Centers in Projects Procured with Traditional and Integrated Methods in Nigeria
Directory of Open Access Journals (Sweden)
Martin O. Dada
2012-07-01
Full Text Available Conflicts in any organization can either be functional or dysfunctional and can contribute to or detract from the achievement of organizational or project objectives. This study investigated the frequency and intensity of conflicts, using five conflict centers, on projects executed with either the integrated or traditional method in Nigeria. Questionnaires were administered through purposive and snowballing techniques on 274 projects located in twelve states of Nigeria and Abuja. 94 usable responses were obtained. The collected data were subjected to both descriptive and inferential statistical analysis. In projects procured with traditional methods, conflicts relating to resources for project execution had the greatest frequency, while conflicts around project/client goals had the least frequency. For projects executed with integrated methods, conflicts due to administrative procedures were ranked highest while conflicts due to project/client goals were ranked least. Regarding seriousness of conflict, conflicts due to administrative procedures and resources for project execution were ranked highest respectively for projects procured with traditional and integrated methods. Additionally, in terms of seriousness, personality issues and project/client goals were the least sources of conflict in projects executed with traditional and integrated methods, respectively. There were no significant differences in the incidence of conflicts, using the selected conflict centers, between the traditional and integrated procurement methods. There was, however, a significant difference in the intensity or seriousness of conflicts between projects executed with the traditional method and those executed with integrated methods in the following areas: technical issues, administrative matters and personality issues. The study recommends that conscious efforts should be made at team building on projects executed with integrated methods.
International Nuclear Information System (INIS)
Angell, Christopher T.; Hayakawa, Takehito; Shizuma, Toshiyuki; Hajima, Ryoichi
2013-01-01
Non-destructive assay (NDA) of 239 Pu in spent nuclear fuel or melted fuel using a γ-ray beam is possible using self-absorption and the integral resonance transmission method. The method uses nuclear resonance absorption, in which resonances in 239 Pu remove photons from the beam; the selective absorption is detected by measuring the decrease in scattering in a witness target, consisting of the isotope of interest (239 Pu), placed in the beam after the fuel. The method is isotope specific, and can use photofission or scattered γ-rays to assay the 239 Pu. It overcomes several problems related to NDA of melted fuel, including the radioactivity of the fuel, and the unknown composition and geometry. This talk will explain the general method, and how photofission can be used to assay specific isotopes, and present example calculations. (author)
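The transmission-dip principle behind such a method can be sketched with a single Lorentzian resonance and the Beer-Lambert law. All numbers below are illustrative placeholders, not actual 239 Pu resonance parameters:

```python
import numpy as np

def breit_wigner(E, E0, gamma, sigma0):
    """Lorentzian (Breit-Wigner) absorption cross section, arbitrary units."""
    return sigma0 * (gamma / 2.0) ** 2 / ((E - E0) ** 2 + (gamma / 2.0) ** 2)

def integral_transmission(n_areal, E0=2.0, gamma=0.1, sigma0=5.0):
    """Beam fraction transmitted, averaged across the resonance window.

    n_areal: areal density of the resonant isotope along the beam
    (illustrative units). More resonant material deepens the dip,
    which is what the decrease in witness-target scattering measures.
    """
    E = np.linspace(E0 - 5 * gamma, E0 + 5 * gamma, 2001)
    T = np.exp(-n_areal * breit_wigner(E, E0, gamma, sigma0))
    return T.mean()

t_thin = integral_transmission(0.1)
t_thick = integral_transmission(1.0)
# A larger areal density transmits less: t_thick < t_thin < 1.
```

The assay then inverts this monotone relation: the measured integral transmission determines the areal density of the resonant isotope.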
International Nuclear Information System (INIS)
Zhang, H.; Rizwan-uddin; Dorning, J.J.
1995-01-01
A diffusion equation-based systematic homogenization theory and a self-consistent dehomogenization theory for fuel assemblies have been developed for use with coarse-mesh nodal diffusion calculations of light water reactors. The theoretical development is based on a multiple-scales asymptotic expansion carried out through second order in a small parameter, the ratio of the average diffusion length to the reactor characteristic dimension. By starting from the neutron diffusion equation for a three-dimensional heterogeneous medium and introducing two spatial scales, the development systematically yields an assembly-homogenized global diffusion equation with self-consistent expressions for the assembly-homogenized diffusion tensor elements and cross sections and assembly-surface-flux discontinuity factors. The reactor eigenvalue 1/k eff is shown to be obtained to the second order in the small parameter, and the heterogeneous diffusion theory flux is shown to be obtained to leading order in that parameter. The latter of these two results provides a natural procedure for the reconstruction of the local fluxes and the determination of pin powers, even though homogenized assemblies are used in the global nodal diffusion calculation
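The reaction-rate-preserving flux weighting at the heart of assembly homogenization can be sketched as follows. This is only the leading-order, one-group illustration with hypothetical numbers, not the second-order multiple-scales result of the paper:

```python
import numpy as np

# Heterogeneous assembly regions (hypothetical one-group data):
V   = np.array([1.0, 1.0, 2.0])     # region volumes
phi = np.array([1.2, 0.8, 1.0])     # heterogeneous scalar flux levels
sig = np.array([0.30, 0.10, 0.20])  # absorption cross sections [1/cm]

# Flux-volume weighting preserves the assembly reaction rate:
#   Sigma_hom * sum(phi * V) = sum(Sigma * phi * V)
sigma_hom = np.sum(sig * phi * V) / np.sum(phi * V)

# Check that the homogenized constant reproduces the heterogeneous rate.
rate_het = np.sum(sig * phi * V)
rate_hom = sigma_hom * np.sum(phi * V)
```

Discontinuity factors then reconcile the mismatch between the homogenized surface flux and the heterogeneous one; the paper derives both self-consistently from the asymptotic expansion.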
Lagrangian multiforms and multidimensional consistency
Energy Technology Data Exchange (ETDEWEB)
Lobb, Sarah; Nijhoff, Frank [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)
2009-10-30
We show that well-chosen Lagrangians for a class of two-dimensional integrable lattice equations obey a closure relation when embedded in a higher dimensional lattice. On the basis of this property we formulate a Lagrangian description for such systems in terms of Lagrangian multiforms. We discuss the connection of this formalism with the notion of multidimensional consistency, and the role of the lattice from the point of view of the relevant variational principle.
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating into the body surface an auxiliary interior surface that satisfies a certain boundary condition. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
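The simple source idea can be illustrated with a minimal sketch in the same spirit: monopoles distributed on an interior auxiliary curve, collocated against boundary data on the body, so that every kernel evaluation is nonsingular. The geometry and the manufactured exact field below are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.special import hankel1

k = 2.0  # wavenumber

def G(x, y):
    """2D free-space Helmholtz monopole, G = (i/4) H0^(1)(k |x - y|)."""
    r = np.linalg.norm(x - y)
    return 0.25j * hankel1(0, k * r)

# Body surface: unit circle. Auxiliary interior curve: circle of radius 0.5.
n = 40
th = 2 * np.pi * np.arange(n) / n
body  = np.c_[np.cos(th), np.sin(th)]
inner = 0.5 * np.c_[np.cos(th), np.sin(th)]

# Manufactured exact exterior field: a point source at x0 inside the inner curve.
x0 = np.array([0.1, 0.0])
exact = lambda x: G(x, x0)

# Collocation: sum_j s_j G(body_i, inner_j) = exact(body_i).
# Kernels are nonsingular since sources and collocation points never coincide.
A = np.array([[G(xb, ys) for ys in inner] for xb in body])
rhs = np.array([exact(xb) for xb in body])
strengths = np.linalg.lstsq(A, rhs, rcond=None)[0]

# Evaluate the monopole superposition at an exterior point and compare.
xp = np.array([2.0, 1.0])
u_num = sum(sj * G(xp, yj) for sj, yj in zip(strengths, inner))
err = abs(u_num - exact(xp)) / abs(exact(xp))
```

Because the kernels are smooth, plain collocation (quadrature points coinciding with collocation points) already converges rapidly, which is the practical advantage the paper exploits.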
Approximate calculation method for integral of mean square value of nonstationary response
International Nuclear Information System (INIS)
Aoki, Shigeru; Fukano, Azusa
2010-01-01
The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself a nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate the random response. The integral of the mean square value of the response corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
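The quantities involved can be illustrated on a toy first-order system driven by white noise, where the mean square value obeys a deterministic ODE and its integral has a closed form. This is our own illustrative analogue, not the paper's method:

```python
import numpy as np

# Scalar Ito system dx = -a x dt + sqrt(q) dW. Its mean square value
# P(t) = E[x(t)^2] obeys the deterministic Lyapunov ODE  dP/dt = -2 a P + q,
# so no random sampling is needed to evaluate it.
a, q, T = 1.0, 2.0, 2.0
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]

# March the variance ODE with forward Euler.
P = np.empty_like(t)
P[0] = 0.0
for i in range(len(t) - 1):
    P[i + 1] = P[i] + dt * (-2.0 * a * P[i] + q)

# "Total energy" of the response: integral of the mean square value
# (trapezoidal rule on the uniform grid).
I_num = dt * (P.sum() - 0.5 * (P[0] + P[-1]))

# Closed form: P(t) = q/(2a) (1 - e^{-2at}), so
#   int_0^T P dt = q/(2a) [ T - (1 - e^{-2aT}) / (2a) ].
I_exact = q / (2 * a) * (T - (1 - np.exp(-2 * a * T)) / (2 * a))
```

For more realistic oscillator models under modulated (filtered) white noise the variance equations grow in dimension, which is exactly why a simplified method for the integral of the mean square value is attractive.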
Energy Technology Data Exchange (ETDEWEB)
Liang Xiaojuan [College of Chemistry and Materials Engineering, Wenzhou University, Wenzhou, Zhejiang Province 325035 (China); Institute of Materials and Technology, Dalian Maritime University, Dalian 116026 (China); Zhong Jiasong; Yang Fan; Hua Wei; Jin Huaidong [College of Chemistry and Materials Engineering, Wenzhou University, Wenzhou, Zhejiang Province 325035 (China); Liu Haitao, E-mail: lht@wzu.edu.cn [College of Chemistry and Materials Engineering, Wenzhou University, Wenzhou, Zhejiang Province 325035 (China); Sun Juncai [Institute of Materials and Technology, Dalian Maritime University, Dalian 116026 (China); Xiang Weidong, E-mail: weidongxiang@yahoo.com.cn [College of Chemistry and Materials Engineering, Wenzhou University, Wenzhou, Zhejiang Province 325035 (China)
2011-05-26
Highlights: > We report for the first time a small biomolecule-assisted route using L-cysteine as sulfur source and complexing agent to synthesize CuIn{sub 0.5}Ga{sub 0.5}S{sub 2} crystals. > The possible mechanisms leading to CuIn{sub 0.5}Ga{sub 0.5}S{sub 2} flowers consisting of nanoflakes were proposed. > In addition, the morphology, structure, and phase composition of the as-prepared CuIn{sub 0.5}Ga{sub 0.5}S{sub 2} products were investigated in detail by XRD, FESEM, EDS, XPS, TEM (HRTEM) and SAED. - Abstract: CuIn{sub 1-x}Ga{sub x}S{sub 2} (x = 0.5) flowers consisting of nanoflakes were successfully prepared by a biomolecule-assisted solvothermal route at 220 deg. C for 10 h, employing copper chloride, gallium chloride, indium chloride and L-cysteine as precursors. The biomolecule L-cysteine acting as sulfur source was found to play a very important role in the formation of the final product. The diameter of the CuIn{sub 0.5}Ga{sub 0.5}S{sub 2} flowers was 1-2 {mu}m, and the thickness of the flakes was about 15 nm. The obtained products were characterized by X-ray diffraction (XRD), energy dispersive spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), field-emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED), and UV-vis absorption spectroscopy. The influences of the reaction temperature, reaction time, sulfur source and the molar ratio of Cu-to-L-cysteine (reactants) on the formation of the target compound were investigated. The formation mechanism of the CuIn{sub 0.5}Ga{sub 0.5}S{sub 2} flowers consisting of flakes was discussed.
Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.
2015-01-01
PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895
Walther, T; Wang, X
2016-05-01
Based on Monte Carlo simulations of X-ray generation by fast electrons, we calculate curves of effective sensitivity factors for analytical transmission electron microscopy based energy-dispersive X-ray spectroscopy, including absorption and fluorescence effects, as a function of the Ga K/L ratio for different indium- and gallium-containing compound semiconductors. For the case of InGaN alloy thin films we show that experimental spectra can thus be quantified without the need to measure specimen thickness or density, yielding self-consistent values for quantification with the Ga K and Ga L lines. The effect of uncertainties in the detector efficiency is also shown to be reduced. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
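The quantification step can be illustrated with the standard Cliff-Lorimer ratio method. The intensities and k-factors below are hypothetical placeholders, not the absorption- and fluorescence-corrected effective sensitivity factors computed in the paper:

```python
# Cliff-Lorimer ratio quantification for a binary composition (sketch).
# For the In/Ga group-III sublattice: C_In / C_Ga = k * I_In / I_Ga,
# with C_In + C_Ga = 1.

def binary_fractions(I_num, I_den, k):
    """Return (C_num, C_den) from an intensity ratio and a k-factor."""
    ratio = k * I_num / I_den      # concentration ratio C_num / C_den
    c_num = ratio / (1.0 + ratio)
    return c_num, 1.0 - c_num

# Hypothetical measured line intensities and k-factors pairing the In L line
# with either the Ga K line or the Ga L line:
I_InL = 250.0
I_GaK, k_InL_GaK = 400.0, 1.20
I_GaL, k_InL_GaL = 600.0, 0.80

c_in_K, c_ga_K = binary_fractions(I_InL, I_GaK, k_InL_GaK)
c_in_L, c_ga_L = binary_fractions(I_InL, I_GaL, k_InL_GaL)
# Self-consistency means the Ga K and Ga L quantifications agree; a mismatch
# signals thickness/absorption effects, which the effective-sensitivity
# curves of the paper are designed to absorb.
```

With absorption-corrected effective k-factors, the K-line and L-line results converge, which is what removes the need for a separate thickness or density measurement.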
A difference quotient-numerical integration method for solving radiative transfer problems
International Nuclear Information System (INIS)
Ding Peizhu
1992-01-01
A difference quotient-numerical integration method is adopted to solve radiative transfer problems in an anisotropically scattering slab medium. Using this method, the radiative transfer problem is reduced to a system of linear algebraic equations whose coefficient matrix is a band matrix, so the method is simple to implement on a computer, its formulae are easy to derive, and it is easy for experimentalists to master. An example is evaluated, and it is shown that the method is precise
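The practical payoff of a banded coefficient matrix can be sketched with a generic banded solve; the coefficients below are hypothetical, standing in for the actual discretized transfer equations:

```python
import numpy as np
from scipy.linalg import solve_banded

# A discretized transfer problem yields A x = b with banded A. Illustration:
# a tridiagonal system, stored in the diagonal-ordered form that
# scipy.linalg.solve_banded expects.
n = 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)

# Diagonal-ordered form for bandwidths (1, 1): row 0 = superdiagonal
# (left-padded), row 1 = main diagonal, row 2 = subdiagonal (right-padded).
ab = np.zeros((3, n))
ab[0, 1:] = off
ab[1, :] = main
ab[2, :-1] = off

b = np.ones(n)
x = solve_banded((1, 1), ab, b)

# Dense matrix built only to verify the banded solution; the banded solver
# needs O(n) storage and work instead of O(n^2) / O(n^3).
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
```

Exploiting the band structure is what makes the method "very simple to evaluate on computer" even for fine angular and spatial grids.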
Trijsburg, L.E.; Geelen, M.M.E.E.; Hollman, P.C.H.; Hulshof, P.J.M.; Feskens, E.J.M.; Veer, van 't P.; Boshuizen, H.C.; Vries, de J.H.M.
2017-01-01
As misreporting, mostly under-reporting, of dietary intake is a generally known problem in nutritional research, we aimed to analyse the association between selected determinants and the extent of misreporting by the duplicate portion method (DP), 24 h recall (24hR) and FFQ by linear regression
Directory of Open Access Journals (Sweden)
Poruba Z.
2009-06-01
Full Text Available For the numerical solution of elasto-plastic problems with the Newton-Raphson method in the global equilibrium equation, it is necessary to determine the tangent modulus at each integration point. To reach the quadratic convergence of the Newton-Raphson method it is convenient to use the so-called algorithmic tangent modulus, which is consistent with the integration scheme used. For simpler models, for example the Chaboche combined hardening model, it is possible to determine it analytically. For more complex macroscopic models it is often necessary to use an approximation approach. This possibility is presented in this contribution for the radial return method on the Chaboche model. An example solved in the software Ansys corresponds to a line contact problem under the assumption of Coulomb friction. The study shows that the number of iterations of the N-R method is higher in the case of the continuum tangent modulus, and many times higher with the modified N-R method (initial stiffness method).
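The radial return idea and the algorithmic tangent it induces can be sketched in one dimension with linear isotropic hardening; this is a deliberately simplified model, not the Chaboche combined hardening model treated in the contribution:

```python
# One-dimensional radial return with linear isotropic hardening.
E, H, sigma_y = 200e3, 10e3, 250.0  # elastic modulus, hardening modulus, yield stress [MPa]

def radial_return(eps, eps_p, alpha):
    """One strain-driven step.

    Returns (stress, new plastic strain, new hardening variable,
    algorithmic tangent modulus consistent with this integration scheme).
    """
    sig_trial = E * (eps - eps_p)
    f_trial = abs(sig_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:                    # elastic step: tangent is E itself
        return sig_trial, eps_p, alpha, E
    s = 1.0 if sig_trial >= 0.0 else -1.0
    dgamma = f_trial / (E + H)            # plastic multiplier (closed form in 1D)
    sig = sig_trial - E * dgamma * s      # return to the updated yield surface
    # Algorithmic (consistent) tangent d(sigma)/d(eps) = E H / (E + H);
    # using it instead of the continuum tangent restores quadratic N-R convergence.
    return sig, eps_p + dgamma * s, alpha + dgamma, E * H / (E + H)

sig_e, _, _, tang_e = radial_return(0.001, 0.0, 0.0)  # elastic trial
sig_p, ep_p, al_p, tang_p = radial_return(0.002, 0.0, 0.0)  # plastic step
```

For Chaboche-type combined hardening the return map is no longer closed-form, which is why the contribution resorts to a numerical approximation of the algorithmic tangent.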
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
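The inner/outer structure of projective integration can be sketched with its first-order variant, projective forward Euler, on a stiff linear test system. This is an illustrative reduction; the scheme described above wraps the estimated derivative in a higher-order outer Runge-Kutta method:

```python
import numpy as np

# Stiff linear test system y' = A y with one slow and one fast mode.
lam_slow, lam_fast = -1.0, -1000.0
A = np.diag([lam_slow, lam_fast])

def projective_fe(y, dt_outer, dt_inner, K):
    """K+1 small inner Euler steps damp the stiff components, then one
    extrapolation (outer step) covers the remainder of dt_outer."""
    for _ in range(K):
        y = y + dt_inner * (A @ y)
    y_prev = y
    y = y + dt_inner * (A @ y)              # one more inner step
    slope = (y - y_prev) / dt_inner         # estimated time derivative
    return y + (dt_outer - (K + 1) * dt_inner) * slope

y = np.array([1.0, 1.0])
dt_outer, dt_inner, K = 0.1, 1e-3, 4        # inner step resolves the fast mode
for _ in range(10):                          # integrate to t = 1
    y = projective_fe(y, dt_outer, dt_inner, K)

# The slow component tracks exp(lam_slow * t) with the outer (CFL-like)
# step size, while the fast component is damped away by the inner steps.
```

Note that the outer step 0.1 is 100 times larger than the stiff stability limit of plain forward Euler (2/1000), which is the point of the projective construction.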
Methods That Matter: Integrating Mixed Methods for More Effective Social Science Research
Hay, M. Cameron, Ed.
2016-01-01
To do research that really makes a difference--the authors of this book argue--social scientists need questions and methods that reflect the complexity of the world. Bringing together a consortium of voices across a variety of fields, "Methods that Matter" offers compelling and successful examples of mixed methods research that do just…