WorldWideScience

Sample records for normal mode methods

  1. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann , Alexandre; Grudinin , Sergei

    2017-01-01

    We present a new, conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  2. New method for computing ideal MHD normal modes in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Wysocki, F.; Grimm, R.C.

    1984-11-01

    Analytic elimination of the two magnetic surface components of the displacement vector permits the normal mode ideal MHD equations to be reduced to a scalar form. A Galerkin procedure, similar to that used in the PEST codes, is implemented to determine the normal modes computationally. The method retains the efficient stability capabilities of the PEST 2 energy principle code, while allowing computation of the normal mode frequencies and eigenfunctions, if desired. The procedure is illustrated by comparison with earlier versions of PEST and by application to tilting modes in spheromaks, and to stable discrete Alfvén waves in tokamak geometry.

  3. A design of a mode converter for electron cyclotron heating by the method of normal mode expansion

    International Nuclear Information System (INIS)

    Hoshino, Katsumichi; Kawashima, Hisato; Hata, Kenichiro; Yamamoto, Takumi

    1983-09-01

    Mode conversion of an electromagnetic wave propagating in an oversized circular waveguide is attained by introducing a periodic perturbation in the guide wall. The mode coupling equations are expressed by the ''generalized telegraphist's equations'', which are derived from Maxwell's equations using a normal mode expansion. A computer code to solve the coupling equations is developed, and the mode amplitude, conversion efficiency, etc. of a particular type of mode converter for 60 GHz electron cyclotron heating are obtained. (author)

  4. Single-Phase Full-Wave Rectifier as an Effective Example to Teach Normalization, Conduction Modes, and Circuit Analysis Methods

    Directory of Open Access Journals (Sweden)

    Predrag Pejovic

    2013-12-01

    Full Text Available Application of a single-phase rectifier as an example in teaching circuit modeling, normalization, operating modes of nonlinear circuits, and circuit analysis methods is proposed. The rectifier, supplied from a voltage source through an inductive impedance, is analyzed in the discontinuous as well as the continuous conduction mode. A completely analytical solution for the continuous conduction mode is derived. Appropriate numerical methods are proposed to obtain the circuit waveforms in both operating modes and to compute the performance parameters. Source code of the program that performs these computations is provided.
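A minimal per-unit sketch of the normalization idea (our own toy, not the paper's program): working in quantities normalized to the source voltage amplitude, the average output of an ideal full-wave rectifier is the classic 2/π ≈ 0.637.

```python
import numpy as np

def rectified_average(n=1_000_001):
    """Per-unit DC output of an ideal single-phase full-wave rectifier.

    All quantities are normalized to the source voltage amplitude; source
    inductance is neglected, i.e. this is the idealized limit before the
    conduction-mode analysis begins.
    """
    theta = np.linspace(0.0, np.pi, n)     # one half-period of the source
    return np.abs(np.sin(theta)).mean()    # average of |sin|, approx 2/pi

avg = rectified_average()
```

The same normalized setup extends naturally to the discontinuous/continuous conduction analysis once a per-unit source inductance is added.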

  5. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well known is the fact that those eigenvectors can be normalized so that their modal mass μ = φᵀMφ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but are actually intrinsic properties of the pair of matrices K, M; that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus has been overlooked up until now, and it has interesting theoretical implications.
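The central observation can be checked numerically in a few lines. In this sketch (toy matrices of our own choosing), the residue of the resolvent (K - zM)⁻¹ at an eigenvalue reproduces the outer product of the mass-normalized mode, even though no normalization step is ever performed on that side of the computation:

```python
import numpy as np

# Toy stiffness and mass matrices (symmetric, positive definite).
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
M = np.diag([2.0, 1.0, 3.0])

# Reference modes, explicitly mass-normalized: Phi^T M Phi = I.
S = np.diag(1.0 / np.sqrt(np.diag(M)))   # M^(-1/2), easy for diagonal M
lam, Y = np.linalg.eigh(S @ K @ S)       # equivalent standard problem
Phi = S @ Y

# Residue route: (z - lam0) (K - z M)^(-1) tends to -phi0 phi0^T as
# z -> lam0, with phi0 automatically mass-normalized; flip the sign
# to compare against the explicitly normalized reference mode.
eps = 1e-7
R = -eps * np.linalg.solve(K - (lam[0] + eps) * M, np.eye(3))
```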

  6. Normal mode analysis of macromolecular systems with the mobile block Hessian method

    International Nuclear Information System (INIS)

    Ghysels, An; Van Speybroeck, Veronique; Van Neck, Dimitri; Waroquier, Michel; Brooks, Bernard R.

    2015-01-01

    Until recently, normal mode analysis (NMA) was limited to small proteins, not only because the required energy minimization is a computationally exhausting task, but also because NMA requires the expensive diagonalization of a 3N_a × 3N_a matrix, with N_a the number of atoms. A series of simplified models has been proposed, in particular the Rotation-Translation Blocks (RTB) method by Tama et al. for the simulation of proteins. It makes use of the concept that a peptide chain or protein can be seen as a subsequent set of rigid components, i.e. the peptide units. A peptide chain is thus divided into rigid blocks with six degrees of freedom each. Recently we developed the Mobile Block Hessian (MBH) method, which in a sense has features similar to the RTB method. The main difference is that MBH was developed to deal with partially optimized systems. The position/orientation of each block is optimized while the internal geometry is kept fixed at a plausible - but not necessarily optimized - geometry. This reduces the computational cost of the energy minimization. Applying standard NMA to a partially optimized structure, however, results in spurious imaginary frequencies and unwanted coordinate dependence. The MBH avoids these unphysical effects by taking into account energy gradient corrections. Moreover, the number of variables is reduced, which facilitates the diagonalization of the Hessian. In the original implementation of MBH, atoms could only be part of one rigid block. The MBH is now extended to the case where atoms can be part of two or more blocks. Two basic linkages can be realized: (1) blocks connected by one link atom, or (2) by two link atoms, where the latter is referred to as the hinge type connection. In this work we present the MBH concept and illustrate its performance with the crambin protein as an example.
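The block-projection idea shared by RTB and MBH reduces to a small linear-algebra exercise. The sketch below (a 1-D toy of our own, where a rigid block has a single translational degree of freedom instead of six) projects a chain Hessian onto two rigid blocks and diagonalizes the reduced matrix:

```python
import numpy as np

def chain_hessian(n, k=1.0):
    """Hessian of a 1-D chain of n unit-mass particles with nearest-
    neighbour springs of stiffness k (a stand-in for a real force field)."""
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i] += k; H[i + 1, i + 1] += k
        H[i, i + 1] -= k; H[i + 1, i] -= k
    return H

n = 6
H = chain_hessian(n)

# RTB-style projection: each rigid block of 3 atoms keeps only its
# translation (in 1-D a rigid block has a single degree of freedom).
P = np.zeros((n, 2))
P[:3, 0] = 1.0 / np.sqrt(3)   # block 1 translation, unit-normalized
P[3:, 1] = 1.0 / np.sqrt(3)   # block 2 translation

H_rtb = P.T @ H @ P                  # 2x2 reduced Hessian
freqs2 = np.linalg.eigvalsh(H_rtb)   # squared frequencies of block modes
```

The reduced problem keeps the zero-frequency overall translation and one inter-block mode, exactly the behavior the block methods exploit at much larger scale.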

  7. Normal modes and continuous spectra

    International Nuclear Information System (INIS)

    Balmforth, N.J.; Morrison, P.J.

    1994-12-01

    The authors consider stability problems arising in fluids, plasmas and stellar systems that contain singularities resulting from wave-mean flow or wave-particle resonances. Such resonances lead to singularities in the differential equations determining the normal modes at the so-called critical points or layers. The locations of the singularities are determined by the eigenvalue of the problem, and as a result, the spectrum of eigenvalues forms a continuum. They outline a method to construct the singular eigenfunctions comprising the continuum for a variety of problems

  8. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations. For the normal mode analysis, the methods are based on the assumption that in low frequency modes, domain motions can be well approximated by

  9. Normal mode analysis as a method to derive protein dynamics information from the Protein Data Bank.

    Science.gov (United States)

    Wako, Hiroshi; Endo, Shigeru

    2017-12-01

    Normal mode analysis (NMA) can facilitate quick and systematic investigation of protein dynamics using data from the Protein Data Bank (PDB). We developed an elastic network model-based NMA program using dihedral angles as independent variables. Compared to NMA programs that use Cartesian coordinates as independent variables, key attributes of the proposed program are as follows: (1) chain connectivity related to the folding pattern of a polypeptide chain is naturally embedded in the model; (2) the full-atom system is acceptable, and owing to a considerably smaller number of independent variables, the PDB data can be used without further manipulation; (3) the number of variables can be easily reduced by fixing some of the rotatable dihedral angles; (4) the PDB data for any molecule besides proteins can be considered without coarse-graining; and (5) individual motions of constituent subunits and ligand molecules can be easily decomposed into external and internal motions to examine their mutual and intrinsic motions. Its performance is illustrated with the example of a DNA-binding allosteric protein, a catabolite activator protein. In particular, the focus is on the conformational change upon cAMP and DNA binding, and on the communication between their binding sites, which are remotely located from each other. In this illustration, NMA creates a vivid picture of the protein dynamics at various levels of the structures, i.e., atoms, residues, secondary structures, domains, subunits, and the complete system, including DNA and cAMP. Comparative studies of the specific protein in different states, e.g., apo- and holo-conformations, and free and complexed configurations, provide useful information for studying structurally and functionally important aspects of the protein.

  10. Normal modes of Bardeen discs

    International Nuclear Information System (INIS)

    Verdaguer, E.

    1983-01-01

    The short wavelength normal modes of self-gravitating rotating polytropic discs in the Bardeen approximation are studied. The discs' oscillations can be seen in terms of two types of modes: the p-modes, whose driving forces are pressure forces, and the r-modes, driven by Coriolis forces. As a consequence of differential rotation, coupling between the two types takes place and some mixed modes appear; their properties can be studied under the assumption of weak coupling, and it is seen that they avoid the crossing of the p- and r-modes. The short wavelength analysis provides a basis for the classification of the modes, which can be made by using the properties of their phase diagrams. The classification is applied to the large wavelength modes of differentially rotating discs with strong coupling and to a uniformly rotating sequence with no coupling, which have been calculated in previous papers. Many of the physical properties and qualitative features of these modes are revealed by the analysis. (author)

  11. Anomalous normal mode oscillations in semiconductor microcavities

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H. [Univ. of Oregon, Eugene, OR (United States). Dept. of Physics; Hou, H.Q.; Hammons, B.E. [Sandia National Labs., Albuquerque, NM (United States)

    1997-04-01

    Semiconductor microcavities as a composite exciton-cavity system can be characterized by two normal modes. Under an impulsive excitation by a short laser pulse, optical polarizations associated with the two normal modes have a π phase difference. The total induced optical polarization is then expected to exhibit a sin²(Ωt)-like oscillation, where 2Ω is the normal mode splitting, reflecting a coherent energy exchange between the exciton and cavity. In this paper the authors present experimental studies of normal mode oscillations using three-pulse transient four-wave mixing (FWM). The result reveals, surprisingly, that when the cavity is tuned far below the exciton resonance, the normal mode oscillation in the polarization is cos²(Ωt)-like, in contrast to what is expected from the simple normal mode model. This anomalous normal mode oscillation reflects the important role of virtual excitation of electronic states in semiconductor microcavities.
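The expected sin²(Ωt) beat follows from simple superposition of the two modes; a quick numerical check (arbitrary units, parameters ours):

```python
import numpy as np

# Two normal modes split by 2*Omega, excited with a pi phase difference
# (the minus sign); their superposed polarization beats as sin^2(Omega*t).
omega0, Omega = 5.0, 1.0
t = np.linspace(0.0, 10.0, 2001)
P = np.exp(-1j * (omega0 + Omega) * t) - np.exp(-1j * (omega0 - Omega) * t)
intensity = np.abs(P) ** 2      # equals 4 * sin^2(Omega * t) identically
```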

  12. Normal mode analysis and applications in biological physics.

    Science.gov (United States)

    Dykeman, Eric C; Sankey, Otto F

    2010-10-27

    Normal mode analysis has become a popular and often used theoretical tool in the study of functional motions in enzymes, viruses, and large protein assemblies. The use of normal modes in the study of these motions is often extremely fruitful since many of the functional motions of large proteins can be described using just a few normal modes which are intimately related to the overall structure of the protein. In this review, we present a broad overview of several popular methods used in the study of normal modes in biological physics including continuum elastic theory, the elastic network model, and a new all-atom method, recently developed, which is capable of computing a subset of the low frequency vibrational modes exactly. After a review of the various methods, we present several examples of applications of normal modes in the study of functional motions, with an emphasis on viral capsids.
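Among the reviewed methods, the elastic network model is compact enough to sketch directly. Below is a minimal Gaussian-network variant (synthetic helix-like coordinates of our own in place of a real structure; the 7 Å cutoff is a conventional choice):

```python
import numpy as np

def gnm_kirchhoff(coords, cutoff=7.0):
    """Kirchhoff (connectivity) matrix of a Gaussian network model:
    sites within `cutoff` are linked by identical unit springs."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(n, dtype=bool)
    G = -contact.astype(float)
    G[np.diag_indices(n)] = contact.sum(axis=1)
    return G

# Toy "C-alpha trace": a helix-like curve instead of real PDB coordinates.
t = np.linspace(0.0, 4.0 * np.pi, 40)
coords = np.stack([5.0 * np.cos(t), 5.0 * np.sin(t), 1.5 * t], axis=1)

G = gnm_kirchhoff(coords)
vals, vecs = np.linalg.eigh(G)
# vals[0] ~ 0 is the trivial mode; the next few (soft) modes describe
# the largest collective fluctuations, as emphasized in the review.
```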

  13. A new method for evaluating the conformations and normal modes of macromolecule vibrations with a reduced force field. 2. Application to nonplanar distorted metal porphyrins

    Energy Technology Data Exchange (ETDEWEB)

    Unger, E.; Beck, M.; Lipski, R.J.; Dreybrodt, W.; Medforth, C.J.; Smith, K.M.; Schweitzer-Stenner, R.

    1999-11-11

    The authors have developed a novel method for molecular mechanics calculations and normal-mode analysis. It is based on the symmetry of the local units that constitute the given molecule. Compared with general valence force field calculations, the number of free parameters is reduced by 40-80% in this procedure. It was found to reproduce very well the vibrational frequencies and mode compositions of aromatic compounds and porphyrins, as shown by comparison with DFT calculations. A slightly altered force field obtained from Ni(II) porphin was then used to calculate the structure and the normal modes of several meso-substituted Ni(II) porphyrins which are known to be subject to significant ruffling and/or saddling distortions. This method satisfactorily reproduces their nonplanar structure and Raman band frequencies in the natural abundance and isotopic derivative spectra. The polarization properties of bands from out-of-plane modes are in accordance with the predicted nonplanar distortions. Moreover, some of the modes below 800 cm⁻¹ which appear intense in the Raman spectra contain considerable contributions from both in-plane and out-of-plane vibrations, so that the conventional mode assignments become questionable. The authors also demonstrate that the intensity and polarization of some low-frequency Raman bands can be used as a (quantitative) marker to elucidate the type and magnitude of out-of-plane distortions. These were recently shown to affect the heme groups of hemoglobin, myoglobin, and, in particular, cytochrome c.

  14. On normal modes of gas sheets and discs

    International Nuclear Information System (INIS)

    Drury, L.O'C.

    1980-01-01

    A method is described for calculating the reflection and transmission coefficients characterizing normal modes of the Goldreich-Lynden-Bell gas sheet. Two families of gas discs without self-gravity for which the normal modes can be found analytically are given and used to illustrate the validity of the sheet approximation. (author)

  15. On normal modes in classical Hamiltonian systems

    NARCIS (Netherlands)

    van Groesen, Embrecht W.C.

    1983-01-01

    Normal modes of Hamiltonian systems that are even and of classical type are characterized as the critical points of a normalized kinetic energy functional on level sets of the potential energy functional. With the aid of this constrained variational formulation the existence of at least one family

  16. Normal modes of weak colloidal gels

    Science.gov (United States)

    Varga, Zsigmond; Swan, James W.

    2018-01-01

    The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l^(-2), where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l^(-γ), where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails. Analogous to the Zimm model in polymer
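The freely draining case can be illustrated with an overdamped harmonic chain (our own minimal stand-in for a gel network; a simple chain realizes the γ = 2 endpoint of the quoted scaling range, since mode p has correlation length l ~ N/p and rate ~ (p/N)²):

```python
import numpy as np

# Overdamped, freely draining harmonic chain: the relaxation rate of each
# normal mode is its Hessian eigenvalue divided by the bead friction zeta
# (hydrodynamic coupling between beads is neglected entirely).
N, k, zeta = 64, 1.0, 1.0
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i] += k; H[i + 1, i + 1] += k
    H[i, i + 1] -= k; H[i + 1, i] -= k

rates = np.sort(np.linalg.eigvalsh(H)) / zeta   # rates[0] = 0 (translation)

# Analytic free-chain spectrum: 4k sin^2(p*pi/2N)/zeta ~ (pi*p/N)^2 * k/zeta
# for small p, i.e. rate ~ l^(-2) with correlation length l ~ N/p.
p = np.arange(N)
analytic = 4.0 * k * np.sin(p * np.pi / (2 * N)) ** 2 / zeta
```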

  17. Helicon normal modes in Proto-MPEX

    Science.gov (United States)

    Piotrowicz, P. A.; Caneses, J. F.; Green, D. L.; Goulding, R. H.; Lau, C.; Caughman, J. B. O.; Rapp, J.; Ruzic, D. N.

    2018-05-01

    The Proto-MPEX helicon source has been operating in a high electron density ‘helicon-mode’. Establishing plasma densities and magnetic field strengths under the antenna that allow for the formation of normal modes of the fast wave is believed to be responsible for the ‘helicon-mode’. A 2D finite-element full-wave model of the helicon antenna on Proto-MPEX is used to identify the fast-wave normal modes responsible for the steady-state electron density profile produced by the source. We also show through the simulation that in the regions of operation in which core power deposition is maximum, the slow wave does not deposit significant power except directly under the antenna. In the case of a simulation where a normal mode is not excited, significant edge power is deposited in the mirror region.

  18. A Bloch mode expansion approach for analyzing quasi-normal modes in open nanophotonic structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    We present a new method for determining quasi-normal modes in open nanophotonic structures using a modal expansion technique. The outgoing wave boundary condition of the quasi-normal modes is satisfied automatically without absorbing boundaries, representing a significant advantage compared...

  19. WEBnm@: a web application for normal mode analyses of proteins

    Directory of Open Access Journals (Sweden)

    Reuter Nathalie

    2005-03-01

    Full Text Available Abstract Background Normal mode analysis (NMA) has become the method of choice to investigate the slowest motions in macromolecular systems. NMA is especially useful for large biomolecular assemblies, such as transmembrane channels or virus capsids. NMA relies on the hypothesis that the vibrational normal modes having the lowest frequencies (also named soft modes) describe the largest movements in a protein and are the ones that are functionally relevant. Results We developed a web-based server to perform normal mode calculations and different types of analyses. Starting from a structure file provided by the user in the PDB format, the server calculates the normal modes and subsequently offers the user a series of automated calculations: normalized squared atomic displacements, vector field representation, and animation of the first six vibrational modes. Each analysis is performed independently from the others and results can be visualized using only a web browser. No additional plug-in or software is required. For users who would like to analyze the results with their favorite software, raw results can also be downloaded. The application is available on http://www.bioinfo.no/tools/normalmodes. We present here the underlying theory, the application architecture and an illustration of its features using a large transmembrane protein as an example. Conclusion We built an efficient and modular web application for normal mode analysis of proteins. Non-specialists can easily and rapidly evaluate the degree of flexibility of multi-domain protein assemblies and characterize the large amplitude movements of their domains.
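One of the server's automated analyses, the normalized squared atomic displacements, is easy to reproduce for a given mode vector (random placeholder data below, ours, standing in for a real eigenvector):

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms = 50
mode = rng.standard_normal(3 * n_atoms)   # placeholder 3N eigenvector
mode /= np.linalg.norm(mode)

# Per-atom squared displacement, normalized so the profile sums to one.
d = (mode.reshape(n_atoms, 3) ** 2).sum(axis=1)
d_norm = d / d.sum()
```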

  20. Sphalerons, deformed sphalerons and normal modes

    International Nuclear Information System (INIS)

    Brihaye, Y.; Kunz, J.; Oldenburg Univ.

    1992-01-01

    Topological arguments suggest that the Weinberg-Salam model possesses unstable solutions, sphalerons, representing the top of energy barriers between inequivalent vacua of the gauge theory. In the limit of vanishing Weinberg angle, such unstable solutions are known: the sphaleron of Klinkhamer and Manton and, at large values of the Higgs mass, in addition the deformed sphalerons. Here a systematic study of the discrete normal modes about these sphalerons for the full range of the Higgs mass is presented. The emergence of deformed sphalerons at critical values of the Higgs mass is seen to be related to the crossing through zero of the eigenvalue of particular normal modes about the sphaleron. 6 figs., 1 tab., 19 refs. (author)

  1. Normal mode analysis for linear resistive magnetohydrodynamics

    International Nuclear Information System (INIS)

    Kerner, W.; Lerbinger, K.; Gruber, R.; Tsunematsu, T.

    1984-10-01

    The compressible, resistive MHD equations are linearized around an equilibrium with cylindrical symmetry and solved numerically as a complex eigenvalue problem. This normal mode code allows one to solve for very small resistivity, η ~ 10⁻¹⁰. The scaling of growth rates and layer width agrees very well with analytical theory. In particular, both the influence of current and pressure on the instabilities is studied in detail; the effect of resistivity on the ideally unstable internal kink is analyzed. (orig.)

  2. Normal modes of vibration in nickel

    Energy Technology Data Exchange (ETDEWEB)

    Birgeneau, R J [Yale Univ., New Haven, Connecticut (United States); Cordes, J [Cambridge Univ., Cambridge (United Kingdom); Dolling, G; Woods, A D B

    1964-07-01

    The frequency-wave-vector dispersion relation, ν(q), for the normal vibrations of a nickel single crystal at 296 K has been measured for the [ζ00], [ζζ0], [ζζζ], and [0ζ1] symmetric directions using inelastic neutron scattering. The results can be described in terms of the Born-von Karman theory of lattice dynamics with interactions out to fourth-nearest neighbors. The shapes of the dispersion curves are very similar to those of copper, the normal mode frequencies in nickel being about 1.24 times the corresponding frequencies in copper. The fourth-neighbor model was used to calculate the frequency distribution function g(ν) and related thermodynamic properties. (author)

  3. Normal mode-guided transition pathway generation in proteins.

    Directory of Open Access Journals (Sweden)

    Byung Ho Lee

    Full Text Available The biological function of proteins is closely related to their structural motion. For instance, structurally misfolded proteins do not function properly. Although we are able to experimentally obtain structural information on proteins, it is still challenging to capture their dynamics, such as transition processes. Therefore, we need a simulation method to predict the transition pathways of a protein in order to understand and study large functional deformations. Here, we present a new simulation method called normal mode-guided elastic network interpolation (NGENI), which performs normal mode analysis iteratively to predict transition pathways of proteins. To be more specific, NGENI obtains displacement vectors that determine intermediate structures by interpolating the distance between two end-point conformations, similar to a morphing method called elastic network interpolation. However, the displacement vector is regarded as a linear combination of the normal mode vectors of each intermediate structure, in order to enhance the physical sense of the proposed pathways. As a result, we can generate more reasonable transition pathways geometrically and thermodynamically. Not only with all normal modes but even with only a subset of the lowest normal modes, NGENI can still generate reasonable pathways for large deformations in proteins. This study shows that global protein transitions are dominated by collective motion, which means that a few of the lowest normal modes play an important role in this process. NGENI has considerable merit in terms of computational cost because it can generate transition pathways using only partial degrees of freedom, while conventional methods are not capable of this.
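A schematic of the mode-guided displacement idea (not the NGENI implementation itself: here a fixed random orthonormal basis stands in for the normal modes recomputed at each intermediate structure, and all names are ours):

```python
import numpy as np

def mode_guided_step(x, x_target, modes, step=0.1):
    """One schematic iteration: express the displacement toward the target
    as a linear combination of the supplied mode vectors, so the pathway
    follows soft collective directions rather than straight-line morphing."""
    coeffs = modes.T @ (x_target - x)   # project displacement on the modes
    return x + step * (modes @ coeffs)

# Toy system: two conformations in a 6-dimensional coordinate space;
# "modes" = 3 columns of a random orthonormal basis (placeholder for
# the lowest normal modes of an intermediate structure).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
modes = Q[:, :3]

x0 = rng.standard_normal(6)
x1 = x0 + modes @ np.array([1.0, -2.0, 0.5])   # target reachable via modes

x = x0.copy()
for _ in range(200):
    x = mode_guided_step(x, x1, modes)
```

Because the target displacement lies in the span of the chosen modes, the iteration converges to the end-point conformation along collective directions only.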

  4. CASTOR: Normal-mode analysis of resistive MHD plasmas

    NARCIS (Netherlands)

    Kerner, W.; Goedbloed, J. P.; Huysmans, G. T. A.; Poedts, S.; Schwarz, E.

    1998-01-01

    The CASTOR (complex Alfven spectrum of toroidal plasmas) code computes the entire spectrum of normal modes in resistive MHD for general tokamak configurations. The applied Galerkin method, in conjunction with a Fourier finite-element discretisation, leads to a large-scale eigenvalue problem A (x)

  5. On normal modes and dispersive properties of plasma systems

    International Nuclear Information System (INIS)

    Weiland, J.

    1976-01-01

    The description of nonlinear wave phenomena in terms of normal modes, as opposed to electric fields, is discussed. The possibility of defining higher order normal modes is pointed out, and the field energy is expressed in terms of the normal mode and the electric field. (Auth.)

  6. Fast Eigensolver for Computing 3D Earth's Normal Modes

    Science.gov (United States)

    Shi, J.; De Hoop, M. V.; Li, R.; Xi, Y.; Saad, Y.

    2017-12-01

    We present a novel parallel computational approach to compute Earth's normal modes. We discretize Earth via an unstructured tetrahedral mesh and apply the continuous Galerkin finite element method to the elasto-gravitational system. To resolve the eigenvalue pollution issue, following the analysis separating the seismic point spectrum, we explicitly utilize a representation of the displacement for describing the oscillations of the non-seismic modes in the fluid outer core. Effectively, we separate out the essential spectrum, which is naturally related to the Brunt-Väisälä frequency. We introduce two Lanczos approaches with polynomial and rational filtering for solving this generalized eigenvalue problem in prescribed intervals. The polynomial filtering technique only accesses the matrix pair through matrix-vector products and is an ideal candidate for solving three-dimensional large-scale eigenvalue problems. The matrix-free scheme allows us to deal with fluid separation and self-gravitation in an efficient way, while the standard shift-and-invert method typically needs an explicit shifted matrix and its factorization. The rational filtering method converges much faster than the standard shift-and-invert procedure when computing all the eigenvalues inside an interval. Both Lanczos approaches solve for the interior eigenvalues extremely accurately compared with the standard eigensolver. In our computational experiments, we compare our results with the radial earth model benchmark, and visualize the normal modes using vector plots to illustrate the properties of the displacements in different modes.
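For context, the shift-and-invert Lanczos baseline that the filtered methods are compared against can be sketched with SciPy's sparse eigensolver (a 1-D Laplacian of ours stands in for the discretized elasto-gravitational operator; size and shift are arbitrary):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy generalized eigenproblem A x = lambda B x (stand-ins for the
# stiffness-like operator and the mass matrix).
n = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = sp.identity(n, format="csc")

# Shift-and-invert Lanczos: the 5 eigenvalues nearest sigma, i.e. interior
# eigenvalues -- the task the filtered approaches accelerate. Note this
# route requires factorizing (A - sigma*B), as remarked in the abstract.
sigma = 1.0
vals, vecs = eigsh(A, k=5, M=B, sigma=sigma, which="LM")

# Analytic spectrum of the 1-D Dirichlet Laplacian for comparison.
exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
nearest = exact[np.argsort(np.abs(exact - sigma))[:5]]
```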

  7. Comparative Study of Various Normal Mode Analysis Techniques Based on Partial Hessians

    OpenAIRE

    GHYSELS, AN; VAN SPEYBROECK, VERONIQUE; PAUWELS, EWALD; CATAK, SARON; BROOKS, BERNARD R.; VAN NECK, DIMITRI; WAROQUIER, MICHEL

    2010-01-01

    Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and...

  8. Multi-scaled normal mode analysis method for dynamics simulation of protein-membrane complexes: A case study of potassium channel gating motion correlations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaokun; Han, Min; Ming, Dengming, E-mail: dming@fudan.edu.cn [Department of Physiology and Biophysics, School of Life Sciences, Fudan University, Shanghai (China)

    2015-10-07

    Membrane proteins play critically important roles in many cellular activities such as ion and small molecule transportation, signal recognition, and transduction. In order to fulfill their functions, these proteins must be placed in different membrane environments, and a variety of protein-lipid interactions may affect their behavior. One of the key effects of protein-lipid interactions is their ability to change the dynamics status of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation to membrane proteins imposed by lipid bilayer membrane fluctuations. In mNMA, channel proteins are simulated at the all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can couple tightly with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems where two channels were separated by different distances. From mNMA calculations, we observed both positive and negative gating correlations between two neighboring channels, and the correlation has a maximum when the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently found maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid resolution feature of mNMA provides atomic dynamics information for key components in the system without requiring excessive computational resources. We expect it to become a convenient simulation tool for ordinary laboratories to study the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.
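Reading correlated motions off mode-weighted covariances, as done here for the gating coordinates, can be sketched generically (a pinned 1-D chain of our own replaces the channel-membrane system):

```python
import numpy as np

# Mode-based correlation map (schematic): invert the Hessian over its
# modes to get the thermal covariance C, then read correlated and
# anti-correlated motions off its normalized entries.
n = 10
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i] += 1.0; H[i + 1, i + 1] += 1.0
    H[i, i + 1] -= 1.0; H[i + 1, i] -= 1.0
H[0, 0] += 1.0; H[-1, -1] += 1.0      # pin both ends: H is now invertible

lam, V = np.linalg.eigh(H)
C = (V / lam) @ V.T                   # covariance  C = V diag(1/lam) V^T
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
```

For this pinned chain the covariance is the discrete Green's function, so all site-site correlations are positive; in the bi-channel systems above, the sign pattern of the off-diagonal blocks is what distinguishes positive from negative gating correlation.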

  9. Normal mode approach to modelling of feedback stabilization of the resistive wall mode

    International Nuclear Information System (INIS)

    Chu, M.S.; Chance, M.S.; Okabayashi, M.; Glasser, A.H.

    2003-01-01

    Feedback stabilization of the resistive wall mode (RWM) of a plasma in a general feedback configuration is formulated in terms of the normal modes of the plasma-resistive wall system. The growth/damping rates and the eigenfunctions of the normal modes are determined by an extended energy principle for the plasma during its open (feedback) loop operation. A set of equations is derived for the time evolution of these normal modes with currents in the feedback coils. The dynamics of the feedback system is completed by the prescription of the feedback logic. The feasibility of the feedback is evaluated using the Nyquist diagram method or by solving the characteristic equations. The elements of the characteristic equations are formed from the growth and damping rates of the normal modes, the sensor matrix of the perturbation fluxes detected by the sensor loops, the excitation matrix of the energy input to the normal modes by the external feedback coils, and the feedback logic. (The RWM is also predicted to be excited to a large amplitude by an external error field when it is close to marginal stability.) This formulation has been implemented numerically and applied to the DIII-D tokamak. It is found that feedback with poloidal sensors is much more effective than feedback with radial sensors. Using radial sensors, increasing the number of feedback coils from a central band on the outboard side to include an upper and a lower band can substantially increase the effectiveness of the feedback system; the strength of the RWM that can be stabilized is increased from γτ_w = 1 to γτ_w = 30 (γ is the growth rate of the RWM in the absence of feedback and τ_w is the resistive wall time constant). Using poloidal sensors, just one central band of feedback coils is sufficient for stabilization of the RWM with γτ_w = 30. (author)

  10. Wormhole potentials and throats from quasi-normal modes

    Science.gov (United States)

    Völkel, Sebastian H.; Kokkotas, Kostas D.

    2018-05-01

    Exotic compact objects refer to a wide class of black hole alternatives or effective models that describe phenomenologically quantum gravitational effects on the horizon scale. In this work we show how knowledge of the quasi-normal mode spectrum of non-rotating wormhole models can be used to reconstruct the effective potential that appears in the perturbation equations. From this it is further possible to obtain the parameters that characterize the specific wormhole model, which in this paper was chosen to be the one by Damour and Solodukhin. We also address the question of whether one can distinguish such wormholes from ultracompact stars if only the quasi-normal mode spectrum is known. We show that this is not possible using the trapped modes alone; additional information is required. The inverse method presented here is an extension of work that has previously been developed and applied to the oscillation spectra of ultracompact stars and gravastars. However, it is not limited to the study of exotic compact objects, but is applicable to symmetric double-barrier potentials that appear in one-dimensional wave equations. It may therefore be of interest to other fields as well.

  11. On matrix superpotential and three-component normal modes

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, R. de Lima [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Lima, A.F. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Dept. de Fisica; Mello, E.R. Bezerra de; Bezerra, V.B. [Universidade Federal da Paraiba (UFPB), Joao Pessoa, PB (Brazil). Dept. de Fisica]. E-mails: rafael@df.ufcg.edu.br; aerlima@df.ufcg.edu.br; emello@fisica.ufpb.br; valdir@fisica.ufpb.br

    2007-07-01

    We consider the supersymmetric quantum mechanics (SUSY QM) with three-component normal modes for the Bogomol'nyi-Prasad-Sommerfield (BPS) states. An explicit form of the SUSY QM matrix superpotential is presented and the corresponding three-component bosonic zero-mode eigenfunction is investigated. (author)

  12. Comparative Investigation of Normal Modes and Molecular Dynamics of Hepatitis C NS5B Protein

    International Nuclear Information System (INIS)

    Asafi, M S; Tekpinar, M; Yildirim, A

    2016-01-01

    Understanding the dynamics of proteins has many practical implications for finding cures for protein-related diseases. Normal mode analysis and molecular dynamics are widely used physics-based computational methods for investigating the dynamics of proteins. In this work, we studied the dynamics of the Hepatitis C NS5B protein with molecular dynamics and normal mode analysis. Principal components obtained from a 100 ns molecular dynamics simulation show good overlaps with normal modes calculated with a coarse-grained elastic network model. Coarse-grained normal mode analysis requires at least an order of magnitude less computation time. Encouraged by these good overlaps and short computation times, we further analyzed the low-frequency normal modes of Hepatitis C NS5B. Motion directions and average spatial fluctuations have been analyzed in detail. Finally, the biological implications of these motions for drug design efforts against Hepatitis C infections are elaborated. (paper)
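The overlap measure used in such NMA-versus-MD comparisons can be sketched generically: project a principal component onto a set of orthonormal normal modes and accumulate the squared projections. The synthetic vectors below are stand-ins for real trajectory data.

```python
import numpy as np

def cumulative_overlap(pc, modes):
    """Cumulative overlap of a unit principal component `pc` (3N-vector)
    with normal `modes` (orthonormal columns): sqrt of the running sum of
    squared individual overlaps |v_i . pc|."""
    proj = modes.T @ pc
    return np.sqrt(np.cumsum(proj ** 2))

rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((30, 10)))  # 10 orthonormal "modes"
pc = basis @ rng.standard_normal(10)
pc /= np.linalg.norm(pc)            # a PC lying entirely inside the mode subspace
co = cumulative_overlap(pc, basis)
print(round(co[-1], 6))             # -> 1.0 : the mode set captures the PC fully
```

A cumulative overlap near 1 with only a few low-frequency modes is the quantitative signature of the "good overlaps" reported above.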

  13. A New Normal Form for Multidimensional Mode Conversion

    International Nuclear Information System (INIS)

    Tracy, E. R.; Richardson, A. S.; Kaufman, A. N.; Zobin, N.

    2007-01-01

    Linear conversion occurs when two wave types, with distinct polarization and dispersion characteristics, are locally resonant in a nonuniform plasma [1]. In recent work, we have shown how to incorporate a ray-based (WKB) approach to mode conversion in numerical algorithms [2,3]. The method uses the ray geometry in the conversion region to guide the reduction of the full N×N system of wave equations to a 2×2 coupled pair, which can be solved and matched to the incoming and outgoing WKB solutions. The algorithm in [2] assumes the ray geometry is hyperbolic and that, in ray phase space, there is an 'avoided crossing', which is the most common type of conversion. Here, we present a new formulation that can deal with more general types of conversion [4]. This formalism is based upon the fact (first proved in [5]) that it is always possible to put the 2×2 wave equation into a 'normal' form, such that the diagonal elements of the dispersion matrix Poisson-commute with the off-diagonals (at leading order). Therefore, if we use the diagonals (rather than the eigenvalues or the determinant) of the dispersion matrix as ray Hamiltonians, the off-diagonals will be conserved quantities. When cast into normal form, the 2×2 dispersion matrix has a very natural physical interpretation: the diagonals are the uncoupled ray Hamiltonians and the off-diagonals are the coupling. We discuss how to incorporate the normal form into ray tracing algorithms.

  14. [Raman, FTIR spectra and normal mode analysis of acetanilide].

    Science.gov (United States)

    Liang, Hui-Qin; Tao, Ya-Ping; Han, Li-Gang; Han, Yun-Xia; Mo, Yu-Jun

    2012-10-01

    The Raman and FTIR spectra of acetanilide (ACN) were measured experimentally in the regions of 3500-50 cm⁻¹ and 3500-600 cm⁻¹, respectively. The equilibrium geometry and vibrational frequencies of ACN were calculated with the density functional theory (DFT) method (B3LYP/6-311G(d,p)). The calculated molecular structure parameters are in good agreement with previous reports and better than those obtained with the 6-31G(d) basis set, and the calculated frequencies agree well with the experimental ones. The potential energy distribution of each frequency was worked out by normal mode analysis, and on this basis a detailed and accurate vibrational frequency assignment of ACN was obtained.

  15. The energy spectrum of electromagnetic normal modes in dissipative media: modes between two metal half spaces

    International Nuclear Information System (INIS)

    Sernelius, Bo E

    2008-01-01

    The energy spectrum of electromagnetic normal modes plays a central role in the theory of the van der Waals and Casimir interactions. Here we study the modes in connection with the van der Waals interaction between two metal half spaces. Neglecting dissipation leads to distinct normal modes with real-valued frequencies. Including dissipation seems to have the effect that these distinct modes move away from the real axis into the complex frequency plane. The summation of the zero-point energies of these modes renders a complex-valued result. Contour integration based on the generalized argument principle instead gives a real-valued, different result. We resolve this contradiction and show that the spectrum of true normal modes forms a continuum with real frequencies.
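The argument-principle bookkeeping behind such contour calculations can be demonstrated numerically: for an analytic "dispersion function" f, the integral (1/2πi) ∮ f'/f dz counts the mode zeros enclosed by the contour. The quadratic f below is an arbitrary stand-in for a real dispersion relation.

```python
import numpy as np

def count_zeros(f, df, center=0.0, radius=2.0, n=4000):
    """Number of zeros of analytic f inside a circle, via the argument
    principle: (1/2*pi*i) * contour integral of f'(z)/f(z)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / n)   # trapezoid on a circle
    integral = np.sum(df(z) / f(z) * dz)
    return int(round((integral / (2j * np.pi)).real))

# Toy "dispersion function" with two real-frequency modes at z = +/- 1:
f  = lambda z: z**2 - 1.0
df = lambda z: 2.0 * z
print(count_zeros(f, df))   # -> 2
```

Because the integrand is periodic on the circle, the uniform trapezoid rule converges extremely fast, so the rounded count is exact as long as no zero sits on the contour.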

  16. Quasi-Normal Modes of Stars and Black Holes

    Directory of Open Access Journals (Sweden)

    Kokkotas Kostas

    1999-01-01

    Perturbations of stars and black holes have been one of the main topics of relativistic astrophysics for the last few decades. They are of particular importance today, because of their relevance to gravitational wave astronomy. In this review we present the theory of quasi-normal modes of compact objects from both the mathematical and astrophysical points of view. The discussion includes perturbations of black holes (Schwarzschild, Reissner-Nordström, Kerr and Kerr-Newman) and relativistic stars (non-rotating and slowly-rotating). The properties of the various families of quasi-normal modes are described, and numerical techniques for calculating quasi-normal modes reviewed. The successes, as well as the limits, of perturbation theory are presented, and its role in the emerging era of numerical relativity and supercomputers is discussed.

  17. Normal-Mode Splitting in a Weakly Coupled Optomechanical System

    Science.gov (United States)

    Rossi, Massimiliano; Kralj, Nenad; Zippilli, Stefano; Natali, Riccardo; Borrielli, Antonio; Pandraud, Gregory; Serra, Enrico; Di Giuseppe, Giovanni; Vitali, David

    2018-02-01

    Normal-mode splitting is the most evident signature of strong coupling between two interacting subsystems. It occurs when two subsystems exchange energy between themselves faster than they dissipate it to the environment. Here we experimentally show that a weakly coupled optomechanical system at room temperature can manifest normal-mode splitting when the pump field fluctuations are antisquashed by a phase-sensitive feedback loop operating close to its instability threshold. Under these conditions the optical cavity exhibits an effectively reduced decay rate, so that the system is effectively promoted to the strong coupling regime.

  18. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
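A minimal version of the MOG-plus-AIC baseline the paper compares against might look like the following: a from-scratch 1-D EM fit on synthetic bimodal data, with the AIC choosing the component count. The quantile initialization and iteration count are illustrative choices, not the paper's implementation.

```python
import numpy as np

def gmm_aic(x, k, iters=300):
    """Fit a 1-D Gaussian mixture with k components by EM and return the
    AIC = 2p - 2 log L, with p = 3k - 1 free parameters."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic spread init
    var, w = np.full(k, x.var()), np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: per-point component densities and responsibilities
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    loglik = np.log(dens.sum(axis=1)).sum()
    return 2 * (3 * k - 1) - 2 * loglik

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-4, 1, 400), rng.normal(4, 1, 400)])
best_k = min((1, 2), key=lambda k: gmm_aic(x, k))
print(best_k)   # -> 2 : AIC prefers two modes for the bimodal sample
```

Scanning a wider range of k and keeping the AIC minimizer is the usual model-order selection loop this sketch abbreviates.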

  19. SwarmDock and the Use of Normal Modes in Protein-Protein Docking

    Directory of Open Access Journals (Sweden)

    Paul A. Bates

    2010-09-01

    Presented here is an investigation of the use of normal modes in protein-protein docking, both in theory and in practice. Upper limits on the ability of normal modes to capture the unbound-to-bound conformational change are calculated on a large test set, with particular focus on the binding interface, the subset of residues from which the binding energy is calculated. Further, the SwarmDock algorithm is presented, to demonstrate that modelling conformational change as a linear combination of normal modes is an effective method of modelling flexibility in protein-protein docking.
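Modelling conformational change as a linear combination of normal modes amounts to a single matrix-vector product. A toy sketch follows, with a hypothetical 4-bead structure and hand-made modes rather than SwarmDock's actual mode vectors.

```python
import numpy as np

def deform(coords, modes, amplitudes):
    """Displace a structure along a linear combination of normal modes.
    `coords`: (N, 3) positions; `modes`: (3N, m) orthonormal columns;
    `amplitudes`: (m,) mode coefficients."""
    disp = modes @ np.asarray(amplitudes)      # 3N-vector displacement
    return coords + disp.reshape(-1, 3)

# Toy 4-bead structure and two trivially orthonormal "modes".
coords = np.zeros((4, 3))
modes = np.zeros((12, 2))
modes[0, 0] = 1.0          # mode 1: bead 0 along x
modes[4, 1] = 1.0          # mode 2: bead 1 along y
new = deform(coords, modes, [0.5, -0.2])
print(new[0, 0], new[1, 1])   # -> 0.5 -0.2
```

In a docking search, the amplitudes become extra degrees of freedom optimized alongside the rigid-body position and orientation.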

  20. Quasi-normal modes from non-commutative matrix dynamics

    Science.gov (United States)

    Aprile, Francesco; Sanfilippo, Francesco

    2017-09-01

    We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single trace observables, we isolate the lowest quasi-normal mode, and we determine its frequencies as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.
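Isolating the lowest quasi-normal mode from a relaxation signal is essentially damped-sinusoid fitting. Below is a numpy-only sketch on a synthetic ring-down; the frequency and damping values are arbitrary, and this peak-based estimator is much simpler than what a matrix-model simulation would require.

```python
import numpy as np

# Synthetic ring-down of a single quasi-normal mode:
# s(t) = exp(-gamma t) * cos(omega t), with omega = 2.0 and gamma = 0.1.
t = np.arange(0.0, 50.0, 0.01)
s = np.exp(-0.1 * t) * np.cos(2.0 * t)

# Locate the local maxima of |s|: successive extrema are pi/omega apart,
# and their envelope decays as exp(-gamma t).
a = np.abs(s)
peaks = np.where((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:]))[0] + 1

omega_est = np.pi / np.diff(t[peaks]).mean()
gamma_est = -np.polyfit(t[peaks], np.log(a[peaks]), 1)[0]
print(round(omega_est, 2), round(gamma_est, 2))   # -> 2.0 0.1
```

The pair (omega_est, gamma_est) is the real and imaginary part of the quasi-normal frequency; scanning it against the quench energy reproduces the kind of frequency-versus-energy curve discussed above.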

  1. Comparative study of various normal mode analysis techniques based on partial Hessians.

    Science.gov (United States)

    Ghysels, An; Van Speybroeck, Veronique; Pauwels, Ewald; Catak, Saron; Brooks, Bernard R; Van Neck, Dimitri; Waroquier, Michel

    2010-04-15

    Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and application field but guidelines for the most suitable choice are lacking. We have investigated several partial Hessian methods, including the Partial Hessian Vibrational Analysis (PHVA), the Mobile Block Hessian (MBH), and the Vibrational Subsystem Analysis (VSA). In this article, we focus on the benefits and drawbacks of these methods, in terms of the reproduction of localized modes, collective modes, and the performance in partially optimized structures. We find that the PHVA is suitable for describing localized modes, that the MBH not only reproduces localized and global modes but also serves as an analysis tool of the spectrum, and that the VSA is mostly useful for the reproduction of the low frequency spectrum. These guidelines are illustrated with the reproduction of the localized amine-stretch, the spectrum of quinine and a bis-cinchona derivative, and the low frequency modes of the LAO binding protein. 2009 Wiley Periodicals, Inc.
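The core idea of a partial Hessian scheme such as PHVA — diagonalize only the sub-block of the (mass-weighted) Hessian belonging to the "active" atoms, with the rest held fixed — can be shown on a toy 1-D spring chain, an illustrative stand-in for a molecular Hessian. By Cauchy interlacing, the partial frequencies never exceed the maximum of the full spectrum.

```python
import numpy as np

def chain_hessian(n, ks=1.0):
    """Mass-weighted Hessian of a free 1-D chain: n unit masses, springs ks."""
    h = 2 * ks * np.eye(n) - ks * np.eye(n, k=1) - ks * np.eye(n, k=-1)
    h[0, 0] = h[-1, -1] = ks           # the two free ends have a single spring
    return h

H = chain_hessian(8)
# PHVA-style partial Hessian: atoms 0-3 active, atoms 4-7 held fixed,
# i.e. keep only the active-active sub-block of H.
freq_full = np.sqrt(np.linalg.eigvalsh(H).clip(0))
freq_phva = np.sqrt(np.linalg.eigvalsh(H[:4, :4]).clip(0))
print(len(freq_phva), freq_phva[-1] <= freq_full[-1])   # -> 4 True
```

The reduced problem has only as many modes as active coordinates, which is the computational saving; the trade-off, as the study above details, is how faithfully those modes reproduce localized versus collective motions.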

  2. Vocal fold contact patterns based on normal modes of vibration.

    Science.gov (United States)

    Smith, Simeon L; Titze, Ingo R

    2018-05-17

    The fluid-structure interaction and energy transfer from respiratory airflow to self-sustained vocal fold oscillation continues to be a topic of interest in vocal fold research. Vocal fold vibration is driven by pressures on the vocal fold surface, which are determined by the shape of the glottis and the contact between vocal folds. Characterization of three-dimensional glottal shapes and contact patterns can lead to increased understanding of normal and abnormal physiology of the voice, as well as to development of improved vocal fold models, but a large inventory of shapes has not been directly studied previously. This study aimed to take an initial step toward characterizing vocal fold contact patterns systematically. Vocal fold motion and contact was modeled based on normal mode vibration, as it has been shown that vocal fold vibration can be almost entirely described by only the few lowest order vibrational modes. Symmetric and asymmetric combinations of the four lowest normal modes of vibration were superimposed on left and right vocal fold medial surfaces, for each of three prephonatory glottal configurations, according to a surface wave approach. Contact patterns were generated from the interaction of modal shapes at 16 normalized phases during the vibratory cycle. Eight major contact patterns were identified and characterized by the shape of the flow channel, with the following descriptors assigned: convergent, divergent, convergent-divergent, uniform, split, merged, island, and multichannel. Each of the contact patterns and its variation are described, and future work and applications are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Normal Modes of Magnetized Finite Two-Dimensional Yukawa Crystals

    Science.gov (United States)

    Marleau, Gabriel-Dominique; Kaehlert, Hanno; Bonitz, Michael

    2009-11-01

    The normal modes of a finite two-dimensional dusty plasma in an isotropic parabolic confinement, including the simultaneous effects of friction and an external magnetic field, are studied. The ground states are found from molecular dynamics simulations with simulated annealing, and the influence of screening, friction, and magnetic field on the mode frequencies is investigated in detail. The two-particle problem is solved analytically and the limiting cases of weak and strong magnetic fields are discussed. [1] C. Henning, H. Kählert, P. Ludwig, A. Melzer, and M. Bonitz, J. Phys. A 42, 214023 (2009). [2] B. Farokhi, M. Shahmansouri, and P. K. Shukla, Phys. Plasmas 16, 063703 (2009). [3] L. Cândido, J.-P. Rino, N. Studart, and F. M. Peeters, J. Phys.: Condens. Matter 10, 11627-11644 (1998).

  4. Modal analysis of inter-area oscillations using the theory of normal modes

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt, R.J. [School of Electromechanical Engineering, University of Colima, Manzanillo, Col. 28860 (Mexico); Barocio, E. [CUCEI, University of Guadalajara, Guadalajara, Jal. 44480 (Mexico); Messina, A.R. [Graduate Program in Electrical Engineering, Cinvestav, Guadalajara, Jal. 45015 (Mexico); Martinez, I. [State Autonomous University of Mexico, Toluca, Edo. Mex. 50110 (Mexico)

    2009-04-15

    Based on the notion of normal modes in mechanical systems, a method is proposed for the analysis and characterization of oscillatory processes in power systems. The method is based on the property of invariance of modal subspaces and can be used to represent complex power system modal behavior by a set of decoupled, two-degree-of-freedom nonlinear oscillator equations. Using techniques from nonlinear mechanics, a new approach is outlined, for determining the normal modes (NMs) of motion of a general n-degree-of-freedom nonlinear system. Equations relating the normal modes and the physical velocities and displacements are developed from the linearized system model and numerical issues associated with the application of the technique are discussed. In addition to qualitative insight, this method can be utilized in the study of nonlinear behavior and bifurcation analyses. The application of these procedures is illustrated on a planning model of the Mexican interconnected system using a quadratic nonlinear model. Specifically, the use of normal mode analysis as a basis for identifying modal parameters, including natural frequencies and damping ratios of general, linear systems with n degrees of freedom is discussed. Comparisons to conventional linear analysis techniques demonstrate the ability of the proposed technique to extract the different oscillation modes embedded in the oscillation. (author)
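For the linear part of such an analysis, modal frequencies and damping ratios follow directly from the eigenvalues of the linearized state matrix. The single-oscillator sketch below is generic small-signal analysis, not the paper's Mexican-interconnection planning model.

```python
import numpy as np

def modal_params(A):
    """Natural frequencies (Hz) and damping ratios from the eigenvalues
    lambda = sigma +/- j*omega of a linearized system x' = A x."""
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag > 0]            # keep one member of each conjugate pair
    freq = np.abs(lam) / (2 * np.pi)   # undamped natural frequency
    zeta = -lam.real / np.abs(lam)     # damping ratio
    return freq, zeta

# One damped oscillator: x'' + 2*z*wn*x' + wn^2*x = 0, wn = 2*pi rad/s, z = 0.05.
wn, z = 2 * np.pi, 0.05
A = np.array([[0.0, 1.0], [-wn**2, -2 * z * wn]])
f, zeta = modal_params(A)
print(round(f[0], 3), round(zeta[0], 3))   # -> 1.0 0.05
```

For an n-degree-of-freedom power system, A is block-structured and each complex pair corresponds to one electromechanical oscillation mode; the nonlinear normal-mode machinery described above refines this linear picture.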

  5. Conjugate gradient filtering of instantaneous normal modes, saddles on the energy landscape, and diffusion in liquids.

    Science.gov (United States)

    Chowdhary, J; Keyes, T

    2002-02-01

    Instantaneous normal modes (INMs) are calculated during a conjugate-gradient (CG) descent of the potential energy landscape, starting from an equilibrium configuration of a liquid or crystal. A small number (≈ 4) of CG steps removes all the Im-ω modes in the crystal and leaves the liquid with diffusive Im-ω modes which accurately represent the self-diffusion constant D. Conjugate gradient filtering appears to be a promising method, applicable to any system, of obtaining diffusive modes and facilitating an INM theory of D. The relation of the CG-step-dependent INM quantities to the landscape and its saddles is discussed.
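The CG-filtering idea — quench a configuration by conjugate-gradient descent, then count the remaining imaginary-frequency (negative-curvature) modes — can be sketched on a two-dimensional double-well toy landscape. This uses scipy's generic CG minimizer, not the paper's molecular code, and the potential and thresholds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy landscape: V = (x^2 - 1)^2 + y^2. Near x = 0 the curvature along x is
# negative, so the starting point carries one imaginary-frequency mode.
V = lambda p: (p[0]**2 - 1)**2 + p[1]**2

def hessian(p, h=1e-4):
    """Numerical Hessian of V by central finite differences."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (V(p + ei + ej) - V(p + ei - ej)
                       - V(p - ei + ej) + V(p - ei - ej)) / (4 * h * h)
    return H

def n_imaginary(p):
    """Count negative Hessian eigenvalues (imaginary INM frequencies) at p."""
    return int(np.sum(np.linalg.eigvalsh(hessian(p)) < -1e-6))

start = np.array([0.1, 0.5])
quenched = minimize(V, start, method='CG').x     # CG descent to a minimum
print(n_imaginary(start), n_imaginary(quenched))  # -> 1 0
```

In the liquid-state setting the descent is truncated after a few CG steps rather than run to convergence, which is exactly the filtering knob the abstract describes.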

  6. Neutrino induced vorticity, Alfven waves and the normal modes

    Energy Technology Data Exchange (ETDEWEB)

    Bhatt, Jitesh R. [Theory Division, Physical Research Laboratory, Ahmedabad (India); George, Manu [Theory Division, Physical Research Laboratory, Ahmedabad (India); Indian Institute of Technology, Department of Physics, Ahmedabad (India)

    2017-08-15

    We consider a plasma consisting of electrons and ions in the presence of a background neutrino gas and develop the magnetohydrodynamic equations for the system. We show that the electron-neutrino interaction can induce vorticity in the plasma even in the absence of any electromagnetic perturbations if the background neutrino density is left-right asymmetric. This induced vorticity supports a new kind of Alfven wave whose velocity depends on both the external magnetic field and the neutrino asymmetry. The normal mode analysis shows that in the presence of a neutrino background the Alfven waves can have different velocities. We also discuss our results in the context of dense astrophysical plasmas such as magnetars and show that the difference in the Alfven velocities can be used to explain the observed pulsar kick. We also discuss the relativistic generalization of the electron fluid in the presence of an asymmetric neutrino background. (orig.)

  7. "Good Vibrations": A workshop on oscillations and normal modes

    Science.gov (United States)

    Barbieri, Sara; Carpineti, Marina; Giliberti, Marco; Rigon, Enrico; Stellato, Marco; Tamborini, Marina

    2016-05-01

    We describe some theatrical strategies adopted in a two-hour workshop in order to show some meaningful experiments and the underlying ideas of a secondary school path on oscillations, which develops from harmonic motion to normal modes of oscillation and makes extensive use of video analysis, data logging, slow motion, and applet simulations. Theatre is an extremely useful tool to stimulate motivation starting from positive emotions. That is the reason why the theatrical approach to the presentation of physical themes has been explored by the group "Lo spettacolo della Fisica" (http://spettacolo.fisica.unimi.it) of the Physics Department of the University of Milano for the last ten years (Carpineti et al., JCOM, 10 (2011) 1; Nuovo Cimento B, 121 (2006) 901) and has also been adopted in the European FP7 Project TEMI (Teaching Enquiry with Mysteries Incorporated, see http://teachingmysteries.eu/en), which involves 13 partners from 11 European countries, among them the Italian (Milan) group. Following the TEMI guidelines, this workshop has a written script based on the emotionally engaging activity of presenting mysteries to be solved, while participants are involved in appealing experiments along the developed path.

  8. Antidepressants normalize the default mode network in patients with dysthymia.

    Science.gov (United States)

    Posner, Jonathan; Hellerstein, David J; Gat, Inbal; Mechling, Anna; Klahr, Kristin; Wang, Zhishun; McGrath, Patrick J; Stewart, Jonathan W; Peterson, Bradley S

    2013-04-01

    The default mode network (DMN) is a collection of brain regions that reliably deactivate during goal-directed behaviors and is more active during a baseline, or so-called resting, condition. Coherence of neural activity, or functional connectivity, within the brain's DMN is increased in major depressive disorder relative to healthy control (HC) subjects; however, whether similar abnormalities are present in persons with dysthymic disorder (DD) is unknown. Moreover, the effect of antidepressant medications on DMN connectivity in patients with DD is also unknown. To use resting-state functional-connectivity magnetic resonance imaging (MRI) to study (1) the functional connectivity of the DMN in subjects with DD vs HC participants and (2) the effects of antidepressant therapy on DMN connectivity. After collecting baseline MRI scans from subjects with DD and HC participants, we enrolled the participants with DD into a 10-week prospective, double-blind, placebo-controlled trial of duloxetine and collected MRI scans again at the conclusion of the study. Enrollment occurred between 2007 and 2011. University research institute. Volunteer sample of 41 subjects with DD and 25 HC participants aged 18 to 53 years. Control subjects were group matched to patients with DD by age and sex. We used resting-state functional-connectivity MRI to measure the functional connectivity of the brain's DMN in persons with DD compared with HC subjects, and we examined the effects of treatment with duloxetine vs placebo on DMN connectivity. Of the 41 subjects with DD, 32 completed the clinical trial and MRI scans, along with the 25 HC participants. At baseline, we found that the coherence of neural activity within the brain's DMN was greater in persons with DD compared with HC subjects. Following a 10-week clinical trial, we found that treatment with duloxetine, but not placebo, normalized DMN connectivity. The baseline imaging findings are consistent with those found in patients with major

  9. Perturbations and quasi-normal modes of black holes in Einstein-Aether theory

    International Nuclear Information System (INIS)

    Konoplya, R.A.; Zhidenko, A.

    2007-01-01

    We develop a new method for the calculation of quasi-normal modes of black holes for cases where the effective potential that governs black hole perturbations is known only numerically in some region near the black hole. This method can be applied to perturbations of a wide class of numerical black hole solutions. We apply it to black holes in the Einstein-Aether theory, a theory in which general relativity is coupled to a unit time-like vector field, in order to study local Lorentz symmetry violation. We find that in the non-reduced Einstein-Aether theory, the real oscillation frequencies and damping rates of the quasi-normal modes are larger than those of Schwarzschild black holes in Einstein theory.

  10. Optimization of hardening/softening behavior of plane frame structures using nonlinear normal modes

    DEFF Research Database (Denmark)

    Dou, Suguang; Jensen, Jakob Søndergaard

    2016-01-01

    Devices that exploit essential nonlinear behavior such as hardening/softening and inter-modal coupling effects are increasingly used in engineering and fundamental studies. Based on nonlinear normal modes, we present a gradient-based structural optimization method for tailoring the hardening...... involving plane frame structures where the hardening/softening behavior is qualitatively and quantitatively tuned by simple changes in the geometry of the structures....

  11. Evaluation of diaphragmatic motion in normal and diaphragmatic paralyzed dogs using M-mode ultrasonography.

    Science.gov (United States)

    Choi, Mihyun; Lee, Namsoon; Kim, Ahyoung; Keh, Seoyeon; Lee, Jinsoo; Kim, Hyunwook; Choi, Mincheol

    2014-01-01

    Diagnosis of unilateral diaphragmatic paralysis in dogs is currently based on fluoroscopic detection of unequal movement between the crura. Bilateral paralysis may be more difficult to confirm with fluoroscopy because diaphragmatic movement is sometimes produced by compensatory abdominal muscle contractions. The purpose of this study was to develop a new method to evaluate diaphragmatic movement using M-mode ultrasonography and to describe findings for normal and diaphragmatic paralyzed dogs. Fifty-five clinically normal dogs and two dogs with diaphragmatic paralysis were recruited. Thoracic radiographs were acquired for all dogs and fluoroscopy studies were also acquired for clinically affected dogs. Two observers independently measured diaphragmatic direction of motion and amplitude of excursion using M-mode ultrasonography for dogs meeting study inclusion criteria. Eight of the clinically normal dogs were excluded due to abnormal thoracic radiographic findings. For the remaining normal dogs, the lower limit values of diaphragmatic excursion were 2.85-2.98 mm during normal breathing. One dog with bilateral diaphragmatic paralysis showed paradoxical movement of both crura at the end of inspiration. One dog with unilateral diaphragmatic paralysis had diaphragmatic excursion values of 2.00 ± 0.42 mm on the left side and 4.05 ± 1.48 mm on the right side. The difference between left and right diaphragmatic excursion values was 55%. Findings indicated that M-mode ultrasonography is a relatively simple and objective method for measuring diaphragmatic movement in dogs. Future studies are needed in a larger number of dogs with diaphragmatic paralysis to determine the diagnostic sensitivity of this promising new technique. © 2013 American College of Veterinary Radiology.

  12. Effect of cobratoxin binding on the normal mode vibration within acetylcholine binding protein.

    Science.gov (United States)

    Bertaccini, Edward J; Lindahl, Erik; Sixma, Titia; Trudell, James R

    2008-04-01

    Recent crystal structures of the acetylcholine binding protein (AChBP) have revealed surprisingly small structural alterations upon ligand binding. Here we investigate the extent to which ligand binding may affect receptor dynamics. AChBP is a homologue of the extracellular component of ligand-gated ion channels (LGICs). We have previously used an elastic network normal-mode analysis to propose a gating mechanism for the LGICs and to suggest the effects of various ligands on such motions. However, the difficulties with elastic network methods lie in their inability to account for the modest effects of a small ligand or mutation on ion channel motion. Here, we report the successful application of an elastic network normal mode technique to measure the effects of large ligand binding on receptor dynamics. The present calculations demonstrate a clear alteration in the native symmetric motions of a protein due to the presence of large protein cobratoxin ligands. In particular, normal-mode analysis revealed that cobratoxin binding to this protein significantly dampened the axially symmetric motion of the AChBP that may be associated with channel gating in the full nAChR. The results suggest that alterations in receptor dynamics could be a general feature of ligand binding.

  13. Eigenvalue translation method for mode calculations

    International Nuclear Information System (INIS)

    Gerck, E.; Cruz, C.H.B.

    1978-11-01

    A new method is described for calculating the first few modes of an interferometer; it has several advantages over the ALLMAT subroutine, the Prony method, and the Fox and Li method. In the illustrative results shown for the same cases, the eigenvalue translation method is typically 100 times faster than the usual Fox and Li method and 10 times faster than ALLMAT.
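The speed-up from an eigenvalue translation can be seen in a toy power-iteration setting: shifting the spectrum by σ and iterating on A − σI improves the dominant-to-subdominant eigenvalue ratio. The diagonal matrix is an arbitrary stand-in for the interferometer round-trip operator, and σ and the eigenvalues are illustrative choices.

```python
import numpy as np

def power_eig(B, iters):
    """Dominant eigenvalue of B by power iteration with a fixed start vector."""
    v = np.random.default_rng(0).standard_normal(B.shape[0])
    for _ in range(iters):
        v = B @ v
        v /= np.linalg.norm(v)
    return v @ B @ v               # Rayleigh quotient of the iterated vector

A = np.diag([1.00, 0.95, 0.30])    # toy spectrum: two nearly degenerate modes
plain = power_eig(A, 60)           # slow: convergence ratio 0.95 / 1.00
sigma = 0.625                      # translation placing mode 1 well clear
shifted = power_eig(A - sigma * np.eye(3), 60) + sigma   # ratio 0.325 / 0.375
print(abs(plain - 1.0) > abs(shifted - 1.0))   # -> True : the shift converges faster
```

The same principle is what makes a translated iteration resolve closely spaced cavity modes in far fewer round trips than an unshifted Fox-and-Li iteration.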

  14. Scalar-gravitational perturbations and quasi normal modes in the five dimensional Schwarzschild black hole

    International Nuclear Information System (INIS)

    Cardoso, Vitor; Lemos, Jose P.S.; Yoshida, Shijun

    2003-01-01

We calculate the quasi normal modes (QNMs) for gravitational perturbations of the Schwarzschild black hole in the five dimensional (5D) spacetime with a continued fraction method. For all the types of perturbations (scalar-gravitational, vector-gravitational, and tensor-gravitational perturbations), the QNMs associated with l = 2, l = 3, and l = 4 are calculated. Our numerical results are summarized as follows: (i) the three types of gravitational perturbations associated with the same angular quantum number l have different sets of quasi normal (QN) frequencies; (ii) there is no purely imaginary frequency mode; (iii) the three types of gravitational perturbations have the same asymptotic behavior of the QNMs in the limit of large imaginary frequencies, given by ω T_H⁻¹ → log 3 + 2πi(n + 1/2) as n → ∞, where ω, T_H, and n are the oscillation frequency, the Hawking temperature of the black hole, and the mode number, respectively. (author)
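The quoted asymptotic formula for the highly damped overtones can be evaluated directly; a small sketch (the Hawking-temperature value below is an arbitrary placeholder in geometric units):

```python
import cmath

def asymptotic_qnm(T_H, n):
    """Highly damped QNM frequencies from the asymptotic formula quoted in
    the abstract: omega * T_H**-1 -> log 3 + 2*pi*i*(n + 1/2) as n -> inf."""
    return T_H * (cmath.log(3) + 2j * cmath.pi * (n + 0.5))

# the spacing of successive overtones is purely imaginary, 2*pi*i*T_H
T_H = 0.05  # hypothetical Hawking temperature
d = asymptotic_qnm(T_H, 6) - asymptotic_qnm(T_H, 5)
print(abs(d.imag - 2 * cmath.pi * T_H) < 1e-12)
```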

  15. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, including slope correction, text padding, skew correction, and straightening of the writing line. Text normalization therefore plays an important role in procedures such as text segmentation, feature extraction and character recognition. In the present article, a new method for baseline detection, straightening, and slant correction of Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, the components are segmented and thinned; then, the direction features of the skeletons are extracted and the candidate baseline regions are determined; after that, the correct baseline region is selected; and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.
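The paper's baseline detection uses skeleton direction features; as a simplified stand-in, the projection-profile idea that is often used as a first approximation for Arabic script (where most strokes meet the baseline) can be sketched as:

```python
import numpy as np

def baseline_row(binary_img):
    """Estimate the writing baseline of a text-line image as the row with
    the maximum horizontal projection of ink pixels. This is only a crude
    first approximation; the paper's full method works on component
    skeletons and their direction features."""
    ink_per_row = binary_img.sum(axis=1)
    return int(np.argmax(ink_per_row))

# toy image: 20 rows x 50 cols with "ink" concentrated on row 14
img = np.zeros((20, 50), dtype=int)
img[14, 5:45] = 1       # baseline stroke
img[10:14, 20:23] = 1   # an ascender
print(baseline_row(img))  # row 14 carries the most ink
```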

  16. Nonlinear normal vibration modes in the dynamics of nonlinear elastic systems

    International Nuclear Information System (INIS)

    Mikhlin, Yu V; Perepelkin, N V; Klimenko, A A; Harutyunyan, E

    2012-01-01

Nonlinear normal modes (NNMs) are a generalization of linear normal vibrations. In the Kauderer-Rosenberg concept, in the regime of an NNM all position coordinates are single-valued functions of some selected position coordinate. In the Shaw-Pierre concept, an NNM is a regime in which all generalized coordinates and velocities are single-valued functions of a pair of dominant (active) phase variables. The NNM approach is used in some applied problems. In particular, the Kauderer-Rosenberg NNMs are analyzed in the dynamics of some pendulum systems. The NNMs of forced vibrations are investigated in a rotor system with an isotropic-elastic shaft. A combination of the Shaw-Pierre NNMs and the Rauscher method is used to construct the forced NNMs and the frequency responses in the rotor dynamics.

  17. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts.
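One widely used sample-normalization method in MS-based metabolomics (named here as an example, not as the review's recommendation) is probabilistic quotient normalization, which corrects for total sample amount by the median feature-wise ratio to a reference spectrum. A compact sketch, using the mean spectrum as the reference:

```python
import numpy as np

def pqn_normalize(X):
    """Probabilistic quotient normalization: scale each sample (row) by the
    median of its feature-wise ratios to a reference (here the mean)
    spectrum, so that differences in total sample amount do not
    masquerade as metabolite concentration changes."""
    X = np.asarray(X, dtype=float)
    ref = X.mean(axis=0)
    quotients = X / ref
    factors = np.median(quotients, axis=1)   # per-sample dilution factors
    return X / factors[:, None], factors

# sample 1 is the same biology as sample 0 but twice as concentrated
X = np.array([[10.0, 20.0, 30.0, 40.0],
              [20.0, 40.0, 60.0, 80.0]])
Xn, f = pqn_normalize(X)
print(f)  # dilution factors in ratio 1 : 2
```

After normalization the two rows coincide, so any remaining differences between samples reflect individual metabolites rather than total amount.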

  18. From explicit to implicit normal mode initialization of a limited-area model

    Energy Technology Data Exchange (ETDEWEB)

    Bijlsma, S.J.

    2013-02-15

In this note the implicit normal mode initialization of a limited-area model is discussed from a different point of view. To that end it is shown that the equations describing explicit normal mode initialization, applied to the shallow water equations in differentiated form on the sphere, can readily be derived in normal mode space if the model equations are separable, but can be transformed into the implicit equations in physical space only in the case of stationary Rossby modes. This is a consequence of the simple relations between the components of the different modes in that case. In addition, a simple eigenvalue problem is given for the frequencies of the gravity waves. (orig.)

  19. A normal mode treatment of semi-diurnal body tides on an aspherical, rotating and anelastic Earth

    Science.gov (United States)

    Lau, Harriet C. P.; Yang, Hsin-Ying; Tromp, Jeroen; Mitrovica, Jerry X.; Latychev, Konstantin; Al-Attar, David

    2015-08-01

Normal mode treatments of the Earth's body tide response were developed in the 1980s to account for the effects of Earth rotation, ellipticity, anelasticity and resonant excitation within the diurnal band. Recent space-geodetic measurements of the Earth's crustal displacement in response to luni-solar tidal forcings have revealed geographical variations that are indicative of aspherical deep mantle structure, thus providing a novel data set for constraining deep mantle elastic and density structure. In light of this, we make use of advances in seismic free oscillation literature to develop a new, generalized normal mode theory for the tidal response within the semi-diurnal and long-period tidal band. Our theory involves a perturbation method that permits an efficient calculation of the impact of aspherical structure on the tidal response. In addition, we introduce a normal mode treatment of anelasticity that is distinct from both earlier work in body tides and the approach adopted in free oscillation seismology. We present several simple numerical applications of the new theory. First, we compute the tidal response of a spherically symmetric, non-rotating, elastic and isotropic Earth model and demonstrate that our predictions match those based on standard Love number theory. Second, we compute perturbations to this response associated with mantle anelasticity and demonstrate that the usual set of seismic modes adopted for this purpose must be augmented by a family of relaxation modes to accurately capture the full effect of anelasticity on the body tide response. Finally, we explore aspherical effects including rotation and we benchmark results from several illustrative case studies of aspherical Earth structure against independent finite-volume numerical calculations of the semi-diurnal body tide response. These tests confirm the accuracy of the normal mode methodology to at least the level of numerical error in the finite-volume predictions. They also demonstrate …

  20. Modeling guided wave excitation in plates with surface mounted piezoelectric elements: coupled physics and normal mode expansion

    Science.gov (United States)

    Ren, Baiyang; Lissenden, Cliff J.

    2018-04-01

Guided waves have been extensively studied and widely used for structural health monitoring because of their large volumetric coverage and good sensitivity to defects. Effectively and preferentially exciting a desired wave mode having good sensitivity to a certain defect is of great practical importance. Piezoelectric discs and plates are the most common types of surface-mounted transducers for guided wave excitation and reception. Their geometry strongly influences the proportioning between excited modes as well as the total power of the excited modes. It is highly desirable to predominantly excite the selected mode while the total transduction power is maximized. In this work, a fully coupled multi-physics finite element analysis, which incorporates the driving circuit, the piezoelectric element and the waveguide, is combined with the normal mode expansion method to study both the mode tuning and total wave power. The excitation of circular crested waves in an aluminum plate with circular piezoelectric discs is numerically studied for different disc and adhesive thicknesses. Additionally, the excitation of plane waves in an aluminum plate using a strip piezoelectric element is studied both numerically and experimentally. It is difficult to achieve predominant single mode excitation as well as maximum power transmission simultaneously, especially for higher order modes. However, guidelines for designing the geometry of piezoelectric elements for optimal mode excitation are recommended.

  1. Low-emittance tuning of storage rings using normal mode beam position monitor calibration

    Directory of Open Access Journals (Sweden)

    A. Wolski

    2011-07-01

We describe a new technique for low-emittance tuning of electron and positron storage rings. This technique is based on calibration of the beam position monitors (BPMs) using excitation of the normal modes of the beam motion, and has benefits over conventional methods. It is relatively fast and straightforward to apply, it can be as easily applied to a large ring as to a small ring, and the tuning for low emittance becomes completely insensitive to BPM gain and alignment errors that can be difficult to determine accurately. We discuss the theory behind the technique, present some simulation results illustrating that it is highly effective and robust for low-emittance tuning, and describe the results of some initial experimental tests on the CesrTA storage ring.

  2. Low-emittance tuning of storage rings using normal mode beam position monitor calibration

    Science.gov (United States)

    Wolski, A.; Rubin, D.; Sagan, D.; Shanks, J.

    2011-07-01

    We describe a new technique for low-emittance tuning of electron and positron storage rings. This technique is based on calibration of the beam position monitors (BPMs) using excitation of the normal modes of the beam motion, and has benefits over conventional methods. It is relatively fast and straightforward to apply, it can be as easily applied to a large ring as to a small ring, and the tuning for low emittance becomes completely insensitive to BPM gain and alignment errors that can be difficult to determine accurately. We discuss the theory behind the technique, present some simulation results illustrating that it is highly effective and robust for low-emittance tuning, and describe the results of some initial experimental tests on the CesrTA storage ring.

  3. Extended Majorana zero modes in a topological superconducting-normal T-junction

    Science.gov (United States)

    Spånslätt, Christian; Ardonne, Eddy

    2017-03-01

We investigate the subgap properties of a three-terminal Josephson T-junction composed of topologically superconducting wires connected by a normal metal region. This system naturally hosts zero-energy Andreev bound states of self-conjugate Majorana nature, and we show that they are, in contrast to ordinary Majorana zero modes, spatially extended in the normal metal region. If the T-junction respects time-reversal symmetry, we show that a zero mode is distributed in only two of the three arms of the junction, and tuning the superconducting phases allows the mode to be transferred between the junction arms. We further provide tunneling conductance calculations showing that these features can be detected in experiments. Our findings suggest an experimental platform for studying the nature of spatially extended Majorana zero modes.

  4. Roundtrip matrix method for calculating the leaky resonant modes of open nanophotonic structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    We present a numerical method for calculating quasi-normal modes of open nanophotonic structures. The method is based on scattering matrices and a unity eigenvalue of the roundtrip matrix of an internal cavity, and we develop it in detail with electromagnetic fields expanded on Bloch modes...

  5. Normal Mode Analysis in Zeolites: Toward an Efficient Calculation of Adsorption Entropies.

    Science.gov (United States)

    De Moor, Bart A; Ghysels, An; Reyniers, Marie-Françoise; Van Speybroeck, Veronique; Waroquier, Michel; Marin, Guy B

    2011-04-12

An efficient procedure for normal-mode analysis of extended systems, such as zeolites, is developed and illustrated for the physisorption and chemisorption of n-octane and isobutene in H-ZSM-22 and H-FAU using periodic DFT calculations employing the Vienna Ab Initio Simulation Package. Physisorption and chemisorption entropies resulting from partial Hessian vibrational analysis (PHVA) differ at most 10 J mol⁻¹ K⁻¹ from those resulting from full Hessian vibrational analysis, even for PHVA schemes in which only a very limited number of atoms are considered free. To acquire a well-conditioned Hessian, much tighter optimization criteria than commonly used for electronic energy calculations in zeolites are required, i.e., at least an energy cutoff of 400 eV, a maximum force of 0.02 eV/Å, and a self-consistent field loop convergence criterion of 10⁻⁸ eV. For loosely bonded complexes the mobile adsorbate method is applied, in which frequency contributions originating from translational or rotational motions of the adsorbate are removed from the total partition function and replaced by free translational and/or rotational contributions. The frequencies corresponding to these translational and rotational modes can be selected unambiguously based on a mobile block Hessian-PHVA calculation, allowing the prediction of physisorption entropies within an accuracy of 10-15 J mol⁻¹ K⁻¹ as compared to experimental values. The approach presented in this study is useful for studies on other extended catalytic systems.
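The PHVA step itself is easy to sketch: diagonalize only the mass-weighted sub-block of the Cartesian Hessian belonging to the atoms treated as free, with the fixed atoms effectively infinitely heavy. A toy diatomic example (the spring constant and masses below are arbitrary, not zeolite data):

```python
import numpy as np

H_BAR = 1.054571817e-34   # J s
K_B = 1.380649e-23        # J / K

def vib_entropy(omegas, T=300.0):
    """Harmonic-oscillator vibrational entropy (J/mol/K) from nonzero
    angular frequencies in rad/s."""
    R = 8.314462618
    x = H_BAR * np.asarray(omegas) / (K_B * T)
    return R * np.sum(x / np.expm1(x) - np.log(-np.expm1(-x)))

def phva_frequencies(hessian, masses, free):
    """Partial Hessian vibrational analysis: keep only the rows/columns of
    the Cartesian Hessian belonging to the `free` atoms, mass-weight that
    sub-block and diagonalize it (fixed atoms act as infinitely heavy)."""
    idx = np.concatenate([[3 * a, 3 * a + 1, 3 * a + 2] for a in free])
    sub = hessian[np.ix_(idx, idx)]
    m = np.repeat(np.asarray(masses)[free], 3)
    mw = sub / np.sqrt(np.outer(m, m))
    vals = np.linalg.eigvalsh(mw)
    return np.sqrt(np.clip(vals, 0, None))   # angular frequencies, rad/s

# demo: diatomic with a spring along x, k = 500 N/m; fix atom 1, free atom 0
k, m0, m1 = 500.0, 2.0e-26, 3.0e-26
H = np.zeros((6, 6))
H[0, 0] = H[3, 3] = k
H[0, 3] = H[3, 0] = -k
w = phva_frequencies(H, [m0, m1], free=[0])
print(np.isclose(w.max(), np.sqrt(k / m0)))  # PHVA frequency = sqrt(k/m_free)
```

With both atoms free, the nonzero frequency would instead involve the reduced mass; the difference between the two answers is the kind of PHVA error the paper quantifies.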

  6. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed that automatically determines the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD and EWT were also conducted to evaluate its performance. The results indicate that the proposed method is strongly adaptive and robust to noise, and can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.
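The paper's criterion works on the extracted intrinsic mode functions; as a loose, simplified stand-in for automatic mode-number selection, the preset K can be initialized from a count of well-separated spectral peaks (this peak-counting heuristic is an illustration only, not the AVMD algorithm):

```python
import numpy as np

def estimate_mode_count(signal, prominence=0.1):
    """Crude stand-in for automatic mode-number selection: count
    well-separated peaks in the amplitude spectrum and use that count as
    the preset K for variational mode decomposition."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    spec[0] = 0.0                         # ignore the DC component
    above = spec > prominence * spec.max()
    # each contiguous run of super-threshold bins counts as one peak
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

fs = 1000.0                               # 1 s of data sampled at 1 kHz
t = np.arange(0, 1, 1 / fs)
x = (np.sin(2 * np.pi * 50 * t)
     + 0.5 * np.sin(2 * np.pi * 120 * t)
     + 0.2 * np.sin(2 * np.pi * 300 * t))
print(estimate_mode_count(x))  # three spectral components
```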

  7. A net normal dispersion all-fiber laser using a hybrid mode-locking mechanism

    International Nuclear Information System (INIS)

    Xu, Bo; Martinez, Amos; Yamashita, Shinji; Set, Sze Yun; Goh, Chee Seong

    2014-01-01

    We propose and demonstrate an all-fiber, dispersion-mapped, erbium-doped fiber laser with net normal dispersion generating dissipative solitons. The laser is mode-locked by a hybrid mode-locking mechanism consisting of a nonlinear amplifying loop mirror and a carbon nanotube saturable absorber. We achieve self-starting, mode-locked operation generating 2.75 nJ pulses at a fundamental repetition rate of 10.22 MHz with remarkable long term stability. (letter)

  8. Cyclotron operating mode determination based on intelligent methods

    International Nuclear Information System (INIS)

    Ouda, M.M.E.M.

    2011-01-01

adjust the parameters of the operating mode, from acceleration, extraction, focusing and steering, until the end of the experiment. This process is tedious and time consuming, and these were the main reasons to search for a better, faster and more efficient method to determine the parameters of a new operating mode. As a result, artificial neural networks, as the basis for an intelligent system, have been used to determine new operating modes for the MGC-20 cyclotron. In this thesis, an intelligent system has been designed and developed to determine new operating modes for the MGC-20 cyclotron, Nuclear Research Center, Atomic Energy Authority. This system is based on Feed Forward Back Propagation Neural Networks (FFBPNN). The system consists of five neural networks working in parallel. Every neural network consists of three layers: input, hidden, and output. The outputs of the five neural networks represent the normalized values (from 0 to 1 and from -1 to 0) of the 19 parameters of the new operating mode. The inputs for every neural network are the normalized values (from 0 to 1) of the particle name, the particle energy, the beam current intensity, and the duty factor. The outputs of the five neural networks must be calibrated to obtain the real values of the parameters of the new operating mode; these outputs correspond to the magnetic lenses, the magnetic correctors, the concentric coils, and the harmonic coils. The FFBPNNs are trained using the feed-forward back-propagation algorithm. The training has been done with different values of the learning factor, the momentum factor, and the number of hidden layers. The structure that needed the shortest time to learn while staying within the maximum allowed error has been used.
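A minimal feed-forward back-propagation network with the thesis's input/output dimensions (4 normalized inputs, 19 normalized outputs) can be sketched in a few lines; the hidden-layer size, learning rate and synthetic training data below are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# 4 normalized inputs (particle, energy, beam current, duty factor)
# -> 19 normalized operating-mode parameters, one hidden layer.
W1 = rng.normal(scale=0.5, size=(4, 16))
W2 = rng.normal(scale=0.5, size=(16, 19))

def forward(X):
    h = np.tanh(X @ W1)        # hidden layer
    return np.tanh(h @ W2), h  # outputs lie in (-1, 1), as in the thesis

# synthetic training set standing in for archived operating modes
X = rng.uniform(0, 1, size=(200, 4))
T = np.tanh(0.3 * X @ rng.normal(size=(4, 19)))

mse0 = np.mean((forward(X)[0] - T) ** 2)
lr = 0.1
for _ in range(2000):                   # plain batch gradient descent
    Y, h = forward(X)
    dY = (Y - T) * (1 - Y ** 2)         # tanh derivative at the output
    dH = (dY @ W2.T) * (1 - h ** 2)     # back-propagated to the hidden layer
    W2 -= lr * (h.T @ dY) / len(X)
    W1 -= lr * (X.T @ dH) / len(X)
mse1 = np.mean((forward(X)[0] - T) ** 2)
print(mse1 < mse0)  # back-propagation reduces the training error
```

A momentum term (as used in the thesis) would add a fraction of the previous weight update to each step; it is omitted here for brevity.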

  9. Normal force of magnetorheological fluids with foam metal under oscillatory shear modes

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Xingyan, E-mail: yaoxingyan-jsj@163.com [Research Center of System Health Maintenance, Chongqing Technology and Business University, Chongqing 400067 (China); Chongqing Engineering Laboratory for Detection Control and Integrated System, Chongqing 400067 (China); Liu, Chuanwen; Liang, Huang; Qin, Huafeng [Chongqing Engineering Laboratory for Detection Control and Integrated System, Chongqing 400067 (China); Yu, Qibing; Li, Chuan [Research Center of System Health Maintenance, Chongqing Technology and Business University, Chongqing 400067 (China); Chongqing Engineering Laboratory for Detection Control and Integrated System, Chongqing 400067 (China)

    2016-04-01

    The normal force of magnetorheological (MR) fluids in porous foam metal was investigated in this paper. The dynamic repulsive normal force was studied using an advanced commercial rheometer under oscillatory shear modes. In the presence of magnetic fields, the influences of time, strain amplitude, frequency and shear rate on the normal force of MR fluids drawn from the porous foam metal were systematically analysed. The experimental results indicated that the magnetic field had the greatest effect on the normal force, and the effect increased incrementally with the magnetic field. Increasing the magnetic field produced a step-wise increase in the shear gap. However, other factors in the presence of a constant magnetic field only had weak effects on the normal force. This behaviour can be regarded as a magnetic field-enhanced normal force, as increases in the magnetic field resulted in more MR fluids being released from the porous foam metal, and the chain-like magnetic particles in the MR fluids becoming more elongated with aggregates spanning the gap between the shear plates. - Highlights: • Normal force of MR fluids with metal foam under oscillatory shear modes was studied. • The shear gap was step-wise increased with magnetic fields. • The magnetic field has a greater impact on the normal force.

  10. Normal force of magnetorheological fluids with foam metal under oscillatory shear modes

    International Nuclear Information System (INIS)

    Yao, Xingyan; Liu, Chuanwen; Liang, Huang; Qin, Huafeng; Yu, Qibing; Li, Chuan

    2016-01-01

    The normal force of magnetorheological (MR) fluids in porous foam metal was investigated in this paper. The dynamic repulsive normal force was studied using an advanced commercial rheometer under oscillatory shear modes. In the presence of magnetic fields, the influences of time, strain amplitude, frequency and shear rate on the normal force of MR fluids drawn from the porous foam metal were systematically analysed. The experimental results indicated that the magnetic field had the greatest effect on the normal force, and the effect increased incrementally with the magnetic field. Increasing the magnetic field produced a step-wise increase in the shear gap. However, other factors in the presence of a constant magnetic field only had weak effects on the normal force. This behaviour can be regarded as a magnetic field-enhanced normal force, as increases in the magnetic field resulted in more MR fluids being released from the porous foam metal, and the chain-like magnetic particles in the MR fluids becoming more elongated with aggregates spanning the gap between the shear plates. - Highlights: • Normal force of MR fluids with metal foam under oscillatory shear modes was studied. • The shear gap was step-wise increased with magnetic fields. • The magnetic field has a greater impact on the normal force.

  11. Polaritonic normal-mode splitting and light localization in a one-dimensional nanoguide

    NARCIS (Netherlands)

    Haakh, Harald R.; Faez, Sanli; Sandoghdar, Vahid

    2016-01-01

We theoretically investigate the interaction of light and a collection of emitters in a subwavelength one-dimensional medium (nanoguide), where enhanced emitter-photon coupling leads to efficient multiple scattering of photons. We show that the spectrum of the transmitted light undergoes normal-mode …

  12. Relation between Protein Intrinsic Normal Mode Weights and Pre-Existing Conformer Populations.

    Science.gov (United States)

    Ozgur, Beytullah; Ozdemir, E Sila; Gursoy, Attila; Keskin, Ozlem

    2017-04-20

Intrinsic fluctuations of a protein enable it to sample a large repertoire of conformers including the open and closed forms. These distinct forms of the protein, called conformational substates, pre-exist together in equilibrium as an ensemble independent of its ligands. The role of the ligand might simply be to shift the equilibrium toward the form most appropriate for binding. Normal mode analysis has proved useful in identifying the directions of conformational changes between substates. In this study, we demonstrate that the ratios of the normalized weights of a few normal modes driving the protein between its substates can give insight into the ratios of kinetic conversion rates of the substates, although a direct relation between the eigenvalues and the kinetic conversion rates or populations of each substate could not be observed. The correlation between the normalized mode weight ratios and the kinetic rate ratios is around 83% on a set of 11 non-enzyme proteins and around 59% on a set of 17 enzymes. The results suggest that mode motions carry an intrinsic relation to the thermodynamics and kinetics of the proteins.
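The normalized weight of a mode in a substate transition is commonly taken as the squared overlap of the inter-substate displacement with the mode eigenvector; a small sketch (the toy "modes" below are just a random orthogonal basis):

```python
import numpy as np

def mode_weights(delta_x, eigvecs):
    """Weight of each normal mode in a conformational change: the squared
    overlap of the normalized displacement between two substates with
    each eigenvector. Because the modes form an orthonormal basis, the
    weights over all modes sum to one."""
    d = delta_x / np.linalg.norm(delta_x)
    return (eigvecs.T @ d) ** 2

# toy system: 9 coordinates, modes = columns of a random orthogonal matrix
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(9, 9)))
dx = 0.8 * Q[:, 0] + 0.6 * Q[:, 1]   # displacement lies in modes 0 and 1
w = mode_weights(dx, Q)
print(np.round(w[:2], 2))  # [0.64 0.36] -- the two driving modes
```

Ratios such as w[0]/w[1] are the normalized mode weight ratios that the paper compares against kinetic conversion rate ratios.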

  13. Quasi-normal modes of extremal BTZ black holes in TMG

    Science.gov (United States)

    Afshar, Hamid R.; Alishahiha, Mohsen; Mosaffa, Amir E.

    2010-08-01

    We study the spectrum of tensor perturbations on extremal BTZ black holes in topologically massive gravity for arbitrary values of the coefficient of the Chern-Simons term, μ. Imposing proper boundary conditions at the boundary of the space and at the horizon, we find that the spectrum contains quasi-normal modes.

  14. Basic mode of nonlinear spin-wave resonance in normally magnetized ferrite films

    International Nuclear Information System (INIS)

    Gulyaev, Yu.V.; Zil'berman, P.E.; Timiryazev, A.G.; Tikhomirova, M.P.

    2000-01-01

Modes of nonlinear spin-wave resonance (SWR) in normally magnetized ferrite films were studied both theoretically and experimentally. Particular emphasis was placed on the basic mode of SWR. It was shown theoretically that with the growth of the precession amplitude the profile of the basic mode changes. The nonlinear shift of the resonance field depends on the parameters of the pinning of the surface spins. Films of ferroyttrium garnet (FYG) with a strong gradient of the uniaxial anisotropy field along the film thickness, as well as FYG films of submicron thickness, were investigated experimentally. With increasing UHF power, one observed a sublinear shift of the basic-mode resonance field followed by superlinear growth of the absorbed power. This kind of behaviour is explained by variation of the profile of the spatial distribution of the varying magnetization.

  15. An experimental randomized study of six different ventilatory modes in a piglet model with normal lungs

    DEFF Research Database (Denmark)

    Nielsen, J B; Sjöstrand, U H; Henneberg, S W

    1991-01-01

A randomized study of 6 ventilatory modes was made in 7 piglets with normal lungs. Using a Servo HFV 970 (prototype system) and a Servo ventilator 900 C, the ventilatory modes examined were as follows: SV-20V, i.e. volume-controlled intermittent positive-pressure ventilation (IPPV); SV-20VIosc, i.e. volume-controlled ventilation (IPPV) with superimposed inspiratory oscillations; SV-20VEf, i.e. volume-controlled ventilation (IPPV) with expiratory flush of fresh gas; HFV-60, denoting low-compressive high-frequency positive-pressure ventilation (HFPPV); and HFV-20, denoting low-compressive volume… ventilatory modes. The mean airway pressures were also lower with the HFV modes, 8-9 cm H2O, compared to 11-14 cm H2O for the other modes. The gas distribution was evaluated by N2 wash-out and a modified lung clearance index. All modes showed N2 wash-out according to a two-compartment model. The SV-20P mode had…

  16. Structural improvement of unliganded simian immunodeficiency virus gp120 core by normal-mode-based X-ray crystallographic refinement

    International Nuclear Information System (INIS)

    Chen, Xiaorui; Lu, Mingyang; Poon, Billy K.; Wang, Qinghua; Ma, Jianpeng

    2009-01-01

    The structural model of the unliganded and fully glycosylated simian immunodeficiency virus gp120 core determined to 4.0 Å resolution was substantially improved using a recently developed normal-mode-based anisotropic B-factor refinement method. The envelope protein gp120/gp41 of simian and human immunodeficiency viruses plays a critical role in viral entry into host cells. However, the extraordinarily high structural flexibility and heavy glycosylation of the protein have presented enormous difficulties in the pursuit of high-resolution structural investigation of some of its conformational states. An unliganded and fully glycosylated gp120 core structure was recently determined to 4.0 Å resolution. The rather low data-to-parameter ratio limited refinement efforts in the original structure determination. In this work, refinement of this gp120 core structure was carried out using a normal-mode-based refinement method that has been shown in previous studies to be effective in improving models of a supramolecular complex at 3.42 Å resolution and of a membrane protein at 3.2 Å resolution. By using only the first four nonzero lowest-frequency normal modes to construct the anisotropic thermal parameters, combined with manual adjustments and standard positional refinement using REFMAC5, the structural model of the gp120 core was significantly improved in many aspects, including substantial decreases in R factors, better fitting of several flexible regions in electron-density maps, the addition of five new sugar rings at four glycan chains and an excellent correlation of the B-factor distribution with known structural flexibility. These results further underscore the effectiveness of this normal-mode-based method in improving models of protein and nonprotein components in low-resolution X-ray structures

  17. Free and forced Rossby normal modes in a rectangular gulf of arbitrary orientation

    Science.gov (United States)

    Graef, Federico

    2016-09-01

A free Rossby normal mode in a rectangular gulf of arbitrary orientation is constructed by considering the reflection of a Rossby mode in a channel at the head of the gulf. Therefore, it is the superposition of four Rossby waves in an otherwise unbounded ocean with the same frequency and wavenumbers perpendicular to the gulf axis whose difference is equal to 2mπ/W, where m is a positive integer and W the gulf's width. The lower (or higher) modes with small m (or large m) are oscillatory (evanescent) in the coordinate along the gulf; these are elucidated geometrically. However, for oceanographically realistic parameter values, most of the modes are evanescent. When the gulf is forced at the mouth with a single Fourier component, the response is in general an infinite sum of modes that are needed to match the value of the streamfunction at the gulf's entrance. The dominant mode of the response is the resonant one, which corresponds to forcing with a frequency ω and wavenumber normal to the gulf axis η appropriate to a gulf mode: η = -β sin α/(2ω) ± Mπ/W, where α is the angle between the gulf's axis and the eastern direction (+ve clockwise) and M the resonant mode number. For zonal gulfs ω drops out of the resonance condition. For the special cases η = 0, in which the free surface goes up and down at the mouth with no flow through it, or a flow with a sinusoidal profile, resonant modes can get excited for very specific frequencies (only for non-zonal gulfs in the η = 0 case). The resonant mode is around the annual frequency for a wide range of gulf orientations α ∈ [40°, 130°] or α ∈ [220°, 310°] and gulf widths between 150 and 200 km; these include the Gulf of California and the Adriatic Sea. If η is imaginary, i.e. a flow with an exponential profile, there is no resonance. In general fewer modes get excited if the gulf is zonally oriented.
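The resonance condition can be checked numerically: solving η = -β sin α/(2ω) ± Mπ/W for η = 0 gives ω = βW sin α/(2Mπ). The parameter values below are illustrative mid-latitude numbers, not taken from the paper:

```python
import numpy as np

def resonant_omega(beta, alpha, W, M):
    """Frequency at which an eta = 0 forcing resonates with gulf mode M,
    obtained by solving eta = -beta*sin(alpha)/(2*omega) + M*pi/W = 0."""
    return beta * W * np.sin(alpha) / (2 * M * np.pi)

# hypothetical meridional gulf: beta ~ 2e-11 1/(m s), width 150 km,
# axis at alpha = 90 deg to east, gravest mode M = 1
beta, alpha, W, M = 2e-11, np.pi / 2, 150e3, 1
omega = resonant_omega(beta, alpha, W, M)
eta = -beta * np.sin(alpha) / (2 * omega) + M * np.pi / W
print(abs(eta) < 1e-12)  # the resonance condition is satisfied
```

Note that ω scales linearly with both W and sin α, which is why only a restricted band of orientations and widths lands near any given forcing frequency.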

  18. Vertical discretizations for compressible Euler equation atmospheric models giving optimal representation of normal modes

    International Nuclear Information System (INIS)

    Thuburn, J.; Woollings, T.J.

    2005-01-01

    Accurate representation of different kinds of wave motion is essential for numerical models of the atmosphere, but is sensitive to details of the discretization. In this paper, numerical dispersion relations are computed for different vertical discretizations of the compressible Euler equations and compared with the analytical dispersion relation. A height coordinate, an isentropic coordinate, and a terrain-following mass-based coordinate are considered, and, for each of these, different choices of prognostic variables and grid staggerings are considered. The discretizations are categorized according to whether their dispersion relations are optimal, are near optimal, have a single zero-frequency computational mode, or are problematic in other ways. Some general understanding of the factors that affect the numerical dispersion properties is obtained: heuristic arguments concerning the normal mode structures, and the amount of averaging and coarse differencing in the finite difference scheme, are shown to be useful guides to which configurations will be optimal; the number of degrees of freedom in the discretization is shown to be an accurate guide to the existence of computational modes; there is only minor sensitivity to whether the equations for thermodynamic variables are discretized in advective form or flux form; and an accurate representation of acoustic modes is found to be a prerequisite for accurate representation of inertia-gravity modes, which, in turn, is found to be a prerequisite for accurate representation of Rossby modes
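The contrast between optimal and problematic discretizations is easiest to see in one dimension: for simple acoustics, a centered difference on an unstaggered grid folds the numerical dispersion curve back to zero frequency at the 2Δz wave (a zero-frequency computational mode of the kind the paper counts), while a staggered grid remains monotone. A sketch with arbitrary sound speed and grid spacing:

```python
import numpy as np

c, dz = 300.0, 500.0                     # sound speed (m/s), grid spacing (m)
k = np.linspace(1e-6, np.pi / dz, 200)   # resolved wavenumbers down to 2*dz

omega_exact = c * k
omega_unstaggered = c * np.sin(k * dz) / dz        # centered diff, A-grid
omega_staggered = 2 * c * np.sin(k * dz / 2) / dz  # staggered C-grid

# The unstaggered frequency vanishes at the 2*dz wave, so that wave does
# not propagate; the staggered scheme keeps 2/pi of the exact frequency.
print(round(omega_unstaggered[-1] / omega_exact[-1], 6))
print(omega_staggered[-1] / omega_exact[-1] > 0.6)
```

The same reasoning about averaging and coarse differencing, applied mode by mode, underlies the paper's categorization of the vertical discretizations.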

  19. Bifurcations of the normal modes of the Ne...Br{sub 2} complex

    Energy Technology Data Exchange (ETDEWEB)

    Blesa, Fernando [Departamento de Fisica Aplicada, Universidad de Zaragoza, Zaragoza (Spain); Mahecha, Jorge [Instituto de Fisica, Universidad de Antioquia, Medellin (Colombia); Salas, J. Pablo [Area de Fisica Aplicada, Universidad de La Rioja, Logrono (Spain); Inarrea, Manuel, E-mail: manuel.inarrea@unirioja.e [Area de Fisica Aplicada, Universidad de La Rioja, Logrono (Spain)

    2009-12-28

    We study the classical dynamics of the rare gas-dihalogen Ne...Br{sub 2} complex in its ground electronic state. By considering the dihalogen bond frozen at its equilibrium distance, the system has two degrees of freedom and its potential energy surface presents linear and T-shape isomers. We find the nonlinear normal modes of both isomers that determine the phase space structure of the system. By means of surfaces of section and applying the numerical continuation of families of periodic orbits, we detect and identify the different bifurcations suffered by the normal modes as a function of the system energy. Finally, using the Orthogonal Fast Lyapunov Indicator (OFLI), we study the evolution of the fraction of the phase space volume occupied by regular motions.

  20. Investigating Equations Used to Design a Very Small Normal-Mode Helical Antenna in Free Space

    Directory of Open Access Journals (Sweden)

    Dang Tien Dung

    2018-01-01

    A normal-mode helical antenna (NMHA) has been applied in some small devices such as tire pressure monitoring systems (TPMS) and radio frequency identification (RFID) tags. Previously, the electrical characteristics of NMHAs were obtained through electromagnetic simulations. In practical NMHA design, equation-based expressions for the main electrical characteristics are more convenient. The electrical performance of an NMHA can be expressed as a combination of a short dipole and small loops. The applicability of the equations for a short dipole and a small loop to very small normal-mode helical antennas, such as antennas around 1/100 wavelength, had not been clear. In this paper, the accuracy of the equations for input resistance, antenna efficiency, and axial ratio is verified by comparison with electromagnetic simulation results from FEKO software at 402 MHz. In addition, an antenna structure equal to 0.021 λ is fabricated and measured to confirm the design accuracy.

  1. Calculation of normal modes of the closed waveguides in general vector case

    Science.gov (United States)

    Malykh, M. D.; Sevastianov, L. A.; Tiutiunnik, A. A.

    2018-04-01

    The article is devoted to the calculation of normal modes of closed waveguides with an arbitrary filling ɛ, μ in the computer algebra system Sage. The Maxwell equations in the cylinder are reduced to a system of two coupled Helmholtz equations, the notion of a weak solution of this system is given, and the system is then investigated as a system of ordinary differential equations. The normal modes of this system are eigenvectors of a matrix pencil. We suggest calculating the matrix elements approximately and truncating the matrix in the usual way, but then solving the truncated eigenvalue problem exactly in the field of algebraic numbers. This approach preserves the symmetry of the initial problem and, in particular, the multiplicity of the eigenvalues. Some results of these calculations are presented.
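
    The computational core of the approach is a matrix-pencil eigenproblem for the truncated Galerkin system. The sketch below shows only a floating-point version of that step, with synthetic stand-in matrices; the paper's actual point is solving the truncated problem exactly over algebraic numbers in Sage.

```python
import numpy as np
from scipy.linalg import eig

# Synthetic stand-ins for the Galerkin matrices of the Helmholtz system;
# sizes and entries are illustrative assumptions.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = A + A.T                                   # symmetric "stiffness" matrix
B = np.eye(n) + 0.1 * np.diag(np.arange(n))   # positive-definite "mass" matrix

# Normal modes are eigenpairs of the pencil (A, B): A v = w B v
w, v = eig(A, B)
```

    Exact arithmetic (as in Sage) would additionally preserve eigenvalue multiplicities that floating point can split.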

  2. A phylogenetic analysis of normal modes evolution in enzymes and its relationship to enzyme function.

    Science.gov (United States)

    Lai, Jason; Jin, Jing; Kubelka, Jan; Liberles, David A

    2012-09-21

    Since the dynamic nature of protein structures is essential for enzymatic function, it is expected that functional evolution can be inferred from the changes in protein dynamics. However, dynamics can also diverge neutrally with sequence substitution between enzymes without changes of function. In this study, a phylogenetic approach is implemented to explore the relationship between enzyme dynamics and function through evolutionary history. Protein dynamics are described by normal mode analysis based on a simplified harmonic potential force field applied to the reduced C(α) representation of the protein structure while enzymatic function is described by Enzyme Commission numbers. Similarity of the binding pocket dynamics at each branch of the protein family's phylogeny was analyzed in two ways: (1) explicitly by quantifying the normal mode overlap calculated for the reconstructed ancestral proteins at each end and (2) implicitly using a diffusion model to obtain the reconstructed lineage-specific changes in the normal modes. Both explicit and implicit ancestral reconstruction identified generally faster rates of change in dynamics compared with the expected change from neutral evolution at the branches of potential functional divergences for the α-amylase, D-isomer-specific 2-hydroxyacid dehydrogenase, and copper-containing amine oxidase protein families. Normal mode analysis added additional information over just comparing the RMSD of static structures. However, the branch-specific changes were not statistically significant compared to background function-independent neutral rates of change of dynamic properties and blind application of the analysis would not enable prediction of changes in enzyme specificity. Copyright © 2012 Elsevier Ltd. All rights reserved.
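
    The normal modes referred to above come from diagonalizing the Hessian of a simplified harmonic (elastic-network) potential on the reduced Cα representation. A generic, self-contained sketch of that step, with an arbitrary toy geometry rather than a real protein:

```python
import numpy as np

def enm_hessian(coords, cutoff, k=1.0):
    """Anisotropic elastic-network Hessian for site coordinates of shape (n, 3):
    harmonic springs of stiffness k between all pairs within the cutoff."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = float(d @ d)
            if r2 > cutoff ** 2:
                continue
            blk = -k * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = blk
            H[3*j:3*j+3, 3*i:3*i+3] = blk
            H[3*i:3*i+3, 3*i:3*i+3] -= blk
            H[3*j:3*j+3, 3*j:3*j+3] -= blk
    return H

rng = np.random.default_rng(1)
coords = rng.standard_normal((6, 3)) * 2.0        # toy 6-site "protein"
w, modes = np.linalg.eigh(enm_hessian(coords, cutoff=100.0))
# the six near-zero eigenvalues are the rigid translations and rotations;
# the remaining columns of `modes` are the internal normal modes
```

    Mode similarity between two such structures can then be quantified by overlaps of the corresponding eigenvectors, which is the quantity compared along the phylogeny.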

  3. Normal modes and time evolution of a holographic superconductor after a quantum quench

    International Nuclear Information System (INIS)

    Gao, Xin; García-García, Antonio M.; Zeng, Hua Bi; Zhang, Hai-Qing

    2014-01-01

    We employ holographic techniques to investigate the dynamics of the order parameter of a strongly coupled superconductor after a perturbation that drives the system out of equilibrium. The gravity dual that we employ is the AdS_5 soliton background at zero temperature. We first analyze the normal modes associated with the superconducting order parameter, which are purely real since the background has no horizon. We then study the full time evolution of the order parameter after a quench. For a sufficiently weak and slow perturbation we show that the order parameter undergoes simple undamped oscillations in time with a frequency that agrees with the lowest normal mode computed previously. This is expected, as the soliton background has no horizon and therefore, at least in the probe and large N limits considered, the system will never return to equilibrium. For stronger and more abrupt perturbations higher normal modes are excited and the pattern of oscillations becomes increasingly intricate. We identify a range of parameters for which the time evolution of the order parameter becomes quasi-chaotic. The details of the chaotic evolution depend on the type of perturbation used. It is therefore plausible that one can engineer a perturbation that leads to the almost complete destruction of the oscillating pattern and consequently to quasi-equilibration induced by the superposition of modes with different frequencies.

  4. Identification of surface species by vibrational normal mode analysis. A DFT study

    Science.gov (United States)

    Zhao, Zhi-Jian; Genest, Alexander; Rösch, Notker

    2017-10-01

    Infrared spectroscopy is an important experimental tool for identifying molecular species adsorbed on a metal surface that can be used in situ. Often vibrational modes in such IR spectra of surface species are assigned and identified by comparison with vibrational spectra of related (molecular) compounds of known structure, e. g., an organometallic cluster analogue. To check the validity of this strategy, we carried out a computational study where we compared the normal modes of three C2Hx species (x = 3, 4) in two types of systems, as adsorbates on the Pt(111) surface and as ligands in an organometallic cluster compound. The results of our DFT calculations reproduce the experimental observed frequencies with deviations of at most 50 cm-1. However, the frequencies of the C2Hx species in both types of systems have to be interpreted with due caution if the coordination mode is unknown. The comparative identification strategy works satisfactorily when the coordination mode of the molecular species (ethylidyne) is similar on the surface and in the metal cluster. However, large shifts are encountered when the molecular species (vinyl) exhibits different coordination modes on both types of substrates.

  5. Quasi-normal frequencies: Semi-analytic results for highly damped modes

    International Nuclear Information System (INIS)

    Skakala, Jozef; Visser, Matt

    2011-01-01

    Black hole highly-damped quasi-normal frequencies (QNFs) are very often of the form ω_n = (offset) + i n (gap). We have investigated the genericity of this phenomenon for the Schwarzschild-de Sitter (SdS) black hole by considering a model potential that is piecewise Eckart (piecewise Pöschl-Teller), and developing an analytic 'quantization condition' for the highly-damped quasi-normal frequencies. We find that the ω_n = (offset) + i n (gap) behaviour is common but not universal, with the controlling feature being whether or not the ratio of the surface gravities is a rational number. We furthermore observed that the relation between rational ratios of surface gravities and periodicity of QNFs is very generic, and also occurs within different analytic approaches applied to various types of black hole spacetimes. These observations are of direct relevance to any physical situation where highly-damped quasi-normal modes are important.

  6. Preventive Methods for ATM Mode Control

    OpenAIRE

    Ivan Baronak; Robert Trska

    2004-01-01

    The broadband transfer mode ATM represents one alternative solution to the growing requirements on transfer capabilities. Its advantage is the provision of a guaranteed quality of transport services while preserving a high transfer rate. This property is ensured by several mechanisms whose role is to control not only the traffic of existing connections, but also the admission of new ones, and to prevent violations of the transport-quality requirements of existing and new connections.

  7. Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.

    2016-12-01

    The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary Reference Earth Model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate the radial structure of the outer core and its uncertainties, we choose to exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Markov chain Monte Carlo (MCMC) techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, (3) assuming an Equation-of-State and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well mixed outer core separately from these two layers. In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk
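
    The Bayesian machinery described above can be sketched with a minimal Metropolis sampler over polynomial profile coefficients; every number below (profile, noise level, chain settings) is an illustrative assumption, not a value from the study.

```python
import numpy as np

# Synthetic "centre frequency" data generated from a hypothetical smooth
# polynomial wavespeed profile over a normalized radius range.
rng = np.random.default_rng(2)
r = np.linspace(0.2, 0.55, 20)
true_c = np.array([8.0, 2.0, -1.5])            # hypothetical coefficients
data = np.polyval(true_c, r) + 0.01 * rng.standard_normal(r.size)

def log_post(c, sigma=0.01):
    """Gaussian likelihood with a flat prior (physical bounds omitted)."""
    resid = data - np.polyval(c, r)
    return -0.5 * np.sum((resid / sigma) ** 2)

c = np.polyfit(r, data, 2)                     # start the chain at the LSQ fit
lp = log_post(c)
chain = []
for _ in range(5000):
    prop = c + 0.02 * rng.standard_normal(3)   # random-walk proposal
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:        # Metropolis accept/reject
        c, lp = prop, lpp
    chain.append(c.copy())
post = np.array(chain[2500:])                  # discard burn-in
```

    The real inversion replaces the polynomial forward model with normal-mode eigenfrequency calculations and adds mass and moment-of-inertia constraints as priors.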

  8. Plantar fascia softening in plantar fasciitis with normal B-mode sonography.

    Science.gov (United States)

    Wu, Chueh-Hung; Chen, Wen-Shiang; Wang, Tyng-Guey

    2015-11-01

    To investigate plantar fascia elasticity in patients with typical clinical manifestations of plantar fasciitis but normal plantar fascia morphology on B-mode sonography. Twenty patients with plantar fasciitis (10 unilateral and 10 bilateral) and 30 healthy volunteers, all with normal plantar fascia morphology on B-mode sonography, were included in the study. Plantar fascia elasticity was evaluated by sonoelastographic examination. All sonoelastograms were quantitatively analyzed, with lower red pixel intensity representing softer tissue. Pixel intensity was compared among unilateral plantar fasciitis patients, bilateral plantar fasciitis patients, and healthy volunteers by one-way ANOVA. A post hoc Scheffé's test was used to identify where the differences occurred. Compared to healthy participants (red pixel intensity: 146.9 ± 9.1), red pixel intensity was significantly lower in the asymptomatic sides of unilateral plantar fasciitis (140.4 ± 7.3, p = 0.01), in the symptomatic sides of unilateral plantar fasciitis (127.1 ± 7.4), and in bilateral plantar fasciitis (129.4 ± 7.5). There was no significant difference in plantar fascia thickness or in green or blue pixel intensity among these groups. Sonoelastography revealed that the plantar fascia is softer in patients with typical clinical manifestations of plantar fasciitis, even if they exhibit no abnormalities on B-mode sonography.
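
    The statistical step (one-way ANOVA on red-pixel intensities) can be sketched with synthetic groups drawn to mimic the reported means and standard deviations; this is not the study's raw data.

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic groups mimicking the reported red-pixel means/SDs and group sizes
rng = np.random.default_rng(0)
healthy      = rng.normal(146.9, 9.1, 30)
asymptomatic = rng.normal(140.4, 7.3, 10)
symptomatic  = rng.normal(127.1, 7.4, 10)

# One-way ANOVA across the three groups
F, p = f_oneway(healthy, asymptomatic, symptomatic)
```

    A post hoc test (Scheffé in the study; not in SciPy's core API) would then localize which pairs of groups differ.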

  9. Accretion onto magnetized neutron stars: Normal mode analysis of the interchange instability at the magnetopause

    International Nuclear Information System (INIS)

    Arons, J.; Lea, S.M.

    1976-01-01

    We describe the results of a linearized hydromagnetic stability analysis of the magnetopause of an accreting neutron star. The magnetosphere is assumed to be slowly rotating, and the plasma just outside of the magnetopause is assumed to be weakly magnetized. The plasma layer is assumed to be bounded above by a shock wave, and to be thin compared with the radius of the magnetosphere. Under these circumstances, the growing modes are shown to be localized in the direction parallel to the zero-order magnetic field. The structure of the modes is still similar to the flute mode, however. The growth rate at each magnetic latitude λ is given by γ² = g_n k α_eff(λ) tanh[k z_s(λ)], where g_n is the magnitude of the gravitational acceleration normal to the surface, k ≈ |m|/R(λ) cos λ, |m| is the azimuthal mode number, R(λ) is the radius of the magnetosphere, z_s is the height of the shock above the magnetopause, and α_eff(λ) < 1 is the effective Atwood number which embodies the stabilizing effects of favorable curvature and magnetic tension. We calculate α_eff(λ), and also discuss the stabilizing effects of viscosity and of aligned flow parallel to the magnetopause.
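
    The growth-rate relation can be evaluated directly; the sketch below uses arbitrary illustrative magnitudes rather than neutron-star values, and exhibits the stabilizing role of a reduced effective Atwood number.

```python
import numpy as np

def growth_rate(g_n, k, alpha_eff, z_s):
    """gamma for the relation gamma^2 = g_n * k * alpha_eff * tanh(k * z_s)."""
    return np.sqrt(g_n * k * alpha_eff * np.tanh(k * z_s))

# Stabilization (smaller effective Atwood number) slows the growth;
# all inputs are dimensionless illustrative values.
slow = growth_rate(1.0, 1.0, 0.2, 1.0)
fast = growth_rate(1.0, 1.0, 0.8, 1.0)
```

    In the thin-layer limit k z_s → 0, tanh(k z_s) ≈ k z_s, so γ² ≈ g_n k² z_s α_eff.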

  10. Time-dependent local-to-normal mode transition in triatomic molecules

    Science.gov (United States)

    Cruz, Hans; Bermúdez-Montaña, Marisol; Lemus, Renato

    2018-01-01

    The time evolution of the vibrational states of two interacting harmonic oscillators in the local mode scheme is presented. A local-to-normal mode transition (LNT) is identified and studied from a temporal perspective through the time-dependent frequencies of the oscillators. The LNT is established as a polyad-breaking phenomenon from the local standpoint for the stretching degrees of freedom in a triatomic molecule. This study is carried out in the algebraic representation of bosonic operators. The dynamics of the states are determined via the solutions of the corresponding nonlinear Ermakov equation, and a local time-dependent polyad is obtained as a tool to identify the LNT. Applications of this formalism to the H2O, CO2, O3 and NO2 molecules in the adiabatic, sudden and linear regimes are considered.
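
    A minimal numerical sketch of the Ermakov step: for a constant frequency ω (an illustrative simplification, not one of the paper's regimes), ρ = ω^(-1/2) is a fixed point of the Ermakov equation ρ'' + ω²ρ = 1/ρ³, which a standard integrator should preserve.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0  # constant frequency, illustrative

def rhs(t, y):
    """First-order form of the Ermakov equation rho'' + omega^2 rho = 1/rho^3."""
    rho, drho = y
    return [drho, 1.0 / rho ** 3 - omega ** 2 * rho]

rho0 = omega ** -0.5                       # fixed point for constant omega
sol = solve_ivp(rhs, (0.0, 10.0), [rho0, 0.0], rtol=1e-9, atol=1e-12)
```

    With a time-dependent ω(t), the same integration yields the evolving width parameter from which the local time-dependent polyad is built.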

  11. Normal Mode Derived Models of the Physical Properties of Earth's Outer Core

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.; Wu, W.

    2017-12-01

    Earth's outer core, the largest reservoir of metal in our planet, is comprised of an iron alloy of an uncertain composition. Its dynamical behaviour is responsible for the generation of Earth's magnetic field, with convection driven both by thermal and chemical buoyancy fluxes. Existing models of the seismic velocity and density of the outer core exhibit some variation, and there are only a small number of models which aim to represent the outer core's density. It is therefore important that we develop a better understanding of the physical properties of the outer core. Though most of the outer core is likely to be well mixed, it is possible that the uppermost outer core is stably stratified: it may be enriched in light elements released during the growth of the solid, iron enriched, inner core; by elements dissolved from the mantle into the outer core; or by exsolution of compounds previously dissolved in the liquid metal which will eventually be swept into the mantle. The stratified layer may host MAC or Rossby waves and it could impede communication between the chemically differentiated mantle and outer core, including screening out some of the geodynamo's signal. We use normal mode centre frequencies to estimate the physical properties of the outer core in a Bayesian framework. We estimate the mineral physical parameters needed to best produce velocity and density models of the outer core which are consistent with the normal mode observations. We require that our models satisfy realistic physical constraints. We create models of the outer core with and without a distinct uppermost layer and assess the importance of this region. Our normal mode-derived models are compared with observations of body waves which travel through the outer core. In particular, we consider SmKS waves which are especially sensitive to the uppermost outer core and are therefore an important way to understand the robustness of our models.

  12. Normal-mode-based analysis of electron plasma waves with second-order Hermitian formalism

    Science.gov (United States)

    Ramos, J. J.; White, R. L.

    2018-03-01

    The classic problem of the dynamic evolution and Landau damping of linear Langmuir electron waves in a collisionless plasma with Maxwellian background is cast as a second-order, self-adjoint problem with a continuum spectrum of real and positive squared frequencies. The corresponding complete basis of singular normal modes is obtained, along with their orthogonality relation. This yields easily the general expression of the time-reversal-invariant solution for any initial-value problem. Examples are given for specific initial conditions that illustrate different behaviors of the Landau-damped macroscopic moments of the perturbations.

  13. Twist–radial normal mode analysis in double-stranded DNA chains

    International Nuclear Information System (INIS)

    Torrellas, Germán; Maciá, Enrique

    2012-01-01

    We study the normal modes of a duplex DNA chain at low temperatures. We consider the coupling between the hydrogen-bond radial oscillations and the twisting motion of each base pair within the Peyrard–Bishop–Dauxois model. The coupling is mediated by the stacking interaction between adjacent base pairs along the helix. We explicitly consider different mass values for different nucleotides, extending previous works. We disclose several resonance conditions of interest, determined by the fine-tuning of certain model parameters. The role of these dynamical effects on the DNA chain charge transport properties is discussed.
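
    The generic structure of the problem, two coupled harmonic degrees of freedom with different masses, can be sketched as a 2×2 generalized eigenproblem; all masses and spring constants below are illustrative assumptions, not Peyrard-Bishop-Dauxois parameters.

```python
import numpy as np

def normal_mode_freqs(m1, m2, k1, k2, kc):
    """Normal-mode frequencies of two oscillators (masses m1, m2, on-site
    springs k1, k2) joined by a coupling spring kc: solve K v = omega^2 M v."""
    K = np.array([[k1 + kc, -kc],
                  [-kc, k2 + kc]])
    Minv = np.diag([1.0 / m1, 1.0 / m2])
    w2 = np.linalg.eigvals(Minv @ K)
    return np.sort(np.sqrt(w2.real))

uncoupled = normal_mode_freqs(1.0, 2.0, 1.0, 1.5, 0.0)  # sqrt(k/m) pair
coupled = normal_mode_freqs(1.0, 2.0, 1.0, 1.5, 0.3)    # coupling shifts both
```

    Resonance conditions of the kind discussed above correspond to parameter choices that bring the two uncoupled frequencies close together, where the coupling hybridizes the modes most strongly.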

  14. Modified Block Newton method for the lambda modes problem

    Energy Technology Data Exchange (ETDEWEB)

    González-Pintor, S., E-mail: segonpin@isirym.upv.es [Departamento de Ingeniería Química y Nuclear, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Ginestar, D., E-mail: dginestar@mat.upv.es [Instituto de Matemática Multidisciplinar, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Verdú, G., E-mail: gverdu@iqn.upv.es [Departamento de Ingeniería Química y Nuclear, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)

    2013-06-15

    Highlights: ► The modal method is based on expanding the solution in a set of dominant modes. ► Updating the set of dominant modes improves its performance. ► A Modified Block Newton method, which uses previously calculated modes, is proposed. ► The method exhibits very good local convergence with few iterations. ► Good performance results are also obtained for heavy perturbations. -- Abstract: To study the behaviour of nuclear power reactors it is necessary to solve the time-dependent neutron diffusion equation using either a rectangular mesh for PWR and BWR reactors or a hexagonal mesh for VVER reactors. This problem can be solved by means of a modal method, which uses a set of dominant modes to expand the neutron flux. For transient calculations using the modal method with a moderate number of modes, these modes must be updated each time step to maintain the accuracy of the solution. The mode-updating process is also of interest for studying perturbed configurations of a reactor. A Modified Block Newton method is studied to update the modes. The performance of the Newton method has been tested in a steady-state perturbation analysis of two 2D hexagonal reactors, a perturbed configuration of the IAEA PWR 3D reactor, and two configurations associated with a boron dilution transient in a BWR reactor.
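
    The following is not the paper's Modified Block Newton method itself, but a simpler block (orthogonal) subspace iteration sketching the same idea: reuse the dominant modes of the unperturbed operator as the starting block after a perturbation, so that only a few iterations are needed.

```python
import numpy as np

# Synthetic symmetric operator with three well-separated dominant eigenvalues
rng = np.random.default_rng(3)
n, k = 40, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.concatenate([np.linspace(0.0, 1.0, n - k), [8.0, 9.0, 10.0]])
A = Q @ np.diag(d) @ Q.T                    # "unperturbed" operator
E = rng.standard_normal((n, n))
Ap = A + 0.005 * (E + E.T)                  # small symmetric perturbation

_, V0 = np.linalg.eigh(A)
block = V0[:, -k:]                          # previously computed dominant modes
for _ in range(50):                         # orthogonal (block power) iteration
    block, _ = np.linalg.qr(Ap @ block)
ritz = np.sort(np.linalg.eigvalsh(block.T @ Ap @ block))
```

    Starting from the old modes, the iteration converges rapidly because the perturbed dominant subspace is close to the unperturbed one; Newton-type block updates accelerate this further.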

  15. Normal Mode Analysis to a Poroelastic Half-Space Problem under Generalized Thermoelasticity

    Directory of Open Access Journals (Sweden)

    Chunbao Xiong

    The thermo-hydro-mechanical problems associated with a poroelastic half-space soil medium with variable properties under generalized thermoelasticity theory were investigated in this study. By remaining faithful to Biot's theory of dynamic poroelasticity, we idealized the foundation material as a uniform, fully saturated, poroelastic half-space medium. We first subjected this medium to time-harmonic loads consisting of normal or thermal loads, then investigated the differences between the coupled thermo-hydro-mechanical dynamic models and the thermo-elastic dynamic models. We used normal mode analysis to solve the resulting non-dimensional coupled equations, then investigated the effects that non-dimensional vertical displacement, excess pore water pressure, vertical stress, and temperature distribution exerted on the poroelastic half-space medium and represented them graphically.

  16. Robust Seismic Normal Modes Computation in Radial Earth Models and A Novel Classification Based on Intersection Points of Waveguides

    Science.gov (United States)

    Ye, J.; Shi, J.; De Hoop, M. V.

    2017-12-01

    We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, their eigenfunctions are localized in the corresponding waveguides. However, those that are close to intersection points will have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal
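
    The Rayleigh-Ritz/Galerkin reduction to a generalized eigenvalue problem can be sketched on a toy 1D Sturm-Liouville problem, -u'' = λu with u(0) = u(π) = 0 (exact eigenvalues n²), using linear finite elements; this illustrates only the discretization idea, not the paper's Earth-model solver.

```python
import numpy as np
from scipy.linalg import eigh

# P1 finite elements on a uniform mesh of [0, pi]; interior nodes only
n = 200
h = np.pi / n

# Assembled stiffness matrix K = (1/h) * tridiag(-1, 2, -1)
K = (np.diag(2.0 / h * np.ones(n - 1))
     + np.diag(-1.0 / h * np.ones(n - 2), 1)
     + np.diag(-1.0 / h * np.ones(n - 2), -1))

# Consistent mass matrix M = (h/6) * tridiag(1, 4, 1)
M = (np.diag(4.0 * h / 6.0 * np.ones(n - 1))
     + np.diag(h / 6.0 * np.ones(n - 2), 1)
     + np.diag(h / 6.0 * np.ones(n - 2), -1))

# Generalized eigenproblem K u = lambda M u (Rayleigh-Ritz Galerkin form)
lam = eigh(K, M, eigvals_only=True)
```

    The weak variational form underlying this assembly is what gives the method its accuracy across material discontinuities, the property exploited at the solid-fluid boundaries in the abstract.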

  17. The signal of mantle anisotropy in the coupling of normal modes

    Science.gov (United States)

    Beghein, Caroline; Resovsky, Joseph; van der Hilst, Robert D.

    2008-12-01

    We investigate whether the coupling of normal mode (NM) multiplets can help us constrain mantle anisotropy. We first derive explicit expressions of the generalized structure coefficients of coupled modes in terms of elastic coefficients, including the Love parameters describing radial anisotropy and the parameters describing azimuthal anisotropy (Jc, Js, Kc, Ks, Mc, Ms, Bc, Bs, Gc, Gs, Ec, Es, Hc, Hs, Dc and Ds). We detail the selection rules that describe which modes can couple together and which elastic parameters govern their coupling. We then focus on modes of type 0Sl - 0Tl+1 and determine whether they can be used to constrain mantle anisotropy. We show that they are sensitive to six elastic parameters describing azimuthal anisotropy, in addition to the two shear-wave elastic parameters L and N (i.e. VSV and VSH). We find that neither isotropic nor radially anisotropic mantle models can fully explain the observed degree two signal. We show that the NM signal that remains after correction for the effect of the crust and mantle radial anisotropy can be explained by the presence of azimuthal anisotropy in the upper mantle. Although the data favour locating azimuthal anisotropy below 400 km, its depth extent and distribution are still not well constrained by the data. Consideration of NM coupling can thus help constrain azimuthal anisotropy in the mantle, but joint analyses with surface-wave phase velocities are needed to reduce the parameter trade-offs and improve our constraints on the individual elastic parameters and the depth location of the azimuthal anisotropy.

  18. Search for Long Period Solar Normal Modes in Ambient Seismic Noise

    Science.gov (United States)

    Caton, R.; Pavlis, G. L.

    2016-12-01

    We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the quietest single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of time series used ranged from 50 days to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife error estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
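
    The processing chain (median stack across stations, then a multitaper spectrum) can be sketched on synthetic data: a weak common spectral line buried in independent station noise. All parameters are illustrative, and plain DPSS tapers stand in for the prewhitened mtspec estimate used in the study.

```python
import numpy as np
from scipy.signal.windows import dpss

# Synthetic "stations": a weak common sinusoidal line plus independent noise
rng = np.random.default_rng(4)
fs, n, nsta = 1.0, 4096, 20
t = np.arange(n) / fs
f_line = 0.1
line = 0.2 * np.sin(2 * np.pi * f_line * t)
stack = np.median([line + rng.standard_normal(n) for _ in range(nsta)], axis=0)

# Multitaper estimate: average periodograms over Slepian (DPSS) tapers
tapers = dpss(n, NW=4, Kmax=7)
spec = np.mean([np.abs(np.fft.rfft(tap * stack)) ** 2 for tap in tapers], axis=0)
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin
```

    A delete-one jackknife, as in the abstract, would repeat the stack with each station removed in turn to attach error bars to each spectral estimate.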

  19. Possibility of observation of polaron normal modes at the far-infrared spectrum of acetanilide and related organics

    Science.gov (United States)

    Kalosakas, G.; Aubry, S.; Tsironis, G. P.

    1998-10-01

    We use a stationary and normal mode analysis of the semiclassical Holstein model in order to connect the low-frequency linear polaron modes to low-lying far-infrared lines of the acetanilide spectrum and through parameter fitting we comment on the validity of the polaron results in this system.

  20. Resonant and kinematical enhancement of He scattering from LiF(001) surface and pseudosurface vibrational normal modes

    International Nuclear Information System (INIS)

    Nichols, W.L.; Weare, J.H.

    1986-01-01

    One-phonon cross sections calculated from sagittally polarized vibrational normal modes account for the most salient inelastic-scattering intensities seen in the He-LiF(001) measurements published by Brusdeylins, Doak, and Toennies. We have found that most inelastic intensities which cannot be attributed to potential resonances can be explained as kinematically enhanced scattering from both surface and pseudosurface bulk modes.

  1. Evidence for Radial Anisotropy in Earth's Upper Inner Core from Normal Modes

    Science.gov (United States)

    Lythgoe, K.; Deuss, A. F.

    2017-12-01

    The structure of the uppermost inner core is related to solidification of outer core material at the inner core boundary. Previous seismic studies using body waves indicate an isotropic upper inner core, although radial anisotropy has not been considered since it cannot be uniquely determined by body waves. Normal modes, however, do constrain radial anisotropy in the inner core. Centre frequency measurements indicate 2-5 % radial anisotropy in the upper 100 km of the inner core, with a fast direction radially outwards and a slow direction along the inner core boundary. This seismic structure provides constraints on solidification processes at the inner core boundary and appears consistent with texture predicted due to anisotropic inner core growth.

  2. On the sensitivity of protein data bank normal mode analysis: an application to GH10 xylanases

    Science.gov (United States)

    Tirion, Monique M.

    2015-12-01

    Protein data bank entries obtain distinct, reproducible flexibility characteristics determined by normal mode analyses of their three dimensional coordinate files. We study the effectiveness and sensitivity of this technique by analyzing the results on one class of glycosidases: family 10 xylanases. A conserved tryptophan that appears to affect access to the active site can be in one of two conformations according to x-ray crystallographic electron density data. The two alternate orientations of this active site tryptophan lead to distinct flexibility spectra, with one orientation thwarting the oscillations seen in the other. The particular orientation of this sidechain furthermore affects the appearance of the motility of a distant, C terminal region we term the mallet. The mallet region is known to separate members of this family of enzymes into two classes.

  3. On the sensitivity of protein data bank normal mode analysis: an application to GH10 xylanases

    International Nuclear Information System (INIS)

    Tirion, Monique M

    2015-01-01

    Protein data bank entries exhibit distinct, reproducible flexibility characteristics determined by normal mode analyses of their three-dimensional coordinate files. We study the effectiveness and sensitivity of this technique by analyzing the results for one class of glycosidases: family 10 xylanases. A conserved tryptophan that appears to affect access to the active site can be in one of two conformations according to X-ray crystallographic electron density data. The two alternate orientations of this active-site tryptophan lead to distinct flexibility spectra, with one orientation thwarting the oscillations seen in the other. The particular orientation of this sidechain furthermore affects the apparent motility of a distant, C-terminal region we term the mallet. The mallet region is known to separate members of this family of enzymes into two classes. (paper)

  4. Causality analysis of leading singular value decomposition modes identifies rotor as the dominant driving normal mode in fibrillation

    Science.gov (United States)

    Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer

    2018-01-01

    Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. Wiener-Granger causality on modes of the rotor domains confirms their role as the dominant drivers of fibrillation.
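    The directional test applied to the SVD mode time series above can be sketched as a minimal bivariate Granger-style comparison. The signals, coupling strength, and lag order below are illustrative and not from the paper; a residual-variance ratio stands in for the full WGCA machinery.

    ```python
    import numpy as np

    # Synthetic mode amplitudes: x (a "rotor" mode) drives y (a "fibrillatory" mode).
    rng = np.random.default_rng(0)
    T = 4000
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.5 * x[t - 1] + rng.normal()
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

    def granger_strength(src, dst, p=2):
        """log(restricted/full residual variance); > 0 means src helps predict dst."""
        n = len(dst)
        own = np.column_stack([dst[p - k - 1:n - k - 1] for k in range(p)])
        other = np.column_stack([src[p - k - 1:n - k - 1] for k in range(p)])
        target = dst[p:]

        def resid_var(design):
            design = np.column_stack([np.ones(len(design)), design])
            beta, *_ = np.linalg.lstsq(design, target, rcond=None)
            res = target - design @ beta
            return np.mean(res ** 2)

        return np.log(resid_var(own) / resid_var(np.column_stack([own, other])))

    f_xy = granger_strength(x, y)  # rotor -> fibrillatory direction
    f_yx = granger_strength(y, x)  # fibrillatory -> rotor direction
    ```

    With the driving term present only in the x-to-y direction, f_xy comes out clearly larger than f_yx, which is the asymmetry the abstract reports between rotor and fibrillatory domains.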

  5. Adapting the mode profile of planar waveguides to single-mode fibers : a novel method

    NARCIS (Netherlands)

    Smit, M.K.; Vreede, De A.H.

    1991-01-01

    A novel method for coupling single-mode fibers to planar optical circuits with small waveguide dimensions is proposed. The method eliminates the need to apply microoptics or to adapt the waveguide dimensions within the planar circuit to the fiber dimensions. Alignment tolerances are comparable to

  6. Normal mode splitting and ground state cooling in a Fabry-Perot optical cavity and transmission line resonator

    International Nuclear Information System (INIS)

    Chen Hua-Jun; Mi Xian-Wu

    2011-01-01

    Optomechanical dynamics in two systems, a transmission line resonator and a Fabry-Perot optical cavity coupled via radiation pressure, are investigated using the linearized quantum Langevin equations. We work in the resolved sideband regime, where the oscillator resonance frequency exceeds the cavity linewidth. Normal mode splitting of the mechanical resonator, arising purely from the coupling interaction in the two optomechanical systems, is studied, and we compare the normal mode splitting of the mechanical resonator between the two systems. In the optical cavity, the normal mode splitting of the movable mirror agrees very well with the latest experiment. In addition, an approximation scheme is introduced to demonstrate ground state cooling, and we compare cooling between the two systems as governed by two key factors: the initial bath temperature and the mechanical quality factor. Since both normal mode splitting and cooling require working in the resolved sideband regime, we also consider whether the normal mode splitting influences the cooling of the mirror. Taking into account the size of the mechanical resonator and precooling of the system, ground state cooling of the mechanical resonator is easier to achieve in the transmission line resonator system than in the optical cavity.

  7. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correcting such artifacts is post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of the handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  8. Macromolecule biosynthesis assay and fluorescence spectroscopy methods to explore antimicrobial peptide mode(s) of action

    DEFF Research Database (Denmark)

    Jana, Bimal; Baker, Kristin Renee; Guardabassi, Luca

    2017-01-01

    the biosynthesis rate of macromolecules (e.g., DNA, RNA, protein, and cell wall) and the cytoplasmic membrane proton motive force (PMF) energy can help to unravel the diverse modes of action of AMPs. Here, we present an overview of macromolecule biosynthesis rate measurement and fluorescence spectroscopy methods...

  9. Startup methods for single-mode gyrotron operation

    International Nuclear Information System (INIS)

    Whaley, D.R.; Tran, M.Q.; Alberti, S.; Tran, T.M.; Antonsen, T.M. Jr.; Dubrovin, A.; Tran, C.

    1995-01-01

    Experimental results of startup studies on a 118 GHz TE22,6 gyrotron are presented and compared with theory. The startup paths through the energy-velocity-pitch-angle plane are determined by the time evolution of the beam parameters during the startup phase. These startup paths are modified by changing the anode and cathode voltage rise from zero to their nominal values and are seen to determine the cavity oscillating mode. Experimental results show specifically that competition between the TE22,6 and TE-19,7 modes can be completely eliminated by use of the proper startup method in a case where a typical triode startup results in oscillation in the competing TE-19,7 mode. These new results are shown to be in excellent agreement with the theory, whose approach is general and therefore applicable to gyrotrons operating in any arbitrary cavity mode. (author) 3 figs., 4 refs

  10. NOMAD-Ref: visualization, deformation and refinement of macromolecular structures based on all-atom normal mode analysis.

    Science.gov (United States)

    Lindahl, Erik; Azuara, Cyril; Koehl, Patrice; Delarue, Marc

    2006-07-01

    Normal mode analysis (NMA) is an efficient way to study collective motions in biomolecules that bypasses the computational costs and many limitations associated with full dynamics simulations. The NOMAD-Ref web server presented here provides tools for online calculation of the normal modes of large molecules (up to 100,000 atoms) maintaining a full all-atom representation of their structures, as well as access to a number of programs that utilize these collective motions for deformation and refinement of biomolecular structures. Applications include the generation of sets of decoys with correct stereochemistry but arbitrary large amplitude movements, the quantification of the overlap between alternative conformations of a molecule, refinement of structures against experimental data, such as X-ray diffraction structure factors or Cryo-EM maps and optimization of docked complexes by modeling receptor/ligand flexibility through normal mode motions. The server can be accessed at the URL http://lorentz.immstr.pasteur.fr/nomad-ref.php.
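    The core computation behind servers like the one described above can be sketched with a coarse Tirion-style elastic network model: build a Hessian from pairwise springs between sites and diagonalize it. This is a minimal illustration of the idea, not NOMAD-Ref's all-atom code; the toy coordinates, cutoff, and spring constant are invented for the sketch.

    ```python
    import numpy as np

    def enm_modes(coords, cutoff=8.0, gamma=1.0):
        """Eigenmodes of a Tirion-style elastic network Hessian built on point sites."""
        n = len(coords)
        hess = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = float(d @ d)
                if r2 > cutoff ** 2:
                    continue
                block = -gamma * np.outer(d, d) / r2  # off-diagonal super-element
                hess[3 * i:3 * i + 3, 3 * j:3 * j + 3] = block
                hess[3 * j:3 * j + 3, 3 * i:3 * i + 3] = block
                hess[3 * i:3 * i + 3, 3 * i:3 * i + 3] -= block
                hess[3 * j:3 * j + 3, 3 * j:3 * j + 3] -= block
        return np.linalg.eigh(hess)

    rng = np.random.default_rng(1)
    xyz = rng.uniform(0.0, 10.0, size=(15, 3))   # toy "structure", not a real protein
    vals, vecs = enm_modes(xyz, cutoff=100.0)    # large cutoff: fully connected network
    n_zero = int(np.sum(vals < 1e-8 * vals.max()))  # rigid-body translations/rotations
    ```

    The six zero-frequency eigenmodes are the rigid-body motions; the lowest nonzero modes are the collective deformations used for the decoy generation and flexible refinement the abstract describes.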

  11. 'Good Vibrations': A workshop on oscillations and normal modes

    International Nuclear Information System (INIS)

    Barbieri, Sara R.; Carpineti, Marina; Giliberti, Marco; Stellato, Marco; Rigon, Enrico; Tamborini, Marina

    2015-01-01

    We describe some theatrical strategies adopted in a two-hour workshop in order to show some meaningful experiments and the underlying ideas of a secondary school path on oscillations that develops from harmonic motion to normal modes of oscillation, and makes extensive use of video analysis, data logging, slow motion and applet simulations. Theatre is an extremely useful tool to stimulate motivation starting from positive emotions. That is the reason why the theatrical approach to the presentation of physical themes has been explored by the group 'Lo spettacolo della Fisica' (http://spettacolo.fisica.unimi.it) of the Physics Department of the University of Milano for the last ten years (Carpineti et al., JCOM, 10 (2011) 1; Nuovo Cimento B, 121 (2006) 901) and has also been included in the European FP7 Project TEMI (Teaching Enquiry with Mysteries Incorporated, see http://teachingmysteries.eu/en), which involves 13 partners from 11 European countries, among them the Italian (Milan) group. According to the TEMI guidelines, this workshop has a written script based on emotionally engaging activities of presenting mysteries to be solved, while participants are involved in engaging experiments following the developed path.

  12. Theory of the normal modes of vibrations in the lanthanide type crystals

    Science.gov (United States)

    Acevedo, Roberto; Soto-Bubert, Andres

    2008-11-01

    For the lanthanide type crystals, a vast and rich, though incomplete, body of experimental data has been accumulated from linear and nonlinear optics during the last decades. The main goal of the current research work is to report a new methodology and strategy for a more representative approach to the normal modes of vibration of a complex N-body system. For illustrative purposes, the chloride lanthanide type crystals Cs2NaLnCl6 have been chosen, and we develop new convergence tests as well as a criterion to deal with the details of the F-matrix (potential energy matrix). A novel and useful concept of natural potential energy distributions (NPED) is introduced and examined throughout the course of this work. The diagonal and non-diagonal contributions to these NPED values are evaluated explicitly for a series of these crystals. Our model is based upon a total of seventy-two internal coordinates and ninety-eight internal Hooke type force constants. A mathematical optimization procedure is applied to the series of chloride lanthanide crystals, and it is shown that the strategy and model adopted are sound from both a chemical and a physical viewpoint. We can argue that the current model is able to accommodate a number of interactions and to provide us with very useful physical insight. The limitations and advantages of the current model and the most likely sources of improvement are discussed in detail.

  13. Detecting atmospheric normal modes with periods less than 6 h by barometric observations

    Science.gov (United States)

    Ermolenko, S. I.; Shved, G. M.; Jacobi, Ch.

    2018-04-01

    The theory of atmospheric normal modes (ANMs) predicts the existence of relatively short-period gravity-inertia ANMs. Simultaneous observations of surface air-pressure variations by barometers at distant stations of the Global Geodynamics Project network during an interval of 6 months were used to detect individual gravity-inertia ANMs with periods of ∼2-5 h. Evidence was found for five ANMs with a lifetime of ∼10 days. The data of the stations, which are close in both latitude and longitude, were utilized for deriving the phases of the detected ANMs. The phases revealed wave propagation to the west and increase of zonal wavenumbers with frequency. As all the detected gravity-inertia ANMs are westward propagating, they are suggested to be generated due to the breakdown of migrating solar tides and/or large-scale Rossby waves. The existence of an ANM background will complicate the detection of the translational motions of the Earth's inner core.

  14. Instantaneous normal mode analysis for intermolecular and intramolecular vibrations of water from atomic point of view.

    Science.gov (United States)

    Chen, Yu-Chun; Tang, Ping-Han; Wu, Ten-Ming

    2013-11-28

    By exploiting the instantaneous normal mode (INM) analysis for models of flexible molecules, we investigate intermolecular and intramolecular vibrations of water from the atomic point of view. With two flexible SPC/E models, our investigations cover three aspects of their INM spectra, which are separated into the unstable, intermolecular, bending, and stretching bands. First, the O- and H-atom contributions in the four INM bands are calculated and their stable INM spectra are compared with the power spectra of the atomic velocity autocorrelation functions. The unstable and intermolecular bands of the flexible models are also compared with those of the SPC/E model of rigid molecules. Second, we formulate the inverse participation ratio (IPR) of the INMs for the O atom, the H atom, and the molecule, respectively. With the IPRs, the numbers of the three species participating in the INMs are estimated, so that the localization characters of the INMs in each band can be studied. Further, from the ratio of the IPR of the H atom to that of the O atom, we infer the number of OH bonds per molecule participating in the INMs. Third, by classifying simulated molecules into subensembles according to the geometry of their local environments or their H-bond configurations, we examine the local-structure effects on the bending and stretching INM bands. All of our results are verified to be insensitive to the definition of the H-bond. Our conclusions about the intermolecular and intramolecular vibrations in water are given.
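    The inverse participation ratio used above to count participating atoms can be illustrated with one common convention, IPR = 1 / Σ w_i² with w_i the squared, normalized mode amplitude on site i (some authors use the reciprocal; the paper's per-species variants are not reproduced here).

    ```python
    import numpy as np

    def ipr(mode):
        """Participation ratio: ~1 when the mode lives on one site, ~N when it is
        spread evenly over N sites (one common convention; others use the inverse)."""
        w = mode ** 2 / np.sum(mode ** 2)  # per-site weight of the normalized mode
        return 1.0 / np.sum(w ** 2)

    N = 64
    localized = np.zeros(N)
    localized[10] = 1.0                      # all weight on one site
    delocalized = np.ones(N) / np.sqrt(N)    # uniform weight over all sites
    ```

    The two limiting cases bracket the measure: a single-site mode gives IPR = 1, a uniform mode gives IPR = N, so the value reads directly as an effective number of participating sites.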

  15. Comparative analysis of guide mode of government - oriented industry guidance funds under china’s new normal of economic growth

    Science.gov (United States)

    Sun, Chunling; Cheng, Xuemei

    2017-11-01

    Government-oriented industry guidance funds address the problems of financing difficulty and high innovation cost under the background of China's new normal. Through collation and comparative analysis of the policies and regulations of the provinces and cities, the guide modes of these funds are divided into three types. The three modes are then compared and their applicability analyzed, in order to guide fund construction in the provinces and cities.

  16. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    Science.gov (United States)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the expected behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance when the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, because of polynomial approximation factors, a tolerance error cannot be ruled out. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is derived mathematically for the key performance parameters of both modes. This eliminates the approximation factor, so that an exact result can be studied while maintaining high accuracy. The elimination of approximation factors and the approach of exact results are based on a new design parameter (δ) that we propose. The design parameter gives designers an initial hint of how the sensor will behave once it is fabricated. The complete work is supported by extensive mathematical detailing of all the parameters involved. Finally, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.
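    The normal-mode capacitance change the abstract discusses can be sketched with the textbook clamped-diaphragm deflection profile w(r) = w0(1 - (r/a)²)² and a numerical integral of the plate capacitance. The dimensions below are invented for illustration, and this is not the paper's δ-based polynomial model.

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def cap_normal_mode(radius, gap, w0, n=20000):
        """Capacitance of a clamped circular diaphragm above a fixed plate,
        C = integral of eps0 / (gap - w(r)) over the plate area, using the
        textbook small-deflection profile w(r) = w0 * (1 - (r/radius)^2)^2.
        Valid in normal mode, i.e. while the center deflection w0 < gap."""
        r = np.linspace(0.0, radius, n)
        w = w0 * (1.0 - (r / radius) ** 2) ** 2
        f = 2.0 * np.pi * r / (gap - w)                 # ring-wise integrand
        return EPS0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule

    a, g = 500e-6, 2.0e-6                     # 500 um radius, 2 um gap (illustrative)
    c_flat = EPS0 * np.pi * a ** 2 / g        # undeflected parallel-plate value
    c_deflected = cap_normal_mode(a, g, w0=0.5e-6)
    ```

    Deflection toward the fixed plate narrows the effective gap, so c_deflected exceeds c_flat; sweeping w0 against applied pressure gives the normal-mode capacitance-pressure curve, with touch mode beginning once w0 reaches the gap.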

  17. All-atom normal-mode analysis reveals an RNA-induced allostery in a bacteriophage coat protein.

    Science.gov (United States)

    Dykeman, Eric C; Twarock, Reidun

    2010-03-01

    Assembly of the T=3 bacteriophage MS2 is initiated by the binding of a 19 nucleotide RNA stem loop from within the phage genome to a symmetric coat protein dimer. This binding event effects a folding of the FG loop in one of the protein subunits of the dimer and results in the formation of an asymmetric dimer. Since both the symmetric and asymmetric forms of the dimer are needed for the assembly of the protein container, this allosteric switch plays an important role in the life cycle of the phage. We provide here details of an all-atom normal-mode analysis of this allosteric effect. The results suggest that asymmetric contacts between the A-duplex RNA phosphodiester backbone of the stem loop with the EF loop in one coat protein subunit results in an increased dynamic behavior of its FG loop. The four lowest-frequency modes, which encompass motions predominantly on the FG loops, account for over 90% of the increased dynamic behavior due to a localization of the vibrational pattern on a single FG loop. Finally, we show that an analysis of the allosteric effect using an elastic network model fails to predict this localization effect, highlighting the importance of using an all-atom full force field method for this problem.

  18. Comparative effectiveness of videotape and handout mode of instructions for teaching exercises: skill retention in normal children

    Directory of Open Access Journals (Sweden)

    Gupta Garima

    2012-01-01

    Background: Teaching of motor skills is fundamental to physical therapy practice. In order to optimize the benefits of these teaching and training efforts, various forms of patient education material are developed and handed out to patients. One very important fact has been overlooked: while the comparative effectiveness of various modes of instruction has been studied in adults, attention has not been paid to the fact that the learning capabilities of children are different from those of adults. The intent of the present study is to compare the effectiveness of video and handout modes of instruction specifically in children. Methods: A total of 115 normal elementary-age children 10 to 12 years of age were studied. The children were randomized into two groups: (A) the video group, and (B) the handout group. The video group viewed the video for physical therapy exercises while the handout group was provided with paper handouts especially designed according to the readability of their age group. Results: Statistical analysis using Student's t-test showed that subjects of both the video and handout groups exhibited equal overall performance accuracy. There was no significant difference between the groups in both the acquisition and retention accuracy tests. Conclusion: The findings of the present study suggest that if the readability and instructional principles applicable to different target age groups are strictly adhered to, then both video and handout modes of instruction result in similar feedback and memory recall in ten to twelve year-old children. Principles of readability related to patient age are of utmost importance when designing patient education material. These findings suggest that less expensive handouts can be an effective instructional aid for teaching exercises to children with various neuromuscular, rheumatic, and orthopedic conditions, and that the more costly videotape techniques are not necessarily better.

  19. Method for construction of normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  20. Matrix method for two-dimensional waveguide mode solution

    Science.gov (United States)

    Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee

    2018-05-01

    In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even composed of numerous layers, is essentially a multilayer stack and the transmission through the stack can be analysed using the transfer matrix theory. The result is a transfer matrix with four complex value elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer, there is only propagation and absorption; at each boundary, only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve modes supported by the multilayer step-index dielectric waveguide, slot waveguide, gradient-index waveguide and various plasmonic waveguides. The results indicate the transfer matrix method is effective for 2D waveguide mode solution in general.
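    The mode condition described above can be sketched for a single-layer symmetric slab. Instead of tracking the matrix element D directly, the equivalent condition used here is that a field launched as purely decaying in the substrate emerges purely decaying in the cladding; the indices and thickness are illustrative (roughly silicon-on-insulator-like), not taken from the paper.

    ```python
    import numpy as np

    def slab_te_mode_mismatch(neff, n_layers, d_layers, n_sub, n_clad, wavelength):
        """TE mismatch F(neff): launch a field decaying into the substrate,
        propagate [E, dE/dx] through the stack with 2x2 layer matrices, and
        require a purely decaying field in the cladding; F = 0 at a guided mode."""
        k0 = 2.0 * np.pi / wavelength
        beta2 = (k0 * neff) ** 2
        g_sub = np.sqrt(beta2 - (k0 * n_sub) ** 2)    # substrate decay rate
        g_clad = np.sqrt(beta2 - (k0 * n_clad) ** 2)  # cladding decay rate
        v = np.array([1.0, g_sub], dtype=complex)     # [E, E'] at the bottom interface
        for n, d in zip(n_layers, d_layers):
            kap = np.sqrt(complex((k0 * n) ** 2 - beta2))  # transverse wavenumber
            m = np.array([[np.cos(kap * d), np.sin(kap * d) / kap],
                          [-kap * np.sin(kap * d), np.cos(kap * d)]])
            v = m @ v
        return (v[1] + g_clad * v[0]).real

    def find_fundamental(n_layers, d_layers, n_sub, n_clad, wavelength, steps=2000):
        """Scan neff downward from the highest core index; bisect the first sign
        change of the mismatch, which marks the fundamental guided mode."""
        args = (n_layers, d_layers, n_sub, n_clad, wavelength)
        grid = np.linspace(max(n_layers) - 1e-6, max(n_sub, n_clad) + 1e-6, steps)
        f = [slab_te_mode_mismatch(ne, *args) for ne in grid]
        for i in range(steps - 1):
            if f[i] * f[i + 1] < 0:
                a, b, fa = grid[i + 1], grid[i], f[i + 1]
                for _ in range(60):  # bisection refinement
                    mid = 0.5 * (a + b)
                    fm = slab_te_mode_mismatch(mid, *args)
                    if fm * fa < 0:
                        b = mid
                    else:
                        a, fa = mid, fm
                return 0.5 * (a + b)
        return None

    # 220 nm high-index slab between identical low-index claddings at 1.55 um
    neff = find_fundamental([3.48], [0.22], 1.44, 1.44, 1.55)
    ```

    Each layer contributes the usual [[cos κd, sin κd / κ], [-κ sin κd, cos κd]] matrix; evanescent layers are handled automatically because κ becomes imaginary under complex arithmetic. The same scan-and-bisect loop extended over multiple layers recovers the multilayer, slot, and gradient-index cases the abstract mentions.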

  1. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suitable approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation retained the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
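    The four transformations named above can be sketched directly; the assay series below is invented for illustration, and the per-assay versus per-time-point orientation discussed in the article corresponds simply to which axis of the data matrix each function is applied along.

    ```python
    import numpy as np

    def z_transform(x):
        """Center on the mean, scale by the sample standard deviation."""
        return (x - x.mean()) / x.std(ddof=1)

    def range_transform(x):
        """Min-max scaling onto [0, 1]."""
        return (x - x.min()) / (x.max() - x.min())

    def proportion_transform(x):
        """Each value as a proportion of the series total."""
        return x / x.sum()

    def iqr_transform(x):
        """Robust variant: center on the median, scale by the interquartile range."""
        q1, q3 = np.percentile(x, [25, 75])
        return (x - np.median(x)) / (q3 - q1)

    # Illustrative aggregometry-like series (values invented for this sketch)
    adp_response = np.array([42.0, 55.0, 61.0, 47.0, 58.0])
    ```

    All four are monotone per-series maps, which is why they preserve rank-based (Spearman) correlation within a series while putting differently scaled assays on a comparable footing for visualization.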

  2. Combining Illumination Normalization Methods for Better Face Recognition

    NARCIS (Netherlands)

    Boom, B.J.; Tao, Q.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs a local preprocessing, where they correct a pixel value based on a local neighborhood in the images. The second

  3. Switched-mode power supply apparatus and method

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a switched-mode power supply apparatus and a corresponding method. For an effective compensation of non-linearities caused by dead-time and voltage drops in the switching power amplifier of the apparatus, an apparatus is proposed comprising a switching power

  5. Extracting surface waves, hum and normal modes: time-scale phase-weighted stack and beyond

    Science.gov (United States)

    Ventosa, Sergi; Schimmel, Martin; Stutzmann, Eleonore

    2017-10-01

    Stacks of ambient noise correlations are routinely used to extract empirical Green's functions (EGFs) between station pairs. The time-frequency phase-weighted stack (tf-PWS) is a physically intuitive nonlinear denoising method that uses phase coherence to improve EGF convergence when the performance of conventional linear averaging methods is not sufficient. The high computational cost of a continuous approach to the time-frequency transformation is currently a main limitation in ambient noise studies. We introduce the time-scale phase-weighted stack (ts-PWS) as an alternative extension of the phase-weighted stack that uses complex frames of wavelets to build a time-frequency representation that is much more efficient and faster to compute, and that preserves the performance and flexibility of the tf-PWS. In addition, we propose two strategies, the unbiased phase coherence and the two-stage ts-PWS methods, to further improve noise attenuation, the quality of the extracted signals and the convergence speed. We demonstrate that these approaches enable the extraction of minor- and major-arc Rayleigh waves (up to the sixth Rayleigh wave train) from many years of data from the GEOSCOPE global network. Finally, we also show that fundamental spheroidal modes can be extracted from these EGFs.
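    The phase-weighting idea underlying the tf-PWS and ts-PWS can be sketched in its simplest, time-domain form: weight the linear stack by the coherence of the instantaneous phases across traces. The synthetic traces below are invented for illustration, and the wavelet-frame machinery of the ts-PWS itself is not reproduced.

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via a one-sided FFT (discrete Hilbert transform)."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.fft.ifft(X * h)

    def phase_weighted_stack(traces, nu=2.0):
        """Linear stack modulated by instantaneous-phase coherence across traces;
        nu controls how sharply incoherent samples are down-weighted."""
        phasors = np.array([analytic_signal(tr) for tr in traces])
        phasors /= np.abs(phasors) + 1e-30           # keep phase, drop amplitude
        coherence = np.abs(phasors.mean(axis=0))     # 1 = coherent, ~0 = incoherent
        return traces.mean(axis=0) * coherence ** nu

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 1000)
    pulse = np.exp(-((t - 2.0) ** 2) / 0.02) * np.cos(40.0 * t)  # coherent arrival
    traces = pulse + 0.5 * rng.normal(size=(60, t.size))         # 60 noisy records
    linear = traces.mean(axis=0)
    pws = phase_weighted_stack(traces)
    ```

    Where the arrival is coherent the phase coherence is near one and the stack is preserved; in noise-only windows it falls toward 1/sqrt(N), so the weighted stack suppresses residual noise far faster than plain averaging.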

  6. Photoneutron spectrum measured with Bonner Spheres in Planetary method mode

    Energy Technology Data Exchange (ETDEWEB)

    Benites R, J. [Centro Estatal de Cancerologia de Nayarit, Servicio de Seguridad Radiologica, Calz. de la Cruz 118 Sur, 63000 Tepic, Nayarit (Mexico); Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Apdo. Postal 336, 98000 Zacatecas (Mexico); Velazquez F, J., E-mail: jlbenitesr@prodigy.net.mx [Universidad Autonoma de Nayarit, Posgrado en Ciencias Biologico Agropecuarias, Carretera Tepic-Compostela Km 9, 63780 Jalisco-Nayarit (Mexico)

    2012-10-15

    We measured the photoneutron spectrum at 100 cm from the isocenter of a Varian iX linear accelerator (Linac) operating in 15 MV Bremsstrahlung mode. A radiation field of 20 x 20 cm² was used at a depth of 5 cm in a solid water phantom with dimensions of 30 x 30 x 15 cm³. The measurement was performed with a Bonner sphere spectrometric system used in Planetary method mode. Pairs of type 600 and 700 thermoluminescent dosimeters were used as the neutron detectors of the spectrometer. (Author)

  7. Photoneutron spectrum measured with Bonner Spheres in Planetary method mode

    International Nuclear Information System (INIS)

    Benites R, J.; Vega C, H. R.; Velazquez F, J.

    2012-10-01

    We measured the photoneutron spectrum at 100 cm from the isocenter of a Varian iX linear accelerator (Linac) operating in 15 MV Bremsstrahlung mode. A radiation field of 20 x 20 cm² was used at a depth of 5 cm in a solid water phantom with dimensions of 30 x 30 x 15 cm³. The measurement was performed with a Bonner sphere spectrometric system used in Planetary method mode. Pairs of type 600 and 700 thermoluminescent dosimeters were used as the neutron detectors of the spectrometer. (Author)

  8. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of four monitors and a program were developed to eliminate, in the process of normalizing cross sections, the dependence on equipment alignment and beam centering. A series of experiments was carried out with the systems 27Al + 70,72,74,76Ge, 35Cl + 58Ni, 37Cl + 58,60,62,64Ni and (81Br, 109Rh) + 60Ni. In these experiments a typical normalization precision of 1% was obtained. The advantage of this method over those using one or two monitors is demonstrated theoretically and experimentally. (Author)

  9. Startup methods for single-mode gyrotron operation

    International Nuclear Information System (INIS)

    Whaley, D.R.; Tran, M.Q.; Alberti, S.; Tran, T.M.; Antonsen, T.M.; Tran, C.

    1995-03-01

    Experimental results of startup studies on a 118 GHz TE 22,6 gyrotron are presented and compared with theory. The theoretical excitation regimes of competing modes are computed in the energy-velocity-pitch-angle plane near the operating point. The startup paths through the plane are determined by the time evolution of the beam parameters during the startup phase. These startup paths are modified by changing the anode and cathode voltage rise from zero to their nominal values and are seen to determine the cavity oscillating mode. Experimental results show specifically that competition between the TE 22,6 and TE -19,7 modes can be completely eliminated by using the proper startup method in a case where a typical triode startup results in oscillation in the competing TE -19,7 mode. These new results are shown to be in excellent agreement with theory; the theoretical approach is general and therefore applicable to gyrotrons operating in any arbitrary cavity mode. (author) 5 figs., 1 tab., 13 refs

  10. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. In this work, conventional radiographic femur bone images are used to analyze trabecular architecture with the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. Here, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of radiographic femur bone images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
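
    The sifting step of (bi-dimensional) empirical mode decomposition hinges on exactly this envelope interpolation. A minimal 1D sketch, using cubic splines in place of the multiquadric/B-spline surfaces discussed above (signal and sampling are hypothetical):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def envelope_mean(t, x):
    """One sifting ingredient of EMD: spline envelopes through the local
    maxima and minima, and their mean (which sifting subtracts)."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return (upper + lower) / 2.0, maxima, minima

t = np.linspace(0.0, 10.0 * np.pi, 2000)
x = np.sin(t)                     # a pure tone is already an intrinsic mode
mean_env, maxima, minima = envelope_mean(t, x)
# judge only the interior, where both splines interpolate (no end effects)
core = slice(max(maxima[0], minima[0]), min(maxima[-1], minima[-1]))
```

    For a signal that is already an intrinsic mode function the mean envelope is near zero away from the boundaries; the choice of interpolant (multiquadric RBF vs. hierarchical B-spline) mainly changes how the envelope behaves between the extrema.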

  11. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, there are no inverse matrix steps required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM method and the finite element method results shows that the modal percentage deviation is increased when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.
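
    The transfer-matrix idea can be illustrated on a uniform cantilever split into two segments: chaining the 4 × 4 segment matrices and imposing clamped-free boundary conditions recovers the classical first root βL ≈ 1.875. A sketch with hypothetical unit beam properties (Euler-Bernoulli rather than the Timoshenko model above, and not the NTM normalization itself):

```python
import numpy as np

def segment_matrix(beta, length):
    """4x4 Euler-Bernoulli transfer matrix on the state [w, w', w'', w''']
    built from the Krylov-Duncan functions."""
    s = beta * length
    S = (np.cosh(s) + np.cos(s)) / 2.0
    T = (np.sinh(s) + np.sin(s)) / 2.0
    U = (np.cosh(s) - np.cos(s)) / 2.0
    V = (np.sinh(s) - np.sin(s)) / 2.0
    return np.array([
        [S,            T / beta,     U / beta**2, V / beta**3],
        [beta * V,     S,            T / beta,    U / beta**2],
        [beta**2 * U,  beta * V,     S,           T / beta],
        [beta**3 * T,  beta**2 * U,  beta * V,    S],
    ])

def residual(beta, seg_lengths):
    """Clamped-free determinant: w(0)=w'(0)=0 leaves columns 2,3 of the
    chained matrix; w''(L)=w'''(L)=0 selects rows 2,3."""
    A = np.eye(4)
    for ell in seg_lengths:
        A = segment_matrix(beta, ell) @ A
    return np.linalg.det(A[2:4, 2:4])

# bisect for the first natural frequency of a unit-length cantilever
lo, hi = 1.0, 2.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo, [0.5, 0.5]) * residual(mid, [0.5, 0.5]) <= 0.0:
        hi = mid
    else:
        lo = mid
beta_1 = 0.5 * (lo + hi)   # classical cantilever value: beta*L = 1.8751...
```

    For a genuinely stepped beam, each segment would simply carry its own β (via its EI and mass per length); the boundary-condition bookkeeping is unchanged, which is the appeal of the transfer-matrix formulation.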

  12. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
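
    The EM update for Poisson emission data, the core of both proposed algorithms (shown here without the bin-mode extension, on a made-up system matrix), is a single multiplicative line:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 10))           # hypothetical detection-probability matrix
lam_true = 1.0 + 9.0 * rng.random(10)
y = A @ lam_true                   # noise-free expected counts (sanity check)

lam = np.ones(10)                  # strictly positive starting image
sens = A.sum(axis=0)               # per-source-bin sensitivity
for _ in range(5000):
    lam *= (A.T @ (y / (A @ lam))) / sens   # classic MLEM update
```

    Each iteration preserves positivity and does not decrease the Poisson likelihood; with consistent data the forward projection A @ lam approaches y. The MAP variant mentioned above modifies this update with a prior term.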

  13. Virtual screening for potential inhibitors of Mcl-1 conformations sampled by normal modes, molecular dynamics, and nuclear magnetic resonance

    Directory of Open Access Journals (Sweden)

    Glantz-Gashai Y

    2017-06-01

    Full Text Available Yitav Glantz-Gashai,* Tomer Meirson,* Eli Reuveni, Abraham O Samson Faculty of Medicine in the Galilee, Bar Ilan University, Safed, Israel *These authors contributed equally to this work Abstract: Myeloid cell leukemia-1 (Mcl-1) is often overexpressed in human cancer and is an important target for developing antineoplastic drugs. In this study, a data set containing 2.3 million lead-like molecules and a data set of all the US Food and Drug Administration (FDA)-approved drugs are virtually screened for potential Mcl-1 ligands using Protein Data Bank (PDB) ID 2MHS. The potential Mcl-1 ligands are evaluated and computationally docked onto three conformation ensembles generated by normal mode analysis (NMA), molecular dynamics (MD), and nuclear magnetic resonance (NMR), respectively. The evaluated potential Mcl-1 ligands are then compared with their clinical use. Remarkably, half of the top 30 potential drugs are used clinically to treat cancer, thus partially validating our virtual screen. The partial validation also favors the idea that the other half of the top 30 potential drugs could be used in the treatment of cancer. The normal mode-, MD-, and NMR-based conformations greatly expand the conformational sampling used herein for in silico identification of potential Mcl-1 inhibitors. Keywords: virtual screening, Mcl-1, molecular dynamics, NMR, normal modes

  14. On the structure and normal modes of hydrogenated Ti-fullerene compounds

    Energy Technology Data Exchange (ETDEWEB)

    Tlahuice-Flores, Alfredo, E-mail: tlahuicef@yahoo.com [Universidad Nacional Autonoma de Mexico, Instituto de Fisica (Mexico); Mejia-Rosales, Sergio, E-mail: sergio.mejiars@uanl.edu.mx [Universidad Autonoma de Nuevo Leon, CICFIM-Facultad de Ciencias Fisico Matematicas, and Centro de Innovacion, Investigacion y Desarrollo en Ingenieria y Tecnologia (Mexico); Galvan, Donald H., E-mail: donald@cnyn.unam.mx [Centro de Nanociencias y Nanotecnologia-Universidad Nacional Autonoma de Mexico (Mexico)

    2012-08-15

    When titanium covers a C{sub 60} core, the metal atoms may suppress the fullerene's capacity of storing hydrogen, depending on the number of Ti atoms covering the C{sub 60} framework, the Ti-C binding energy, and diffusion barriers. In this article, we study the structural and vibrational properties of the C{sub 60}TiH{sub n} (n = 2, 4, 6, and 8) and C{sub 60}Ti{sub 6}H{sub 48} compounds. The IR spectra of C{sub 60}TiH{sub n} compounds have a maximum attributable to the Ti-H stretching mode, which shifts to lower values in the structures with n = 4, 8, while their Raman spectra show two peaks corresponding to the stretching modes of H{sub 2} molecules at apical and azimuthal positions. On the other hand, the IR spectrum of C{sub 60}Ti{sub 6}H{sub 48} shows an intense peak due to the Ti-H in-phase stretching mode, while its Raman spectrum has a maximum attributed to the pentagonal pinch of the C{sub 60} core. Finally, we have found that the presence of one apical H{sub 2} molecule enhances the pentagonal pinch mode, becoming the maximum in the Raman spectrum.

  15. An experimental randomized study of six different ventilatory modes in a piglet model with normal lungs

    DEFF Research Database (Denmark)

    Nielsen, J B; Sjöstrand, U H; Henneberg, S W

    1991-01-01

    -controlled intermittent positive-pressure ventilation; and SV-20P denotes pressure-controlled intermittent positive-pressure ventilation. With all modes of ventilation a PEEP of 7.5 cm H2O was used. In the abbreviations used, the number denotes the ventilatory frequency in breaths per minute (bpm). HFV indicates that all...

  16. Noise induced multidecadal variability in the North Atlantic: excitation of normal modes

    NARCIS (Netherlands)

    Frankcombe, L.M.; Dijkstra, H.A.; von der Heydt, A.S.

    2009-01-01

    In this paper it is proposed that the stochastic excitation of a multidecadal internal ocean mode is at the origin of the multidecadal sea surface temperature variability in the North Atlantic. The excitation processes of the spatial sea surface temperature pattern associated with this multidecadal

  17. A Formal Methods Approach to the Analysis of Mode Confusion

    Science.gov (United States)

    Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.

    2004-01-01

    The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal

  18. Coordenadas cartesianas moleculares a partir da geometria dos modos normais de vibração Molecular cartesian coordinates from vibrational normal modes geometry

    Directory of Open Access Journals (Sweden)

    Emílio Borges

    2007-04-01

    Full Text Available A simple method to obtain molecular Cartesian coordinates as a function of vibrational normal modes is presented in this work. The method does not require the definition of special matrices, like the F and G matrices of Wilson, nor the use of group theory. The Eckart conditions, together with the diagonalization of the kinetic and potential energy, are the only required expressions. This makes the present approach appropriate as a preliminary study for more advanced concepts concerning vibrational analysis. Examples are given for diatomic and triatomic molecules.
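
    The simultaneous diagonalization of kinetic and potential energy amounts to diagonalizing the mass-weighted Hessian. For a 1D diatomic with hypothetical force constant and masses, this yields one zero-frequency translation (consistent with the Eckart conditions) and one stretch at ω² = k(1/m₁ + 1/m₂):

```python
import numpy as np

k, m1, m2 = 4.0, 1.0, 9.0                      # hypothetical parameters
H = k * np.array([[1.0, -1.0], [-1.0, 1.0]])   # Hessian of V = k/2 (x2 - x1)^2
M_inv_sqrt = np.diag([m1**-0.5, m2**-0.5])
D = M_inv_sqrt @ H @ M_inv_sqrt                # mass-weighted Hessian
omega2, modes = np.linalg.eigh(D)              # squared angular frequencies
cartesian = M_inv_sqrt @ modes                 # Cartesian displacement patterns
```

    The columns of `cartesian` are the Cartesian displacements along each normal mode, which is exactly the map from normal coordinates back to Cartesian coordinates that the article describes.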

  19. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Full Text Available Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
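
    The central difficulty of composite normality, that estimating μ and σ from the sample invalidates off-the-shelf critical values, can be sketched with a Monte Carlo (Lilliefors-style) calibration of the KS statistic. This illustrates the composite-null issue only; it is not the MSP test described above:

```python
import numpy as np
from scipy import stats

def ks_composite(x):
    """KS distance to N(mu_hat, sigma_hat), parameters fit on the sample."""
    return stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1))).statistic

def mc_pvalue(x, n_sim=400, seed=1):
    """Simulate the null distribution of the composite statistic,
    since the textbook KS p-value no longer applies."""
    rng = np.random.default_rng(seed)
    d_obs = ks_composite(x)
    d_null = [ks_composite(rng.standard_normal(x.size)) for _ in range(n_sim)]
    return np.mean(np.array(d_null) >= d_obs)

rng = np.random.default_rng(0)
p_normal = mc_pvalue(rng.standard_normal(300))    # null is true
p_expon = mc_pvalue(rng.exponential(size=300))    # asymmetric alternative
```

    An asymmetric alternative such as the exponential is rejected decisively, which is the regime where the article reports the MSP test outperforming Jarque-Bera.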

  20. A Pictorial Visualization of Normal Mode Vibrations of the Fullerene (C[subscript 60]) Molecule in Terms of Vibrations of a Hollow Sphere

    Science.gov (United States)

    Dunn, Janette L.

    2010-01-01

    Understanding the normal mode vibrations of a molecule is important in the analysis of vibrational spectra. However, the complicated 3D motion of large molecules can be difficult to interpret. We show how images of normal modes of the fullerene molecule C[subscript 60] can be made easier to understand by superimposing them on images of the normal…

  1. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    Science.gov (United States)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for big events as used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 and more recent. This approach combines the data (through likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic
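
    The Bayesian machinery described above, a likelihood times a prior explored without regularization, can be illustrated with a minimal Metropolis sampler for a single parameter. The data here are synthetic and the model is a toy Gaussian; the actual splitting-coefficient likelihood is built from the receiver-strip errors:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)          # synthetic observations

def log_post(mu):
    """Gaussian likelihood (unit sigma) plus a standard-normal prior."""
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * mu ** 2

mu, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = mu + 0.5 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain.append(mu)
post = np.array(chain[2000:])                  # discard burn-in
conjugate_mean = data.sum() / (len(data) + 1)  # exact posterior mean
```

    The posterior samples carry both the estimate (their mean) and its uncertainty (their spread), which is the advantage over a damped iterative inversion that this record emphasizes.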

  2. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1972-01-01

    1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this stan...

  3. Different methods of measuring ADC values in normal human brain

    International Nuclear Information System (INIS)

    Wei Youping; Sheng Junkang; Zhang Caiyuan

    2009-01-01

    Objective: To investigate a better method of measuring ADC values of the normal brain, and provide a reference for further research. Methods: MR imaging of twenty healthy people was reviewed. All of them underwent routine MRI scans and echo-planar diffusion-weighted imaging (DWI), and ADC maps were reconstructed on a workstation. Six regions of interest (ROI) were selected for each subject, and the mean ADC values were obtained for each position on the DWI and ADC maps respectively. Results: On the anisotropic DWI map calculated in the hypothalamus, the ADC M , ADC P , and ADC S values showed no significant difference (P>0.05); in the frontal white matter and internal capsule hindlimb, there was a significant difference. The ADC ave value showed a significant difference compared with direct measurement on the anisotropic (isotropic) ADC map (P<0.001). Conclusion: Diffusion of water in the frontal white matter and internal capsule is anisotropic, but it is isotropic in the hypothalamus; the four quantitative methods of measuring ADC values differ significantly, but ADC values calculated through the DWI map are more accurate, and quantitative diffusion studies of brain tissue should also consider the diffusion measurement method. (authors)

  4. Endoscopic mode for three-dimensional CT display of normal and pathologic laryngeal structures

    International Nuclear Information System (INIS)

    Sanuki, Tetsuji; Hyodo, Masamitsu; Yumoto, Eiji; Yasuhara, Yoshifumi; Ochi, Takashi

    1997-01-01

    The recent development of helical (spiral) computed tomography allows collection of volumetric data to obtain high quality three-dimensional (3D) reconstructed images. The authors applied the 3D CT endoscopic imaging technique to assess normal and pathologic laryngeal structures. The latter included trauma, vocal fold atrophy, cancer of the larynx and recurrent nerve palsy. This technique was able to show normal laryngeal structures and characteristic findings of each pathology. The 3D CT endoscopic images can be rotated around any axis, allowing optimal depiction of the pathologic lesion. The use of the 3D CT endoscopic technique provides a display of the location and extent of pathology and affords accurate therapeutic planning. (author)

  5. Pump induced normal mode splittings in phase conjugation in a Kerr ...

    Indian Academy of Sciences (India)

    Abstract. Phase conjugation in a Kerr nonlinear waveguide is studied with counter-propagating normally incident pumps and a probe beam at an arbitrary angle of incidence. Detailed numerical results for the specular and phase conjugated reflectivities are obtained with full account of pump depletion. For sufficient ...

  6. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    Science.gov (United States)

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of wave and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and the environment noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationship between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, as demonstrated by comparing different fault severities and different turbulence intensities. Compared with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
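
    The GLRT detector stage can be sketched for the simplest possible case, a mean shift in Gaussian noise with known variance. The signals below are synthetic; the paper's detector operates on features of the MCT stator current:

```python
import numpy as np
from scipy.stats import chi2

def glrt_mean_shift(x, sigma=1.0):
    """GLRT for H1 (nonzero mean) vs H0 (zero mean), known variance.
    Maximizing the likelihood over the mean gives T = n * xbar^2 / sigma^2,
    which is chi-square(1) distributed under H0."""
    return x.size * x.mean() ** 2 / sigma ** 2

rng = np.random.default_rng(0)
healthy = rng.standard_normal(200)          # balanced condition
faulty = 1.0 + rng.standard_normal(200)     # imbalance adds a mean shift

threshold = chi2.ppf(0.95, df=1)            # 5% false-alarm threshold (~3.84)
T_healthy = glrt_mean_shift(healthy)
T_faulty = glrt_mean_shift(faulty)
```

    Comparing the statistic against a chi-square quantile fixes the false-alarm rate, which is why a GLRT is a natural choice for selecting the monitoring variable.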

  7. Environmental impact from different modes of transport. Method of comparison

    International Nuclear Information System (INIS)

    2002-03-01

    A prerequisite of long-term sustainable development is that activities of various kinds are adjusted to what humans and the natural world can tolerate. Transport is an activity that affects humans and the environment to a very great extent, and in this project several actors within the transport sector have together laid the foundation for the development of a comparative method to compare the environmental impact at the different stages along the transport chain. The method analyses the effects of different transport concepts on the climate, noise levels, human health, acidification, land use and ozone depletion. Within the framework of the method, a calculation model has been created in Excel which acts as a basis for the comparisons. The user can choose to download the model from the Swedish EPA's on-line bookstore or order it on a floppy disk. Neither the method nor the model is as yet fully developed, but our hope is that they can still be used in their present form as a basis and an inspiration for further efforts and research in the field. In the report, we describe most of these shortcomings, the problems associated with the work and the existing development potential. This publication should be seen as the first stage in the development of a method of comparison between different modes of transport in non-monetary terms, where there remains a considerable need for further development and amplification.

  8. Nanomechanical inverse electromagnetically induced transparency and confinement of light in normal modes

    International Nuclear Information System (INIS)

    Agarwal, G S; Huang, Sumei

    2014-01-01

    We demonstrate the existence of the phenomenon of inverse electromagnetically induced transparency (IEIT) in an optomechanical system consisting of a nanomechanical mirror placed in an optical cavity. We show that two weak counter-propagating identical classical probe fields can be completely absorbed by the system in the presence of a strong coupling field, so that the output probe fields are zero. The light is completely confined inside the cavity and the energy of the incoming probe fields is shared between the cavity field and the creation of a coherent phonon, residing primarily in one of the polariton modes. The energy can be extracted by a perturbation of the external fields or by suddenly changing the Q of the cavity. (paper)

  9. ECHOCARDIOGRAPHIC FINDINGS OF BIDIMENSIONAL MODE, M-MODE, AND DOPPLER OF CLINICALLY NORMAL BLACK-RUMPED AGOUTI (DASYPROCTA PRYMNOLOPHA, WAGLER 1831).

    Science.gov (United States)

    Diniz, Anaemilia das Neves; Pessoa, Gerson Tavares; da Silva Moura, Laecio; de Sousa, André Braga; Sousa, Francisco das Chagas Araújo; de Sá Rodrigues, Renan Paraguassu; da Silva Barbosa, Maria Angélica Parente; de Almeida, Hatawa Melo; Freire, Larisse Danielle Silva; Sanches, Marina Pinto; Júnior, Antônio Augusto Nascimento Machado; Guerra, Porfírio Candanedo; Neves, Willams Costa; de Sousa, João Macedo; Bolfer, Luiz; Giglio, Robson Fortes; Alves, Flávio Ribeiro

    2017-06-01

    The black-rumped agouti (Dasyprocta prymnolopha, Wagler 1831) is currently under intense ecologic pressure, which has resulted in its disappearance from some regions of Brazil. Echocardiography is widely used in veterinary medicine, but it is not yet part of the clinical routine for wild animals. The objective of the present study was to assess the applicability of the echocardiographic exam in nonanesthetized agouti and to establish normal reference values for echocardiographic measurements in bidimensional mode (2D), M-mode, and Doppler for this species; a lead II electrocardiogram was simultaneously recorded. Twenty agouti were used in this study. All the echocardiographic measurements were positively correlated with weight (P < 0.05). Blood flow velocities in the pulmonary artery and aorta ranged from 67.32-71.28 cm/sec and 79.22-101.84 cm/sec, respectively. The isovolumic relaxation time was assessed in all the animals and ranged from 38.5 to 56.6 ms. The maximum value for the nonfused E and A waves and the Et and At waves was 158 beats/min for both. The results obtained for the morphologic and heart hemodynamic measurements can guide future studies and help in the clinical management of these animals in captivity.

  10. Coupling of Rayleigh-like waves with zero-sound modes in normal 3He

    International Nuclear Information System (INIS)

    Bogacz, S.A.; Ketterson, J.B.

    1985-01-01

    The Landau kinetic equation is solved in the collisionless regime for a sample of normal 3 He excited by a surface perturbation of arbitrary ω and k. The boundary condition for the nonequilibrium particle distribution is determined for the case of specular reflection of the elementary excitations at the interface. Using the above solution, the energy flux through the boundary is obtained as a function of the surface wave velocity ω/k. The absorption spectrum and its frequency derivative are calculated numerically for typical values of temperature and pressure. The spectrum displays a sharp, resonant-like maximum concentrated at the longitudinal sound velocity and a sharp maximum of the derivative concentrated at the transverse sound velocity. The energy transfer is cut off discontinuously below the Fermi velocity. An experimental measurement of the energy transfer spectrum would permit a determination of both zero-sound velocities and the Fermi velocity with spectroscopic precision

  11. Direct assignment of molecular vibrations via normal mode analysis of the neutron dynamic pair distribution function technique

    International Nuclear Information System (INIS)

    Fry-Petit, A. M.; Sheckelton, J. P.; McQueen, T. M.; Rebola, A. F.; Fennie, C. J.; Mourigal, M.; Valentine, M.; Drichko, N.

    2015-01-01

    For over a century, vibrational spectroscopy has enhanced the study of materials. Yet, assignment of particular molecular motions to vibrational excitations has relied on indirect methods. Here, we demonstrate that applying group theoretical methods to the dynamic pair distribution function analysis of neutron scattering data provides direct access to the individual atomic displacements responsible for these excitations. Applied to the molecule-based frustrated magnet with a potential magnetic valence-bond state, LiZn 2 Mo 3 O 8 , this approach allows direct assignment of the constrained rotational mode of Mo 3 O 13 clusters and internal modes of MoO 6 polyhedra. We anticipate that coupling this well known data analysis technique with dynamic pair distribution function analysis will have broad application in connecting structural dynamics to physical properties in a wide range of molecular and solid state systems

  12. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated by next generation sequencing technology. However, so far a systematic evaluation of normalization methods on microRNA sequencing data has been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
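
    Quantile normalization, one of the two recommended methods, forces every sample (column) to share a common distribution. A minimal sketch on a toy count matrix (ties are broken arbitrarily here; production implementations average tied ranks):

```python
import numpy as np

def quantile_normalize(X):
    """Replace each column's values by the mean of the row-wise sorted
    columns, assigned back according to each value's within-column rank."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    target = np.sort(X, axis=0).mean(axis=1)   # shared reference distribution
    return target[ranks]

X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
Q = quantile_normalize(X)
```

    After normalization every column holds exactly the same set of values, so sample-level distributional differences are removed while the within-sample ordering of tags is preserved.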

  13. Modeling laser beam diffraction and propagation by the mode-expansion method.

    Science.gov (United States)

    Snyder, James J

    2007-08-01

    In the mode-expansion method for modeling propagation of a diffracted beam, the beam at the aperture can be expanded as a weighted set of orthogonal modes. The parameters of the expansion modes are chosen to maximize the weighting coefficient of the lowest-order mode. As the beam propagates, its field distribution can be reconstructed from the set of weighting coefficients and the Gouy phase of the lowest-order mode. We have developed a simple procedure to implement the mode-expansion method for propagation through an arbitrary ABCD matrix, and we have demonstrated that it is accurate in comparison with direct calculations of diffraction integrals and much faster.
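
    The reconstruction depends on how the lowest-order mode's parameters evolve through the ABCD system. A sketch of the underlying complex-q propagation and Gouy phase for free space (wavelength and waist values are arbitrary choices):

```python
import numpy as np

def propagate_q(q, abcd):
    """Propagate the complex beam parameter q through an ABCD system."""
    (A, B), (C, D) = abcd
    return (A * q + B) / (C * q + D)

wavelength = 1.0e-6                 # hypothetical wavelength (m)
w0 = 1.0e-3                         # hypothetical waist radius (m)
zR = np.pi * w0**2 / wavelength     # Rayleigh range
q0 = 1j * zR                        # beam at its waist
z = 2.0 * zR                        # propagate two Rayleigh ranges
q = propagate_q(q0, ((1.0, z), (0.0, 1.0)))   # free-space ABCD matrix

# spot size and Gouy phase of the lowest-order mode, recovered from q
w = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))
gouy = np.arctan2(q.real, q.imag)
```

    In the mode-expansion picture, the fixed weighting coefficients plus this q and Gouy phase are all that is needed to rebuild the diffracted field at any plane, which is what makes the method fast compared with direct diffraction integrals.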

  14. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner , Jérémie; Celisse , Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
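
    The MMD principle behind the test compares kernel mean embeddings of two samples. A biased squared-MMD sketch with an RBF kernel, testing samples against a Gaussian reference draw (sample sizes and bandwidth are hypothetical):

```python
import numpy as np

def mmd2(x, y, gamma=0.5):
    """Biased squared Maximum Mean Discrepancy estimate with an RBF kernel."""
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
ref = rng.standard_normal(400)               # reference Gaussian sample
same = rng.standard_normal(400)              # same distribution
diff = rng.exponential(size=400) - 1.0       # non-Gaussian, matched mean
```

    The one-sample normality test in the record replaces the empirical reference embedding with the analytic embedding of the Gaussian family; the discrepancy being computed is the same.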

  15. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    Science.gov (United States)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.

  16. Experimental investigations of pulse shape control in passively mode-locked fiber lasers with net-normal dispersion

    International Nuclear Information System (INIS)

    Wang, L R; Han, D D

    2013-01-01

    Pulse shape control in passively mode-locked fiber lasers with net-normal dispersion is investigated experimentally. Three kinds of pulses with different spectral and temporal shapes are observed, and their pulse-shaping mechanisms are discussed. After a polarization-resolved system external to the cavity, the maximum intensity differences of the two polarization components for the rectangular-spectrum (RS), Gaussian-spectrum (GS), and super-broadband (SB) pulses are measured as ∼20 dB, ∼15 dB, and ∼1 dB, respectively. It is suggested that the equivalent saturable absorption effect plays an increasingly important role from the RS to GS and then to SB pulses in the pulse-shaping processes, while the spectral filtering effect declines. This work could help in systematically understanding pulse formation and proposing guidelines for the realization of pulses with better performance in fiber lasers. (paper)

  17. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD and relative entropy (RE. In this paper, the global motion vector (GMV is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference of probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, median filter (MF, Kalman filter (KF, wavelet decomposition (WD method, empirical mode decomposition (EMD-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
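
    The relative-entropy step of such a scheme can be sketched in plain numpy; the VMD step itself is assumed to have already produced the motion modes (the two toy modes below are synthetic stand-ins, not real motion vectors):

```python
import numpy as np

def relative_entropy(p, q, bins=16, eps=1e-12):
    """Kullback-Leibler divergence D(p||q) between the empirical
    (histogram) distributions of two motion-mode signals."""
    lo = min(p.min(), q.min())
    hi = max(p.max(), q.max())
    hp = np.histogram(p, bins=bins, range=(lo, hi))[0] / len(p) + eps
    hq = np.histogram(q, bins=bins, range=(lo, hi))[0] / len(q) + eps
    return float(np.sum(hp * np.log(hp / hq)))

t = np.linspace(0.0, 1.0, 500)
slow = 5.0 * t                               # intentional (drift-like) mode
jitter = 0.5 * np.sin(2 * np.pi * 40.0 * t)  # jitter-like mode
```

    A mode's RE against itself is zero, while modes with very different value distributions give a large RE, which is what lets the method separate jitter from intentional motion.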

  18. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.

  19. Iterative method for obtaining the prompt and delayed alpha-modes of the diffusion equation

    International Nuclear Information System (INIS)

    Singh, K.P.; Degweker, S.B.; Modak, R.S.; Singh, Kanchhi

    2011-01-01

    Highlights: → A method for obtaining α-modes of the neutron diffusion equation has been developed. → The difference between the prompt and delayed modes is more pronounced for the higher modes. → Prompt and delayed modes differ more in reflector region. - Abstract: Higher modes of the neutron diffusion equation are required in some applications such as second order perturbation theory, and modal kinetics. In an earlier paper we had discussed a method for computing the α-modes of the diffusion equation. The discussion assumed that all neutrons are prompt. The present paper describes an extension of the method for finding the α-modes of diffusion equation with the inclusion of delayed neutrons. Such modes are particularly suitable for expanding the time dependent flux in a reactor for describing transients in a reactor. The method is illustrated by applying it to a three dimensional heavy water reactor model problem. The problem is solved in two and three neutron energy groups and with one and six delayed neutron groups. The results show that while the delayed α-modes are similar to λ-modes they are quite different from prompt modes. The difference gets progressively larger as we go to higher modes.

  20. A quantitative method for Failure Mode and Effects Analysis

    NARCIS (Netherlands)

    Braaksma, Anne Johannes Jan; Meesters, A.J.; Klingenberg, W.; Hicks, C.

    2012-01-01

    Failure Mode and Effects Analysis (FMEA) is commonly used for designing maintenance routines by analysing potential failures, predicting their effect and facilitating preventive action. It is used to make decisions on operational and capital expenditure. The literature has reported that despite its

  1. Computation of mode eigenfunctions in graded-index optical fibers by the propagating beam method

    International Nuclear Information System (INIS)

    Feit, M.D.; Fleck, J.A. Jr.

    1980-01-01

    The propagating beam method utilizes discrete Fourier transforms for generating configuration-space solutions to optical waveguide problems without reference to modes. The propagating beam method can also give a complete description of the field in terms of modes by a Fourier analysis with respect to axial distance of the computed fields. Earlier work dealt with the accurate determination of mode propagation constants and group delays. In this paper the method is extended to the computation of mode eigenfunctions. The method is efficient, allowing generation of a large number of eigenfunctions from a single propagation run. Computations for parabolic-index profiles show excellent agreement between analytic and numerically generated eigenfunctions
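
    The Fourier-analysis step described above, recovering mode propagation constants from the field's dependence on axial distance, can be sketched as follows. This is a toy one-point record with assumed propagation constants, not a waveguide simulation:

```python
import numpy as np

# Axial field record at one transverse point: a superposition of two
# guided modes with (hypothetical) propagation constants beta1, beta2.
beta1, beta2 = 2.0, 3.5          # rad per unit length, toy values
dz = 0.01
z = np.arange(4096) * dz
field = 1.0 * np.exp(1j * beta1 * z) + 0.4 * np.exp(1j * beta2 * z)

# Fourier analysis with respect to axial distance: spectral peaks
# sit at the propagation constants of the excited modes.
spec = np.abs(np.fft.fft(field))
k = 2.0 * np.pi * np.fft.fftfreq(len(z), d=dz)
beta_dominant = k[np.argmax(spec)]
```

    Repeating this at every transverse grid point, and keeping the Fourier amplitude at a fixed peak, yields the transverse eigenfunction of that mode, which is the extension made in the paper.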

  2. Benefit Assessment for Urban Rainwater Measure Configuration Mode in Beijing Based on PROMETHEE Method

    Science.gov (United States)

    Tian, L.; Shu, A. P.; Huang, L.

    2017-12-01

    Along with accelerating urbanization in China, an increasing number of urban construction projects have been built, which has raised the impervious surface ratio in cities. Large areas of impervious surface hinder the city's natural water cycle, increase the surface runoff coefficient, bring the flood peak forward, and increase the risk of flooding. Therefore, with a view to reducing the risk of urban waterlogging, improving the cyclic utilization of water resources, and maximizing the recovery of urban eco-hydrological processes, China has begun to promote Sponge City construction with low impact development (LID) as its core idea. The paper takes five kinds of rainwater collection and utilization measures as research examples, analyses their characteristics, and takes investment cost, economic benefit and environmental benefit as the assessment criteria. The weights of the evaluation criteria are obtained by the entropy method. The final evaluation of urban stormwater measure configuration modes based on low impact development is carried out with the PROMETHEE method, and the sensitivity of the evaluation criteria is analysed by GAIA. Finally, examples are given to demonstrate feasibility. The results show that the comprehensive benefit of the mode containing green roof, permeable pavement, sunken green space and rainwater harvesting tank is the highest. It turns out that reasonable and varied types of rainwater measures and high land utilization are significant for increasing comprehensive efficiency. Besides, the environmental benefit of urban rainwater measures is significantly greater than the economic benefit. There is a positive correlation between plant shallow groove, sunken green space and the comprehensive benefit of rainwater measures, because they effectively remove water pollutants from stormwater. The studies not only have great significance in optimizing the configuration mode of urban rainwater measures, but also push ...
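
    The entropy weighting step mentioned above can be sketched in numpy. The decision matrix is hypothetical (rows = candidate rainwater-measure modes, columns = criteria such as cost and benefits), not data from the study:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: a criterion whose scores are more
    dispersed across the alternatives carries more information
    and therefore receives a larger weight."""
    P = X / X.sum(axis=0)                          # normalize each criterion column
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per criterion
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()                             # weights summing to one

# hypothetical scores of 5 measure modes on 3 criteria
X = np.array([[3., 200., 0.80],
              [5., 180., 0.90],
              [4., 150., 0.70],
              [2., 220., 0.60],
              [6., 160., 0.95]])
w = entropy_weights(X)
```

    The resulting weights would then feed the pairwise preference functions of PROMETHEE; that ranking step is not shown here.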

  3. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    Science.gov (United States)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In the paper we propose the new method for removing noise and physiological artifacts in human EEG recordings based on empirical mode decomposition (Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with EEG (ECG, EMG, EOG, etc.) We introduce the algorithm of the proposed method with steps including empirical mode decomposition of EEG signal, choosing of empirical modes with artifacts, removing these empirical modes and reconstructing of initial EEG signal. We show the efficiency of the method on the example of filtration of human EEG signal from eye-moving artifacts.
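
    The mode-selection and reconstruction steps of such a scheme can be sketched as below. The empirical mode decomposition itself is assumed to have already been done (e.g. by an EMD routine), so the toy "modes" here are synthetic; the selection rule (correlation with a reference channel) is one plausible choice, not necessarily the paper's exact criterion:

```python
import numpy as np

def remove_artifact_modes(modes, reference, thresh=0.8):
    """Drop empirical modes whose correlation with a simultaneously
    recorded artifact reference (here EOG-like) exceeds `thresh`,
    then reconstruct the signal by summing the remaining modes."""
    kept = [m for m in modes
            if abs(np.corrcoef(m, reference)[0, 1]) < thresh]
    return np.sum(kept, axis=0)

t = np.linspace(0.0, 1.0, 512)
alpha = np.sin(2 * np.pi * 10.0 * t)              # brain-rhythm-like mode
blink = np.exp(-((t - 0.5) / 0.05) ** 2)          # eye-blink artifact mode
eog = blink + 0.01 * np.sin(2 * np.pi * 3.0 * t)  # simultaneous EOG channel
clean = remove_artifact_modes([alpha, blink], eog)
```

    The blink mode correlates strongly with the EOG channel and is dropped, so the reconstructed signal retains only the rhythm-like component.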

  4. Impurity transport model for the normal confinement and high density H-mode discharges in Wendelstein 7-AS

    International Nuclear Information System (INIS)

    Ida, K; Burhenn, R; McCormick, K; Pasch, E; Yamada, H; Yoshinuma, M; Inagaki, S; Murakami, S; Osakabe, M; Liang, Y; Brakel, R; Ehmler, H; Giannone, L; Grigull, P; Knauer, J P; Maassberg, H; Weller, A

    2003-01-01

    An impurity transport model based on diffusivity and the radial convective velocity is proposed as a first approach to explain the differences in the time evolution of Al XII (0.776 nm), Al XI (55 nm) and Al X (33.3 nm) lines following Al-injection by laser blow-off between normal confinement discharges and high density H-mode (HDH) discharges. Both discharge types are in the collisional regime for impurities (central electron temperature is 0.4 keV and central density exceeds 10^20 m^-3). In this model, the radial convective velocity is assumed to be determined by the radial electric field, as derived from the pressure gradient. The diffusivity coefficient is chosen to be constant in the plasma core but is significantly larger in the edge region, where it counteracts the high local values of the inward convective velocity. Under these conditions, the faster decay of aluminium in HDH discharges can be explained by the smaller negative electric field in the bulk plasma, and correspondingly smaller inward convective velocity, due to flattening of the density profiles

  5. Impact response analysis of cask for spent fuel by dimensional analysis and mode superposition method

    International Nuclear Information System (INIS)

    Kim, Y. J.; Kim, W. T.; Lee, Y. S.

    2006-01-01

    Full text: Due to the potential for accidents, the transportation safety of radioactive material has become extremely important these days. The most important means of ensuring safety in the transportation of radioactive material is the integrity of the cask. The cask for spent fuel generally consists of a cask body and two impact limiters, attached at the upper and lower ends of the cask body. The cask must satisfy general requirements and test requirements for normal transport conditions and hypothetical accident conditions in accordance with IAEA regulations. Among the test requirements for hypothetical accident conditions, the 9 m drop test, in which the cask is dropped from a height of 9 m onto an unyielding surface to produce maximum damage, is a very important requirement because it can affect the structural soundness of the cask. So far, the impact response analysis for the 9 m drop test has been performed by the finite element method with a complex computational procedure. In this study, empirical equations for the impact forces in the 9 m drop test are formulated by dimensional analysis. Using these empirical equations, the characteristics of the materials used for the impact limiters are analysed. The dynamic impact response of the cask body is then analysed using the mode superposition method, and the analysis method is proposed. The results are validated by comparison with previous experimental results and finite element analysis results. The present method is simpler than the finite element method and can be used to predict the impact response of the cask

  6. Analysis of magnetic damping problem by the coupled mode superposition method

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Niho, Tomoya

    1997-01-01

    In this paper we describe the coupled mode superposition method for the magnetic damping problem, which is produced by the coupled effect between the deformation and the induced eddy current of the structures for future fusion reactors and magnetically levitated vehicles. The formulation of the coupled mode superposition method is based on the matrix equation for the eddy current and the structure using the coupled mode vectors. Symmetric form of the coupled matrix equation is obtained. Coupled problems of a thin plate are solved to verify the formulation and the computer code. These problems are solved efficiently by this method using only a few coupled modes. Consideration of the coupled mode vectors shows that the coupled effects are included completely in each coupled mode. (author)

  7. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    Lin, Yufei; Chen, Maoyin; Zhou, Donghua

    2013-01-01

    In the past decades, engineering systems become more and more complex, and generally work at different operational modes. Since incipient fault can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which do not hold for practical cases. This paper proposes a probabilistic framework of online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize multiple operating modes. Then, based on the definition of safety index (SI), the SI for one single mode is calculated. At last, the Bayesian method is presented to calculate the posterior probabilities belonging to each operating mode with sample dependency. The proposed assessment strategy is applied in two examples: one is the aircraft gas turbine, another is an industrial dryer. Both examples illustrate the efficiency of the proposed method
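
    The Bayes step of assigning a new sample to an operating mode can be sketched with a one-dimensional two-mode mixture; the mode parameters and priors below are hypothetical, and sample dependency (the paper's key extension) is omitted for brevity:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate normal distribution."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mode_posteriors(x, mus, sigmas, priors):
    """Posterior probability of each operating mode for a sample x,
    by Bayes' rule over a 1-D Gaussian mixture."""
    likes = np.array([p * gaussian_pdf(x, m, s)
                      for m, s, p in zip(mus, sigmas, priors)])
    return likes / likes.sum()

# two hypothetical operating modes of a monitored variable
post = mode_posteriors(1.9, mus=[0.0, 2.0], sigmas=[0.5, 0.5], priors=[0.5, 0.5])
```

    With the sample near the second mode's mean, the posterior mass concentrates on that mode; the per-mode safety index would then be combined using these posteriors.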

  8. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  9. A general mixed boundary model reduction method for component mode synthesis

    NARCIS (Netherlands)

    Voormeeren, S.N.; Van der Valk, P.L.C.; Rixen, D.J.

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice for fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the components reduction base. In this paper, a novel mixed boundary CMS method called the “Mixed

  10. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    OpenAIRE

    Alexander I. Zhdanov; Ekaterina Yu. Bogdanova

    2016-01-01

    This article focuses on a modification of the block variant of the Gauss-Seidel method for normal systems of equations, which is an effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate o...
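
    A plain (unmodified) block Gauss-Seidel sweep over the normal equations can be sketched as follows; this illustrates the baseline method the article modifies, with a toy overdetermined system, not the authors' variant:

```python
import numpy as np

def block_gauss_seidel_normal(A, b, block=2, iters=500):
    """Solve the normal equations (A^T A) x = A^T b by block
    Gauss-Seidel: sweep over column blocks, solving each small
    block system exactly while holding the others fixed."""
    G = A.T @ A          # normal matrix (condition number squared vs. A)
    c = A.T @ b
    n = G.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        for s in range(0, n, block):
            idx = slice(s, min(s + block, n))
            # right-hand side with the current block's own terms restored
            rhs = c[idx] - G[idx, :] @ x + G[idx, idx] @ x[idx]
            x[idx] = np.linalg.solve(G[idx, idx], rhs)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))   # toy overdetermined system
b = rng.standard_normal(10)
x = block_gauss_seidel_normal(A, b)
```

    Since A^T A is symmetric positive definite for a full-rank A, the sweep converges to the least-squares solution.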

  11. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. To assess performance, prevalent measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the examined normalization methods for estimating the effect of the trainer.
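
    Two of the baseline normalizations the CCSNM is compared against, min-max and z-score, are one-liners in numpy. A sketch with a hypothetical feature vector (the CCSNM itself is not reproduced here):

```python
import numpy as np

def min_max_normalize(v):
    """Min-max normalization: rescale a feature vector to [0, 1]."""
    return (v - v.min()) / (v.max() - v.min())

def z_score(v):
    """z-score normalization: zero mean, unit standard deviation."""
    return (v - v.mean()) / v.std()

emg = np.array([12.0, 15.0, 9.0, 20.0, 14.0])  # hypothetical EMG feature values
mm, z = min_max_normalize(emg), z_score(emg)
```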

  12. Comparison of Three Methods in Extracting Coherent Modes from a Doppler Backscatter System

    International Nuclear Information System (INIS)

    Zhang Xiao-Hui; Liu A-Di; Zhou Chu; Hu Jian-Qiang; Wang Ming-Yuan; Yu Chang-Xuan; Liu Wan-Dong; Li Hong; Lan Tao; Xie Jin-Lin

    2015-01-01

    We compare three different methods of extracting coherent modes from Doppler backscattering (DBS): the center of gravity (COG) of the complex amplitude spectrum, the spectrum of the DBS phase derivative (phase derivative method), and the phase spectrum. All three methods can extract coherent modes, for example, the geodesic acoustic mode oscillation. However, there are still differences when dealing with high frequency modes (several hundred kHz) and low frequency modes (several kHz) hidden in the DBS signal. There is a significant amount of power at low frequencies in the phase spectrum, which can be removed by using the phase derivative method and the COG. High frequency modes are clearer with the COG and the phase derivative method than with the phase spectrum. The spectrum of the DBS amplitude does not show the coherent modes detected by the COG, phase derivative method and phase spectrum. When two Doppler-shifted peaks exist, coherent modes and their harmonics appear in the spectrum of the DBS amplitude, which are introduced by the DBS phase. (paper)
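
    The COG quantity itself, a power-weighted mean frequency of the complex amplitude spectrum, can be sketched on a synthetic complex tone standing in for a Doppler-shifted backscatter signal (toy sampling rate and shift, not experimental parameters):

```python
import numpy as np

def cog_frequency(signal, fs):
    """Center of gravity (COG) of the complex amplitude spectrum:
    the power-weighted mean frequency, used as a Doppler-shift
    estimate of a complex backscattered signal."""
    spec = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal), d=1.0 / fs)
    power = np.abs(spec) ** 2
    return float(np.sum(freqs * power) / np.sum(power))

fs = 1000.0                            # sampling rate in Hz, toy value
t = np.arange(1000) / fs
sig = np.exp(2j * np.pi * 50.0 * t)    # complex tone Doppler-shifted by 50 Hz
f_doppler = cog_frequency(sig, fs)
```

    Tracking this COG in time (over short windows) gives the time-resolved signal from which the coherent-mode spectrum is then computed.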

  13. Multiscale virtual particle based elastic network model (MVP-ENM) for normal mode analysis of large-sized biomolecules.

    Science.gov (United States)

    Xia, Kelin

    2017-12-20

    In this paper, a multiscale virtual particle based elastic network model (MVP-ENM) is proposed for the normal mode analysis of large-sized biomolecules. The multiscale virtual particle (MVP) model is proposed for the discretization of biomolecular density data. With this model, large-sized biomolecular structures can be coarse-grained into virtual particles such that a balance between model accuracy and computational cost can be achieved. An elastic network is constructed by assuming "connections" between virtual particles. The connection is described by a special harmonic potential function, which considers the influence from both the mass distributions and distance relations of the virtual particles. Two independent models, i.e., the multiscale virtual particle based Gaussian network model (MVP-GNM) and the multiscale virtual particle based anisotropic network model (MVP-ANM), are proposed. It has been found that in the Debye-Waller factor (B-factor) prediction, the results from our MVP-GNM with a high resolution are as good as the ones from GNM. Even with low resolutions, our MVP-GNM can still capture the global behavior of the B-factor very well with mismatches predominantly from the regions with large B-factor values. Further, it has been demonstrated that the low-frequency eigenmodes from our MVP-ANM are highly consistent with the ones from ANM even with very low resolutions and a coarse grid. Finally, the great advantage of MVP-ANM model for large-sized biomolecules has been demonstrated by using two poliovirus virus structures. The paper ends with a conclusion.
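
    The GNM baseline that the MVP-GNM is benchmarked against can be sketched in a few lines of numpy: build the Kirchhoff (connectivity) matrix from coordinates with a distance cutoff, and read relative B-factors off the diagonal of its pseudo-inverse. The toy coordinates and cutoff below are illustrative, not the paper's data:

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Plain Gaussian network model: Kirchhoff matrix from node
    coordinates, relative B-factors from the diagonal of its
    Moore-Penrose pseudo-inverse."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -(d < cutoff).astype(float)       # -1 for contacting pairs
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))   # degree on the diagonal
    return np.diag(np.linalg.pinv(K))

# toy chain of 6 pseudo-atoms along a line, 3.8 apart
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(6)])
b = gnm_bfactors(coords)
```

    As expected for a linear chain, the chain ends come out more flexible (larger predicted B-factor) than the interior, matching the observation that mismatches concentrate in large-B-factor regions.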

  14. Omics Methods for Probing the Mode of Action of Natural and Synthetic Phytotoxins

    OpenAIRE

    Duke, Stephen O.; Bajsa, Joanna; Pan, Zhiqiang

    2013-01-01

    For a little over a decade, omics methods (transcriptomics, proteomics, metabolomics, and physionomics) have been used to discover and probe the mode of action of both synthetic and natural phytotoxins. For mode of action discovery, the strategy for each of these approaches is to generate an omics profile for phytotoxins with known molecular targets and to compare this library of responses to the responses of compounds with unknown modes of action. Using more than one omics approach enhances ...

  15. Calculation of mixed mode stress intensity factors using an alternating method

    International Nuclear Information System (INIS)

    Sakai, Takayuki

    1999-01-01

    In this study, mixed mode stress intensity factors (K_I and K_II) of a square plate with a notch were calculated using a finite element alternating method. The obtained results were compared with those from a finite element method, and it was shown that the finite element alternating method can accurately estimate mixed mode stress intensity factors. Then, using the finite element alternating method, mixed mode stress intensity factors were calculated while varying the size and position of the notch, and simplified equations for them were proposed. (author)
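
    For orientation, the classic closed-form mixed-mode case, an inclined through crack in an infinite plate under remote tension, is the kind of benchmark such numerical methods are checked against. This is not the notched square plate of the study; the load, half-length and angle below are arbitrary:

```python
import numpy as np

def inclined_crack_sifs(sigma, a, beta):
    """Mixed-mode SIFs for a through crack of half-length a in an
    infinite plate under remote tension sigma, where beta is the
    angle between the crack plane and the tension axis:
        K_I  = sigma * sqrt(pi * a) * sin(beta)^2
        K_II = sigma * sqrt(pi * a) * sin(beta) * cos(beta)
    """
    k = sigma * np.sqrt(np.pi * a)
    return k * np.sin(beta) ** 2, k * np.sin(beta) * np.cos(beta)

# 45-degree crack: pure mixed mode with K_I = K_II
KI, KII = inclined_crack_sifs(sigma=100.0, a=0.01, beta=np.pi / 4)
```

    At beta = 90 degrees the crack is normal to the load and the formulas reduce to the pure mode-I result K_I = sigma*sqrt(pi*a), K_II = 0.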

  16. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighing approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  17. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
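
    The conventional indices, and the percentile idea underlying Clements-type surrogates, are short formulas. A numpy sketch on simulated (hypothetical) resistivity data; the Burr and Box-Cox variants from the study are not reproduced:

```python
import numpy as np

def cp_cpk(data, lsl, usl):
    """Conventional (normality-based) capability indices:
    Cp = (USL - LSL) / (6*sigma),  Cpk = min(USL - mu, mu - LSL) / (3*sigma)."""
    mu, sigma = data.mean(), data.std(ddof=1)
    return (usl - lsl) / (6 * sigma), min(usl - mu, mu - lsl) / (3 * sigma)

def cp_percentile(data, lsl, usl):
    """Clements-style surrogate: replace 6*sigma by the spread between
    the 0.135 and 99.865 empirical percentiles, so no normality is assumed."""
    p_lo, p_hi = np.percentile(data, [0.135, 99.865])
    return (usl - lsl) / (p_hi - p_lo)

rng = np.random.default_rng(1)
resistivity = rng.normal(10.0, 0.5, size=2000)  # hypothetical wafer data
cp, cpk = cp_cpk(resistivity, lsl=7.0, usl=13.0)
```

    For normal data the two approaches agree; their divergence on skewed data is exactly what the surrogate methods in the paper are designed to handle.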

  18. A comparative study of two stochastic mode reduction methods

    Energy Technology Data Exchange (ETDEWEB)

    Stinis, Panagiotis

    2005-09-01

    We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability as expected.

  19. Adaptive ACMS: A robust localized Approximated Component Mode Synthesis Method

    OpenAIRE

    Madureira, Alexandre L.; Sarkis, Marcus

    2017-01-01

    We consider finite element methods of multiscale type to approximate solutions for two-dimensional symmetric elliptic partial differential equations with heterogeneous $L^\\infty$ coefficients. The methods are of Galerkin type and follows the Variational Multiscale and Localized Orthogonal Decomposition--LOD approaches in the sense that it decouples spaces into multiscale and fine subspaces. In a first method, the multiscale basis functions are obtained by mapping coarse basis functions, based...

  20. Methods of TLD management applicable in nuclear power plants under the multi-reactor operational mode

    International Nuclear Information System (INIS)

    Luo Huiyong; Wen Qinghua; Li Ruirong; Yu Enjian

    2006-01-01

    This paper discusses the methods of TLD dosimeter management adopted in DNMC and other NPPs, and analyzes and evaluates their advantages and defects. With the coming of the multi-reactor operational mode in NPPs, a new intelligent management method is put forward; this optimized method not only assures the accuracy of TLD measurement but also greatly reduces the cost of production and improves the efficiency of management. (authors)

  1. Omics methods for probing the mode of action of natural phytotoxins

    Science.gov (United States)

    For a little over a decade, omics methods (transcriptomics, proteomics, metabolomics, and physionomics) have been used to discover and probe the mode of action of both synthetic and natural phytotoxins. For mode of action discovery, the strategy for each of these approaches is to generate an omics...

  2. Soliton rains in a graphene-oxide passively mode-locked ytterbium-doped fiber laser with all-normal dispersion

    International Nuclear Information System (INIS)

    Huang, S S; Yan, P G; Zhang, G L; Zhao, J Q; Li, H Q; Lin, R Y; Wang, Y G

    2014-01-01

    We experimentally investigated soliton rains in an ytterbium-doped fiber (YDF) laser with a net normal dispersion cavity using a graphene-oxide (GO) saturable absorber (SA). The 195 m-long-cavity, the fiber birefringence filter and the inserted 2.5 nm narrow bandwidth filter play important roles in the formation of the soliton rains. The soliton rain states can be changed by the effective gain bandwidth of the laser. The experimental results can be conducive to an understanding of dissipative soliton features and mode-locking dynamics in all-normal dispersion fiber lasers with GOSAs. To the best of our knowledge, this is the first demonstration of soliton rains in a GOSA passively mode-locked YDF laser with a net normal dispersion cavity. (letter)

  3. An efficient mode-splitting method for a curvilinear nearshore circulation model

    Science.gov (United States)

    Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.

    2007-01-01

    A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation in dealing with the mixed derivative and convective terms from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases which represent respectively the motions dominated by the gravity mode and the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in model application to tidal current simulations in San Francisco Bight.

  4. Filtration of human EEG recordings from physiological artifacts with empirical mode method

    Science.gov (United States)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.

    2017-03-01

In this paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We treat noise and physiological artifacts in the EEG as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the method comprises the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method on filtering experimental human EEG signals from eye-movement artifacts and show its high efficiency.
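
The removal-and-reconstruction step of such a pipeline is straightforward once the decomposition is in hand. Below is a minimal numpy sketch, assuming the IMFs have already been computed by some EMD implementation, and using correlation with a simultaneously recorded artifact channel (e.g. EOG) as an illustrative selection rule; the function name and threshold are hypothetical, not the authors':

```python
import numpy as np

def remove_artifact_modes(imfs, artifact_ref, corr_threshold=0.5):
    """Reconstruct a signal from its IMFs, dropping modes that correlate
    strongly with an auxiliary artifact channel (e.g. EOG).
    `imfs` has shape (n_modes, n_samples); names are illustrative."""
    kept = []
    for imf in imfs:
        r = np.corrcoef(imf, artifact_ref)[0, 1]
        if abs(r) < corr_threshold:   # keep modes not tied to the artifact
            kept.append(imf)
    return np.sum(kept, axis=0)

# toy demonstration: a "neural" oscillation plus a slow "eye-movement" drift,
# pretending the two components came out of an EMD as separate IMFs
t = np.linspace(0, 1, 500)
neural = np.sin(2 * np.pi * 10 * t)
drift = 2 * np.sin(2 * np.pi * 1 * t)
imfs = np.vstack([neural, drift])
clean = remove_artifact_modes(imfs, artifact_ref=drift)
```

In a real recording the reference channel would be the measured EOG/ECG/EMG trace, and the IMFs would come from sifting the raw EEG.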

  5. The simplex method for nonlinear sliding mode control

    Directory of Open Access Journals (Sweden)

    Bartolini G.

    1998-01-01

    Full Text Available General nonlinear control systems described by ordinary differential equations with a prescribed sliding manifold are considered. A method of designing a feedback control law such that the state variable fulfills the sliding condition in finite time is based on the construction of a suitable simplex of vectors in the tangent space of the manifold. The convergence of the method is proved under an obtuse angle condition and a way to build the required simplex is indicated. An example of engineering interest is presented.

  6. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

    , recent studies suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread...... metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM), white matter (WM) reference regions and Yakushev...... normalization. RESULTS: Compared to GM normalization, WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group with extensive involvement of frontal and parieto-temporal-occipital cortices, and several subcortical structures. Furthermore...

  7. A Novel Vibration Mode Testing Method for Cylindrical Resonators Based on Microphones

    Directory of Open Access Journals (Sweden)

    Yongmeng Zhang

    2015-01-01

    Full Text Available Non-contact testing is an important method for the study of the vibrating characteristic of cylindrical resonators. For the vibratory cylinder gyroscope excited by piezo-electric electrodes, mode testing of the cylindrical resonator is difficult. In this paper, a novel vibration testing method for cylindrical resonators is proposed. This method uses a MEMS microphone, which has the characteristics of small size and accurate directivity, to measure the vibration of the cylindrical resonator. A testing system was established, then the system was used to measure the vibration mode of the resonator. The experimental results show that the orientation resolution of the node of the vibration mode is better than 0.1°. This method also has the advantages of low cost and easy operation. It can be used in vibration testing and provide accurate results, which is important for the study of the vibration mode and thermal stability of vibratory cylindrical gyroscopes.

  8. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.

    2014-01-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is quite needed in medical practice. To achieve this, an extended Pennes bioheat model must
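
The forecasting idea rests on the standard exact-DMD algorithm, which can be sketched in a few lines of numpy; this is a generic illustration on synthetic data, not the authors' bioheat solver:

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of a snapshot matrix X (columns are
    states at successive, equally spaced times), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Sinv = np.diag(1.0 / s)
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ Sinv   # low-rank propagator
    eigvals, W = np.linalg.eig(A_tilde)              # DMD eigenvalues
    modes = X2 @ Vh.conj().T @ Sinv @ W              # DMD modes
    return eigvals, modes

# toy data: four spatial shapes rotating at two fixed frequencies, so the
# exact DMD eigenvalues lie on the unit circle at angles +-0.2 and +-0.5
x = np.linspace(0, 1, 64)
t = np.arange(60) * 0.1
X = (np.outer(np.sin(np.pi * x), np.cos(2 * t))
     + np.outer(np.cos(np.pi * x), np.sin(2 * t))
     + np.outer(np.sin(2 * np.pi * x), np.cos(5 * t))
     + np.outer(np.cos(2 * np.pi * x), np.sin(5 * t)))
eigvals, modes = dmd(X, r=4)
```

Forecasting then amounts to propagating the modal amplitudes forward with powers of the DMD eigenvalues, which is what makes real-time prediction cheap.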

  9. Normal-Mode Analysis of Circular DNA at the Base-Pair Level. 2. Large-Scale Configurational Transformation of a Naturally Curved Molecule.

    Science.gov (United States)

    Matsumoto, Atsushi; Tobias, Irwin; Olson, Wilma K

    2005-01-01

    Fine structural and energetic details embedded in the DNA base sequence, such as intrinsic curvature, are important to the packaging and processing of the genetic material. Here we investigate the internal dynamics of a 200 bp closed circular molecule with natural curvature using a newly developed normal-mode treatment of DNA in terms of neighboring base-pair "step" parameters. The intrinsic curvature of the DNA is described by a 10 bp repeating pattern of bending distortions at successive base-pair steps. We vary the degree of intrinsic curvature and the superhelical stress on the molecule and consider the normal-mode fluctuations of both the circle and the stable figure-8 configuration under conditions where the energies of the two states are similar. To extract the properties due solely to curvature, we ignore other important features of the double helix, such as the extensibility of the chain, the anisotropy of local bending, and the coupling of step parameters. We compare the computed normal modes of the curved DNA model with the corresponding dynamical features of a covalently closed duplex of the same chain length constructed from naturally straight DNA and with the theoretically predicted dynamical properties of a naturally circular, inextensible elastic rod, i.e., an O-ring. The cyclic molecules with intrinsic curvature are found to be more deformable under superhelical stress than rings formed from naturally straight DNA. As superhelical stress is accumulated in the DNA, the frequency, i.e., energy, of the dominant bending mode decreases in value, and if the imposed stress is sufficiently large, a global configurational rearrangement of the circle to the figure-8 form takes place. We combine energy minimization with normal-mode calculations of the two states to decipher the configurational pathway between the two states. We also describe and make use of a general analytical treatment of the thermal fluctuations of an elastic rod to characterize the

  10. A general mixed boundary model reduction method for component mode synthesis

    International Nuclear Information System (INIS)

    Voormeeren, S N; Van der Valk, P L C; Rixen, D J

    2010-01-01

A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
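
For reference, the classic fixed-interface Craig-Bampton reduction that this work generalizes can be sketched as follows; this is a generic illustration on a toy spring chain (with unit masses, so the interior eigenproblem reduces cleanly), and it does not reproduce the paper's mixed free/fixed selection scheme:

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    """Classic fixed-interface Craig-Bampton reduction. DoF are split into
    boundary (b) and internal (i) sets; the reduction basis stacks static
    constraint modes with a truncated set of fixed-interface modes."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # static constraint modes: unit displacement at each boundary DoF
    Psi = -np.linalg.solve(Kii, Kib)
    # fixed-interface normal modes (Mii assumed SPD; generalized problem
    # solved via a symmetric Cholesky transform)
    L = np.linalg.cholesky(Mii)
    Linv = np.linalg.inv(L)
    lam, Q = np.linalg.eigh(Linv @ Kii @ Linv.T)
    Phi = Linv.T @ Q[:, :n_modes]
    # transformation T: full DoF = T @ [boundary DoF; modal coordinates]
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi
    return T.T @ K @ T, T.T @ M @ T, T   # reduced stiffness, mass, basis

# toy 5-DoF spring chain with boundary DoF at the two ends
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr, T = craig_bampton(K, M, boundary=[0, n - 1], n_modes=2)
```

By Rayleigh-Ritz, the reduced model's lowest eigenvalue bounds the full model's from above and is already very close even with only two interior modes retained.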

  11. MO-DE-209-02: Tomosynthesis Reconstruction Methods

    International Nuclear Information System (INIS)

    Mainprize, J.

    2016-01-01

    Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25 projections, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DBT systems is a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and summary of the underlying image theory of DBT thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support

  13. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    International Nuclear Information System (INIS)

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-01-01

Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey "inclined" boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of "entanglement" of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lüneburg lens.

  14. Normal mode analysis based on an elastic network model for biomolecules in the Protein Data Bank, which uses dihedral angles as independent variables.

    Science.gov (United States)

    Wako, Hiroshi; Endo, Shigeru

    2013-06-01

We have developed a computer program, named PDBETA, that performs normal mode analysis (NMA) based on an elastic network model that uses dihedral angles as independent variables. Taking advantage of the relatively small number of degrees of freedom required to describe a molecular structure in dihedral angle space and a simple potential-energy function independent of atom types, we aimed to develop a program applicable to a full-atom system of any molecule in the Protein Data Bank (PDB). The algorithm for NMA used in PDBETA is the same as the computer program FEDER/2, developed previously. Therefore, the main challenge in developing PDBETA was to find a method that can automatically convert PDB data into molecular structure information in dihedral angle space. Here, we illustrate the performance of PDBETA with a protein-DNA complex, a protein-tRNA complex, and some non-protein small molecules, and show that the atomic fluctuations calculated by PDBETA reproduce the temperature factor data of these molecules in the PDB. A comparison was also made with elastic-network-model based NMA in a Cartesian-coordinate system.

  15. On a separating method for mixed-modes crack growth in wood material using image analysis

    Science.gov (United States)

    Moutou Pitti, R.; Dubois, F.; Pop, O.

    2010-06-01

Due to the complex anatomy of wood and the loading orientation, timber elements are subjected to mixed-mode fracture. In these conditions, the crack tip advance is characterized by mixed-mode kinematics. In order to characterize the fracture process as a function of the loading orientation, a new mixed-mode crack growth timber specimen is proposed. In the present paper, the design process and the experimental validation of this specimen are presented. Using experimental results, the energy release rate is calculated for several mode mixities; the calculation consists in separating the contribution of each fracture mode. The design of the specimen is based on an analytical approach and on numerical simulation by the finite element method. The particularity of the specimen is the stability of the crack propagation under force control.

  16. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses - Review

    International Nuclear Information System (INIS)

    Skrzypek, G.; Sadler, R.; Paul, D.; Forizs, I.

    2011-01-01

A stable isotope analyst has to make a number of important decisions regarding how to best determine the 'true' stable isotope composition of analysed samples in reference to an international scale. It has to be decided which reference materials should be used, how many reference materials and how many repetitions of each standard are most appropriate for a desired level of precision, and what normalization procedure should be selected. In this paper we summarise what is known about the propagation of uncertainties associated with normalization procedures and with the reference materials used as anchors for the determination of 'true' values of δ13C and δ18O. Normalization methods. Several normalization methods transforming the 'raw' value obtained from mass spectrometers to one of the internationally recognized scales have been developed. However, as summarised by Paul et al., different normalization transforms alone may lead to inconsistencies between laboratories. The most common normalization procedures are: single-point anchoring (versus working gas and certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multi-point linear normalization methods. The accuracy of these various normalization methods has been compared using analytical laboratory data by Paul et al., with single-point anchoring and normalization versus tank calibrations resulting in the largest normalization errors, which also exceed the analytical uncertainty recommended for δ13C. The normalization error depends greatly on the relative differences between the stable isotope composition of the reference material and the sample. On the other hand, normalization methods using two or more certified reference standards produce a smaller normalization error, if the reference materials are bracketing the whole range of
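
The two-point linear normalization mentioned in the abstract is a simple stretch-and-shift anchored on two certified standards. A minimal sketch; all standard values and raw readings below are hypothetical, not certified values:

```python
def two_point_normalize(delta_raw, std1_raw, std1_true, std2_raw, std2_true):
    """Two-point linear normalization: map raw delta values onto the
    international scale using two certified reference standards that
    ideally bracket the samples. Values are in per mil."""
    slope = (std2_true - std1_true) / (std2_raw - std1_raw)
    return std1_true + slope * (delta_raw - std1_raw)

# e.g. anchoring a raw delta13C reading with two hypothetical standards
d_sample = two_point_normalize(-25.3,
                               std1_raw=-29.9, std1_true=-30.0,
                               std2_raw=5.2, std2_true=5.0)
```

By construction the two anchors map exactly onto their certified values, and the normalization error for a sample grows with its isotopic distance from the anchors, which is why bracketing standards are recommended.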

  17. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment.
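
As an illustration of how an assumption underpins a normalization method, the widely used median-of-ratios approach (the size-factor idea behind DESeq) assumes that most genes are not differentially expressed. A numpy sketch on toy counts, not code from the article:

```python
import numpy as np

def median_of_ratios_factors(counts):
    """DESeq-style size factors: the ratio of each sample to a gene-wise
    geometric-mean reference, summarized by the median over genes.
    Valid only under the assumption that most genes are unchanged.
    `counts` is a genes x samples array of raw read counts."""
    log_counts = np.log(counts)
    log_ref = log_counts.mean(axis=1)        # log geometric mean per gene
    finite = np.isfinite(log_ref)            # drop genes with any zero count
    ratios = log_counts[finite] - log_ref[finite, None]
    return np.exp(np.median(ratios, axis=0))

# toy data: the second sample is simply sequenced twice as deeply
counts = np.array([[10., 20.],
                   [100., 200.],
                   [30., 60.],
                   [500., 1000.]])
factors = median_of_ratios_factors(counts)
normalized = counts / factors
```

When the "most genes unchanged" assumption is violated (e.g. global shifts in expression), the median ratio no longer reflects sequencing depth alone, which is exactly the failure mode the article discusses.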

  18. Application of normal form methods to the analysis of resonances in particle accelerators

    International Nuclear Information System (INIS)

    Davies, W.G.

    1992-01-01

    The transformation to normal form in a Lie-algebraic framework provides a very powerful method for identifying and analysing non-linear behaviour and resonances in particle accelerators. The basic ideas are presented and illustrated. (author). 4 refs

  19. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  20. High Order Finite Element Method for the Lambda modes problem on hexagonal geometry

    International Nuclear Information System (INIS)

    Gonzalez-Pintor, S.; Ginestar, D.; Verdu, G.

    2009-01-01

A high order finite element method to approximate the Lambda modes problem for reactors with hexagonal geometry has been developed. The method is based on the expansion of the neutron flux in terms of modified Dubiner's polynomials on a triangular mesh. This mesh is fixed, and the accuracy of the method is improved by increasing the degree of the polynomial expansions without the need for remeshing. The performance of the method has been tested by obtaining the dominant Lambda modes of different 2D reactor benchmark problems.
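
The Lambda modes problem is the generalized eigenproblem L phi = (1/lambda) F phi, with L the losses operator and F the fission production operator. Independently of the paper's hexagonal high-order FEM, the structure of the problem can be illustrated on a one-dimensional one-group finite-difference toy, solving for the dominant mode by power iteration on L^{-1}F; all cross-section values below are made up:

```python
import numpy as np

# 1D one-group diffusion toy: L = -D d2/dx2 + Sigma_a, F = nu*Sigma_f,
# discretized by central finite differences on a unit slab (zero-flux ends)
n, h = 50, 1.0 / 50
D, sig_a, nu_sig_f = 1.0, 0.5, 0.6
L = (D / h**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) \
    + sig_a * np.eye(n)
F = nu_sig_f * np.eye(n)

# dominant lambda mode (largest lambda) by power iteration on L^{-1} F
phi = np.ones(n)
for _ in range(200):
    phi = np.linalg.solve(L, F @ phi)
    phi /= np.linalg.norm(phi)
lam = (phi @ (F @ phi)) / (phi @ (L @ phi))   # Rayleigh quotient (k-effective)
```

The paper's method differs in the spatial discretization (Dubiner polynomials on triangles rather than finite differences) but yields an algebraic eigenproblem of the same form.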

  1. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    Directory of Open Access Journals (Sweden)

    Alexander I. Zhdanov

    2016-12-01

Full Text Available This article focuses on a modification of the block Gauss-Seidel method for normal systems of equations, which is a reasonably effective method for solving generally overdetermined systems of linear algebraic equations of high dimension. The main disadvantage of methods based on normal systems of equations is that the condition number of the normal system equals the square of the condition number of the original problem. This fact negatively affects the convergence rate of iterative methods based on normal systems of equations. To increase the convergence speed of such iterative methods when solving ill-conditioned problems, various preconditioners are currently used to reduce the condition number of the original system of equations. However, no universal preconditioner exists for all applications. One effective approach that improves the convergence speed of the iterative Gauss-Seidel method for normal systems of equations is to use its block version. The disadvantage of the block Gauss-Seidel method for such systems is that a pseudoinverse matrix must be calculated at each iteration, and finding the pseudoinverse is a computationally difficult procedure. In this paper, we propose replacing the computation of pseudosolutions with the solution of normal systems of equations by the Cholesky method. The normal equations arising at each iteration of the Gauss-Seidel method have a relatively low dimension compared to the original system. Results of numerical experiments demonstrating the effectiveness of the proposed approach are given.
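
The idea of replacing per-block pseudoinverses with Cholesky solves of small normal systems can be sketched as follows; this is a generic numpy illustration under the assumption that each column block has full rank, not the author's implementation:

```python
import numpy as np

def block_gauss_seidel_normal(A, b, blocks, n_iter=200):
    """Block Gauss-Seidel sweeps on the normal equations A^T A x = A^T b.
    Each block subproblem A_k^T A_k x_k = A_k^T (b - sum_{j != k} A_j x_j)
    is solved by Cholesky factorization of the small Gram matrix rather
    than by a pseudoinverse. Requires full-column-rank blocks."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for blk in blocks:
            Ak = A[:, blk]
            r = b - A @ x + Ak @ x[blk]       # residual excluding block k
            G = Ak.T @ Ak                     # low-dimensional normal matrix
            L = np.linalg.cholesky(G)
            y = np.linalg.solve(L, Ak.T @ r)  # forward substitution
            x[blk] = np.linalg.solve(L.T, y)  # back substitution
    return x

# overdetermined toy system with a consistent right-hand side
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
x_true = np.arange(1.0, 7.0)
b = A @ x_true
x = block_gauss_seidel_normal(A, b, blocks=[np.arange(0, 3), np.arange(3, 6)])
```

Each sweep only factorizes Gram matrices of block size, which is where the cost saving over a full pseudoinverse comes from.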

  2. Omics methods for probing the mode of action of natural and synthetic phytotoxins.

    Science.gov (United States)

    Duke, Stephen O; Bajsa, Joanna; Pan, Zhiqiang

    2013-02-01

For a little over a decade, omics methods (transcriptomics, proteomics, metabolomics, and physionomics) have been used to discover and probe the mode of action of both synthetic and natural phytotoxins. For mode of action discovery, the strategy for each of these approaches is to generate an omics profile for phytotoxins with known molecular targets and to compare this library of responses to the responses of compounds with unknown modes of action. Using more than one omics approach enhances the probability of success. Generally, compounds with the same mode of action generate similar responses with a particular omics method. Stress and detoxification responses to phytotoxins can be much clearer than effects directly related to the target site. Clues to new modes of action must be validated with in vitro enzyme effects or genetic approaches. Thus far, the only new phytotoxin target site discovered with omics approaches (metabolomics and physionomics) is that of cinmethylin and structurally related 5-benzyloxymethyl-1,2-isoxazolines. These omics approaches pointed to tyrosine aminotransferase as the target, which was verified by enzyme assays and genetic methods. In addition to being a useful tool for mode of action discovery, omics methods provide detailed information on the genetic and biochemical impacts of phytotoxins. Such information can be useful in understanding the full impact of natural phytotoxins in both agricultural and natural ecosystems.

  3. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    Science.gov (United States)

    Gao, Lingli; Pan, Yudi

    2018-05-01

The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way of estimating the source signature. However, when encountering multimode surface waves, which are commonly seen in shallow seismic surveys, strong spurious events appear in the seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method, in which multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded from the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  4. Seismic analysis of structures of nuclear power plants by Lanczos mode superposition method

    International Nuclear Information System (INIS)

    Coutinho, A.L.G.A.; Alves, J.L.D.; Landau, L.; Lima, E.C.P. de; Ebecken, N.F.F.

    1986-01-01

The Lanczos Mode Superposition Method is applied in the seismic analysis of nuclear power plants. The coordinate transformation matrix is generated by the Lanczos algorithm. It is shown that, through a convenient choice of the starting vector of the algorithm, modes with significant participation factors are automatically selected. A response spectrum analysis of a typical reactor building is performed. The obtained results are compared with those determined by the classical approach, stressing the remarkable computational effectiveness of the proposed methodology. (Author) [pt
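
The Lanczos tridiagonalization at the heart of such methods can be sketched in a few lines of numpy; the starting vector determines which modes enter the Krylov basis, which is why choosing it from the load distribution automatically emphasizes modes that participate in the response. This is a generic textbook sketch (plain Lanczos without reorthogonalization), not the seismic analysis code itself:

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain Lanczos tridiagonalization of a symmetric matrix A started
    from v0. The Ritz values of the m x m tridiagonal matrix T approximate
    the eigenvalues of A best reached from v0 (extremes converge first)."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[:, j] = v
        w = A @ v - beta[j - 1] * V[:, j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, V

# small symmetric test matrix: extreme Ritz values converge quickly
n = 50
A = np.diag(np.arange(1.0, n + 1))
T, V = lanczos(A, v0=np.ones(n), m=20)
ritz = np.linalg.eigvalsh(T)
```

Production codes add full or selective reorthogonalization and solve the generalized problem with mass and stiffness matrices, but the recurrence is the same.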

  5. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

Full Text Available Abstract Background Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select best combinations of standard compounds for a particular biological matrix and analytical platform.

  6. Sliding mode control of photoelectric tracking platform based on the inverse system method

    Directory of Open Access Journals (Sweden)

    Yao Zong Chen

    2016-01-01

    Full Text Available In order to improve the tracking performance of the photoelectric tracking platform, an integral sliding mode control strategy based on an inverse system decoupling method is proposed. The electromechanical dynamic model is established based on multi-body system theory and the Newton-Euler method. The coupled multi-input multi-output (MIMO) nonlinear system is transformed into two pseudo-linear single-input single-output (SISO) subsystems by the inverse system method. An integral sliding mode control scheme is designed for the decoupled pseudo-linear system. In order to eliminate the chattering phenomenon caused by the traditional sign function in the sliding-mode controller, the sign function is replaced by the Sigmoid function. Simulation results show that the proposed decoupling method and control strategy restrain the influences of internal coupling and disturbance effectively, with better robustness and higher tracking accuracy.
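    The chattering-suppression idea above can be sketched on a toy plant. This is not the paper's platform model; it is a minimal integral sliding mode controller for a double integrator with a bounded disturbance, where the discontinuous sign term is replaced by a smooth sigmoid. The plant, gains, and disturbance are illustrative assumptions.

    ```python
    import numpy as np

    def sigmoid(s, k=10.0):
        """Smooth approximation of sign(s) used to suppress chattering."""
        return 2.0 / (1.0 + np.exp(-k * s)) - 1.0

    def simulate_smc(T=10.0, dt=1e-3, lam=5.0, ki=2.0, eta=3.0):
        """Integral sliding-mode control of a double integrator x'' = u + d,
        tracking the reference r(t) = 0 under a bounded disturbance d."""
        n = int(T / dt)
        x, v, ei = 1.0, 0.0, 0.0          # position, velocity, error integral
        for i in range(n):
            d = 0.5 * np.sin(2 * np.pi * i * dt)  # unknown bounded disturbance
            e, edot = x, v                         # tracking error (r = 0)
            ei += e * dt
            s = edot + lam * e + ki * ei           # integral sliding surface
            # equivalent control + smooth switching term (eta > |d|)
            u = -(lam * edot + ki * e) - eta * sigmoid(s)
            v += (u + d) * dt
            x += v * dt
        return x, v
    ```

    With the sigmoid, the state converges to a small boundary layer around the sliding surface instead of oscillating across it, which is exactly the trade-off the abstract describes.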

  7. A nodal collocation method for the calculation of the lambda modes of the P L equations

    International Nuclear Information System (INIS)

    Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdu, G.

    2005-01-01

    P L equations are classical approximations to the neutron transport equation admitting a diffusive form. Using this property, a nodal collocation method is developed for the P L approximations, based on the expansion of the flux in terms of orthonormal Legendre polynomials. This method approximates the differential lambda-modes problem by an algebraic eigenvalue problem, from which the fundamental and the subcritical modes of the system can be calculated. To test the performance of the method, two problems have been considered: a homogeneous slab, which admits an analytical solution, and a seven-region slab corresponding to a more realistic problem.

  8. IMF-Slices for GPR Data Processing Using Variational Mode Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xuebing Zhang

    2018-03-01

    Full Text Available Using traditional time-frequency analysis methods, it is possible to delineate the time-frequency structures of ground-penetrating radar (GPR) data. A series of applications based on time-frequency analysis have been proposed for GPR data processing and imaging. From a signal processing point of view, GPR data are typically non-stationary, which limits the applicability of these methods. Empirical mode decomposition (EMD) provides an alternative solution with a fresh perspective. With EMD, GPR data are decomposed into a set of sub-components, i.e., the intrinsic mode functions (IMFs). However, the mode-mixing effect also introduces drawbacks. To retain the benefits of the IMFs while avoiding the drawbacks of EMD, we introduce a different decomposition scheme, termed variational mode decomposition (VMD), for GPR data processing and imaging. Based on the decomposition results of the VMD, we propose a new method which we refer to as "the IMF-slice". In the proposed method, the IMFs are generated by the VMD trace by trace, and then each IMF is sorted and recorded into different profiles (i.e., the IMF-slices) according to its center frequency. Using IMF-slices, the GPR data can be divided into several IMF-slices, each of which delineates a main vibration mode, so that subsurface layers and geophysical events can be identified more clearly. The effectiveness of the proposed method is tested using synthetic benchmark signals, laboratory data and a field dataset.
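    The slicing step can be sketched independently of the decomposition itself. In the sketch below, a simple FFT band-split stands in for VMD (an assumption for illustration only; real VMD solves a variational problem per trace): each trace is decomposed into K narrow-band "modes" with center frequencies, and the modes are routed into K slices in ascending center-frequency order.

    ```python
    import numpy as np

    def band_split(trace, K):
        """Stand-in for VMD: split the spectrum into K equal bands and
        reconstruct one narrow-band component ('mode') per band, returning
        (center_frequency, mode) pairs."""
        F = np.fft.rfft(trace)
        freqs = np.fft.rfftfreq(len(trace))
        edges = np.linspace(0, freqs[-1] + 1e-12, K + 1)
        modes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (freqs >= lo) & (freqs < hi)
            Fk = np.where(mask, F, 0)
            mode = np.fft.irfft(Fk, n=len(trace))
            power = np.abs(Fk) ** 2
            center = ((freqs * power).sum() / power.sum()
                      if power.sum() > 0 else 0.5 * (lo + hi))
            modes.append((center, mode))
        return modes

    def imf_slices(profile, K):
        """Decompose each trace of a profile (n_traces, n_samples) and sort
        the resulting modes into K slices by ascending center frequency."""
        n_traces, n_samples = profile.shape
        slices = np.zeros((K, n_traces, n_samples))
        for j in range(n_traces):
            parts = sorted(band_split(profile[j], K), key=lambda cm: cm[0])
            for k, (center, mode) in enumerate(parts):
                slices[k, j] = mode
        return slices
    ```

    Because the bands partition the spectrum, the slices sum back to the original profile; with true VMD the per-trace center frequencies adapt to the data rather than being fixed by band edges.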

  9. Low-mode truncation methods in the sine-Gordon equation

    International Nuclear Information System (INIS)

    Xiong Chuyu.

    1991-01-01

    In this dissertation, the author studies the chaotic and coherent motions (i.e., low-dimensional chaotic attractors) in some near-integrable partial differential equations, particularly the sine-Gordon equation and the nonlinear Schroedinger equation. To study these motions, he uses low-mode truncation methods to reduce the partial differential equations to truncated models (low-dimensional ordinary differential equations). By applying the many methods available for low-dimensional ordinary differential equations, the low-dimensional chaotic attractors of PDEs can be understood much better. However, there are two important questions one needs to answer: (1) How many modes are enough for the low-mode truncated models to capture the dynamics uniformly? (2) Is the chaotic attractor in a low-mode truncated model close to the chaotic attractor in the original PDE, and how close? He has developed two groups of powerful methods to help answer these questions: computational methods of continuation and local bifurcation, and local Lyapunov exponents and Lyapunov exponents. Using these methods, he concludes that the 2N-nls ODE is a good model for the sine-Gordon equation and the nonlinear Schroedinger equation provided one chooses a 'good' basis and uses 'enough' modes (where 'enough' depends on the parameters of the system but is small for the parameters studied here). Therefore, one can use the 2N-nls ODE to study the chaos of PDEs in more depth.
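    The Lyapunov exponent computation mentioned above is standard for low-dimensional ODEs. A minimal sketch, using Benettin's two-trajectory method on the Lorenz system as a stand-in low-dimensional model (the dissertation's 2N-nls ODE is not reproduced here):

    ```python
    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(f, state, dt):
        """One fourth-order Runge-Kutta step."""
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def largest_lyapunov(f, x0, dt=0.01, n_steps=20000, d0=1e-8):
        """Benettin's method: track a nearby trajectory, renormalize the
        separation every step, and average the log expansion rate."""
        x = np.asarray(x0, float)
        y = x + d0 * np.ones_like(x) / np.sqrt(len(x))
        total = 0.0
        for _ in range(n_steps):
            x = rk4_step(f, x, dt)
            y = rk4_step(f, y, dt)
            d = np.linalg.norm(y - x)
            total += np.log(d / d0)
            y = x + (y - x) * (d0 / d)   # renormalize separation to d0
        return total / (n_steps * dt)
    ```

    A positive largest exponent (about 0.9 for the standard Lorenz parameters) is the quantitative signature of the chaotic attractor; comparing such exponents between a truncated model and the PDE is one of the closeness tests the dissertation develops.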

  10. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    Science.gov (United States)

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data are affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, the process by which systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values, and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead
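    The RLE method recommended above has a compact definition: each sample's size factor is the median ratio of its counts to a per-gene geometric-mean reference. A minimal sketch (the DESeq-style estimator, not the paper's evaluation pipeline):

    ```python
    import numpy as np

    def rle_size_factors(counts):
        """Relative log expression (median-of-ratios) size factors.

        counts : (n_genes, n_samples) matrix of raw counts.
        Genes with a zero in any sample are excluded from the reference,
        as in the DESeq-style estimator."""
        counts = np.asarray(counts, float)
        positive = (counts > 0).all(axis=1)
        logs = np.log(counts[positive])
        ref = logs.mean(axis=1, keepdims=True)        # log geometric mean per gene
        return np.exp(np.median(logs - ref, axis=0))  # per-sample size factor
    ```

    Dividing each sample's counts by its size factor puts the samples on a common scale; TMM follows the same spirit but trims extreme M- and A-values before averaging.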

  11. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    OpenAIRE

    Qasrawi, Hisham

    2016-01-01

    The workability-dispersion-cohesion method is a newly proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to the mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within sta...

  12. THE METHOD OF CONSTRUCTING A BOOLEAN FORMULA OF A POLYGON IN THE DISJUNCTIVE NORMAL FORM

    Directory of Open Access Journals (Sweden)

    A. A. Butov

    2014-01-01

    Full Text Available The paper focuses on finalizing the method of finding a polygon Boolean formula in disjunctive normal form, described in the previous article [1]. The improved method eliminates the drawback associated with the existence of a class of problems for which the solution was only approximate. The proposed method always finds an exact solution. The method can be used, in particular, in computer-aided design systems for integrated-circuit topology.

  13. Face/core mixed mode debond fracture toughness characterization using the modified TSD test method

    DEFF Research Database (Denmark)

    Berggreen, Christian; Quispitupa, Amilcar; Costache, Andrei

    2014-01-01

    The modified tilted sandwich debond (TSD) test method is used to examine face/core debond fracture toughness of sandwich specimens with glass/polyester face sheets and PVC H45 and H100 foam cores over a large range of mode-mixities. The modification was achieved by reinforcing the loaded face sheet with a steel bar, and fracture testing of the test specimens was conducted over a range of tilt angles. The fracture toughness exhibited mode-mixity phase angle dependence, especially for mode II dominated loadings, although it remained quite constant for mode I dominated crack loadings. The fracture process was inspected visually during and after testing. For specimens with an H45 core the crack propagated in the core. For specimens with an H100 core, the crack propagated between the resin-rich layer and the face sheet.

  14. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. Image reconstruction for ECT is an inverse problem: finding the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized between the high- and low-permittivity calibration values for image reconstruction. The parallel normalization model is commonly used during the normalization process; it assumes that the materials are distributed in parallel, so the normalized capacitance is a linear function of the measured capacitance. A more recently used model is the series normalization model, which makes the normalized capacitance a nonlinear function of the measured capacitance. The most recent model is based on electrical field centre lines (EFCL) and is a mixture of the two normalization models. The multi-threshold method of this model is presented in this paper. Sensitivity matrices based on the different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images of higher quality can be obtained with the presented model
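    The parallel and series normalization models mentioned above have simple closed forms, shown in the sketch below (standard ECT formulas; the EFCL mixture model of the paper is not reproduced). Here Cm is the measured capacitance and Cl, Ch are the calibration values at low and high permittivity.

    ```python
    def normalize_parallel(cm, cl, ch):
        """Parallel model: normalized capacitance is linear in Cm."""
        return (cm - cl) / (ch - cl)

    def normalize_series(cm, cl, ch):
        """Series model: normalized capacitance is linear in 1/Cm."""
        return (1.0 / cm - 1.0 / cl) / (1.0 / ch - 1.0 / cl)
    ```

    Both models agree at the calibration endpoints (0 at Cl, 1 at Ch) but differ in between, which is why the choice of model changes the sensitivity matrix and hence the reconstruction.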

  15. Simulation of operation modes of isochronous cyclotron by a new interactive method

    International Nuclear Information System (INIS)

    Taraszkiewicz, R.; Talach, M.; Sulikowski, J.; Doruch, H.; Norys, T.; Sroka, A.; Kiyan, I.N.; )

    2007-01-01

    Operation mode simulation methods are based on the selection of trim coil currents in the isochronous cyclotron to form the required magnetic field at a given level of the main coil current. The traditional current selection method finds a solution for all trim coils simultaneously. After setting the calculated operation mode, it is usually necessary to perform a control measurement of the magnetic field map and to repeat the calculation for a more accurate solution. The new current selection method finds solutions successively for each particular trim coil, taking the coils one by one in reverse order from the edge to the center of the isochronous cyclotron. The new operation mode simulation method is based on this current selection method and, in contrast to the traditional one, includes iterative calculation of the kinetic energy at the extraction radius. A series of experiments on proton beam formation within the range of working acceleration radii at extraction energies from 32 to 59 MeV, carried out at the AIC144 multipurpose isochronous cyclotron (designed mainly for eye melanoma treatment and production of radioisotopes) at the INP PAS (Cracow), showed that the new method makes control measurements of magnetic fields unnecessary for obtaining the desired operation mode, which indicates the high accuracy of the calculation. (authors)

  16. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    Science.gov (United States)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlative relations between failure mechanisms, soft failures, and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode to the whole part is defined, and a calculation and analysis model for reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of this deployment of the reliability index value, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre's transmission system.

  17. A Discrete-Time Chattering Free Sliding Mode Control with Multirate Sampling Method for Flight Simulator

    Directory of Open Access Journals (Sweden)

    Yunjie Wu

    2013-01-01

    Full Text Available In order to improve the tracking accuracy of the flight simulator and extend its frequency response, a multirate-sampling-method-based discrete-time chattering free sliding mode control is developed and applied to the system. By constructing the multirate sampling sliding mode controller, the flight simulator can track a given reference signal with an arbitrarily small dynamic tracking error, and the problems caused by a mismatch between the reference signal period and the control period in the traditional design method can be eliminated. Theoretical analysis proves that extremely high dynamic tracking precision can be obtained. Meanwhile, robustness is guaranteed by the sliding mode control even in the presence of modeling mismatch, external disturbances, and measurement noise. The validity of the proposed method is confirmed by experiments on the flight simulator.

  18. A statistical analysis of count normalization methods used in positron-emission tomography

    International Nuclear Information System (INIS)

    Holmes, T.J.; Ficke, D.C.; Snyder, D.L.

    1984-01-01

    As part of the Positron-Emission Tomography (PET) reconstruction process, annihilation counts are normalized for photon absorption, detector efficiency and detector-pair duty-cycle. Several normalization methods of time-of-flight and conventional systems are analyzed mathematically for count bias and variance. The results of the study have some implications on hardware and software complexity and on image noise and distortion

  19. Vibrational normal modes of diazo-dimedone: A comparative study by Fourier infrared/Raman spectroscopies and conformational analysis by MM/QM

    Science.gov (United States)

    Téllez Soto, C. A.; Ramos, J. M.; Rianelli, R. S.; de Souza, M. C. B. V.; Ferreira, V. F.

    2007-07-01

    The 2-diazo-5,5-dimethyl-cyclohexane-1,3-dione (3) was synthesized and the FT-IR/Raman spectra were measured with the purpose of obtaining a full assignment of the vibrational modes. Singular aspects concerning the -C=N=N oscillator are discussed in view of two strong bands observed in the region of 2300-2100 cm-1 in both the infrared and Raman spectra. Density functional theory (DFT) was used to obtain the geometrical structure and to assist the vibrational assignment, together with traditional normal coordinate analysis (NCA). The observed wavenumbers at 2145 cm-1 (IR) and 2144 cm-1 (R) are assigned to the coupled ν(N=N) + ν(C=N) vibrational mode with higher participation of the N=N stretching. The bands at 2188 cm-1 (IR) and 2186 cm-1 (R) can be assigned as an overtone of one of the ν(CC) normal modes or to a combination band of the fundamentals δ(CCH) found at 1169 cm-1 and δ(CC=N) found at 1017 cm-1, enhanced by Fermi resonance.

  20. Interactions of the Salience Network and Its Subsystems with the Default-Mode and the Central-Executive Networks in Normal Aging and Mild Cognitive Impairment.

    Science.gov (United States)

    Chand, Ganesh B; Wu, Junjie; Hajjar, Ihab; Qiu, Deqiang

    2017-09-01

    Previous functional magnetic resonance imaging (fMRI) investigations suggest that the intrinsically organized large-scale networks and the interactions between them might be crucial for cognitive activities. A triple network model, which consists of the default-mode network, salience network, and central-executive network, has recently been used to understand the connectivity patterns of cognitively normal brains versus brains with disorders. This model suggests that the salience network dynamically controls the default-mode and central-executive networks in healthy young individuals. However, the patterns of interactions have remained largely unknown in healthy aging or in those with cognitive decline. In this study, we assess the patterns of interactions between the three networks using dynamical causal modeling in resting state fMRI data and compare them between subjects with normal cognition and mild cognitive impairment (MCI). In healthy elderly subjects, our analysis showed that the salience network, especially its dorsal subnetwork, modulates the interaction between the default-mode network and the central-executive network (Mann-Whitney U test). Weaker salience network control correlated significantly with lower overall cognitive performance as measured by the Montreal Cognitive Assessment (r = 0.295). These findings suggest that reduced control of the salience network, especially the dorsal salience network, over the other networks provides a neuronal basis for cognitive decline and may be a candidate neuroimaging biomarker of cognitive impairment.

  1. Scaling Mode Shapes in Output-Only Structure by a Mass-Change-Based Method

    Directory of Open Access Journals (Sweden)

    Liangliang Yu

    2017-01-01

    Full Text Available A mass-change-based method using output-only data for the rescaling of mode shapes in operational modal analysis (OMA) is introduced. The mass distribution matrix, defined as a diagonal matrix whose diagonal elements represent the ratios among the diagonal elements of the mass matrix, is calculated using the unscaled mode shapes. Based on the theory of null space, the mass distribution vector or mass distribution matrix is obtained. A small mass with calibrated weight is added to a certain location of the structure, and then the mass distribution vector of the modified structure is estimated. The mass matrix is identified according to the difference of the mass distribution vectors between the original and modified structures. Additionally, the universal set of modes is unnecessary when calculating the mass distribution matrix, meaning that modal truncation is allowed in the proposed method. The mass-scaled mode shapes estimated in OMA according to the proposed method are compared with those obtained by experimental modal analysis. A simulation is employed to validate the feasibility of the method. Finally, the method is tested on output-only data from an experiment on a five-storey structure, and the results confirm the effectiveness of the method.
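    For context, the classical mass-change scaling idea that this paper extends can be sketched on a small system. The sketch below uses the standard first-order mass-change formula (not the paper's null-space mass-distribution variant): adding a known mass ΔM shifts the natural frequency, and the shift fixes the unknown scale of the mode shape. The two-DOF matrices are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def mass_change_scaling(psi, w1, w2, dM):
        """Classical mass-change scaling factor: given an unscaled mode shape
        psi, the natural frequency before (w1) and after (w2) adding a known
        mass perturbation dM, return alpha such that alpha*psi is
        mass-normalized (phi^T M phi = 1)."""
        return np.sqrt((w1**2 - w2**2) / (w2**2 * (psi @ dM @ psi)))

    # Two-DOF example with a known mass matrix for verification
    M = np.diag([2.0, 3.0])
    K = np.array([[6.0, -2.0], [-2.0, 4.0]])
    dM = np.diag([0.05, 0.05])              # small calibrated added mass

    w2_orig, V = eigh(K, M)                 # eigenvalues are omega^2
    w2_mod, _ = eigh(K, M + dM)
    psi = V[:, 0] / V[0, 0]                 # arbitrary (unscaled) scaling
    alpha = mass_change_scaling(psi, np.sqrt(w2_orig[0]),
                                np.sqrt(w2_mod[0]), dM)
    phi = alpha * psi                       # approximately mass-normalized
    ```

    The formula is first-order in ΔM, so a small calibrated mass keeps the approximation error small; the paper's contribution is recovering the mass distribution itself from output-only data so that this scaling works without a known M.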

  2. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  3. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
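    The LASSO-with-repeated-cross-validation workflow described in the two records above can be sketched on synthetic data. This is an illustration only, not the study's NTCP model: the data, the L1 penalty strength, and the CV scheme are assumptions, with scikit-learn standing in for the authors' tooling.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))          # 20 candidate predictors
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]   # only two truly predictive ones
    y = (rng.random(200) < 1 / (1 + np.exp(-logit))).astype(int)

    # L1-penalized logistic regression = LASSO for a binary endpoint (NTCP-like)
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

    # Repeated cross-validation gives a fair estimate of predictive power
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
    auc = cross_val_score(lasso, X, y, cv=cv, scoring="roc_auc").mean()
    ```

    The L1 penalty shrinks irrelevant coefficients to exactly zero, which is why the resulting model stays as interpretable as a stepwise model while typically generalizing better.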

  4. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when three-dimensional laser scanning technology is applied to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the normal vector of the point cloud, determined from the normal vectors of local planar patches. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated from the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum feature.
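    The normal-vector step above is the standard local-PCA estimator: for each point, the normal is the eigenvector of the neighbourhood covariance with the smallest eigenvalue, with neighbours found through a kd-tree. A minimal sketch (the paper's datum tracking and B-spline fitting are not reproduced):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=10):
        """Estimate a unit normal for every point as the eigenvector of the
        local covariance matrix with the smallest eigenvalue, using the
        k nearest neighbours found through a kd-tree."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points)
        for i, nb in enumerate(idx):
            nbrs = points[nb] - points[nb].mean(axis=0)
            cov = nbrs.T @ nbrs
            w, v = np.linalg.eigh(cov)       # eigenvalues ascending
            normals[i] = v[:, 0]             # direction of least variance
        return normals
    ```

    On a locally planar surface the two largest eigenvalues span the tangent plane, so the remaining eigenvector is the surface normal; the eigenvalue ratios also indicate how planar the neighbourhood is.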

  5. Perturbation method for calculation of narrow-band impedance and trapped modes

    International Nuclear Information System (INIS)

    Heifets, S.A.

    1987-01-01

    An iterative method for calculation of the narrow-band impedance is described for a system with a small variation in boundary conditions, so that the variation can be treated as a perturbation. The results are compared with numerical calculations. The method is used to relate the origin of the trapped modes to the degeneracy of the spectrum of the unperturbed system. The method can also be applied to transverse impedance calculations. 6 refs., 6 figs., 1 tab

  6. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
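    The comparison described above is easy to reproduce in miniature. The sketch below contrasts one-way ANOVA with the Kruskal-Wallis test on distinctly non-normal (lognormal) groups; the distributions and sample sizes are illustrative assumptions, not the paper's simulation design.

    ```python
    import numpy as np
    from scipy.stats import f_oneway, kruskal

    rng = np.random.default_rng(2)
    # three groups from a distinctly non-normal (lognormal) population,
    # with the third group shifted in location
    a = rng.lognormal(0.0, 1.0, 40)
    b = rng.lognormal(0.0, 1.0, 40)
    c = rng.lognormal(1.0, 1.0, 40)

    F, p_anova = f_oneway(a, b, c)   # parametric comparison of means
    H, p_kw = kruskal(a, b, c)       # rank-based comparison of locations
    ```

    Because the Kruskal-Wallis test works on ranks, its power is largely unaffected by the heavy right tail of the lognormal distribution, which is the property the paper's conclusion rests on.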

  7. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, the EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)

  8. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline

  9. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
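    The two ICHE stages can be sketched in simplified form. In this illustration the centering stage is a plain mean shift and the second stage is global histogram equalization standing in for the paper's modified CLAHE; both simplifications are assumptions for clarity, not the authors' implementation.

    ```python
    import numpy as np

    def intensity_center(images, target=128.0):
        """Shift each image so the centroid (mean) of its intensity
        histogram sits at a common target value (simplified centering)."""
        return [np.clip(im + (target - im.mean()), 0, 255) for im in images]

    def hist_equalize(im, bins=256):
        """Global histogram equalization (a stand-in for the CLAHE step):
        map each intensity through the cumulative distribution function."""
        hist, edges = np.histogram(im.ravel(), bins=bins, range=(0, 255))
        cdf = hist.cumsum() / hist.sum()
        return np.interp(im.ravel(), edges[:-1], cdf * 255).reshape(im.shape)
    ```

    Centering removes the batch-to-batch brightness offset between slides, while the equalization step standardizes contrast, so downstream feature extraction sees comparable intensity statistics across batches.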

  10. Ab initio calculation of transition state normal mode properties and rate constants for the H(T)+CH4(CD4) abstraction and exchange reactions

    International Nuclear Information System (INIS)

    Schatz, G.C.; Walch, S.P.; Wagner, A.F.

    1980-01-01

    We present ab initio (GVB--POL--CI) calculations for enough of the region about the abstraction and exchange saddle points for H(T)+CH4(CD4) to perform a full normal mode analysis of the transition states. The resulting normal mode frequencies are compared with those of four other published surfaces: an ab initio UHF--SCF calculation by Carsky and Zahradnik, a semiempirical surface by Raff, and two semiempirical surfaces by Kurylo, Hollinden, and Timmons. Significant quantitative and qualitative differences exist between the POL--CI results and those of the other surfaces. Transition state theory rate constants and vibrationally adiabatic reaction threshold energies were computed for all surfaces and compared to available experimental values. For abstraction, the POL--CI rates are in good agreement with experimental rates and in better agreement than are the rates of any of the other surfaces. For exchange, uncertainties in the experimental values and in the importance of vibrationally nonadiabatic effects cloud the comparison of theory to experiment. Tentative conclusions are that the POL--CI barrier is too low by several kcal. Unless vibrationally nonadiabatic effects are severe, the POL--CI surface is still in better agreement with experiment than are the other surfaces. The rates for a simple 3-atom transition state theory model (where CH3 is treated as an atom) are compared to the rates for the full 6-atom model. The kinetic energy coupling of reaction coordinate modes to methyl group modes is identified as being of primary importance in determining the accuracy of the 3-atom model for this system. Substantial coupling in abstraction, but not exchange, causes the model to fail for abstraction but succeed for exchange.

  11. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
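The LASSO's appeal here is that the L1 penalty drives irrelevant predictors to exactly zero, giving a sparse, interpretable model without a stepwise search. A small proximal-gradient (ISTA) sketch on synthetic data, not the study's xerostomia dataset:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam=5.0, iters=2000):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1/L for the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # hypothetical predictive factors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=100)
w = lasso_ista(X, y)
print(np.round(w, 2))  # only features 0 and 3 keep clearly nonzero weights
```

Stepwise selection, by contrast, adds or drops one variable at a time and can be unstable under resampling, which is consistent with the abstract's finding.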

  12. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present non-normal linear stability analysis of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods, including both central and upwind-biased schemes. Stability is guaranteed when α lies within a scheme-dependent range.

  13. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations

    Science.gov (United States)

    DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.

    2008-06-01

    For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ2), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
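As a concrete illustration of the kind of amplitude equation the RG procedure produces (a standard textbook example, not taken from this article), consider the weakly nonlinear van der Pol oscillator. Writing the solution as a slowly modulated oscillation and removing the secular terms of the naive expansion yields the O(ε) normal-form equations for amplitude and phase:

```latex
\ddot{y} + y = \varepsilon\,(1 - y^{2})\,\dot{y}, \qquad 0 < \varepsilon \ll 1,
\qquad y \approx A(t)\cos\bigl(t + \theta(t)\bigr),
```

```latex
\frac{dA}{dt} = \frac{\varepsilon}{2}\,A\left(1 - \frac{A^{2}}{4}\right) + O(\varepsilon^{2}),
\qquad
\frac{d\theta}{dt} = O(\varepsilon^{2}).
```

The same amplitude equation follows from the method of averaging, which is the sense in which the RG result reproduces the classical normal form; the limit cycle appears as the stable fixed point A = 2.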

  14. An improved method for risk evaluation in failure modes and effects analysis of CNC lathe

    Science.gov (United States)

    Rachieru, N.; Belu, N.; Anghel, D. C.

    2015-11-01

    Failure mode and effects analysis (FMEA) is one of the most popular reliability analysis tools for identifying, assessing and eliminating potential failure modes in a wide range of industries. In general, failure modes in FMEA are evaluated and ranked through the risk priority number (RPN), which is obtained by the multiplication of crisp values of the risk factors, such as the occurrence (O), severity (S), and detection (D) of each failure mode. However, the crisp RPN method has been criticized for having several deficiencies. In this paper, linguistic variables, expressed as Gaussian, trapezoidal or triangular fuzzy numbers, are used to assess the ratings and weights for the risk factors S, O and D. A new risk assessment system based on fuzzy set theory and fuzzy rule base theory is applied to assess and rank risks associated with failure modes that could appear in the functioning of the Turn 55 CNC lathe. Two case studies have been shown to demonstrate the methodology thus developed. A parallel is drawn between the RPNs obtained by the traditional method and by the fuzzy-logic approach. The results show that the proposed approach can reduce duplicate RPN values and yield a more accurate, reasonable risk assessment. As a result, the stability of product and process can be assured.
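The contrast between a crisp RPN and a fuzzy one can be sketched with triangular fuzzy numbers. The ratings below are invented for illustration and are not the paper's Turn 55 lathe data; the paper also admits Gaussian and trapezoidal shapes and a fuzzy rule base, which this sketch omits:

```python
# Triangular fuzzy number (a, b, c): support [a, c], peak (modal value) at b.
def tfn_mul(x, y):
    """Common approximation of the product of two triangular fuzzy numbers."""
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

def centroid(x):
    """Defuzzify a triangular fuzzy number to one representative value."""
    return sum(x) / 3.0

S = (6, 7, 8)   # "high" severity
O = (3, 4, 5)   # "moderate" occurrence
D = (2, 3, 4)   # "fairly easy" detection

crisp_rpn = S[1] * O[1] * D[1]            # classical RPN from modal values
fuzzy_rpn = tfn_mul(tfn_mul(S, O), D)     # carries the rating uncertainty along
print(crisp_rpn, fuzzy_rpn, round(centroid(fuzzy_rpn), 1))
# → 84 (36, 84, 160) 93.3
```

Because the defuzzified value depends on the whole support, two failure modes with identical crisp RPNs can still be distinguished, which is how the fuzzy approach reduces duplicate rankings.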

  15. MAPPIN: a method for annotating, predicting pathogenicity and mode of inheritance for nonsynonymous variants.

    Science.gov (United States)

    Gosalia, Nehal; Economides, Aris N; Dewey, Frederick E; Balasubramanian, Suganthi

    2017-10-13

    Nonsynonymous single nucleotide variants (nsSNVs) constitute about 50% of known disease-causing mutations and understanding their functional impact is an area of active research. Existing algorithms predict pathogenicity of nsSNVs; however, they are unable to differentiate heterozygous, dominant disease-causing variants from heterozygous carrier variants that lead to disease only in the homozygous state. Here, we present MAPPIN (Method for Annotating, Predicting Pathogenicity, and mode of Inheritance for Nonsynonymous variants), a prediction method which utilizes a random forest algorithm to distinguish between nsSNVs with dominant, recessive, and benign effects. We apply MAPPIN to a set of Mendelian disease-causing mutations and accurately predict pathogenicity for all mutations. Furthermore, MAPPIN predicts mode of inheritance correctly for 70.3% of nsSNVs. MAPPIN also correctly predicts pathogenicity for 87.3% of mutations from the Deciphering Developmental Disorders Study with a 78.5% accuracy for mode of inheritance. When tested on a larger collection of mutations from the Human Gene Mutation Database, MAPPIN is able to significantly discriminate between mutations in known dominant and recessive genes. Finally, we demonstrate that MAPPIN outperforms CADD and Eigen in predicting disease inheritance modes for all validation datasets. To our knowledge, MAPPIN is the first nsSNV pathogenicity prediction algorithm that provides mode of inheritance predictions, adding another layer of information for variant prioritization. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
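The core of the approach, a random forest separating three classes (benign / recessive / dominant), can be sketched with scikit-learn on synthetic features. The feature set and labels here are invented stand-ins, not MAPPIN's actual annotations or training data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))        # e.g. conservation scores, allele frequency
# Synthetic three-way labels: 0 = benign, 1 = recessive, 2 = dominant.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:1])     # one probability per inheritance class
print(proba.shape)  # → (1, 3)
```

The per-class probabilities are what allow a variant to be prioritized not just as pathogenic, but by its most likely mode of inheritance.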

  16. The Impact of Normalization Methods on RNA-Seq Data Analysis

    Science.gov (United States)

    Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.

    2015-01-01

    High-throughput sequencing technologies, such as the Illumina HiSeq, are powerful new tools for investigating a wide range of biological and medical problems. Massive and complex data sets produced by the sequencers create a need for the development of statistical and computational methods that can tackle the analysis and management of data. Data normalization is one of the most crucial steps of data processing, and it must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014
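Two of the most common depth-related normalizations can be sketched in a few lines of numpy (toy counts; the abstract does not restate which five methods the study compares, so these are representative examples only):

```python
import numpy as np

# Toy gene-by-sample raw count matrix; sample 2 was sequenced twice as deep
# as sample 1, and sample 3 half as deep.
counts = np.array([[100., 200., 50.],
                   [400., 800., 200.],
                   [ 10.,  20.,  5.],
                   [300., 600., 150.]])

# Counts per million: divide by library size only.
cpm = counts / counts.sum(axis=0) * 1e6

# DESeq-style median-of-ratios size factors.
log_counts = np.log(counts)
log_ref = log_counts.mean(axis=1)              # per-gene log geometric mean
size_factors = np.exp(np.median(log_counts - log_ref[:, None], axis=0))
normalized = counts / size_factors
print(np.round(size_factors, 2))               # recovers depth ratios 1 : 2 : 0.5
```

On real data the two methods disagree whenever a few highly expressed genes dominate the library size, which is exactly the kind of behavior a comparison workflow like the one described must expose.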

  17. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    Science.gov (United States)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.

  18. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  19. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  20. AN INTELLIGENT NEURO-FUZZY TERMINAL SLIDING MODE CONTROL METHOD WITH APPLICATION TO ATOMIC FORCE MICROSCOPE

    Directory of Open Access Journals (Sweden)

    Seied Yasser Nikoo

    2016-11-01

    Full Text Available In this paper, a neuro-fuzzy fast terminal sliding mode control method is proposed for controlling a class of nonlinear systems with bounded uncertainties and disturbances. In this method, a nonlinear terminal sliding surface is first designed. Then, this sliding surface is considered as input for an adaptive neuro-fuzzy inference system which is the main controller. A proportional-integral-derivative controller is also used to assist the neuro-fuzzy controller in order to improve the performance of the system at the beginning stage of control operation. In addition, the bee algorithm is used in this paper to update the weights of the neuro-fuzzy system as well as the parameters of the proportional-integral-derivative controller. The proposed control scheme is simulated for vibration control in a model of an atomic force microscope system and the results are compared with conventional sliding mode controllers. The simulation results show that the chattering effect in the proposed controller is decreased in comparison with the sliding mode and the terminal sliding mode controllers. Also, the method provides the advantages of fast convergence and low model dependency compared to the conventional methods.

  1. Failure Mode and Effect Analysis using Soft Set Theory and COPRAS Method

    Directory of Open Access Journals (Sweden)

    Ze-Ling Wang

    2017-01-01

    Full Text Available Failure mode and effect analysis (FMEA) is a risk management technique frequently applied to enhance the system performance and safety. In recent years, many researchers have shown an intense interest in improving FMEA due to inherent weaknesses associated with the classical risk priority number (RPN) method. In this study, we develop a new risk ranking model for FMEA based on soft set theory and the COPRAS method, which can deal with the limitations and enhance the performance of the conventional FMEA. First, a trapezoidal fuzzy soft set is adopted to manage FMEA team members' linguistic assessments of failure modes. Then, a modified COPRAS method is utilized for determining the ranking order of the failure modes recognized in FMEA. In particular, we treat the risk factors as interdependent and employ the Choquet integral to obtain the aggregate risk of failures in the new FMEA approach. Finally, a practical FMEA problem is analyzed via the proposed approach to demonstrate its applicability and effectiveness. The result shows that the FMEA model developed in this study outperforms the traditional RPN method and provides a more reasonable risk assessment of failure modes.
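A minimal crisp COPRAS ranking sketch (invented ratings and weights; the paper's version works on trapezoidal fuzzy soft sets with Choquet-integral aggregation, which are omitted here). With all three risk factors treated as cost-type criteria, a lower relative significance Q marks a more critical failure mode:

```python
import numpy as np

# Rows: failure modes FM1..FM3; columns: S, O, D ratings (higher = worse).
X = np.array([[7., 4., 3.],
              [5., 6., 2.],
              [8., 3., 5.]])
w = np.array([0.5, 0.3, 0.2])            # criterion weights, summing to 1

R = X / X.sum(axis=0) * w                # weighted normalized decision matrix
S_minus = R.sum(axis=1)                  # all criteria are cost-type here
# COPRAS relative significance when there are no benefit-type criteria:
Q = S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
ranking = np.argsort(Q)                  # ascending Q = most critical first
print(np.round(Q, 3), ranking)           # FM3 (index 2) ranks most critical
```

Unlike the multiplicative RPN, the COPRAS score is built from weighted sums, so equal products of S, O and D no longer collapse onto the same rank.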

  2. SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    Full Text Available The rapid development of methods of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic determines the need for a more detailed study of the forms of representation of functions of many-valued logic. In particular, the algebraic normal form of Boolean functions, also known as the Zhegalkin polynomial, which describes many cryptographic properties of Boolean functions well, is widely used. In this article, we formalized the notion of algebraic normal form for many-valued logic functions. We developed a fast method of synthesis of the algebraic normal form of 3-functions and 5-functions that works similarly to the Reed-Muller transform for Boolean functions: on the basis of recurrently synthesized transform matrices. We propose a hypothesis that determines the rules of the synthesis of these matrices for the transformation from the truth table to the coefficients of the algebraic normal form and the inverse transform for any given number of variables of 3-functions or 5-functions. The article also introduces the definition of the algebraic degree of nonlinearity of the functions of many-valued logic and of the S-box based on the principles of many-valued logic. Thus, the methods of synthesis of the algebraic normal form of 3-functions were applied to the known construction of recurrent synthesis of S-boxes of length N = 3k, and their algebraic degrees of nonlinearity were computed. The results could be the basis for further theoretical research and practical applications such as: the development of new cryptographic primitives, error-correcting codes, algorithms of data compression, signal structures, and algorithms of block and stream encryption, all based on the perspective principles of many-valued logic. In addition, the fast method of synthesis of the algebraic normal form of many-valued logic functions is the basis for their software and hardware implementation.
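For the Boolean (2-valued) case that the article generalizes, the transform from truth table to algebraic normal form is the fast Reed-Muller (Möbius) butterfly, costing O(n·2^n) XOR operations; the 3- and 5-valued analogues described in the article replace XOR with arithmetic over GF(3) and GF(5):

```python
def anf(truth_table):
    """Fast Moebius transform: Boolean truth table -> Zhegalkin coefficients.
    The coefficient at index m is 1 iff the monomial over the variables in
    bitmask m appears in the polynomial. Over GF(2) this map is an involution."""
    a = list(truth_table)
    step = 1
    while step < len(a):
        for i in range(len(a)):
            if i & step:
                a[i] ^= a[i ^ step]   # butterfly: XOR in the lower partner
        step <<= 1
    return a

# Truth tables over inputs 00, 01, 10, 11:
print(anf([0, 0, 0, 1]))  # → [0, 0, 0, 1]  AND is the single monomial x1*x2
print(anf([0, 1, 1, 0]))  # → [0, 1, 1, 0]  XOR is x1 + x2
```

Because the transform is its own inverse over GF(2), the same routine converts coefficients back to a truth table.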

  3. Power System Oscillation Modes Identifications: Guidelines for Applying TLS-ESPRIT Method

    Science.gov (United States)

    Gajjar, Gopal R.; Soman, Shreevardhan

    2013-05-01

    Fast measurements of power system quantities available through wide-area measurement systems enable direct observation of power system electromechanical oscillations. But the raw observation data need to be processed to obtain the quantitative measures required to make any inference regarding the power system state. A detailed discussion is presented for the theory behind the general problem of oscillatory mode identification. This paper presents some results on oscillation mode identification applied to a wide-area frequency measurement system. Guidelines for selecting parameters to obtain the most reliable results from the applied method are provided. Finally, some results on real measurements are presented with our inference on them.
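A least-squares ESPRIT sketch on a synthetic single-mode ringdown; the paper's TLS variant differs only in how the rotation operator is solved, and all signal parameters below are invented:

```python
import numpy as np

fs, n = 10.0, 200                       # 10 Hz sampling, 20 s record
t = np.arange(n) / fs
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.8 * t)   # 0.8 Hz mode, sigma = -0.1

L = n // 2
H = np.array([x[i:i + L] for i in range(n - L + 1)])  # Hankel data matrix
U = np.linalg.svd(H, full_matrices=False)[0][:, :2]   # signal subspace, order 2
# Shift invariance: rows advanced by one sample are related by a rotation Phi.
Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
z = np.linalg.eigvals(Phi)                            # discrete-time poles
freq = np.abs(np.angle(z)) * fs / (2 * np.pi)         # Hz
damping = np.log(np.abs(z)) * fs                      # 1/s
print(np.round(freq, 3), np.round(damping, 3))        # ≈ [0.8 0.8] [-0.1 -0.1]
```

In practice the model order and window length L are exactly the tuning parameters for which such guidelines matter: with measurement noise, too low an order misses modes and too high an order produces spurious ones.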

  4. An algorithm of α-and γ-mode eigenvalue calculations by Monte Carlo method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2003-01-01

    A new algorithm for Monte Carlo calculation was developed to obtain α- and γ-mode eigenvalues. The α is a prompt neutron time decay constant measured in subcritical experiments, and the γ is a spatial decay constant measured in an exponential method for determining the subcriticality. This algorithm can be implemented into existing Monte Carlo eigenvalue calculation codes with minimum modifications. The algorithm was implemented into the MCNP code and its performance in calculating both mode eigenvalues was verified through comparison of the calculated eigenvalues with those obtained by fixed-source calculations. (author)

  5. Low-Mode Conformational Search Method with Semiempirical Quantum Mechanical Calculations: Application to Enantioselective Organocatalysis.

    Science.gov (United States)

    Kamachi, Takashi; Yoshizawa, Kazunari

    2016-02-22

    A conformational search program for finding low-energy conformations of large noncovalent complexes has been developed. A quantitatively reliable semiempirical quantum mechanical PM6-DH+ method, which is able to accurately describe noncovalent interactions at a low computational cost, was employed, in contrast to conventional conformational search programs in which molecular mechanical methods are usually adopted. Our approach is based on the low-mode method, whereby an initial structure is perturbed along one of its low-mode eigenvectors to generate new conformations. This method was applied to determine the most stable conformation of the transition state for enantioselective alkylation by the Maruoka and cinchona alkaloid catalysts and for Hantzsch ester hydrogenation of imines by chiral phosphoric acid. Besides successfully reproducing the previously reported most stable DFT conformations, the conformational search with the semiempirical quantum mechanical calculations newly discovered a more stable conformation at a low computational cost.

  6. A Practical Test Method for Mode I Fracture Toughness of Adhesive Joints with Dissimilar Substrates

    Energy Technology Data Exchange (ETDEWEB)

    Boeman, R.G.; Erdman, D.L.; Klett, L.B.; Lomax, R.D.

    1999-09-27

    A practical test method for determining the mode I fracture toughness of adhesive joints with dissimilar substrates will be discussed. The test method is based on the familiar Double Cantilever Beam (DCB) specimen geometry, but overcomes limitations in existing techniques that preclude their use when testing joints with dissimilar substrates. The test method is applicable to adhesive joints where the two bonded substrates have different flexural rigidities due to geometric and/or material considerations. Two specific features discussed are the use of backing beams to prevent substrate damage and a compliance matching scheme to achieve symmetric loading conditions. The procedure is demonstrated on a modified DCB specimen comprised of SRIM composite and thin-section, e-coat steel substrates bonded with an epoxy adhesive. Results indicate that the test method provides a practical means of characterizing the mode I fracture toughness of joints with dissimilar substrates.
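Once a compliance calibration C(a) is fitted from test data, mode I toughness follows from the Irwin-Kies relation G_I = P²/(2b)·dC/da, which is what DCB-based methods reduce to. The numbers and the simple power-law compliance below are invented for illustration, not taken from the report:

```python
# Compliance fitted as C(a) = k * a**3 (beam-theory form for matched arms;
# the report's backing-beam and compliance-matching scheme modifies details
# for dissimilar substrates, which this sketch does not capture).
P = 150.0      # load at crack advance, N
b = 25e-3      # specimen width, m
a = 50e-3      # crack length, m
k = 0.15       # fitted compliance coefficient, 1/(N*m^2)

dC_da = 3 * k * a**2                 # derivative of the fitted power law, 1/N
G_I = P**2 / (2 * b) * dC_da         # J/m^2
print(round(G_I, 2))  # → 506.25
```

The attraction of the compliance-calibration route is that G_I follows from measured loads and crack lengths alone, without needing the (possibly dissimilar) substrate moduli explicitly.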

  7. Algebraic method for analysis of nonlinear systems with a normal matrix

    International Nuclear Information System (INIS)

    Konyaev, Yu.A.; Salimova, A.F.

    2014-01-01

    A promising method has been proposed for analyzing a class of quasilinear nonautonomous systems of differential equations whose matrix can be represented as a sum of nonlinear normal matrices, which makes it possible to analyze stability without using the Lyapunov functions

  8. Method of normal coordinates in the formulation of a system with dissipation: The harmonic oscillator

    International Nuclear Information System (INIS)

    Mshelia, E.D.

    1994-07-01

    The method of normal coordinates of the theory of vibrations is used in decoupling the motion of n oscillators (1 ≤ n ≤ 4) representing intrinsic degrees of freedom coupled to collective motion in a quantum mechanical model that allows the determination of the probability for energy transfer from collective to intrinsic excitations in a dissipative system. (author). 21 refs

  9. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  10. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  11. A sparse-mode spectral method for the simulation of turbulent flows

    International Nuclear Information System (INIS)

    Meneguzzi, M.; Politano, H.; Pouquet, A.; Zolver, M.

    1996-01-01

    We propose a new algorithm belonging to the family of sparse-mode spectral methods to simulate turbulent flows. In this method the number of retained Fourier modes increases with the wavenumber k more slowly than k^(D-1) in dimension D, while retaining the advantage of the fast Fourier transform. Examples of applications of the algorithm are given for the one-dimensional Burgers equation and two-dimensional incompressible MHD flows.

  12. Two Novel Methods and Multi-Mode Periodic Solutions for the Fermi-Pasta-Ulam Model

    Science.gov (United States)

    Arioli, Gianni; Koch, Hans; Terracini, Susanna

    2005-04-01

    We introduce two novel methods for studying periodic solutions of the FPU β-model, both numerically and rigorously. One is a variational approach, based on the dual formulation of the problem, and the other involves computer-assisted proofs. These methods are used e.g. to construct a new type of solutions, whose energy is spread among several modes, associated with closely spaced resonances.

  13. Vibrational Spectra and Potential Energy Distributions of Normal Modes of N,N'-Ethylenebis(p-toluenesulfonamide)

    International Nuclear Information System (INIS)

    Alyar, S.

    2008-01-01

    N-substituted sulfonamides are well known for their diuretic, antidiabetic, antibacterial, antifungal, and anticancer activities, and are widely used in the therapy of patients. These important bioactive properties are strongly affected by the special features of the -CH2-SO2-NR- linker and by intramolecular motion. Thus, studies of the energetic and spatial properties of N-substituted sulfonamides are of great importance to improve our understanding of their biological activities and enhance our ability to predict new drugs. Density functional theory at the B3LYP/6-31G(d,p) level has been applied to obtain the vibrational force field for the most stable conformation of N,N'-ethylenebis(p-toluenesulfonamide) (ptsen), which contains the sulfonamide moiety. The results of these calculations have been compared with spectroscopic data to verify the accuracy of the calculation and the applicability of the DFT approach to ptsen. Additionally, complete normal coordinate analyses with scaled quantum mechanical (SQM) force fields were performed to derive the potential energy distributions (PED).

  14. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    Energy Technology Data Exchange (ETDEWEB)

    Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru [Peoples' Friendship University of Russia (Russian Federation); Egorov, A. A. [Russian Academy of Sciences, Prokhorov General Physics Institute (Russian Federation); Sevastyanov, A. L. [Peoples' Friendship University of Russia (Russian Federation)

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  15. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double
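The evaluation criterion itself is easy to reproduce: a leave-one-out error estimate for a k-NN classifier, sketched in numpy on synthetic two-class data (not the cancer microarray sets used in the study):

```python
import numpy as np

def loocv_knn_error(X, y, k=3):
    """Leave-one-out cross-validation error of a k-nearest-neighbor classifier."""
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances to sample i
        d[i] = np.inf                          # hold out sample i
        neighbors = np.argsort(d)[:k]
        pred = np.bincount(y[neighbors]).argmax()
        errors += int(pred != y[i])
    return errors / len(X)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
print(loocv_knn_error(X, y))   # well-separated classes give an error near 0
```

Since k-NN depends directly on distances between expression profiles, any residual dye bias distorts those distances, which is why the LOOCV error of this classifier is a sensitive readout for normalization quality.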

  16. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    Science.gov (United States)

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization. There is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalizing method during gait compared with the conventional MVIC method. Lower limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7±6.2 years, BMI 22.7±3.3 kg/m2), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with an EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method with no special equipment needed and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
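Whatever the reference task (MVIC or the isoMMT3 hold), amplitude normalization itself is the same operation: express the gait EMG envelope as a percentage of the reference envelope's peak. A numpy sketch with synthetic signals (the window length and amplitudes are invented, not the study's processing parameters):

```python
import numpy as np

def rms_envelope(emg, win=50):
    """Moving RMS amplitude envelope of a raw EMG signal."""
    power = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power)

def normalize_emg(gait_emg, reference_emg):
    """Gait EMG as % of the reference task's peak envelope (MVIC or isoMMT3)."""
    return 100.0 * rms_envelope(gait_emg) / rms_envelope(reference_emg).max()

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 2000)   # reference contraction, arbitrary units
gait = rng.normal(0.0, 0.3, 2000)        # gait activity at ~30% of reference
level = normalize_emg(gait, reference).mean()
print(round(level, 1))                   # somewhat under 30, as we divide by the peak
```

The reliability question the study addresses is precisely how repeatable the denominator is: a reference task the participant cannot perform consistently (as with MVIC in pathological populations) makes the normalized percentages unstable across days.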

  17. Emission computer tomographic orthopan display of the jaws - method and normal values

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.; Biersack, H.J.; Wahl, G.

    1990-01-01

A tomoscintigraphic method is described to create orthopan-like projections of the jaws from SPECT bone scans using cylinder projection. On the basis of this projection a numerical analysis of the dental regions is performed in the same computer code. For each dental region the activity relative to the contralateral region and relative to the average activity of the corresponding jaw is calculated. Using this method, a set of normal activity relations has been established by investigation of 24 patients. (orig.)

  18. Planar quadrature RF transceiver design using common-mode differential-mode (CMDM) transmission line method for 7T MR imaging.

    Directory of Open Access Journals (Sweden)

    Ye Li

The use of quadrature RF magnetic fields has been demonstrated to be an efficient method to reduce transmit power and to increase the signal-to-noise ratio (SNR) in magnetic resonance (MR) imaging. The goal of this project was to develop a new method using the common-mode and differential-mode (CMDM) technique for compact, planar, distributed-element quadrature transmit/receive resonators for MR signal excitation and detection, and to investigate its performance for MR imaging, particularly at ultrahigh magnetic fields. A prototype resonator based on the CMDM method, implemented using a microstrip transmission line, was designed and fabricated for 7T imaging. Both the common mode (CM) and the differential mode (DM) of the resonator were tuned and matched at 298 MHz independently. Numerical electromagnetic simulation was performed to verify the orthogonal B1 field directions of the two modes of the CMDM resonator. Both workbench tests and MR imaging experiments were carried out to evaluate the performance. The intrinsic decoupling between the two modes of the CMDM resonator was demonstrated by the bench test, which showed a transmission coefficient between the two modes better than -36 dB at the resonance frequency. The MR images acquired using each mode, and the images combined in quadrature, showed that the CM and DM of the proposed resonator provided similar B1 coverage and achieved SNR improvement in the entire region of interest. The simulation and experimental results demonstrate that the proposed CMDM method with distributed-element transmission line technique is a feasible and efficient approach to planar quadrature RF coil design at ultrahigh fields, providing intrinsic decoupling between the two quadrature channels and high-frequency capability. Due to its simple and compact geometry and easy implementation of decoupling, the CMDM quadrature resonator is a good candidate for building blocks in multichannel RF coil arrays.

  19. Semi-analytical quasi-normal mode theory for the local density of states in coupled photonic crystal cavity-waveguide structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2015-01-01

We present and validate a semi-analytical quasi-normal mode (QNM) theory for the local density of states (LDOS) in coupled photonic crystal (PhC) cavity-waveguide structures. By means of an expansion of the Green's function on one or a few QNMs, a closed-form expression for the LDOS is obtained, and for two types of two-dimensional PhCs, with one and two cavities side-coupled to an extended waveguide, the theory is validated against numerically exact computations. For the single cavity, a slightly asymmetric spectrum is found, which the QNM theory reproduces, and for two cavities a non-trivial spectrum with a peak and a dip is found, which is reproduced only when including both the two relevant QNMs in the theory. In both cases, we find relative errors below 1% in the bandwidth of interest.

  20. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

Background: Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done on applying these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and of the combinations necessary for their optimal use. Results: We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures, such as those used on microarray data, for the genome context scores, and show that substantial gains can be achieved from a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method. Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a
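As a rough illustration of the microarray-style score normalization the abstract advocates, a z-score transform brings scores from different genome context methods onto a common scale before combination. The scores and the equal-weight combiner below are invented for illustration and are not the paper's actual procedure:

```python
import numpy as np

def zscore_normalize(scores):
    """Z-score normalize raw genome-context scores so that scores from
    different methods (phylogenetic profile, gene neighbor, ...) become
    comparable before combination."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

# Hypothetical raw scores for five gene pairs from two methods,
# on very different native scales
profile_scores = [0.10, 0.40, 0.35, 0.90, 0.20]
neighbor_scores = [12.0, 45.0, 33.0, 88.0, 19.0]

p = zscore_normalize(profile_scores)
n = zscore_normalize(neighbor_scores)
combined = (p + n) / 2.0   # naive equal-weight combiner; the paper
                           # trains a decision tree instead
```

After normalization both score vectors have zero mean and unit variance, so neither method dominates the combined score merely by having a larger numeric range.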

  1. Experimental Method for Characterizing Electrical Steel Sheets in the Normal Direction

    Directory of Open Access Journals (Sweden)

    Thierry Belgrand

    2010-10-01

This paper proposes an experimental method to characterise magnetic laminations in the direction normal to the sheet plane. The principle, which is based on a static excitation to avoid planar eddy currents, is explained and specific test benches are proposed. Measurements of the flux density are made with a sensor moving in and out of an air-gap. A simple analytical model is derived in order to determine the permeability in the normal direction. The experimental results for grain-oriented steel sheets are presented and compared with values obtained from the literature.

  2. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

Recent theoretical work suggests that a violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies of different compositions falling in the gravitational field of a source mass, if the measurement is made to the level of η ≅ 10⁻¹³ or better. That this is within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and its rejection of common-mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained in special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus.
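The normal-mode analysis referred to above can be illustrated on a generic two-mass, three-spring system (a textbook analogue, not the GGG rotor model itself): the squared angular frequencies of the normal modes are the eigenvalues of the matrix M⁻¹K built from the mass and stiffness matrices.

```python
import numpy as np

# Two equal masses m coupled by three springs (k, kc, k): a minimal
# analogue of coupled-oscillator normal modes.
m, k, kc = 1.0, 4.0, 1.0
M = np.diag([m, m])
K = np.array([[k + kc, -kc],
              [-kc,    k + kc]])

# Normal modes: eigenvalues of M^-1 K are the squared angular frequencies
w2, mode_shapes = np.linalg.eigh(np.linalg.solve(M, K))
freqs = np.sqrt(w2)
# Expected: omega_1 = sqrt(k/m) (in-phase mode),
#           omega_2 = sqrt((k + 2*kc)/m) (anti-phase mode)
```

The in-phase mode leaves the coupling spring unstretched, so only the outer springs set its frequency; the anti-phase mode stretches the coupling spring twice per displacement, which raises its frequency.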

  3. Application of empirical mode decomposition method for characterization of random vibration signals

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    2016-07-01

Characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for characterizing measured random vibration signals, resting on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component; the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component; and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.
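A toy sketch of the EMD sifting idea the abstract relies on: the first IMF is obtained by repeatedly subtracting the mean of the upper and lower extrema envelopes. Linear interpolation is used here for brevity (practical EMD uses cubic splines and stricter stopping criteria), and the signal is synthetic:

```python
import numpy as np

def sift(x, n_sifts=8):
    """Extract an approximation of the first intrinsic mode function
    (IMF) of x by repeatedly subtracting the mean of the upper and
    lower extrema envelopes."""
    h = np.asarray(x, dtype=float).copy()
    idx = np.arange(len(h))
    for _ in range(n_sifts):
        maxima = [i for i in idx[1:-1] if h[i] > h[i - 1] and h[i] > h[i + 1]]
        minima = [i for i in idx[1:-1] if h[i] < h[i - 1] and h[i] < h[i + 1]]
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(idx, maxima, h[maxima])   # upper envelope
        lower = np.interp(idx, minima, h[minima])   # lower envelope
        h = h - (upper + lower) / 2.0               # remove local mean
    return h

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 10 * t) + 2 * t   # fast oscillation + slow trend
imf1 = sift(signal)            # approximates the fast oscillation
residue = signal - imf1        # slow part, analogous to the LFR/trend split
```

Each subsequent IMF would be obtained by sifting the residue again, which is how the WFR/LFR/trend decomposition in the abstract arises.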

  4. Development of a magnet power supply with sub-ppm ripple performance for J-PARC with a novel common-mode rejection method with an NPC inverter

    International Nuclear Information System (INIS)

    Koseki, K.; Kurimoto, Y.

    2014-01-01

The mechanism that generates common-mode noise in inverter circuits, which are widely used in magnet power supplies, was evaluated by a circuit simulation. Owing to asymmetric operational sequences, a pulsed voltage is applied to the parasitic capacitance of the power cables, which causes a common-mode current at each switching period of the semiconductor switches. Common-mode noise was also found to disturb the normal-mode excitation current by inducing higher-frequency components in the voltage applied to the magnet. To eliminate this disturbance, a newly developed operational method that uses a neutral-point-clamped (NPC) inverter with reduced switching sequences was evaluated both by circuit simulation and experimentally. The operational method for the NPC inverter reduced the common-mode noise sufficiently. A high-power test operation performed using 16 bending magnets at the J-PARC facility achieved a ripple of less than 1 ppm in the excitation current.

  5. Development of a magnet power supply with sub-ppm ripple performance for J-PARC with a novel common-mode rejection method with an NPC inverter

    Energy Technology Data Exchange (ETDEWEB)

    Koseki, K., E-mail: kunio.koseki@kek.jp; Kurimoto, Y.

    2014-05-21

The mechanism that generates common-mode noise in inverter circuits, which are widely used in magnet power supplies, was evaluated by a circuit simulation. Owing to asymmetric operational sequences, a pulsed voltage is applied to the parasitic capacitance of the power cables, which causes a common-mode current at each switching period of the semiconductor switches. Common-mode noise was also found to disturb the normal-mode excitation current by inducing higher-frequency components in the voltage applied to the magnet. To eliminate this disturbance, a newly developed operational method that uses a neutral-point-clamped (NPC) inverter with reduced switching sequences was evaluated both by circuit simulation and experimentally. The operational method for the NPC inverter reduced the common-mode noise sufficiently. A high-power test operation performed using 16 bending magnets at the J-PARC facility achieved a ripple of less than 1 ppm in the excitation current.

  6. The research on AP1000 nuclear main pumps’ complete characteristics and the normalization method

    International Nuclear Information System (INIS)

    Zhu, Rongsheng; Liu, Yong; Wang, Xiuli; Fu, Qiang; Yang, Ailing; Long, Yun

    2017-01-01

Highlights: • The complete characteristics of the main pump are investigated. • Head and torque are quadratic under some operating conditions. • The characteristics tend to coincide under certain conditions. • The normalization method gives proper estimates of the external characteristics. • The normalization method can efficiently improve security computing. - Abstract: The paper summarizes the complete characteristics of nuclear main pumps based on experimental results and makes a detailed study, drawing a series of important conclusions: with regard to the overall flow area, the runaway and 0-revolving-speed operating conditions of nuclear main pumps both have quadratic characteristics; with regard to the infinite flow, the braking and 0-revolving-speed operating conditions show consistent external characteristics. To remedy the shortcoming of the traditional complete-characteristic expression, which describes only limited flow sections at specific revolving speeds, the paper proposes a normalization method. As an important boundary condition of the security computing of the unstable transient process of the primary reactor coolant pump and of the nuclear island primary and secondary circuits, the precision of the complete-characteristic data and curves affects the precision of the security computing. A normalization curve, obtained by applying the normalization method to complete-characteristic data, can correctly, completely and precisely express the complete characteristics of the primary reactor coolant pump at any rotational speed and full flow, and is capable of giving proper estimates of the external characteristics of flows outside the test range and even of the infinite flow. These advantages are of great significance for improving the security computing of transient processes of the primary reactor coolant pump and the circuit system.

  7. A method for detecting the presence of organic fraction in nucleation mode sized particles

    Directory of Open Access Journals (Sweden)

    P. Vaattovaara

    2005-01-01

New particle formation and growth play a very important role in many climate processes. However, the overall knowledge of the chemical composition of atmospheric nucleation mode (particle diameter d < 20 nm) and lower-end Aitken mode (d ≤ 50 nm) particles is still insufficient. In this work, we have applied the UFO-TDMA (ultrafine organic tandem differential mobility analyzer) method to shed light on the presence of an organic fraction in the nucleation mode size class in different atmospheric environments. The basic principle of organic fraction detection is based on our laboratory UFO-TDMA measurements with organic and inorganic compounds. Our laboratory measurements indicate that the usefulness of the UFO-TDMA in field experiments arises especially from the fact that the most atmospherically relevant inorganic compounds do not grow in subsaturated ethanol vapor when the particle size is 10 nm in diameter and the saturation ratio is about 86% or below. Furthermore, internally mixed particles composed of ammonium bisulfate and sulfuric acid with a sulfuric acid mass fraction ≤33% show no growth at 85% saturation ratio. In contrast, 10 nm particles composed of various oxidized organic compounds of atmospheric relevance are able to grow in those conditions. These findings indicate that it is possible to detect the presence of organics in atmospheric nucleation mode sized particles using the UFO-TDMA method. In the future, the UFO-TDMA is expected to be an important aid in describing the composition of atmospheric newly-formed particles.

  8. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    Directory of Open Access Journals (Sweden)

    Hisham Qasrawi

    2016-01-01

The workability-dispersion-cohesion method is a newly proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to the mobility and stability of the concrete mix, and are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within standard specifications, different water-to-cement ratios, and various degrees of workability. Simple linear relationships were developed for the variables encountered in the mix design and presented in graphical form. The method can be used in countries where the grading or fineness of the available materials differs from common international specifications (such as ASTM or BS). Results were compared to the ACI and British methods of mix design. The method can be extended to cover all types of concrete.

  9. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

Purpose: Numerous companies expect their knowledge management (KM) to be performed effectively in order to leverage and transform knowledge into competitive advantages. However, this raises the critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for ranking and selecting among a number of externally determined alternatives through distance measures. The entropy method is often used for assessing weights in the TOPSIS method; entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To reduce employees' resistance to implementing a new strategy, it seems necessary to take every manager's opinion into account. The normal distribution, the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that, evaluating the alternatives against 6 criteria, the most appropriate KM strategy to implement in our company was ''Personalization''. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the approach, such as the normal distribution of the sample and community; these assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the presented solution, the opinions of all managers are gathered and normalized using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; normal distribution; entropy
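The entropy-weighted TOPSIS ranking described above can be sketched as follows. The decision matrix is hypothetical, all criteria are treated as benefit-type for simplicity, and the statistical normalization step the authors add is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose scores vary more across
    alternatives carry more information and receive larger weights."""
    P = X / X.sum(axis=0)                            # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per criterion
    d = 1.0 - E                                      # degree of diversification
    return d / d.sum()

def topsis(X, weights):
    """Closeness of each alternative to the ideal solution
    (all criteria assumed benefit-type in this sketch)."""
    V = X / np.linalg.norm(X, axis=0) * weights      # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness coefficient

# Hypothetical decision matrix: 3 KM strategies rated on 4 criteria
X = np.array([[7.0, 9.0, 9.0, 8.0],
              [8.0, 7.0, 8.0, 7.0],
              [9.0, 6.0, 8.0, 9.0]])
w = entropy_weights(X)
closeness = topsis(X, w)
best = int(np.argmax(closeness))   # index of the highest-ranked strategy
```

The alternative with the largest closeness coefficient is the recommended strategy; in the paper that role is played by "Personalization".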

  10. A CALCULATION METHOD OF TRANSIENT MODES OF ELECTRIC SHIPS’ PROPELLING ELECTRIC PLANTS

    Directory of Open Access Journals (Sweden)

    V. A. Yarovenko

    2017-12-01

The purpose of this work is to develop a method for calculating the transient modes of electric ships' propelling electric plants during maneuvers; this will allow the maneuverability of vessels with electric propulsion to be evaluated and improved. Methodology. The problem is solved by mathematical modeling of maneuvering modes. The duration of transient modes in an electric power plant during an electric ship's maneuvers is commensurable with the transient operation modes of the vessel itself, so the analysis of the electric power plant's maneuvering modes should be made in unity with all the components of the ship's propulsion complex. Results. A refined mathematical model of the transient regimes of an electric ship's propulsion complex, including thermal motors, synchronous generators, electric power converters, propulsion motors, propellers, rudder and ship's hull, is developed. The model is universal: it covers the vast majority of modern and prospective electric ships with a traditional type of propulsor, allows calculating the current values of the basic mode indicators, and assesses the quality indicators of maneuvering. The model is expressed in relative units, and dimensionless parameters of the complex that influence the main indicators of maneuvering quality are obtained. The adequacy of the refined mathematical model and of the computation method developed on its basis was confirmed by comparing the results of mathematical modeling for a real electric ship with data obtained in field experiments conducted by other researchers. Originality. The mathematical description of a generator unit as an integral part of an indivisible ship's propulsion complex makes it possible to calculate the dynamic operation modes of electric power sources during electric vessels' maneuvering. There is an opportunity to design the electric ships

  11. A general solution strategy of modified power method for higher mode solutions

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous-energy Monte Carlo simulation. The new approach adopts four features: 1) eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher-mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. Numerical tests on neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate fission source convergence, with stable convergence behavior, while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as: 1) replacement of the cumbersome solution of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous-energy Monte Carlo simulation, and higher eigenmodes up to 4th order, are reported for the first time in this paper. Highlights: • The modified power method is applied to continuous-energy Monte Carlo simulation. • A transfer matrix is introduced to generalize the modified power method. • All-mode-based population control is applied to obtain the higher eigenmodes. • Statistical fluctuations can be greatly reduced using accumulated tally results. • Fission source convergence is accelerated with higher-mode solutions.
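The role of higher-mode extraction can be illustrated with a deterministic analogue of the approach: plain power iteration combined with deflation against already-converged modes. This is only a matrix sketch under that substitution; the paper's Monte Carlo method replaces explicit deflation with weight cancellation and population control:

```python
import numpy as np

def power_iteration_modes(A, n_modes=2, iters=500):
    """Extract the n_modes largest eigenpairs of a symmetric matrix A
    by power iteration, deflating each iterate against the modes that
    have already converged."""
    n = A.shape[0]
    modes, eigvals = [], []
    for _ in range(n_modes):
        x = np.random.default_rng(0).random(n)
        for _ in range(iters):
            for v in modes:                 # deflate converged modes
                x -= (v @ x) * v
            x = A @ x
            x /= np.linalg.norm(x)
        for v in modes:                     # final clean-up deflation
            x -= (v @ x) * v
        x /= np.linalg.norm(x)
        modes.append(x)
        eigvals.append(x @ A @ x)           # Rayleigh quotient
    return np.array(eigvals), np.array(modes)

# Small symmetric test matrix with eigenvalues 3 + sqrt(3), 3, 3 - sqrt(3)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
vals, vecs = power_iteration_modes(A, n_modes=2)
```

In the Monte Carlo setting the fission source plays the role of the iterate, which is why accelerating convergence of the fundamental mode and recovering higher modes are two sides of the same strategy.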

  12. An automatic method to discriminate malignant masses from normal tissue in digital mammograms

    International Nuclear Information System (INIS)

    Brake, Guido M. te; Karssemeijer, Nico; Hendriks, Jan H.C.L.

    2000-01-01

Specificity levels of automatic mass detection methods in mammography are generally rather low, because suspicious-looking normal tissue is often hard to discriminate from real malignant masses. In this work a number of features were defined that are related to image characteristics that radiologists use to discriminate real lesions from normal tissue. An artificial neural network was used to map the computed features to a measure of suspiciousness for each region that was found suspicious by a mass detection method. Two data sets were used to test the method. The first set of 72 malignant cases (132 films) was a consecutive series taken from the Nijmegen screening programme; 208 normal films were added to improve the estimation of the specificity of the method. The second set was part of the new DDSM data set from the University of South Florida; a total of 193 cases (772 films) with 372 annotated malignancies was used. The measure of suspiciousness computed from the image characteristics was successful in discriminating tumours from false positive detections: approximately 75% of all cancers were detected in at least one view at a specificity level of 0.1 false positives per image. (author)

  13. An anisotropic shear velocity model of the Earth's mantle using normal modes, body waves, surface waves and long-period waveforms

    Science.gov (United States)

    Moulik, P.; Ekström, G.

    2014-12-01

We use normal-mode splitting functions in addition to surface wave phase anomalies, body wave traveltimes and long-period waveforms to construct a 3-D model of anisotropic shear wave velocity in the Earth's mantle. Our modelling approach inverts for mantle velocity and anisotropy as well as transition-zone discontinuity topographies, and incorporates new crustal corrections for the splitting functions that are consistent with the non-linear corrections we employ for the waveforms. Our preferred anisotropic model, S362ANI+M, is an update to the earlier model S362ANI, which did not include normal-mode splitting functions in its derivation. The new model has stronger isotropic velocity anomalies in the transition zone and slightly smaller anomalies in the lowermost mantle, as compared with S362ANI. The differences in the mid- to lowermost mantle are primarily restricted to features in the Southern Hemisphere. We compare the isotropic part of S362ANI+M with other recent global tomographic models and show that the level of agreement is higher now than in the earlier generation of models, especially in the transition zone and the lower mantle. The anisotropic part of S362ANI+M is restricted to the upper 300 km of the mantle and is similar to S362ANI. When radial anisotropy is allowed throughout the mantle, large-scale anisotropic patterns are observed in the lowermost mantle with vSV > vSH beneath Africa and the South Pacific and vSH > vSV beneath several circum-Pacific regions. The transition zone exhibits localized anisotropic anomalies of ~3 per cent vSH > vSV beneath North America and the Northwest Pacific and ~2 per cent vSV > vSH beneath South America. However, small improvements in fits to the data on adding anisotropy at depth leave the question open of whether large-scale radial anisotropy is required in the transition zone and in the lower mantle. We demonstrate the potential of mode-splitting data in reducing the trade-offs between isotropic velocity and

  14. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    Science.gov (United States)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.

  15. EFFICIENCY ANALYSIS OF HASHING METHODS FOR FILE SYSTEMS IN USER MODE

    Directory of Open Access Journals (Sweden)

    E. Y. Ivanov

    2013-05-01

The article deals with the characteristics and performance of interaction protocols between the virtual file system and the file system, and their influence on the processing power of microkernel operating systems. A user-mode implementation of the ext2 file system for MINIX 3 OS is used to show that in microkernel operating systems file object identification time may increase up to 26 times in comparison with monolithic systems. We therefore present an efficiency analysis of various hashing methods for file systems running in user mode. The studies show that, using the hashing methods recommended in this paper, it is possible to achieve competitive performance of the considered component of the I/O stack in microkernel and monolithic operating systems.
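As an illustration of why the choice of hash function matters for file-object lookup, the sketch below compares a well-distributed string hash (FNV-1a) with a naive byte sum on synthetic file names. It is not the hashing scheme evaluated in the article:

```python
def fnv1a(name: str) -> int:
    """32-bit FNV-1a: a cheap, well-distributed string hash often used
    for name lookup tables such as a directory-entry cache."""
    h = 0x811C9DC5
    for byte in name.encode():
        h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
    return h

def bucket_spread(names, n_buckets, hash_fn):
    """Number of distinct buckets a set of names occupies; a larger
    spread means fewer collisions during file-object identification."""
    return len({hash_fn(n) % n_buckets for n in names})

names = [f"file{i:04d}.txt" for i in range(256)]
naive = lambda s: sum(s.encode())   # poor hash: order-insensitive byte sum
spread_fnv = bucket_spread(names, 512, fnv1a)
spread_naive = bucket_spread(names, 512, naive)
```

The byte-sum hash collapses the similarly named files into a handful of buckets, so lookups degenerate toward linear scans, while FNV-1a spreads them widely; this is the kind of difference the article's efficiency analysis quantifies for real file systems.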

  16. Method of detection of transition radiation by wire chambers operating in self-quenching streamer mode

    International Nuclear Information System (INIS)

    Akopdzhanov, G.A.; Bityukov, S.I.; Dzhelyadin, R.I.; Zaitsev, A.M.; Lapin, V.V.; Saraikin, A.I.

    1984-01-01

A method for detecting X-ray transition radiation against the background of the signal from relativistic charged particles is suggested, based on the peculiarities of the development of the self-quenching streamer mode. The self-quenching streamer discharge in a Xe + isobutane mixture is registered experimentally, and separation of the signals from the relativistic particle and from soft X-rays is obtained.

  17. A graphical method for estimating the tunneling factor for mode conversion processes

    International Nuclear Information System (INIS)

    Swanson, D.G.

    1994-01-01

    The fundamental parameter characterizing the strength of any mode conversion process is the tunneling parameter, which is typically determined from a model dispersion relation which is transformed into a differential equation. Here a graphical method is described which gives the tunneling parameter from quantities directly measured from a simple graph of the dispersion relation. The accuracy of the estimate depends only on the accuracy of the measurements

  18. Application of specific gravity method for normalization of urinary excretion rates of radionuclides

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2015-01-01

In vitro bioassay monitoring is based on the determination of activity concentration in biological samples excreted from the body and is most suitable for alpha and beta emitters. For occupational workers handling actinides in reprocessing facilities, the possibility of internal exposure exists, and urine assay is the preferred method for monitoring such exposure. A urine sample collected over a 24 h period is the true representative bioassay sample; hence, when the collection time is insufficient, normalization of the urine sample based on specific gravity is used. The present study reports specific gravity data generated for a control group of the Indian population using a densitometer, and its application to the normalization of urinary sample activity. The average specific gravity value obtained for the control group was 1.008±0.005 g/ml. (author)
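A common specific-gravity correction in urinary biomonitoring scales the measured concentration by the ratio (SG_ref - 1)/(SG_sample - 1). The abstract does not state the authors' exact formula, so the sketch below is an assumption, using the population mean of 1.008 g/ml reported above as the reference:

```python
def sg_normalize(activity, sample_sg, ref_sg=1.008):
    """Scale a measured urinary activity concentration to the value it
    would have at the reference specific gravity. ref_sg = 1.008 g/ml is
    the control-group mean from the abstract; the formula itself is a
    commonly used correction, not necessarily the authors' procedure."""
    return activity * (ref_sg - 1.0) / (sample_sg - 1.0)

# A dilute sample (SG 1.004) is scaled up toward the reference
corrected = sg_normalize(activity=2.0, sample_sg=1.004)   # arbitrary units
```

A sample at exactly the reference specific gravity is left unchanged, while dilute samples are scaled up and concentrated samples scaled down, mimicking a full 24 h collection.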

  19. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for correcting fluctuations of Doppler signals caused by various noises, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal contains an error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error, using an additional laser beam through an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method: the standard deviation was reduced to 4.838 × 10⁻³.

  20. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  1. A new maximum power point method based on a sliding mode approach for solar energy harvesting

    International Nuclear Information System (INIS)

    Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad

    2017-01-01

    Highlights: • Creates a simple, easy-to-implement and accurate V_MPP estimator. • Stability analysis of the proposed system based on Lyapunov theory. • A comparative study versus P&O highlights the good performance of the SMC. • Constructs a new PS-SMC algorithm to include the partial shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested in the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point, outperforming the well-known perturbation and observation (P&O) method. The sliding mode controller's performance is evaluated at steady state and against load variations and panel partial shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the sliding-mode-based maximum power point tracker is performed on a dSPACE real-time digital control platform. The data acquisition and control system are built around the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.
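    The core idea of a sliding-mode MPPT can be sketched numerically: the sliding surface is s = dP/dV (zero at the maximum power point), and a bang-bang control pushes the operating voltage in the direction sign(s). The toy PV curve and all parameters below are illustrative assumptions, not the paper's model or controller.

    ```python
    import numpy as np

    # Toy PV model (hypothetical parameters, not from the paper)
    ISC, VOC, VT = 5.0, 20.0, 2.0

    def pv_current(v):
        return ISC * (1.0 - np.exp((v - VOC) / VT))

    def pv_power(v):
        return v * pv_current(v)

    def smc_mppt(v0=5.0, dv=1e-3, gain=0.05, steps=4000):
        """Drive the operating voltage with a sliding-mode-style law.

        Sliding surface: s = dP/dV (zero at the MPP). The control pushes
        the voltage in the direction sign(s), a switching law typical of SMC.
        """
        v = v0
        for _ in range(steps):
            s = (pv_power(v + dv) - pv_power(v - dv)) / (2 * dv)  # surface
            v += gain * np.sign(s)  # switching control
        return v

    v_mpp = smc_mppt()
    ```

    Unlike P&O, the update direction here comes directly from the sign of the surface, so the operating point chatters in a small band around the MPP set by the switching gain.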

  2. Adaptive Sliding Mode Control Method Based on Nonlinear Integral Sliding Surface for Agricultural Vehicle Steering Control

    Directory of Open Access Journals (Sweden)

    Taochang Li

    2014-01-01

    Full Text Available Automatic steering control is the key factor and an essential condition in the realization of automatic navigation control of agricultural vehicles. In order to obtain satisfactory steering control performance, an adaptive sliding mode control method based on a nonlinear integral sliding surface is proposed in this paper for agricultural vehicle steering control. First, the vehicle steering system is modeled as a second-order mathematical model; the system uncertainties and unmodeled dynamics, as well as the external disturbances, are regarded as an equivalent disturbance satisfying a certain bound. Second, a transient process of the desired system response is constructed in each navigation control period. Based on the transient process, a nonlinear integral sliding surface is designed. The corresponding sliding mode control law is then derived to guarantee fast response characteristics with no overshoot in the closed-loop steering control system. Meanwhile, the switching gain of the sliding mode control is adaptively adjusted by a fuzzy control method to alleviate control input chattering. Finally, the effectiveness and superiority of the proposed method are verified by a series of simulations and actual steering control experiments.
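    The structure described above can be sketched on a toy second-order steering model theta'' = u + d(t). Two simplifications versus the paper are worth flagging: a *linear* integral sliding surface stands in for the nonlinear one, and the adaptive gain uses a plain |s|-proportional law rather than fuzzy adjustment; all numbers are illustrative.

    ```python
    import numpy as np

    # Sketch of an adaptive SMC for a second-order steering model
    # theta'' = u + d(t). Parameters are illustrative, not from the paper.
    def steer(theta_ref=0.5, T=8.0, dt=1e-3):
        c, ki = 4.0, 2.0        # sliding-surface coefficients
        K, gamma = 0.1, 5.0     # switching gain and adaptation rate
        theta = omega = ei = 0.0
        for i in range(int(T / dt)):
            t = i * dt
            d = 0.3 * np.sin(2.0 * t)          # unknown bounded disturbance
            e = theta_ref - theta
            ei += e * dt
            s = -omega + c * e + ki * ei       # integral sliding surface
            if abs(s) > 1e-3:                  # adapt gain off the surface
                K += gamma * abs(s) * dt
            u = -c * omega + ki * e + K * np.sign(s)  # control law
            omega += (u + d) * dt              # plant integration (Euler)
            theta += omega * dt
        return theta

    theta_end = steer()
    ```

    On the surface s = 0 the error obeys e'' + c e' + ki e = 0, which is asymptotically stable, while the adapted gain K only needs to exceed the disturbance bound to keep the state on the surface.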

  3. Normal Values of Tissue-Muscle Perfusion Indexes of Lower Limbs Obtained with a Scintigraphic Method.

    Science.gov (United States)

    Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti

    2017-09-01

    Introduction: Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method: The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease; all had normal findings on Doppler ultrasonography and a normal pedo-brachial index (PBI) of the lower extremities. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion at rest ("rest study") and after workload ("stress study") through quantitative parameters: the inter-extremity indexes (for both studies), left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC), and the perfusion reserve (PR) for both thighs and calves. Results: In the investigated group we assessed the normal values of the quantitative perfusion indexes. LT/RT ranged from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study; LC/RC ranged from 0.93 to 1.07 at rest and from 0.93 to 1.09 under stress. Examinees older than 50 years had an insignificantly lower perfusion reserve than those younger than 50 for the left calf (p=0.98) and right calf (p=0.6). Conclusion: This non-invasive scintigraphic method makes it possible to determine, in individuals without peripheral artery disease, the range of normal values of muscle perfusion at rest and under stress, and to implement these values clinically in the evaluation of patients with peripheral artery disease, differentiating those with normal from those with impaired lower-limb circulation.

  4. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Through the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique, which allows different clustering results to be obtained and leads to different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms to the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm had not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
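    The normalization step the abstract studies can be made concrete: after factoring V ≈ WH, each column of W is rescaled to unit maximum norm and the scale is absorbed into H, so the product is unchanged while the factors become comparable across components. The sketch below uses the standard Lee-Seung multiplicative updates on random data, not the authors' code.

    ```python
    import numpy as np

    # Sketch: NMF via multiplicative updates, then max-norm normalization of W.
    rng = np.random.default_rng(0)
    V = rng.random((30, 20))            # toy nonnegative data matrix
    k = 4
    W = rng.random((30, k)) + 0.1
    H = rng.random((k, 20)) + 0.1
    for _ in range(300):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

    # Max-norm normalization: scale each column of W to unit maximum and
    # push the scale into H so the product W @ H is unchanged.
    scale = W.max(axis=0)
    W_n = W / scale
    H_n = H * scale[:, None]
    ```

    Clustering then typically assigns each sample (column of V) to the component with the largest entry in the corresponding column of H_n; the choice of norm changes which component wins, which is exactly the effect the paper evaluates.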

  5. Extension of Tom Booth's Modified Power Method for Higher Eigen Modes

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2015-01-01

    A possible technique to obtain even higher modes is suggested, but it is difficult to apply in practice. In this paper, a general solution strategy is proposed that extends Tom Booth's modified power method to obtain higher eigenmodes, with no limitation on the number of eigenmodes that can be obtained. It is more practical than the original solution strategy that Tom Booth proposed. The implementation of the method in a Monte Carlo code shows significant advantages compared to the original power method
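    The goal of extracting higher eigenmodes from power-iteration-style sweeps can be illustrated with plain deflation on a small symmetric operator. This is *not* Booth's modified power method (which works inside a Monte Carlo transport sweep); it only shows the mathematical target: each converged mode is projected out so the iteration converges to the next one.

    ```python
    import numpy as np

    # Sketch: higher eigenmodes via deflated power iteration.
    def power_modes(A, n_modes, iters=2000):
        n = A.shape[0]
        modes, vals = [], []
        rng = np.random.default_rng(1)
        for _ in range(n_modes):
            v = rng.random(n)
            for _ in range(iters):
                for u in modes:                 # deflate converged modes
                    v -= (u @ v) * u
                v = A @ v
                v /= np.linalg.norm(v)
            vals.append(v @ A @ v)              # Rayleigh quotient
            modes.append(v)
        return np.array(vals), np.array(modes)

    A = np.diag([4.0, 3.0, 1.0, 0.5])           # toy symmetric operator
    vals, modes = power_modes(A, 2)
    ```

    The deflation step is what limits naive approaches in Monte Carlo settings, since the sign-changing higher modes cannot be represented by a nonnegative particle population; that is the difficulty the modified power method addresses.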

  6. A method for named entity normalization in biomedical articles: application to diseases and plants.

    Science.gov (United States)

    Cho, Hyejin; Choi, Wonjun; Lee, Hyunju

    2017-10-13

    In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than using only the training corpus or only the unlabeled data, and that the normalization accuracy was improved by our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and the manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. 
The proposed approach shows robust
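    The basic mechanics of embedding-based normalization reduce to a nearest-neighbour search: embed the recognized mention and every dictionary concept, then return the concept with the highest cosine similarity. The 4-dimensional vectors below are invented purely for illustration; in the paper the vectors come from word embeddings trained on PubMed text.

    ```python
    import numpy as np

    # Sketch: normalize a recognized mention to a dictionary concept by
    # nearest-neighbour search in an embedding space (toy vectors).
    concepts = {
        "type 2 diabetes mellitus": np.array([0.9, 0.1, 0.0, 0.2]),
        "breast carcinoma":         np.array([0.1, 0.8, 0.3, 0.0]),
        "hypertension":             np.array([0.2, 0.1, 0.9, 0.1]),
    }

    def normalize(mention_vec):
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(concepts, key=lambda c: cos(mention_vec, concepts[c]))

    # A mention such as "T2DM" would be embedded near the diabetes concept:
    best = normalize(np.array([0.85, 0.15, 0.05, 0.1]))
    ```

    This is why the approach helps when dictionaries are incomplete: a synonym or abbreviation absent from the dictionary can still land near the right concept in the embedding space.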

  7. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve the analysis of data originating from multiple experimental runs. In the second part, we apply cyclic-LOESS normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option for normalizing datasets from untargeted LC-MS experiments. PMID:23808607
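    A minimal example of the *data-driven* idea (no internal standards): estimate a scale factor from the data of each run and divide it out. Median scaling, shown below on synthetic intensities, is far simpler than the cyclic-LOESS scheme the paper applies, but it illustrates how systematic run-to-run scale differences are removed.

    ```python
    import numpy as np

    # Sketch: data-driven normalization by per-run median scaling.
    # Synthetic data; run factors 1.0, 1.7, 0.6 mimic systematic drift.
    rng = np.random.default_rng(0)
    true_intensity = rng.lognormal(5.0, 1.0, size=200)   # toy metabolite levels
    runs = [true_intensity * f * rng.normal(1.0, 0.02, 200)
            for f in (1.0, 1.7, 0.6)]

    target = np.median(runs[0])
    normalized = [r * (target / np.median(r)) for r in runs]
    ```

    Cyclic-LOESS goes further by fitting an intensity-dependent correction between pairs of runs, which also removes non-linear distortions that a single scale factor cannot capture.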

  8. Evaluation of coupling terms between intra- and intermolecular vibrations in coarse-grained normal-mode analysis: Does a stronger acid make a stiffer hydrogen bond?

    Science.gov (United States)

    Houjou, Hirohiko

    2011-10-01

    Using the theory of harmonic normal-mode vibration analysis, we developed a procedure for evaluating the anisotropic stiffness of intermolecular forces. Our scheme for coarse-graining of molecular motions is modified so as to account for intramolecular vibrations in addition to relative translational/rotational displacements. We applied this new analytical scheme to four carboxylic acid dimers, for which coupling between intra- and intermolecular vibrations is crucial in determining the apparent stiffness of the intermolecular double hydrogen bond. The apparent stiffness constant was analyzed on the basis of a conjunct spring model, which separates the contributions of the true intermolecular stiffness and the molecular internal stiffness. The true intermolecular stiffness was found to be in the range of 43-48 N m⁻¹ for all carboxylic acids studied, regardless of the molecules' acidity. We conclude that differences in the apparent stiffness can be attributed to differences in the internal stiffness of the respective molecules.
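    Reading the conjunct spring model as springs in series (an assumption on my part; the abstract only says it separates the two contributions), the apparent stiffness follows from 1/k_app = 1/k_inter + 1/k_intra, which makes explicit why a softer molecular interior lowers the apparent hydrogen-bond stiffness even when k_inter is unchanged. The numbers below are illustrative, chosen near the quoted 43-48 N/m range.

    ```python
    # Sketch of a series (conjunct) spring model: intrinsic intermolecular
    # stiffness k_inter in series with molecular internal stiffness k_intra.
    def apparent_stiffness(k_inter, k_intra):
        # series combination: 1/k_app = 1/k_inter + 1/k_intra
        return 1.0 / (1.0 / k_inter + 1.0 / k_intra)

    k_app = apparent_stiffness(45.0, 300.0)  # N/m, illustrative values
    ```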

  9. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    Full Text Available A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  10. Partition functions in even dimensional AdS via quasinormal mode methods

    International Nuclear Information System (INIS)

    Keeler, Cynthia; Ng, Gim Seng

    2014-01-01

    In this note, we calculate the one-loop determinant for a massive scalar (with conformal dimension Δ) in even-dimensional AdS_{d+1} space, using the quasinormal mode method developed in http://dx.doi.org/10.1088/0264-9381/27/12/125001 by Denef, Hartnoll, and Sachdev. Working first in two dimensions on the related Euclidean hyperbolic plane H_2, we find a series of zero modes for negative real values of Δ whose presence indicates a series of poles in the one-loop partition function Z(Δ) in the complex Δ plane; these poles contribute temperature-independent terms to the thermal AdS partition function computed in http://dx.doi.org/10.1088/0264-9381/27/12/125001. Our results match those in a series of papers by Camporesi and Higuchi, as well as Gopakumar et al. http://dx.doi.org/10.1007/JHEP11(2011)010 and Banerjee et al. http://dx.doi.org/10.1007/JHEP03(2011)147. We additionally examine the meaning of these zero modes, finding that they Wick-rotate to quasinormal modes of the AdS_2 black hole. They are also interpretable as matrix elements of the discrete series representations of SO(2,1) in the space of smooth functions on S^1. We generalize our results to general even-dimensional AdS_{2n}, again finding a series of zero modes which are related to discrete series representations of SO(2n,1), the motion group of H_{2n}.

  11. Novel method to control antenna currents based on theory of characteristic modes

    Science.gov (United States)

    Elghannai, Ezdeen Ahmed

    Characteristic Mode Theory is one of the very few numerical methods that provide a great deal of physical insight because it allows us to determine the natural modes of the radiating structure. The key feature of these modes is that the total induced antenna current, input impedance/admittance and radiation pattern can be expressed as a linear weighted combination of individual modes. Using this decomposition method, it is possible to study the behavior of the individual modes, understand them and therefore control the antennas behavior; in other words, control the currents induced on the antenna structure. This dissertation advances the topic of antenna design by carefully controlling the antenna currents over the desired frequency band to achieve the desired performance specifications for a set of constraints. Here, a systematic method based on the Theory of Characteristic Modes (CM) and lumped reactive loading to achieve the goal of current control is developed. The lumped reactive loads are determined based on the desired behavior of the antenna currents. This technique can also be used to impedance match the antenna to the source/generator connected to it. The technique is much more general than the traditional impedance matching. Generally, the reactive loads that properly control the currents exhibit a combination of Foster and non-Foster behavior. The former can be implemented with lumped passive reactive components, while the latter can be implemented with lumped non-Foster circuits (NFC). The concept of current control is applied to design antennas with a wide band (impedance/pattern) behavior using reactive loads. We successfully applied this novel technique to design multi band and wide band antennas for wireless applications. The technique was developed to match the antenna to resistive and/or complex source impedance and control the radiation pattern at these frequency bands, considering size and volume constraints. A wide band patch antenna was

  12. Asymptotic Method of Solution for a Problem of Construction of Optimal Gas-Lift Process Modes

    Directory of Open Access Journals (Sweden)

    Fikrat A. Aliev

    2010-01-01

    Full Text Available A mathematical model of oil extraction by the gas-lift method is considered for the case in which the reciprocal of the well's depth is a small parameter. The problem of optimal mode construction (i.e., construction of optimal program trajectories and controls) is reduced to a linear-quadratic optimal control problem with a small parameter. Analytic formulae for determining the solutions at the first-order approximation with respect to the small parameter are obtained. A comparison of the obtained results with known ones on a specific example is provided, which makes it possible, in particular, to use the obtained results in realizations of oil extraction problems by the gas-lift method.

  13. A boundary integral method for a dynamic, transient mode I crack problem with viscoelastic cohesive zone

    KAUST Repository

    Leise, Tanya L.

    2009-08-19

    We consider the problem of the dynamic, transient propagation of a semi-infinite, mode I crack in an infinite elastic body with a nonlinear, viscoelastic cohesive zone. Our problem formulation includes boundary conditions that preclude crack face interpenetration, in contrast to the usual mode I boundary conditions that assume all unloaded crack faces are stress-free. The nonlinear viscoelastic cohesive zone behavior is motivated by dynamic fracture in brittle polymers, in which crack propagation is preceded by significant crazing in a thin region surrounding the crack tip. We present a combined analytical/numerical solution method that involves reducing the problem to a Dirichlet-to-Neumann map along the crack face plane, resulting in an integro-differential equation relating the displacement and stress along the crack faces and within the cohesive zone. © 2009 Springer Science+Business Media B.V.

  14. Field analysis of TE and TM modes in photonic crystal Bragg fibers by transmission matrix method

    Directory of Open Access Journals (Sweden)

    M Hosseini Farzad

    2010-03-01

    Full Text Available In this article, we consider field analysis in photonic crystal Bragg fibers. We apply the transmission matrix method to calculate the dispersion curves, the ratio of the longitudinal wave number to the free-space wave number versus incident wavelength, and the field distributions of TE and TM modes in the Bragg fiber. Our analysis shows that the field of the guided modes is confined in the core and can exist only in particular wavelength bands corresponding to the band gaps of the periodic structure of the clad. From another point of view, light confinement is due to Bragg reflection from the high- and low-refractive-index layers of the clad. Also, the diagram of average angular frequency with respect to average longitudinal wave number is plotted, so that the band gap regions of the clad are clearly observed.
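    The transmission (transfer) matrix machinery used above can be shown in its simplest setting: a planar quarter-wave Bragg stack at normal incidence, where each layer contributes a 2×2 characteristic matrix and the product gives the reflectance. This is a sketch of the method, not the fiber geometry of the paper (a cylindrical Bragg fiber needs Bessel-function matrices); indices and thicknesses are illustrative.

    ```python
    import numpy as np

    # Sketch: transfer matrix method for a planar Bragg stack, normal incidence.
    def stack_reflectance(lam, n_layers, d_layers, n_in=1.0, n_out=1.45):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / lam       # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_out])           # stack admittance vector
        r = (n_in * B - C) / (n_in * B + C)         # amplitude reflectance
        return abs(r) ** 2

    lam0 = 1.55                                      # design wavelength (um)
    nH, nL, pairs = 2.1, 1.45, 8
    ns = [nH, nL] * pairs
    ds = [lam0 / (4 * nH), lam0 / (4 * nL)] * pairs  # quarter-wave layers
    R0 = stack_reflectance(lam0, ns, ds)
    ```

    At the design wavelength each layer is a quarter wave thick, the matrices become anti-diagonal, and the reflectance approaches unity as the number of pairs grows; this is the Bragg-reflection confinement mechanism described in the abstract.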

  15. Comparison between the boundary layer and global resistivity methods for tearing modes in reversed field configurations

    International Nuclear Information System (INIS)

    Santiago, M.A.M.

    1987-01-01

    A review of the problem of growth rate calculations for tearing modes in field-reversed Θ-pinches is presented. It is shown that, for several sets of experimental data, the method treating the plasma with a global finite resistivity gives better quantitative agreement than the boundary layer analysis. A comparative study is made taking into account the m = 1 resistive kink mode and the m = 2 mode, which is more dangerous for the survey of rotational instabilities of the plasma column. It can be seen that the imaginary component of the eigenfrequency, which determines the growth rate, agrees well with the experimental data, while the real component differs from the rotational frequency measured in some experiments. (author) [pt

  16. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate the radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when nuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters on which the dose depends were extracted and the dose per unit of transportation was estimated. (J.P.N.)

  17. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures of Nonconducting Specimens

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 This test method describes an accurate technique for measuring the normal spectral emittance of electrically nonconducting materials in the temperature range from 1000 to 1800 K, and at wavelengths from 1 to 35 μm. It is particularly suitable for measuring the normal spectral emittance of materials such as ceramic oxides, which have relatively low thermal conductivity and are translucent to appreciable depths (several millimetres) below the surface, but which become essentially opaque at thicknesses of 10 mm or less. 1.2 This test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is particularly suitable for research laboratories, where the highest precision and accuracy are desired, and is not recommended for routine production or acceptance testing. Because of its high accuracy, this test method may be used as a reference method to be applied to production and acceptance testing in case of dispute. 1.3 This test metho...

  18. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. 
Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
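    The evaluation index named above, the relative root-mean-square error, is simple to state: the RMS of the difference between a retained component and the reference signal, divided by the RMS of the reference. The sketch below computes it on a synthetic amplitude-modulated "fault" signal plus noise; the component is given directly rather than produced by EEMD, so this only illustrates the index, not the decomposition.

    ```python
    import numpy as np

    # Sketch: relative root-mean-square error (rRMSE) between a retained
    # component and the raw signal. Synthetic data, not EEMD output.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 2000)
    fault = np.sin(2 * np.pi * 120 * t) * (1 + 0.5 * np.sin(2 * np.pi * 7 * t))
    signal = fault + 0.3 * rng.standard_normal(t.size)   # raw vibration

    def rrmse(x, x_ref):
        return np.sqrt(np.mean((x - x_ref) ** 2)) / np.sqrt(np.mean(x_ref ** 2))

    err = rrmse(fault, signal)   # how much of the raw signal the IMF misses
    ```

    In the paper this index is swept over candidate white-noise levels of the EEMD to pick the level whose decomposition best preserves the fault-related content.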

  19. 1H MR spectroscopy of the normal human brains : comparison of automated prescan method with manual method

    International Nuclear Information System (INIS)

    Lim, Myung Kwan; Suh, Chang Hae; Cho, Young Kook; Kim, Jin Hee

    1998-01-01

    The purpose of this paper is to evaluate regional differences in relative metabolite ratios in the normal human brain by 1 H MR spectroscopy (MRS) and to compare the spectral quality obtained by the automated prescan method (PROBE) with that of the manual method. A total of 61 reliable spectra were obtained by PROBE (28/34=82% success) and by the manual method (33/33=100% success). Regional differences in the spectral patterns of the five regions were clearly demonstrated by both the PROBE and manual methods. For prescanning, the manual method took slightly longer than PROBE (3-5 min and 2 min, respectively). There were no significant differences in spectral patterns or relative metabolite ratios between the two methods. However, auto-prescan by PROBE seemed to be very vulnerable to slight movement by patients, and in three cases an acceptable spectrum was thus not obtained. PROBE is a highly practical and reliable method for single-voxel 1 H MRS of the human brain; the two methods of prescanning do not result in significantly different spectral patterns or relative metabolite ratios. PROBE, however, is vulnerable to slight movement by patients, and if the success rate for obtaining quality spectra is to be increased, regardless of the patient's condition and the region of the brain, it must be used in conjunction with the manual method. (author). 23 refs., 2 tabs., 3 figs

  20. Coupling analysis of energy conversion in multi-mode vibration structural control using a synchronized switch damping method

    International Nuclear Information System (INIS)

    Ji, Hongli; Qiu, Jinhao; Xia, Pinqi; Inman, Daniel

    2012-01-01

    Modal coupling is an important issue in the analysis and control of structural systems with multi-degrees of freedom (MDOF). In this paper, modal coupling induced by energy conversion in the structural control of an MDOF system using a synchronized switch damping method is investigated theoretically and validated numerically. In the analysis, it is supposed that the voltage on the piezoelectric actuator is switched at the displacement extrema of a given mode. Two types of coupling in energy conversion are considered. The first is whether the switching action based on one mode induces energy conversion of the other modes. The second is whether the vibration of one mode affects the energy conversion of the other modes. The results indicate that the modal coupling in energy conversion is very complicated. In most cases the switching action based on one mode does induce energy conversion of another mode, but the efficiency depends on the frequency ratio of the two modes. The vibration of one mode affects the energy conversion of another mode only when the frequency ratio of the two modes takes some special values. Discussions are also given on the potential application of the theoretical results in the design of an energy harvesting device. (paper)

  1. Solution of the Lambda modes problem of a nuclear power reactor using an h–p finite element method

    International Nuclear Information System (INIS)

    Vidal-Ferrandiz, A.; Fayez, R.; Ginestar, D.; Verdú, G.

    2014-01-01

    Highlights: • An h–p finite element method is proposed for the Lambda modes problem of a nuclear reactor. • Different strategies can be implemented for increasing the accuracy of the solutions. • 2D and 3D benchmarks have been studied, obtaining accurate results. - Abstract: The Lambda modes of a nuclear power reactor are of interest in reactor physics since they have been used to develop modal methods and to study BWR reactor instabilities. An h–p-adaptation finite element method has been implemented to compute the dominant modes of a nuclear reactor: the fundamental mode and the next subcritical modes. The performance of this method has been studied in three benchmark problems: a homogeneous 2D reactor, the 2D BIBLIS reactor and the 3D IAEA reactor
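    The Lambda modes problem itself is a generalized eigenproblem L φ = (1/λ) M φ, with L the neutron loss operator and M the fission production operator, and λ the multiplication factor. The sketch below sets it up on a toy 1-D one-group diffusion model using finite differences rather than the paper's h-p finite elements; cross-sections and geometry are illustrative.

    ```python
    import numpy as np

    # Sketch: Lambda modes of a 1-D one-group diffusion model,
    # L*phi = (1/lambda)*M*phi, discretized by finite differences.
    n, length = 50, 100.0          # cells, slab width (cm), zero-flux edges
    h = length / n
    D, sig_a, nu_sig_f = 1.0, 0.5, 0.55   # illustrative constants
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = 2 * D / h**2 + sig_a      # leakage + absorption
        if i > 0:
            L[i, i - 1] = -D / h**2
        if i < n - 1:
            L[i, i + 1] = -D / h**2
    M = nu_sig_f * np.eye(n)                # fission production

    # Lambda modes are eigenpairs of L^{-1} M; the largest lambda is k_eff.
    vals, vecs = np.linalg.eig(np.linalg.solve(L, M))
    order = np.argsort(-vals.real)
    k_eff = vals.real[order[0]]
    fundamental = vecs[:, order[0]].real
    ```

    The fundamental mode is sign-constant (the physical flux shape), while the subcritical modes used in modal stability analyses change sign across the core.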

  2. Analysis of Leaky Modes in Photonic Crystal Fibers Using the Surface Integral Equation Method

    Directory of Open Access Journals (Sweden)

    Jung-Sheng Chiang

    2018-04-01

    Full Text Available A fully vectorial algorithm based on the surface integral equation method for modelling leaky modes in photonic crystal fibers (PCFs), which works solely by solving the complex propagation constants of the characteristic equations, is presented. It can be used to calculate the complex effective index and confinement losses of photonic crystal fibers. Since locating complex roots is the key step in the solution, the algorithm, which incorporates such a root search, can solve for the leaky modes of photonic crystal fibers. The leaky modes of solid-core PCFs with a hexagonal lattice of circular air-holes are reported and discussed. The simulation results indicate how the confinement loss, obtained from the imaginary part of the effective index, changes with air-hole size, the number of rings of air-holes, and wavelength. Confinement loss can be reduced by increasing the air-hole size and the number of air-holes. The results also show that the confinement loss rises with wavelength, implying that light leaks more easily at longer wavelengths; meanwhile, the losses decrease significantly as the air-hole size d/Λ is increased.
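
    For reference, the confinement loss quoted from a complex effective index is commonly computed as loss [dB/m] = (20/ln 10)·k0·Im(n_eff) with k0 = 2π/λ. A minimal sketch (the numeric values are placeholders, not results from the paper):

```python
import math

# Standard conversion from the imaginary part of the complex effective
# index to confinement loss in dB per metre:
#   loss = (20 / ln 10) * k0 * Im(n_eff),  k0 = 2*pi / wavelength.
def confinement_loss_db_per_m(wavelength_m, n_eff_imag):
    k0 = 2.0 * math.pi / wavelength_m
    return (20.0 / math.log(10.0)) * k0 * n_eff_imag

# Placeholder numbers, not values from the paper:
loss = confinement_loss_db_per_m(1.55e-6, 1e-8)
print(loss)   # ~0.35 dB/m
```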

  3. Research on FBG-based longitudinal-acousto-optic modulator with Fourier mode coupling method.

    Science.gov (United States)

    Li, Zhuoxuan; Pei, Li; Liu, Chao; Ning, Tigang; Yu, Shaowei

    2012-10-20

    The Fourier mode coupling model was first applied to obtain the spectral properties of a fiber Bragg grating (FBG)-based longitudinal acousto-optic modulator. Compared with traditional analysis algorithms, such as the transfer matrix method, the Fourier mode coupling model improves computing efficiency by up to 100 times while maintaining accuracy. In this paper, based on the theoretical analysis of this model, the spectral characteristics of the modulator at different frequencies and acoustically induced strains were numerically simulated. In the experiment, a uniform FBG was modulated by an acoustic wave (AW) at 12 different frequencies. In particular, the modulator responses at 563 and 885.5 kHz with three different lead zirconate titanate (PZT) loads applied were plotted for illustration, and the linear fitting of the experimental data demonstrated a good match with the simulation results. The acoustic excitation of the longitudinal wave is obtained using a conic silica horn attached to the surface of a shear-mode PZT plate parallel to the fiber axis. This way of generating a longitudinal AW with a transversal PZT may shed light on the optimal structural design for FBG-based longitudinal acousto-optic modulators.
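
    The paper's Fourier mode coupling model is not reproduced here, but the baseline it is compared against is the coupled-mode/transfer-matrix description of a uniform FBG, whose closed-form reflectivity versus detuning is standard; a sketch with illustrative values of the coupling coefficient, detuning and grating length:

```python
import numpy as np

# Standard coupled-mode result for a *uniform* FBG (not the paper's Fourier
# mode coupling model): amplitude reflection versus DC detuning sigma,
#   rho = -kappa*sinh(g*Lg) / (sigma*sinh(g*Lg) + 1j*g*cosh(g*Lg)),
#   g = sqrt(kappa^2 - sigma^2)  (complex sqrt handles both regimes).
# kappa: coupling coefficient [1/m], sigma: detuning [1/m], Lg: length [m].
def fbg_reflectivity(kappa, sigma, Lg):
    g = np.sqrt(complex(kappa**2 - sigma**2))
    rho = -kappa * np.sinh(g * Lg) / (sigma * np.sinh(g * Lg) + 1j * g * np.cosh(g * Lg))
    return abs(rho)**2

kL = 2.0
R_peak = fbg_reflectivity(kL, 0.0, 1.0)   # at zero detuning: R = tanh(kL)^2
print(R_peak, np.tanh(kL)**2)
```

    An acoustically induced strain perturbs kappa and sigma along the grating, which is what the transfer matrix method models piecewise and the Fourier mode coupling model handles more efficiently.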

  4. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units. These masonry units are developed into a monolithic structure by plastering, with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, pointing and plastering operations. The rheology of mortar includes flowing, passing and filling properties, which are analogous to the behaviour of self compacting concrete. In the self compacting (SC) mortar cubes, cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (in increments of 5%), metakaolin (MK) from 10% to 30% (in increments of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (in increments of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self compacting mortar mixes. Accelerated curing, namely electric oven curing at a temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained under both normal and electric oven curing was higher for self compacting mortar cubes than for normal mortar cubes. Cement replacement with 15% SF, 20% MK and 25% GGBS gave the highest strengths under both curing conditions.

  5. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system. In the current study, we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including a resistor, capacitor, and inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. Firstly, the normal operation of the system is shown, and then stenosis of the cerebral arteries is induced in the circuit and its effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of its pathologies. PMID:25505747
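
    The electrical analogy can be illustrated far more simply than the 63-compartment model: a two-element Windkessel treats vascular compliance as a capacitor and peripheral resistance as a resistor, driven by a pulsatile inflow (current source). A minimal sketch with illustrative parameter values, not those of the paper:

```python
import math

# Toy electrical-analogy sketch (two-element Windkessel, far simpler than
# the paper's 63-compartment model): C * dP/dt = Q(t) - P/R.
# All parameter values are illustrative only.
R, C = 1.0, 1.0            # peripheral resistance [mmHg*s/mL], compliance [mL/mmHg]
Q0 = 5.0                   # mean inflow [mL/s]
dt, T = 1e-3, 20.0
P, t = 0.0, 0.0
trace = []
while t < T:
    Q = Q0 * (1.0 + math.sin(2.0 * math.pi * t))   # 1 Hz pulsatile inflow
    P += dt * (Q - P / R) / C                      # explicit Euler update
    t += dt
    trace.append(P)

mean_P = sum(trace[-1000:]) / 1000.0   # average over the last second (one period)
print(mean_P)                          # approaches the steady mean R*Q0 = 5 mmHg
```

    Each artery in the full model adds an inductor (blood inertia) and more such compartments in series and parallel, which is what the Simulink circuit assembles.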

  6. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that the data differences in sparse and noisy dimensions account for a large proportion of the similarity measure, making the dissimilarities between any two points nearly indistinguishable. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude greater than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
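
    The interval-mapping idea can be sketched directly. The details below are a plausible reading of the abstract, not the authors' exact formulation: each component is mapped to an interval index, and only components falling in the same or an adjacent interval contribute, with the result normalized to [0, 1].

```python
# Sketch of the net-lattice idea (a plausible reading, not the authors'
# exact formulation): split each dimension's range into m intervals and
# count only components lying in the same or an adjacent interval.
def interval_index(x, lo, hi, m):
    if hi == lo:
        return 0
    i = int((x - lo) / (hi - lo) * m)
    return min(max(i, 0), m - 1)      # clip boundary values into [0, m-1]

def lattice_similarity(a, b, lo=0.0, hi=1.0, m=10):
    hits = 0
    for xa, xb in zip(a, b):
        if abs(interval_index(xa, lo, hi, m) - interval_index(xb, lo, hi, m)) <= 1:
            hits += 1
    return hits / len(a)              # normalized to [0, 1]

a = [0.1, 0.5, 0.9, 0.3]
print(lattice_similarity(a, a))              # identical vectors -> 1.0
print(lattice_similarity([0.0] * 4, [1.0] * 4))  # distant cells -> 0.0
```

    Because sparse or noisy dimensions whose components land in distant cells simply contribute nothing, they cannot dominate the measure the way they do in a Euclidean distance.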

  7. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, D

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T), and the percentage of seminiferous tubules containing spermatogonia (the fertility index (FI)) were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation-40 years. S/T and FI showed similar changes during the whole period, and were minimal between 1 and 4 years. The number of spermatogonia per testis (S/testis) and the number of spermatogonia per cm3 testis tissue (S/cm3) were estimated by stereological methods in the same testes. S/T and FI respectively were significantly correlated both to S/testis and S/cm3. So...

  8. Molecular Structures, Vibrational Spectroscopy, and Normal-Mode Analysis of M(2)(C≡CR)(4)(PMe(3))(4) Dimetallatetraynes. Observation of Strongly Mixed Metal-Metal and Metal-Ligand Vibrational Modes.

    Science.gov (United States)

    John, Kevin D.; Miskowski, Vincent M.; Vance, Michael A.; Dallinger, Richard F.; Wang, Louis C.; Geib, Steven J.; Hopkins, Michael D.

    1998-12-28

    The nature of the skeletal vibrational modes of complexes of the type M(2)(C≡CR)(4)(PMe(3))(4) (M = Mo, W; R = H, Me, Bu(t), SiMe(3)) has been deduced. Metrical data from X-ray crystallographic studies of Mo(2)(C≡CR)(4)(PMe(3))(4) (R = Me, Bu(t), SiMe(3)) and W(2)(C≡CMe)(4)(PMe(3))(4) reveal that the core bond distances and angles are within normal ranges and do not differ in a statistically significant way as a function of the alkynyl substituent, indicating that their associated force constants should be similarly invariant among these compounds. The crystal structures of Mo(2)(C≡CSiMe(3))(4)(PMe(3))(4) and Mo(2)(C≡CBu(t))(4)(PMe(3))(4) are complicated by 3-fold disorder of the Mo(2) unit within apparently ordered ligand arrays. Resonance-Raman spectra ((1)(delta-->delta) excitation, THF solution) of Mo(2)(C≡CSiMe(3))(4)(PMe(3))(4) and its isotopomers (PMe(3)-d(9), C≡CSiMe(3)-d(9), (13)C≡(13)CSiMe(3)) exhibit resonance-enhanced bands due to a(1)-symmetry fundamentals (nu(a) = 362, nu(b) = 397, nu(c) = 254 cm(-1) for the natural-abundance complex) and their overtones and combinations. The frequencies and relative intensities of the fundamentals are highly sensitive to isotopic substitution of the C≡CSiMe(3) ligands, but are insensitive to deuteration of the PMe(3) ligands. Nonresonance-Raman spectra (FT-Raman, 1064 nm excitation, crystalline samples) for the Mo(2)(C≡CSiMe(3))(4)(PMe(3))(4) compounds and for Mo(2)(C≡CR)(4)(PMe(3))(4) (R = H, D, Me, Bu(t), SiMe(3)) and W(2)(C≡CMe)(4)(PMe(3))(4) exhibit nu(a), nu(b), and nu(c) and numerous bands due to alkynyl- and phosphine-localized modes, the latter of which are assigned by comparisons to FT-Raman spectra of Mo(2)X(4)L(4) (X = Cl, Br, I; L = PMe(3), PMe(3)-d(9)) and Mo(2)Cl(4)(AsMe(3))(4). Valence force-field normal-coordinate calculations on the model compound Mo(2)(C≡CH)(4)P(4), using core force constants transferred from a calculation

  9. Fluxgate magnetometer offset vector determination by the 3D mirror mode method

    Science.gov (United States)

    Plaschke, F.; Goetz, C.; Volwerk, M.; Richter, I.; Frühauff, D.; Narita, Y.; Glassmeier, K.-H.; Dougherty, M. K.

    2017-07-01

    Fluxgate magnetometers on-board spacecraft need to be regularly calibrated in flight. In low fields, the most important calibration parameters are the three offset vector components, which represent the magnetometer measurements in vanishing ambient magnetic fields. In case of three-axis stabilized spacecraft, a few methods exist to determine offsets: (I) by analysis of Alfvénic fluctuations present in the pristine interplanetary magnetic field, (II) by rolling the spacecraft around at least two axes, (III) by cross-calibration against measurements from electron drift instruments or absolute magnetometers, and (IV) by taking measurements in regions of well-known magnetic fields, e.g. cometary diamagnetic cavities. In this paper, we introduce a fifth option, the 3-dimensional (3D) mirror mode method, by which 3D offset vectors can be determined using magnetic field measurements of highly compressional waves, e.g. mirror modes in the Earth's magnetosheath. We test the method by applying it to magnetic field data measured by the following: the Time History of Events and Macroscale Interactions during Substorms-C spacecraft in the terrestrial magnetosheath, the Cassini spacecraft in the Jovian magnetosheath and the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. The tests reveal that the achievable offset accuracies depend on the ambient magnetic field strength (lower strength meaning higher accuracy), on the length of the underlying data interval (more data meaning higher accuracy) and on the stability of the offset that is to be determined.

  10. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    Science.gov (United States)

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ™ System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in a nearly complete automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the

  11. A Bloch modal approach for engineering waveguide and cavity modes in two-dimensional photonic crystals

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    uses no external excitation and determines the quasi-normal modes as unity eigenvalues of the cavity roundtrip matrix. We demonstrate the method and the quasi-normal modes for two types of two-dimensional photonic crystal structures, and discuss the quasi-normal mode field distributions and Q-factors...
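
    The unity-eigenvalue condition can be illustrated with a scalar toy model in place of the paper's Bloch-mode roundtrip matrix: for a Fabry-Perot cavity the roundtrip factor is r1·r2·exp(2iωL/c), and a quasi-normal mode is a complex frequency ω at which this factor equals one. A sketch using Newton iteration, with all quantities normalized and illustrative:

```python
import cmath
import math

# Toy version of the roundtrip idea (scalar Fabry-Perot, not the paper's
# Bloch-mode matrices): a quasi-normal mode is a complex frequency w at
# which the roundtrip factor r1*r2*exp(2j*w*L/c) equals one.
r1 = r2 = 0.99           # mirror amplitude reflectivities (illustrative)
L, c = 1.0, 1.0          # normalized cavity length and speed of light

def f(w):                # roundtrip factor minus one
    return r1 * r2 * cmath.exp(2j * w * L / c) - 1.0

def df(w):
    return r1 * r2 * (2j * L / c) * cmath.exp(2j * w * L / c)

w = math.pi - 0.1j       # initial guess near the m = 1 resonance
for _ in range(50):      # Newton iteration on the complex frequency
    w = w - f(w) / df(w)

# Analytic solution for this toy cavity: Re(w) = m*pi*c/L and
# Im(w) = (c/2L)*ln(r1*r2) < 0, i.e. a decaying quasi-normal mode.
w_exact = math.pi + 0.5j * math.log(r1 * r2)
print(w, w_exact)
```

    The negative imaginary part encodes the leakage rate; the Q-factor follows as Re(w)/(2·|Im(w)|), which is what the roundtrip-matrix formulation extracts for the photonic crystal cavities.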

  12. Dual-mode nested search method for categorical uncertain multi-objective optimization

    Science.gov (United States)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  13. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% of the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). In contrast, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  14. Probable neuro sexual mode of action of Casimiroa edulis seed extract versus sildenafil citrate (Viagra™) on mating behavior in normal male rats.

    Science.gov (United States)

    Ali, Syed Tabrez; Rakkah, Nabeeh I

    2008-01-01

    The present study deals with the aphrodisiac actions of the aqueous extract of the seeds of the hypotensive plant Casimiroa edulis on the sexual behavior of normal male rats. In this investigation, 30 healthy male Wistar strain white albino rats showing brisk sexual activity, aged 15 weeks and weighing 400-450 grams, were included. Female rats were artificially brought into estrous by hormonal treatment. Receptivity was checked by exposing them to the male rats, and the most receptive females were selected for the study. The mating responses, including Mounting Frequency (MF), Intromission Frequency (IF), Mounting Latency (ML), Intromission Latency (IL), Ejaculatory Latency in the first and second series (EL1 and EL2) and Post Ejaculatory Interval (PEI), were recorded after treating the animals orally per day for 7 days with 250 mg/kg Casimiroa edulis extract (test reference) and 5 mg/kg sildenafil citrate (standard reference), respectively. Both groups exhibited a significant increase in Mounting Frequency, Intromission Frequency, and first and second ejaculatory latencies, whereas the Mounting and Intromission latencies and the Post Ejaculatory Interval showed a significant reduction compared with the controls. Although a similar pattern of mating behavior was observed in the test and the standard groups, in all cases, as expected, sildenafil produced greater activity than the Casimiroa edulis extract. These results suggest the possibility of a similar mode of action of Casimiroa edulis and sildenafil citrate on mating behavior in these animals. Our work thus provides preliminary evidence that the aqueous seed extract of Casimiroa edulis possesses aphrodisiac activity and may be used as an alternative drug therapy to restore sexual function, probably via a neurogenic mode of action.

  15. Usage of the Failure Mode & Effect Analysis Method (FMEA) for safety assessment in a drug manufacturer

    Directory of Open Access Journals (Sweden)

    Y Nazari

    2006-04-01

    Full Text Available Background and Aims: This study was conducted with the purpose of recognizing and controlling workplace hazards in the production units of a drug manufacturer. Method: For the recognition and assessment of hazards, the FMEA method was used. FMEA systematically investigates the effects of equipment and system failures, often leading to equipment design improvements. At first, the level of the study was defined as the system. Then, according to observations, accident statistics, and interviews with managers, supervisors, and workers, high-risk systems were determined. The boundaries of each system were established and information regarding the relevant components, their functions and interactions was gathered. To prevent confusion between similar pieces of equipment, a unique system identifier was developed. After that, all failure modes and their causes for each piece of equipment or system were listed; the immediate effects of each failure mode and the interactive effects on other equipment or systems were described too. The risk priority number was determined according to global and local criteria. Results: Finally, actions and solutions were proposed to reduce the likelihood and severity of failures and raise their detectability. Conclusion: This study illustrated that although at first sight a drug manufacturer may seem safe, there are still many hazardous conditions that could cause serious accidents. The results suggest it is necessary: (1) to develop a comprehensive manual for the periodic and regular inspection of workplace instruments in order to recognize unknown failures and their causes, (2) to develop a comprehensive program for system maintenance and repair, and (3) to conduct worker training.

  16. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
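
    As a minimal illustration of the DMD ingredient (on synthetic data from a known linear system, not on a porous-media flow), the standard "exact DMD" recipe is: split the snapshots into shifted pairs, project onto the leading POD modes via the SVD, and take the eigenvalues of the projected one-step operator:

```python
import numpy as np

# Minimal "exact DMD" sketch (POD via SVD + linear operator fit), applied
# to synthetic data from a known linear system; not the GMsFEM coupling.
rng = np.random.default_rng(0)
A_true = np.diag([0.9, 0.5])                 # known dynamics eigenvalues
P = rng.standard_normal((10, 2))             # lift states into 10 dimensions
x = np.array([1.0, 1.0])
states = []
for _ in range(21):                          # 21 snapshots of x_{k+1} = A x_k
    states.append(P @ x)
    x = A_true @ x
S = np.column_stack(states)

X, Y = S[:, :-1], S[:, 1:]                   # shifted snapshot pairs
U, sv, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                        # truncation rank (known here)
U, sv, Vh = U[:, :r], sv[:r], Vh[:r, :]
A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / sv)
dmd_eigs = np.sort(np.linalg.eigvals(A_tilde).real)
print(dmd_eigs)                              # recovers the eigenvalues 0.5 and 0.9
```

    In the global-local setting of the paper, the snapshots themselves come from inexpensive GMsFEM coarse-grid solves, which is where the additional speed-up originates.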

  17. SERS-Fluorescence Dual-Mode pH-Sensing Method Based on Janus Microparticles.

    Science.gov (United States)

    Yue, Shuai; Sun, Xiaoting; Wang, Ning; Wang, Yaning; Wang, Yue; Xu, Zhangrun; Chen, Mingli; Wang, Jianhua

    2017-11-15

    A surface-enhanced Raman scattering (SERS)-fluorescence dual-mode pH-sensing method based on Janus microgels was developed, which combined the advantages of high specificity offered by SERS and fast imaging afforded by fluorescence. Dual-mode probes, pH-dependent 4-mercaptobenzoic acid, and carbon dots were individually encapsulated in the independent hemispheres of Janus microparticles fabricated via a centrifugal microfluidic chip. On the basis of the obvious volumetric change of hydrogels in different pHs, the Janus microparticles were successfully applied for sensitive and reliable pH measurement from 1.0 to 8.0, and the two hemispheres showed no obvious interference. The proposed method addressed the limitation that sole use of the SERS-based pH sensing usually failed in strong acidic media. The gastric juice pH and extracellular pH change were measured separately in vitro using the Janus microparticles, which confirmed the validity of microgels for pH sensing. The microparticles exhibited good stability, reversibility, biocompatibility, and ideal semipermeability for avoiding protein contamination, and they have the potential to be implantable sensors to continuously monitor pH in vivo.

  19. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    Science.gov (United States)

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and development of effective therapeutics. In this paper we present an in silico strategy, to identify the mode of action of toxic compounds, that is based on the use of a novel logic based kernel method. The technique uses support vector machines in conjunction with the kernels constructed from first order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide and conquer reduction strategy that splits multi-classes into binary groups and solves each individual problem recursively hence generating an underlying decision list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to the mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
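
    The divide-and-conquer reduction itself is easy to sketch: classes are peeled off one at a time into a decision list of binary "this class vs. the rest" problems. Below, a toy nearest-centroid rule stands in for the paper's ILP-derived kernels and support vector machines; only the reduction structure is the point, and the data are synthetic.

```python
import numpy as np

# Sketch of the divide-and-conquer reduction only: a decision list of
# binary "this class vs. the rest" problems. A toy nearest-centroid rule
# stands in for the paper's ILP-kernel SVM; the data are synthetic.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([ctr + 0.3 * rng.standard_normal((50, 2)) for ctr in centers])
y = np.repeat([0, 1, 2], 50)

def train_binary(X, y, cls):
    # "classifier": centroid of the class vs. centroid of everything else
    return X[y == cls].mean(axis=0), X[y != cls].mean(axis=0)

decision_list = []
Xr, yr = X, y
for cls in [0, 1]:                   # peel classes off one at a time
    decision_list.append((cls, train_binary(Xr, yr, cls)))
    keep = yr != cls
    Xr, yr = Xr[keep], yr[keep]      # recurse on the remaining classes

def predict(x):
    for cls, (pos, neg) in decision_list:
        if np.linalg.norm(x - pos) < np.linalg.norm(x - neg):
            return cls               # first binary rule that fires wins
    return 2                         # default: the last remaining class

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(acc)                           # well-separated blobs -> high accuracy
```

    Replacing the centroid rule with an SVM over logic-derived kernels, as the paper does, changes each binary learner but leaves this decision-list structure intact.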

  20. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    Energy Technology Data Exchange (ETDEWEB)

    Snopok, Pavel [Michigan State Univ., East Lansing, MI (United States)

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator) then the motion in the new coordinates has a very clean representation allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings, and are applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors.
In addition to the fact that the dynamics of particles is represented

  1. Methods and Technologies of XML Data Modeling for IP Mode Intelligent Measuring and Controlling System

    International Nuclear Information System (INIS)

    Liu, G X; Hong, X B; Liu, J G

    2006-01-01

    This paper presents the IP mode intelligent measuring and controlling system (IMIMCS). Based on the object-oriented modeling technology of UML and XML Schema, the methods and technologies for some key problems of XML data modeling in the IMIMCS are discussed, including the refinement of the system's business by means of UML use-case diagrams, the confirmation of the content of the XML data model and the logical relationships of the XML Schema objects with the aid of UML class diagrams, and the mapping rules from the UML object model to XML Schema. Finally, the application of the XML-based IMIMCS to a modern greenhouse is presented. The results show that the modeling methods for the measuring and controlling data in the IMIMCS, involving a multi-layer structure and multiple operating systems, possess strong reliability and flexibility, guarantee the uniformity of complex XML documents and meet the requirements of cross-platform data communication.

  2. Research on Weak Fault Extraction Method for Alleviating the Mode Mixing of LMD

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2018-05-01

    Full Text Available Compared with the strong background noise, the energy of the early fault signals of bearings is weak under actual working conditions. Therefore, extracting the early fault features of bearings has always been a major difficulty in the fault diagnosis of rotating machinery. To address these problems, the masking method is introduced into the Local Mean Decomposition (LMD) process, and a weak fault extraction method based on LMD and a mask signal (MS) is proposed. Due to the mode mixing of the product function (PF) components decomposed by LMD against a noisy background, it is difficult to distinguish the authenticity of the fault frequency. Therefore, the MS method is introduced to deal with the PF components that are decomposed by the LMD and have a strong correlation with the original signal, so as to suppress the mode mixing phenomenon and extract the fault frequencies. In this paper, an actual fault signal of a rolling bearing is analyzed. By combining the MS method with the LMD method, the fault signal mixed with noise is processed. The kurtosis value at the fault frequency is increased eight-fold, and the signal-to-noise ratio (SNR) is increased by 19.1%. The fault signal is successfully extracted by the proposed composite method.

  3. Using the Jacobi-Davidson method to obtain the dominant Lambda modes of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)]. E-mail: gverdu@iqn.upv.es; Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Miro, R. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Vidal, V. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)

    2005-07-15

    The Jacobi-Davidson method is a modification of the Davidson method, which has been shown to be very effective for computing the dominant eigenvalues and their corresponding eigenvectors of a large, sparse matrix. This method has been used to compute the dominant Lambda modes of two configurations of the Cofrentes nuclear power reactor, proving to be quite effective, especially for perturbed configurations.
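As a hedged aside, the task of extracting only the dominant eigenpair of a large matrix can be sketched with plain power iteration; the Jacobi-Davidson method of the abstract converges far faster on large sparse problems, but the goal is the same:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Approximate the dominant eigenpair of A by repeated
    multiplication and normalization. A simple stand-in for
    illustration, not the Jacobi-Davidson algorithm itself."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues are 5 and 2
lam, v = power_iteration(A)              # lam converges to 5
```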

  4. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-01

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water, without suspending agents, using the electric spark discharge method at room temperature and under normal atmospheric pressure. This method is exceptional in that it requires no nanoparticle dispersion step and the produced colloids contain no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity 99.99%). The EDM pulse-on time (T_on) and pulse-off time (T_off) were set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs, respectively, to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in T_on increased the suspension stability, but there was no significant correlation between T_on and particle size. Colloids produced from the four production configurations showed minimum particle sizes between 29.39 and 52.85 nm and zeta potentials between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy-dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  5. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure.

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-17

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water, without suspending agents, using the electric spark discharge method at room temperature and under normal atmospheric pressure. This method is exceptional in that it requires no nanoparticle dispersion step and the produced colloids contain no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity 99.99%). The EDM pulse-on time (T_on) and pulse-off time (T_off) were set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs, respectively, to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in T_on increased the suspension stability, but there was no significant correlation between T_on and particle size. Colloids produced from the four production configurations showed minimum particle sizes between 29.39 and 52.85 nm and zeta potentials between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy-dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  6. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    Science.gov (United States)

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best-eye uncorrected visual acuity of 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using the Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light, and the CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. The index of contrast sensitivity was calculated based on data from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS values obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed, with a marked ceiling effect for the photopic tests. The median ICS for the Optec 6500 at 85 cd/m2 was -0.15 (95% percentile 0.45), compared with -0.00 (95% percentile 1.62) for the Optec at 3 cd/m2 and 0.30 (95% percentile 1.20) for the CSV-1000E. The mean difference between ICS_FACT85 and ICS_CSV was -0.43 (95% CI -0.56 to -0.30, p<0.00) with limits of agreement (LoA) within -2.10 and 1.22. The regression line of the differences on the averages was close to zero (R2=0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore possibly be useful for medical selection purposes.
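The Bland-Altman technique used above reduces to a bias (mean difference) and 95% limits of agreement on the paired differences. A minimal sketch with hypothetical paired ICS scores (the data below are illustrative, not from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements.
    Returns the bias (mean difference) and the 95% limits of
    agreement, bias +/- 1.96 * SD of the differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical ICS scores from two CS tests on the same five subjects
test1 = [0.10, -0.20, 0.30, 0.00, 0.25]
test2 = [0.15, -0.10, 0.20, 0.05, 0.30]
bias, lo, hi = bland_altman(test1, test2)
```

Agreement is judged by whether the limits of agreement are narrow enough for the two tests to be used interchangeably, which is exactly the criterion applied to the photopic tests above.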

  7. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we suggest a method of classifying females as normal or overweight according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21–40 (females aged 21–40 years) group, and an AUC value of 0.76 and a kappa value of 0.401 in the Female: 41–60 (females aged 41–60 years) group. In both groups, we found many features showing statistically significant differences between normal and overweight subjects using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues for the development of applications for alternative diagnosis of obesity in remote healthcare.
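The AUC figures quoted above can be read as a Mann-Whitney probability: the chance that a randomly chosen overweight subject receives a higher classifier score than a randomly chosen normal subject. A minimal sketch with toy scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case scores above a
    negative case, counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores: higher = more likely overweight
a = auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])   # 8 of 9 pairs ordered correctly
```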

  8. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    Science.gov (United States)

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) within scientific communities has grown over recent decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, so comparison between JIFs of different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works across different disciplines. The NIF index was established by multiplying the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005, 2006, 2007, 2008 and 2009. In addition, the rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances equality in assessing the quality of research works produced by researchers who work in different disciplines. Copyright © 2010 Elsevier Inc. All rights reserved.
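The idea of a discipline constant can be sketched as follows. This is an interpretive illustration, not the paper's actual constants: here the constant is chosen so that each discipline's mean JIF maps to a common baseline, making journals from citation-heavy and citation-light fields comparable:

```python
def normalized_impact_factor(jif, discipline_mean_jif, baseline=1.0):
    """Hedged sketch of NIF = JIF * constant, where the constant
    rescales a discipline's mean JIF to a shared baseline. The
    study derives its own constants per biomedical discipline."""
    k = baseline / discipline_mean_jif
    return jif * k

# A JIF of 3.0 in a discipline averaging 6.0 is equivalent, after
# normalization, to a JIF of 1.0 in a discipline averaging 2.0.
a = normalized_impact_factor(3.0, 6.0)
b = normalized_impact_factor(1.0, 2.0)
```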

  9. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment; they do not apply to acute releases resulting from accidents or emergency situations

  10. Component mode synthesis methods applied to 3D heterogeneous core calculations, using the mixed dual finite element solver MINOS

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P.; Baudron, A. M.; Lautard, J. J. [Commissariat a l' Energie Atomique, DEN/DANS/DM2S/SERMA/LENR, CEA Saclay, 91191 Gif sur Yvette (France)

    2006-07-01

    This paper describes a new technique for determining the pin power in heterogeneous core calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions: in the first one (Component Mode Synthesis method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (Factorized Component Mode Synthesis method), only the fundamental mode is computed, and a factorization principle for the flux is used in place of the higher-order eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well suited to heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher-order angular approximations - particularly easily to an SPN approximation - the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with UOX and MOX assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)
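The Galerkin step at the heart of component mode synthesis can be sketched in miniature: project the global operator onto the span of a small set of (local) basis functions and solve the resulting reduced eigenproblem. The operator and basis below are toy stand-ins, not a diffusion model:

```python
import numpy as np

def galerkin_dominant_mode(A, basis):
    """Project operator A onto an orthonormalized reduced basis and
    solve the small eigenproblem; the global fundamental mode is
    approximated within the span of the basis functions."""
    Q, _ = np.linalg.qr(basis)                 # orthonormalize basis columns
    A_red = Q.T @ A @ Q                        # Galerkin-projected operator
    vals, vecs = np.linalg.eig(A_red)
    k = np.argmax(vals.real)
    return vals[k].real, Q @ vecs[:, k].real   # lift mode back to full space

# Toy operator with dominant eigenvalue 5; the 2-column basis does not
# contain the exact dominant eigenvector, so the estimate falls below 5.
A = np.diag([1.0, 2.0, 5.0])
basis = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 3.0]])
lam, mode = galerkin_dominant_mode(A, basis)   # lam = 4.7 < 5
```

Enriching the basis with the right local functions (the point of the CMS and factorized CMS variants) closes this gap.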

  11. Component mode synthesis methods applied to 3D heterogeneous core calculations, using the mixed dual finite element solver MINOS

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2006-01-01

    This paper describes a new technique for determining the pin power in heterogeneous core calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions: in the first one (Component Mode Synthesis method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (Factorized Component Mode Synthesis method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher order Eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well-fitted for heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher order angular approximations - particularly easily to a SPN approximation - the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with UOX and MOX assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)

  12. Review of clinically accessible methods to determine lean body mass for normalization of standardized uptake values

    International Nuclear Information System (INIS)

    DEVRIESE, Joke; POTTEL, Hans; BEELS, Laurence; MAES, Alex; VAN DE WIELE, Christophe; GHEYSENS, Olivier

    2016-01-01

    With the routine use of 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans, the metabolic activity of tumors can be quantitatively assessed through calculation of SUVs. One possible normalization parameter for the standardized uptake value (SUV) is lean body mass (LBM), which is generally calculated through predictive equations based on height and body weight. (Semi-)direct measurements of LBM could provide more accurate results in cancer populations than predictive equations based on healthy populations. In this context, four methods to determine LBM are reviewed: bioelectrical impedance analysis, dual-energy X-ray absorptiometry, CT, and magnetic resonance imaging. These methods were selected based on clinical accessibility and are compared in terms of methodology, precision and accuracy. By assessing each method's specific advantages and limitations, a well-considered choice of method can hopefully lead to more accurate SUV_LBM values, and hence more accurate quantitative assessment of 18F-FDG PET images.
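The normalization in question can be made concrete. Below, the SUV formula is standard, and the LBM formula is one example of the predictive equations (Janmahasatian et al.) that the review contrasts with clinically measured LBM; treat the numeric coefficients as quoted from memory rather than from this review:

```python
def suv(conc_kbq_per_ml, injected_dose_mbq, norm_mass_kg):
    """Standardized uptake value with an arbitrary normalization mass:
    body weight gives the conventional SUV, lean body mass gives
    SUV_LBM (often called SUL)."""
    return conc_kbq_per_ml * norm_mass_kg / injected_dose_mbq

def lbm_janmahasatian(weight_kg, height_m, male=True):
    """Predictive LBM equation (Janmahasatian et al.); the review
    discusses measured alternatives to formulas like this one."""
    bmi = weight_kg / height_m ** 2
    if male:
        return 9270.0 * weight_kg / (6680.0 + 216.0 * bmi)
    return 9270.0 * weight_kg / (8780.0 + 244.0 * bmi)

lbm = lbm_janmahasatian(80.0, 1.80)      # roughly 62 kg for an 80 kg male
sul = suv(5.0, 300.0, lbm)               # SUV normalized by LBM
suv_bw = suv(5.0, 300.0, 80.0)           # conventional body-weight SUV
```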

  13. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of the elderly on the CDT and to evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  14. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For a simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this semi-iterative technique, known as the conjugate gradient (CG) method, the original observation equations are used directly, so the explicit formation of normal equations is avoided, saving substantial computer storage space in the case of triangulation systems. The method is suitable even for very poorly conditioned systems, where a solution is obtained only after additional iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was carried out, which also considered constraint equations together with observation equations. It was programmed and tested on systems ranging from as small as two unknowns and three equations up to as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
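The core idea, solving the least-squares problem with only products by A and its transpose so the normal matrix A^T A is never formed or stored, survives today as the CGLS algorithm. A minimal sketch on a toy overdetermined system (not the paper's geodetic code):

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-12):
    """Conjugate-gradient least squares: minimizes ||Ax - b|| using
    only matrix-vector products with A and A.T, never forming
    A.T @ A explicitly -- the storage-saving idea of the abstract."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # residual in observation space
    s = A.T @ r              # gradient of the least-squares objective
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Overdetermined toy system: 3 observation equations, 2 unknowns
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = cgls(A, b)   # least-squares solution is [1, 2]
```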

  15. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can degrade the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
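Once the extended-range spot map is assembled, each spot centroid is typically found by a center-of-mass estimate. The sketch below illustrates that step on a synthetic spot; the normalization and the 0.2 background threshold are illustrative choices, not the paper's parameters:

```python
import numpy as np

def spot_centroid(window):
    """Center-of-mass centroid of one Shack-Hartmann spot window,
    after normalizing intensities to [0, 1] and suppressing low-level
    background (e.g. residual speckle). Threshold is illustrative."""
    w = np.asarray(window, float)
    w = (w - w.min()) / (w.max() - w.min())   # normalize intensities
    w[w < 0.2] = 0.0                          # suppress background
    total = w.sum()
    ys, xs = np.indices(w.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total

# Synthetic Gaussian spot centered at (2, 3) in a 5x7 window
y, x = np.mgrid[0:5, 0:7]
spot = np.exp(-((y - 2.0) ** 2 + (x - 3.0) ** 2))
cy, cx = spot_centroid(spot)   # recovers (2.0, 3.0)
```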

  16. A Comparative Study of Voltage, Peak Current and Dual Current Mode Control Methods for Noninverting Buck-Boost Converter

    Directory of Open Access Journals (Sweden)

    M. Č. Bošković

    2016-06-01

    Full Text Available This paper presents a comparison of voltage mode control (VMC) and two current mode control (CMC) methods for the noninverting buck-boost converter. The converter's control-to-output transfer function, line-to-output transfer function and output impedance are obtained for all methods by averaging the converter equations over one switching period and applying small-signal linearization. The obtained results are required for the design procedure of a feedback compensator that keeps the system stable and robust. A comparative study of VMC, peak current mode control (PCMC) and dual current mode control (DCMC) is performed, and the closed-loop performance achieved with the designed compensators is evaluated for each method via numerical simulations.

  17. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
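A minimal sketch of the GMPR idea, written from the description above rather than from the authors' code: for each pair of samples, take the median count ratio over taxa present in both (so zeros never enter a ratio), then set each sample's size factor to the geometric mean of its pairwise medians:

```python
import numpy as np

def gmpr_size_factors(counts):
    """GMPR size factors for a taxa x samples count matrix.
    Zeros are excluded pairwise: each median ratio uses only taxa
    observed in both samples. The geometric mean is taken over all
    samples (the trivial self-ratio contributes log 1 = 0)."""
    counts = np.asarray(counts, float)
    n = counts.shape[1]
    factors = np.zeros(n)
    for i in range(n):
        logs = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[:, i] > 0) & (counts[:, j] > 0)
            if shared.any():
                ratio = np.median(counts[shared, i] / counts[shared, j])
                logs.append(np.log(ratio))
        factors[i] = np.exp(np.sum(logs) / n)
    return factors

# Sample 2 is sample 1 sequenced twice as deeply, plus zero-inflation
counts = np.array([[10, 20], [4, 8], [0, 6], [3, 0]])
f = gmpr_size_factors(counts)   # f[1]/f[0] recovers the depth ratio 2
```

Dividing each sample's counts by its size factor then puts the shared taxa on a common scale despite the zeros.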

  18. Circular mode: a new scanning probe microscopy method for investigating surface properties at constant and continuous scanning velocities.

    Science.gov (United States)

    Nasrallah, Hussein; Mazeran, Pierre-Emmanuel; Noël, Olivier

    2011-11-01

    In this paper, we introduce a novel scanning probe microscopy mode, called the circular mode, which offers expanded capabilities for surface investigations, especially for measuring physical properties that require high scanning velocities and/or continuous displacement with no rest periods. To achieve these specific conditions, we have implemented a circular horizontal displacement of the probe relative to the sample plane, so that the relative probe displacement follows a circular path rather than the conventional back-and-forth linear one. The circular mode offers advantages such as high and constant scanning velocities, the possibility of being combined with other classical operating modes, and a simpler calibration method for the actuators generating the relative displacement. As application examples of this mode, we report its ability to (1) investigate the influence of scanning velocity on adhesion forces, (2) measure the friction coefficient easily and instantly, and (3) generate wear tracks very rapidly for tribological investigations. © 2011 American Institute of Physics.

  19. Partition functions with spin in AdS2 via quasinormal mode methods

    International Nuclear Information System (INIS)

    Keeler, Cynthia; Lisbão, Pedro; Ng, Gim Seng

    2016-01-01

    We extend the results of http://dx.doi.org/10.1007/JHEP06(2014)099, computing one loop partition functions for massive fields with spin half in AdS2 using the quasinormal mode method proposed by Denef, Hartnoll, and Sachdev http://dx.doi.org/10.1088/0264-9381/27/12/125001. We find the finite representations of SO(2,1) for spin zero and spin half, consisting of a highest weight state |h〉 and descendants with non-unitary values of h. These finite representations capture the poles and zeroes of the one loop determinants. Together with the asymptotic behavior of the partition functions (which can be easily computed using a large mass heat kernel expansion), these are sufficient to determine the full answer for the one loop determinants. We also discuss extensions to higher dimensional AdS2n and higher spins.

  20. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    Science.gov (United States)

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by considerably reducing the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and in liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the governing partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effects of different parameters on the frequency response of the system are investigated.

  1. Effect of tidal triggering on seismicity in Taiwan revealed by the empirical mode decomposition method

    Directory of Open Access Journals (Sweden)

    H.-J. Chen

    2012-07-01

    Full Text Available The effect of tidal triggering on earthquake occurrence has been controversial for many years. This study considered earthquakes that occurred near Taiwan between 1973 and 2008. Because earthquake data are nonlinear and non-stationary, we applied the empirical mode decomposition (EMD) method to analyze the temporal variations in the number of daily earthquakes and investigate the effect of tidal triggering. We compared the results obtained from the non-declustered catalog with those from two kinds of declustered catalogs, and discuss the aftershock effect on the EMD-based analysis. We also investigated stacking the data based on in-phase phenomena of theoretical Earth tides, with statistical significance tests. Our results show that the effects of tidal triggering, particularly the lunar tidal effect, can be extracted from the raw seismicity data using the approach proposed here. Our results suggest that the lunar tidal force is likely a factor in the triggering of earthquakes.

  2. Method and apparatus for controlling a powertrain system including a multi-mode transmission

    Science.gov (United States)

    Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.

    2015-09-08

    A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.
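As an illustrative sketch only (the patent's control law is not public in this abstract), a single step of a closed-loop PI speed controller whose torque command is clamped when the power storage device nears its limit might look like this; the gains, limit, and anti-windup choice are assumptions:

```python
def speed_control_step(speed_err, integral, torque_limit,
                       kp=5.0, ki=1.0, dt=0.01):
    """One PI step toward the desired input speed, with the torque
    command clamped to a power-derived limit, mirroring the
    power-limited torque commands in the abstract. Illustrative only."""
    integral += speed_err * dt
    torque = kp * speed_err + ki * integral
    clamped = max(-torque_limit, min(torque_limit, torque))
    if clamped != torque:
        integral -= speed_err * dt   # anti-windup: freeze integrator at the limit
    return clamped, integral

# A 10 rad/s speed error requests ~50 N*m, above the 40 N*m limit,
# so the command is clamped and the integrator is held.
cmd, integ = speed_control_step(speed_err=10.0, integral=0.0,
                                torque_limit=40.0)
```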

  3. Mode decomposition methods for flows in high-contrast porous media. A global approach

    KAUST Repository

    Ghommem, Mehdi; Calo, Victor M.; Efendiev, Yalchin R.

    2014-01-01

    We apply dynamic mode decomposition (DMD) and proper orthogonal decomposition (POD) methods to flows in highly-heterogeneous porous media to extract the dominant coherent structures and derive reduced-order models via Galerkin projection. Permeability fields with high contrast are considered to investigate the capability of these techniques to capture the main flow features and forecast the flow evolution within a certain accuracy. A DMD-based approach shows a better predictive capability due to its ability to accurately extract the information relevant to long-time dynamics, in particular, the slowly-decaying eigenmodes corresponding to largest eigenvalues. Our study enables a better understanding of the strengths and weaknesses of the applicability of these techniques for flows in high-contrast porous media. Furthermore, we discuss the robustness of DMD- and POD-based reduced-order models with respect to variations in initial conditions, permeability fields, and forcing terms. © 2013 Elsevier Inc.
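The DMD step described above admits a compact sketch: split the snapshot sequence into shifted matrices, project the one-step operator through a truncated SVD, and read off eigenvalues and modes (Schmid's exact DMD; the toy data below are not a porous-media flow):

```python
import numpy as np

def dmd_eigs(snapshots, rank):
    """Exact DMD: eigenvalues and modes of the best-fit linear
    operator mapping each snapshot to the next, computed through a
    rank-truncated SVD of the first snapshot matrix."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vh[:rank].conj().T
    Atilde = Ur.conj().T @ X2 @ Vr / sr   # projected one-step operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vr / sr @ W              # exact DMD modes
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k: DMD recovers A's eigenvalues,
# i.e. the decay rates of the slowly-decaying eigenmodes.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A @ x
    snaps.append(x)
eigvals, _ = dmd_eigs(np.column_stack(snaps), rank=2)
```

The eigenvalues closest to the unit circle are the slowly-decaying modes the abstract credits for DMD's better long-time predictions.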

  4. Partition functions with spin in AdS{sub 2} via quasinormal mode methods

    Energy Technology Data Exchange (ETDEWEB)

    Keeler, Cynthia [Niels Bohr International Academy, Niels Bohr Institute,University of Copenhagen, Blegdamsvej 17, DK 2100, Copenhagen (Denmark); Lisbão, Pedro [Department of Physics, University of Michigan,Ann Arbor, MI-48109 (United States); Ng, Gim Seng [Department of Physics, McGill University,Montréal, QC H3A 2T8 (Canada)

    2016-10-12

    We extend the results of http://dx.doi.org/10.1007/JHEP06(2014)099, computing one loop partition functions for massive fields with spin half in AdS{sub 2} using the quasinormal mode method proposed by Denef, Hartnoll, and Sachdev http://dx.doi.org/10.1088/0264-9381/27/12/125001. We find the finite representations of SO(2,1) for spin zero and spin half, consisting of a highest weight state |h〉 and descendants with non-unitary values of h. These finite representations capture the poles and zeroes of the one loop determinants. Together with the asymptotic behavior of the partition functions (which can be easily computed using a large mass heat kernel expansion), these are sufficient to determine the full answer for the one loop determinants. We also discuss extensions to higher dimensional AdS{sub 2n} and higher spins.

  5. Simulation of neoclassical tearing mode stabilization via minimum seeking method on ITER

    Energy Technology Data Exchange (ETDEWEB)

    Park, M. H.; Kim, K.; Na, D. H.; Byun, C. S.; Na, Y. S. [Seoul National Univ., Seoul (Korea, Republic of); Kim, M. [FNC Technology Co. Ltd, Yongin (Korea, Republic of)

    2016-10-15

    Neoclassical tearing modes (NTMs) are well-known resistive magnetohydrodynamic (MHD) instabilities, sustained by a helically perturbed bootstrap current. NTMs produce magnetic islands in tokamak plasmas that can degrade confinement and lead to plasma disruption. For this reason, the stabilization of NTMs is one of the key issues for tokamaks aiming at high fusion performance, such as ITER. Compensating for the lack of bootstrap current with an Electron Cyclotron Current Drive (ECCD) has been proved experimentally to be an effective method of stabilizing NTMs. To stabilize NTMs it is important to reduce the misalignment, since ECCD can even destabilize NTMs when the misalignment is large. A feedback control method that does not fully require delicate and accurate real-time measurements and calculations, such as equilibrium reconstruction and EC ray-tracing, has also been proposed. One such feedback control method is the minimum seeking method, which minimizes the island width by tuning the misalignment, assuming that the magnetic island width is a function of the misalignment. As a robust and simple method of controlling NTMs, minimum 'island width growth rate' seeking control is proposed and compared with the performance of minimum 'island width' seeking control. In the integrated numerical system, simulations of NTM suppression are performed with two types of minimum seeking controllers: one is an FDM-based minimum seeking controller and the other is a sinusoidal-perturbation-based minimum seeking method. Full suppression is achieved with both types of controller. The controllers adjust the poloidal angle of the EC beam and reduce the misalignment to zero. The sinusoidal-perturbation-based minimum seeking control requires modification of the adaptive gain.
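The finite-difference flavor of minimum seeking can be sketched under the simplifying assumption that the island width is a smooth function of misalignment with a single minimum; the parabolic cost below is a toy stand-in, not a plasma model:

```python
def minimum_seeking(cost, x0, step=0.1, delta=0.01, iters=200):
    """Finite-difference minimum-seeking loop: estimate the local
    slope of the cost (standing in for island width vs. EC beam
    misalignment) and step downhill, with no model of the plant."""
    x = x0
    for _ in range(iters):
        grad = (cost(x + delta) - cost(x - delta)) / (2 * delta)
        x -= step * grad
    return x

# Toy island width, minimized at zero misalignment
island_width = lambda m: 1.0 + 4.0 * m ** 2
m = minimum_seeking(island_width, x0=0.5)   # converges to m = 0
```

The sinusoidal-perturbation variant replaces the explicit finite difference with a continuous dither and demodulation, which is why its adaptive gain needs separate tuning.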

  6. Study of Wave-Particle Interactions for Whistler Mode Waves at Oblique Angles by Utilizing the Gyroaveraging Method

    Science.gov (United States)

    Hsieh, Yi-Kai; Omura, Yoshiharu

    2017-10-01

    We investigate the properties of whistler mode wave-particle interactions at oblique wave normal angles to the background magnetic field. We find that the electromagnetic energy of waves at frequencies below half the electron cyclotron frequency can flow nearly parallel to the ambient magnetic field. We thereby confirm that the gyroaveraging method, which averages the cyclotron motion to the gyrocenter and reduces the simulation from two-dimensional to one-dimensional, is valid for oblique wave-particle interactions. Multiple resonances appear for oblique propagation but not for parallel propagation. We calculate the possible range of resonances with the first-order resonance condition as a function of electron kinetic energy and equatorial pitch angle. To reveal the physical process and the efficiency of electron acceleration by multiple resonances, we assume a simple uniform wave model with constant amplitude and frequency in space and time. We perform test particle simulations with electrons starting at specific equatorial pitch angles and kinetic energies. The simulation results show that multiple resonances contribute to the acceleration and pitch angle scattering of energetic electrons. In particular, we find that electrons with energies of a few hundred keV can be accelerated efficiently to a few MeV through the n = 0 Landau resonance.
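    The resonance condition mentioned above can be sketched in its simplest form. The paper's treatment is relativistic and oblique; the toy calculation below is a nonrelativistic, parallel-propagation sketch using the cold-plasma whistler dispersion relation, with all plasma parameters chosen arbitrarily for illustration.

    ```python
    import math

    C = 2.998e8  # speed of light, m/s

    def whistler_wavenumber(omega, wpe, wce):
        """Cold-plasma dispersion for parallel whistler waves:
        (k*c/omega)**2 = 1 + wpe**2 / (omega * (wce - omega)), valid for 0 < omega < wce."""
        n_sq = 1.0 + wpe ** 2 / (omega * (wce - omega))
        return omega * math.sqrt(n_sq) / C

    def resonance_velocity(omega, k_par, wce, n):
        """Nonrelativistic resonance condition omega - k_par*v = n*wce,
        solved for the resonant parallel velocity v = (omega - n*wce) / k_par."""
        return (omega - n * wce) / k_par

    wce = 6.0e4        # electron cyclotron frequency (rad/s), illustrative
    wpe = 4.0 * wce    # plasma frequency, illustrative
    omega = 0.4 * wce  # wave below half the cyclotron frequency
    k = whistler_wavenumber(omega, wpe, wce)
    v_landau = resonance_velocity(omega, k, wce, 0)  # n = 0 Landau: the parallel phase speed
    v_cyclo = resonance_velocity(omega, k, wce, 1)   # n = 1 cyclotron: counter-streaming
    ```

    The n = 0 Landau resonance selects electrons co-moving at the wave phase speed, while the n = 1 cyclotron resonance requires counter-streaming electrons (negative parallel velocity), consistent with the multiple-resonance picture in the abstract.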

  7. Plasma Modes

    Science.gov (United States)

    Dubin, D. H. E.

    This chapter explores several aspects of the linear electrostatic normal modes of oscillation for a single-species non-neutral plasma in a Penning trap. Linearized fluid equations of motion are developed, assuming the plasma is cold and collisionless, which allow derivation of the cold plasma dielectric tensor and the electrostatic wave equation. Upper hybrid and magnetized plasma waves in an infinite uniform plasma are described. The effect of the plasma surface in a bounded plasma system is considered, and the properties of surface plasma waves are characterized. The normal modes of a cylindrical plasma column are discussed, and finally, modes of spheroidal plasmas, and finite temperature effects on the modes, are briefly described.

  8. Analytical energy gradient for the two-component normalized elimination of the small component method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)]

    2015-06-07

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  10. A Gauss-Newton method for the integration of spatial normal fields in shape Space

    KAUST Repository

    Balzer, Jonathan

    2011-08-09

    We address the task of adjusting a surface to a vector field of desired surface normals in space. The described method is entirely geometric in the sense that it does not depend on a particular parametrization of the surface in question. It amounts to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly more robust and efficient, have not been attempted as they require second-order Hadamard differentials. These are difficult to compute for the problem of interest and in general fail to be positive-definite symmetric. We propose a novel approximation of the shape Hessian, which is not only rigorously justified but also leads to excellent numerical performance of the actual optimization. Moreover, a remarkable connection to Sobolev flows is exposed. Three other established algorithms from image and geometry processing turn out to be special cases of ours. Our numerical implementation is founded on a fast finite-element formulation on the minimizing sequence of triangulated shapes. A series of examples from a wide range of different applications is discussed to underline the flexibility and efficiency of the approach. © 2011 Springer Science+Business Media, LLC.
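    The paper's Gauss-Newton iteration operates in shape space with a Sobolev-type Hessian approximation; the general structure of a Gauss-Newton step, however, can be sketched on a scalar toy problem. The exponential model and sample points below are invented for illustration only.

    ```python
    import math

    def gauss_newton_1d(residual, jacobian, b0, iters=20):
        """Gauss-Newton for a single parameter: the step is -(J^T r) / (J^T J),
        i.e., the normal-equations solve collapses to a scalar division."""
        b = b0
        for _ in range(iters):
            r = residual(b)
            J = jacobian(b)
            b -= sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
        return b

    # Toy data: fit y = exp(b*x) to samples generated with b = 0.7.
    xs = [0.0, 0.5, 1.0, 1.5, 2.0]
    ys = [math.exp(0.7 * x) for x in xs]
    b_fit = gauss_newton_1d(
        lambda b: [math.exp(b * x) - y for x, y in zip(xs, ys)],
        lambda b: [x * math.exp(b * x) for x in xs],
        b0=0.2,
    )
    ```

    Because the approximate Hessian J^T J is positive (semi)definite by construction, each step is a descent direction without computing second derivatives, which is exactly the property the abstract exploits in place of the indefinite Hadamard differentials.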

  11. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    Science.gov (United States)

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before being formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as the cell cycle, the circadian clock, etc., the choice of the normalization method may substantially impact the determination of a gene to be rhythmic. Thus the rhythmicity of a gene can be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of a normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate the proposed methodology using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. This suggests that the proposed measure is robust to the choice of a normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset.
Availability: A user friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.
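    The paper's rhythmicity measure and bootstrap are specific to its order-restricted framework; as a generic illustration of the underlying idea (a rhythmicity statistic plus a resampling scheme for its uncertainty), here is a minimal cosinor-based sketch with simulated data. The amplitude statistic, the residual bootstrap, the noise level, and the hourly sampling grid are all invented for this example and are not the authors' method.

    ```python
    import math
    import random

    def cosinor_fit(ts, ys, period=24.0):
        """Least-squares fit of m + a*cos(wt) + b*sin(wt). The closed form below
        assumes evenly spaced samples covering whole periods, so the sine and
        cosine regressors are orthogonal and zero-mean."""
        w = 2 * math.pi / period
        n = len(ts)
        m = sum(ys) / n
        a = 2.0 / n * sum((y - m) * math.cos(w * t) for t, y in zip(ts, ys))
        b = 2.0 / n * sum((y - m) * math.sin(w * t) for t, y in zip(ts, ys))
        fitted = [m + a * math.cos(w * t) + b * math.sin(w * t) for t in ts]
        return math.hypot(a, b), fitted

    def bootstrap_amplitude_ci(ts, ys, n_boot=500, seed=7):
        """Residual-bootstrap percentile interval for the rhythm amplitude."""
        rng = random.Random(seed)
        amp, fitted = cosinor_fit(ts, ys)
        resid = [y - f for y, f in zip(ys, fitted)]
        amps = sorted(
            cosinor_fit(ts, [f + rng.choice(resid) for f in fitted])[0]
            for _ in range(n_boot)
        )
        return amp, amps[int(0.025 * n_boot)], amps[int(0.975 * n_boot)]

    # Simulated rhythmic gene: 24 hourly samples, true amplitude 2.0, noise sd 0.2.
    rng = random.Random(0)
    ts = list(range(24))
    ys = [5.0 + 2.0 * math.cos(2 * math.pi * t / 24.0) + rng.gauss(0, 0.2) for t in ts]
    amp, lo, hi = bootstrap_amplitude_ci(ts, ys)
    ```

    A gene would then be flagged as rhythmic when the lower confidence bound stays well above the amplitude expected from noise alone; robustness to normalization can be probed by running the same statistic on differently normalized copies of the data.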

  12. Dynamic analysis of large structures with uncertain parameters based on coupling component mode synthesis and perturbation method

    Directory of Open Access Journals (Sweden)

    D. Sarsri

    2016-03-01

    This paper presents a methodological approach to compute the stochastic eigenmodes of large FE models with parameter uncertainties, based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.
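    The perturbation ingredient of such an approach can be sketched at first order on a tiny system. The paper couples component mode synthesis with second-order perturbation on large FE models; the sketch below shows only the first-order eigenvalue sensitivity on an assumed 2-DOF spring-mass system with identity mass matrix, where the sensitivity of a mass-normalized mode is phi_i^T (dK/dp) phi_i.

    ```python
    import numpy as np

    def eigenvalue_moments(K, dK, p_sigma):
        """First-order perturbation of the symmetric eigenproblem K(p) phi = lam phi
        (identity mass matrix): d lam_i / dp = phi_i^T (dK/dp) phi_i for mass-normalized
        modes, so Var(lam_i) ~ (d lam_i / dp)^2 * sigma_p^2 to first order."""
        lam, V = np.linalg.eigh(K)
        g = np.array([V[:, i] @ dK @ V[:, i] for i in range(len(lam))])
        return lam, (g * p_sigma) ** 2

    # Two-DOF example: uncertainty (std 0.1) enters the first spring stiffness only.
    K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])
    dK = np.array([[1.0, 0.0], [0.0, 0.0]])  # dK/dp
    lam, var = eigenvalue_moments(K0, dK, p_sigma=0.1)
    ```

    The second-order scheme in the paper adds curvature terms to these variances; CMS reduction then makes the same algebra affordable when K is the condensed stiffness of a large model rather than a 2x2 matrix.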

  13. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.

    2014-05-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution of bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In such a way, the tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
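    The core DMD algorithm, independent of the bioheat application, can be sketched in a few lines. This is a generic projected-DMD implementation, not the authors' code, verified here on a synthetic rank-2 linear system whose eigenvalues (0.9 and 0.5 per step) are chosen arbitrarily for the check.

    ```python
    import numpy as np

    def dmd(X, Y, r):
        """Projected DMD: given snapshot pairs with Y[:, k] ~ A @ X[:, k], return the
        r leading eigenvalues and modes of the best-fit linear operator A."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
        A_tilde = U.conj().T @ Y @ Vh.conj().T / s  # reduced r x r operator
        eigvals, W = np.linalg.eig(A_tilde)
        return eigvals, U @ W                        # modes lifted to full space

    # Synthetic check: two modal time series with decay factors 0.9 and 0.5,
    # lifted into a 5-dimensional snapshot space by an arbitrary matrix P.
    P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0], [0.5, 0.3]])
    lam_true = np.array([0.9, 0.5])
    Z = np.array([lam_true ** k for k in range(11)]).T  # 2 x 11 modal history
    S = P @ Z                                            # 5 x 11 snapshot matrix
    eigvals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
    ```

    Forecasting then amounts to propagating the modal amplitudes with powers of the recovered eigenvalues instead of re-running the solver, which is the source of the real-time speedup the abstract describes.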

  14. A method for spectral DNS of low Rm channel flows based on the least dissipative modes

    Science.gov (United States)

    Kornet, Kacper; Pothérat, Alban

    2015-10-01

    We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow, when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis with elements that already include these fine structures so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, and in particular, non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code and it is found to provide accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.

  15. Study of long-range orders of hard-core bosons coupled to cooperative normal modes in two-dimensional lattices

    Science.gov (United States)

    Ghosh, A.; Yarlagadda, S.

    2017-09-01

    Understanding the microscopic mechanism of coexisting long-range orders (such as lattice supersolidity) in strongly correlated systems is a subject of immense interest. We study the possible manifestations of long-range orders, including lattice-supersolid phases with differently broken symmetry, in a two-dimensional square lattice system of hard-core bosons (HCBs) coupled to archetypal cooperative/coherent normal-mode distortions such as those in perovskites. At strong HCB-phonon coupling, using a duality transformation to map the strong-coupling problem to a weak-coupling one, we obtain an effective Hamiltonian involving nearest-neighbor, next-nearest-neighbor, and next-to-next-nearest-neighbor hoppings and repulsions. Using stochastic series expansion quantum Monte Carlo, we construct the phase diagram of the system. As coupling strength is increased, we find that the system undergoes a first-order quantum phase transition from a superfluid to a checkerboard solid at half-filling and from a superfluid to a diagonal striped solid [with crystalline ordering wave vector Q = (2π/3, 2π/3) or (2π/3, 4π/3)] at one-third filling without showing any evidence of supersolidity. On tuning the system away from these commensurate fillings, a checkerboard supersolid is generated near half-filling whereas a rare diagonal striped supersolid is realized near one-third filling. Interestingly, there is an asymmetry in the extent of supersolidity about one-third filling. Within our framework, we also provide an explanation for the observed checkerboard and stripe formations in La2−xSrxNiO4 at x = 1/2 and x = 1/3.

  16. Normal-mode Magnetoseismology as a Virtual Instrument for the Plasma Mass Density in the Inner Magnetosphere: MMS Observations during Magnetic Storms

    Science.gov (United States)

    Chi, P. J.; Takahashi, K.; Denton, R. E.

    2017-12-01

    Previous studies have demonstrated that the electric and magnetic field measurements on closed field lines can detect harmonic frequencies of field line resonance (FLR) and infer the plasma mass density distribution in the inner magnetosphere. This normal-mode magnetoseismology technique can act as a virtual instrument for spacecraft with a magnetometer and/or an electric field instrument, and it can convert the electromagnetic measurements to knowledge about the plasma mass, of which the dominant low-energy core is difficult to detect directly due to the spacecraft potential. The additional measurement of the upper hybrid frequency by the plasma wave instrument can well constrain the oxygen content in the plasma. In this study, we use FLR frequencies observed by the Magnetospheric Multiscale (MMS) satellites to estimate the plasma mass density during magnetic storms. At FLR frequencies, the phase difference between the azimuthal magnetic perturbation and the radial electric perturbation is approximately ±90°, which is consistent with the characteristic of standing waves. During the magnetic storm in October 2015, the FLR observations indicate a clear enhancement in the plasma mass density on the first day of the recovery phase, but the added plasma was quickly removed on the following day. We will compare with FLR observations by other operating satellites, such as the Van Allen Probes and GOES, to examine the spatial variations of the plasma mass density in the magnetosphere. We also discuss how the spacing of the harmonic frequencies can constrain the distribution of plasma mass density along the field line, as well as the implications of this capability.

  17. The signaling pathway of dopamine D2 receptor (D2R) activation using normal mode analysis (NMA) and the construction of pharmacophore models for D2R ligands.

    Science.gov (United States)

    Salmas, Ramin Ekhteiari; Stein, Matthias; Yurtsever, Mine; Seeman, Philip; Erol, Ismail; Mestanoglu, Mert; Durdagi, Serdar

    2017-07-01

    G-protein-coupled receptors (GPCRs) are targets of more than 30% of marketed drugs. Investigation of the GPCRs may shed light on upcoming drug design studies. In the present study, we performed a combination of receptor- and ligand-based analyses targeting the dopamine D2 receptor (D2R). The signaling pathway of D2R activation and the construction of universal pharmacophore models for D2R ligands were also studied. The key amino acids, which contributed to the regular activation of the D2R, were investigated in detail by means of normal mode analysis (NMA). A derived cross-correlation matrix provided an understanding of the degree of pairwise residue correlations. Although negative correlations were not observed in the inactive D2R state, a high degree of correlation appeared between the residues in the active state. NMA results showed that the cytoplasmic side of TM5 plays a significant role in promoting residue-residue correlations in the active state of D2R. The traced motions of the amino acids Arg219, Arg220, Val223, Asn224, Lys226, and Ser228 in TM5 are found to be critical in signal transduction. Complementing the receptor-based modeling, ligand-based modeling was also performed using known D2R ligands. The top-scoring pharmacophore models were found to be 5-site hypotheses (AADPR.671, AADRR.1398, AAPRR.3900, and ADHRR.2864) from PHASE modeling, selected from a pool of more than 100 initial candidates. The models constructed using 38 D2R ligands (in the training set) were validated with 15 additional test set compounds. The resulting model correctly predicted the pIC50 values of the additional test set compounds as true unknowns.

  18. Evaluation of four methods for separation of lymphocytes from normal individuals and patients with cancer and tuberculosis.

    Science.gov (United States)

    Patrick, C C; Graber, C D; Loadholt, C B

    1976-01-01

    An optimal technique was sought for lymphocyte recovery from normal and chronically diseased individuals. Lymphocytes were separated by four techniques: Plasmagel, Ficoll-Hypaque, a commercial semiautomatic method, and simple centrifugation, using blood drawn from ten normal individuals, ten cancer patients, and ten tuberculosis patients. The lymphocyte mixture obtained with each method was analyzed for percent recovery, amount of contamination by erythrocytes and neutrophils, and percent viability. The results show that the semiautomatic method yielded the best percent recovery of lymphocytes for normal individuals, while the simple centrifugation method gave the highest percent recovery for cancer and tuberculosis patients. The Ficoll-Hypaque method gave the lowest erythrocyte contamination for all three types of individuals tested, while the Plasmagel method gave the lowest neutrophil contamination for all three. The simple centrifugation method yielded all viable lymphocytes and thus gave the highest percent viability.

  19. Macro-architectured cellular materials: Properties, characteristic modes, and prediction methods

    Science.gov (United States)

    Ma, Zheng-Dong

    2017-12-01

    Macro-architectured cellular (MAC) material is defined as a class of engineered materials having configurable cells of relatively large (i.e., visible) size that can be architecturally designed to achieve various desired material properties. Two types of novel MAC materials, negative Poisson's ratio material and biomimetic tendon reinforced material, were introduced in this study. To estimate the effective material properties for structural analyses and to optimally design such materials, a set of suitable homogenization methods was developed that provided an effective means for the multiscale modeling of MAC materials. First, a strain-based homogenization method was developed using an approach that separated the strain field into a homogenized strain field and a strain variation field in the local cellular domain superposed on the homogenized strain field. The principle of virtual displacements for the relationship between the strain variation field and the homogenized strain field was then used to condense the strain variation field onto the homogenized strain field. The new method was then extended to a stress-based homogenization process based on the principle of virtual forces and further applied to address the discrete systems represented by the beam or frame structures of the aforementioned MAC materials. The characteristic modes and the stress recovery process used to predict the stress distribution inside the cellular domain and thus determine the material strengths and failures at the local level are also discussed.

  20. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
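    The geometric mean of pairwise ratios can be sketched directly from its definition. This is a minimal pure-Python reading of the idea, not the authors' implementation, and it assumes every pair of samples shares at least one taxon that is nonzero in both.

    ```python
    import math
    from statistics import median

    def gmpr_size_factors(counts):
        """counts: one list of taxon counts per sample (equal lengths). For each
        sample i, the size factor is the geometric mean, over the other samples j,
        of the median count ratio computed only over taxa nonzero in both samples,
        so zeros never enter any ratio."""
        n = len(counts)
        factors = []
        for i in range(n):
            ratios = []
            for j in range(n):
                if i == j:
                    continue
                shared = [ci / cj for ci, cj in zip(counts[i], counts[j])
                          if ci > 0 and cj > 0]
                if shared:
                    ratios.append(median(shared))
            factors.append(math.exp(sum(math.log(r) for r in ratios) / len(ratios)))
        return factors

    # Three samples with identical composition at sequencing depths 1x, 2x, 4x;
    # one taxon is absent everywhere and is simply skipped by the pairwise ratios.
    counts = [[10, 5, 0, 8], [20, 10, 0, 16], [40, 20, 0, 32]]
    factors = gmpr_size_factors(counts)
    ```

    Dividing each sample's counts by its factor then puts the samples on a common scale; the median inside each pair is what makes the factor robust to differentially abundant taxa.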

  1. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image.

    Science.gov (United States)

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images, both by using a volume-of-interest (VOI) template and by voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with a PET/CT scanner for the spatial normalization of [18F]-N-3-fluoropropyl-2-betacarboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization methods in healthy controls and patients with Parkinson's disease (PD). We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of MR images was done with a conventional spatial normalization tool (cvMR) and with the DARTEL toolbox (dtMR) in statistical parametric mapping software. The CT images were modified in two ways, by skull-stripping (ssCT) and by intensity transformation (itCT). We normalized PET images with the cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on the original PET images were also used as a gold standard for comparison. The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI. Both CT-guided normalization methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable.

  2. Biological dosimetry intercomparison exercise: an evaluation of Triage and routine mode results by robust methods

    International Nuclear Information System (INIS)

    Di Giorgio, M.; Vallerga, M.B.; Radl, A.; Taja, M.R.; Barquinero, J.F.; Seoane, A.; De Luca, J.; Guerrero Carvajal, Y.C.; Stuck Oliveira, M.S.; Valdivia, P.; García Lima, O.; Lamadrid, A.; González Mesa, J.; Romero Aguilera, I.; Mandina Cardoso, T.; Arceo Maldonado, C.; Espinoza, M.E.; Martínez López, W.; Lloyd, D.C.; Méndez Acuña, L.; Di Tomaso, M.V.; Roy, L.; Lindholm, C.; Romm, H.; Güçlü, I.

    2011-01-01

    Well-defined protocols and quality management standards are indispensable for biological dosimetry laboratories. Participation in periodic proficiency testing by interlaboratory comparisons is also required. This harmonization is essential if a cooperative network is used to respond to a mass casualty event. Here we present an international intercomparison based on dicentric chromosome analysis for dose assessment, performed in the framework of the IAEA Regional Latin American RLA/9/054 Project. The exercise involved 14 laboratories, 8 from Latin America and 6 from Europe. The performance of each laboratory and the reproducibility of the exercise were evaluated using robust methods described in ISO standards. The study was based on the analysis of slides from samples irradiated with 0.75 Gy (DI) and 2.5 Gy (DII). Laboratories were required to score the frequency of dicentrics and convert it to estimated doses, using their own dose-effect curves, after the analysis of 50 or 100 cells (triage mode) and after conventional scoring of 500 cells or 100 dicentrics. In the conventional scoring, at both doses, all reported frequencies were considered satisfactory, and two reported doses were considered questionable. The analysis of the data dispersion among the dicentric frequencies and among doses indicated better reproducibility for estimated doses (15.6% for DI and 8.8% for DII) than for frequencies (24.4% for DI and 11.4% for DII), expressed by the coefficient of variation. In the two triage modes, although robust analysis classified some reported frequencies or doses as unsatisfactory or questionable, all estimated doses were in agreement with the accepted error of ±0.5 Gy. However, at the DI dose and for 50 scored cells, 5 of the 14 reported confidence intervals included zero dose and could be interpreted as false negatives. This improved with 100 cells, where only one confidence interval included zero dose. At the DII dose, all estimations fell within
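    The dose-conversion step used by each laboratory inverts a linear-quadratic calibration curve Y = C + αD + βD². A minimal sketch of that inversion follows; the coefficients are illustrative round numbers, not any laboratory's actual calibration, and the uncertainty treatment of a real assessment is omitted.

    ```python
    import math

    def dose_from_dicentrics(dicentrics, cells, C=0.001, alpha=0.02, beta=0.06):
        """Invert the linear-quadratic calibration Y = C + alpha*D + beta*D**2 for
        the absorbed dose D (Gy), given a scored dicentric frequency Y per cell.
        Coefficients are illustrative, not a laboratory calibration."""
        y = dicentrics / cells
        return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * (y - C))) / (2.0 * beta)

    dose = dose_from_dicentrics(213, 500)  # 213 dicentrics in 500 cells -> 2.5 Gy here
    ```

    Scoring fewer cells (triage mode) widens the Poisson uncertainty on the frequency, which is why the 50-cell confidence intervals in the exercise could reach down to zero dose while the conventional 500-cell scoring did not.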

  3. Modal–Physical Hybrid System Identification of High-rise Building via Subspace and Inverse-Mode Methods

    Directory of Open Access Journals (Sweden)

    Kohei Fujita

    2017-08-01

    A system identification (SI problem of high-rise buildings is investigated under restricted data environments. The shear and bending stiffnesses of a shear-bending model (SB model representing the high-rise buildings are identified via the smart combination of the subspace and inverse-mode methods. Since the shear and bending stiffnesses of the SB model can be identified in the inverse-mode method by using the lowest mode of horizontal displacements and floor rotation angles, the lowest mode of the objective building is identified first by using the subspace method. Identification of the lowest mode is performed by using the amplitude of transfer functions derived in the subspace method. Considering the resolution in measuring the floor rotation angles in lower stories, floor rotation angles in most stories are predicted from the floor rotation angle at the top floor. An empirical equation of floor rotation angles is proposed by investigating those for various building models. From the viewpoint of application of the present SI method to practical situations, a non-simultaneous measurement system is also proposed. In order to investigate the reliability and accuracy of the proposed SI method, a 10-story building frame subjected to micro-tremor is examined.

  4. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decrease in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for the selection of the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and simulation reads. Reads were mapped with human genome obtained from UCSC Genome Browser Database. For precise evaluation, we investigated Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that out of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation results, but for RNA-Seq of a 76-nucleotide sequence, least correlation was observed than the other methods. ERPKM did not improve results than RPKM. Between two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than that with RSEM, which was better than without using abundance estimation methods. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to without applying abundance estimation methods, and were much better than with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers, but did not improve normalization results. 
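    As a concrete illustration of one of the compared non-abundance methods, RPKM divides a gene's read count by the library size (per million mapped reads) and by the gene length (per kilobase). A minimal sketch, with made-up counts, lengths, and library size rather than the study's data:

```python
# RPKM: Reads Per Kilobase of transcript per Million mapped reads.
# All counts, lengths, and the library size below are illustrative.
def rpkm(count, gene_length_bp, total_mapped_reads):
    per_million = total_mapped_reads / 1e6   # library-size scaling factor
    per_kilobase = gene_length_bp / 1e3      # gene-length scaling factor
    return count / per_million / per_kilobase

counts = {"geneA": 500, "geneB": 500}        # raw read counts
lengths = {"geneA": 1000, "geneB": 4000}     # gene lengths in bp
library_size = 10_000_000                    # total mapped reads

normalized = {g: rpkm(c, lengths[g], library_size) for g, c in counts.items()}
```

    Equal raw counts give geneB a quarter of geneA's RPKM because it is four times longer; ERPKM-style variants adjust the effective gene length instead.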

  5. Evaluation of left ventricular function in patients with atrial fibrillation by ECG gated blood pool scintigraphy using the frame count normalization method

    Energy Technology Data Exchange (ETDEWEB)

    Akanabe, Hiroshi; Oshima, Motoo; Sakuma, Sadayuki

    1988-07-01

    The assumptions necessary to perform ECG gated blood pool scintigraphy (EGBPS) are seemingly not valid for patients with atrial fibrillation (af), since they have wide variability in cardiac cycle length. The data were acquired in frame mode within limits around the mean heart rate to fix the first diastolic volume, and the frame count normalization (FCN) method was applied to correct the total counts in each frame. EGBPS was performed in twelve patients with af who were operated on for valvular disease. The data acquired within ±10% of the mean heart rate in frame mode were divided into 32 frames, and the total counts of each frame were calculated. With the FCN method, the total counts of the 22nd to 32nd frames were multiplied so as to equal the average of the total frame counts. The FCN method was able to correct the total counts of the latter frames, and there was good correlation between the left ventricular ejection fraction calculated from scintigraphy and that from contrast cineangiography. Thus, EGBPS with the FCN method may allow estimation of cardiac function even in subjects with af.
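    The correction step can be sketched as follows; the frame totals below are synthetic stand-ins for a gated acquisition whose late frames lose counts, and the rescaling of frames 22-32 to the average total illustrates the FCN idea rather than the authors' implementation:

```python
# FCN sketch: in atrial fibrillation, variable cycle length drains counts
# from the late frames of a 32-frame gated acquisition. Rescale each late
# frame so its total equals the average total of the well-sampled frames.
# Frame totals are synthetic.
frame_totals = [1000.0] * 21 + [1000.0 - 40.0 * k for k in range(1, 12)]  # 32 frames

reference = sum(frame_totals[:21]) / 21    # average total of frames 1-21
factors = [1.0] * 21 + [reference / t for t in frame_totals[21:]]
corrected = [t * f for t, f in zip(frame_totals, factors)]
```

    In practice the per-frame factor would be applied pixel-wise to the image of each frame, not just to its total.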

  6. A 3D multi-mode geometry-independent RMP optimization method and its application to TCV

    International Nuclear Information System (INIS)

    Rossel, J X; Moret, J-M; Martin, Y

    2010-01-01

    Resonant magnetic perturbation (RMP) and error field correction (EFC) produced by toroidally and poloidally distributed coil systems can be optimized if each coil is powered with an independent power supply. A 3D multi-mode geometry-independent Lagrange method has been developed and appears to be an efficient way to minimize the parasitic spatial modes of the magnetic perturbation and the coil current requirements while imposing the amplitude and phase of a number of target modes. A figure of merit measuring the quality of a perturbation spectrum with respect to RMP, independently of the considered coil system or plasma equilibrium, is proposed. To ease the application of the Lagrange method, a spectral characterization of the system, based on a generalized discrete Fourier transform applied in current space, is performed to determine how spectral degeneracy and side-band creation limit the set of simultaneously controllable target modes. This characterization is also useful for quantifying the efficiency of the coil system at each toroidal mode number and for establishing whether optimization is possible for a given number of target modes. The efficiency of the method is demonstrated in the special case of a multi-purpose saddle coil system proposed as part of a future upgrade of the Tokamak à Configuration Variable (TCV). This system consists of three rows of eight internal coils, each coil having an independent power supply, and provides simultaneously EFC, RMP and fast vertical position control.
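    The essence of imposing target modes while minimizing coil currents can be illustrated with the minimum-norm least-squares solution I = Aᵀ(AAᵀ)⁻¹b of an underdetermined system, where A maps coil currents to mode amplitudes. The 2×3 matrix below is a toy stand-in, not the TCV coupling matrix:

```python
# Minimum-norm coil currents I satisfying A @ I = b, with more coils (3)
# than target modes (2): I = A^T (A A^T)^-1 b. The coupling matrix A and
# target amplitudes b are toy values, not a real perturbation spectrum.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [2.0, 1.0]

# A A^T (2x2), its analytic inverse, then back-substitution.
AAt = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)] for i in range(2)]
det = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]
inv = [[AAt[1][1] / det, -AAt[0][1] / det],
       [-AAt[1][0] / det, AAt[0][0] / det]]
y = [sum(inv[i][j] * b[j] for j in range(2)) for i in range(2)]   # (A A^T)^-1 b
I = [sum(A[j][i] * y[j] for j in range(2)) for i in range(3)]     # minimum-norm currents

residual = [sum(A[i][k] * I[k] for k in range(3)) - b[i] for i in range(2)]
```

    Any degrees of freedom beyond the target constraints are spent minimizing the current norm; the paper's Lagrange formulation additionally penalizes parasitic modes.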

  7. Core Power Control of the fast nuclear reactors with estimation of the delayed neutron precursor density using Sliding Mode method

    International Nuclear Information System (INIS)

    Ansarifar, G.R.; Nasrabadi, M.N.; Hassanvand, R.

    2016-01-01

    Highlights: • We present a S.M.C. system based on the S.M.O. for control of a fast reactor's power. • A S.M.O. has been developed to estimate the density of delayed neutron precursors. • The stability analysis has been given by means of the Lyapunov approach. • The control system is guaranteed to be stable within a large range. • A comparison between the S.M.C. and the conventional PID controller has been made. - Abstract: In this paper, a nonlinear controller using the sliding mode method, a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations with one delayed neutron group. Considering the limitations of measuring the delayed neutron precursor density, a sliding mode observer is designed to estimate it, and finally a sliding mode control based on the sliding mode observer is presented. The stability analysis is given by means of the Lyapunov approach, so the control system is guaranteed to be stable within a large range. Sliding mode control (SMC) is a robust nonlinear method with several advantages, such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications; moreover, the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
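    The point kinetics equations with one delayed neutron group, on which the core simulation is based, can be sketched with explicit Euler integration; the parameter values are generic illustrative numbers, not those of the paper:

```python
# Point kinetics with one delayed group:
#   dn/dt = ((rho - beta) / Lambda) * n + lam * C
#   dC/dt = (beta / Lambda) * n - lam * C
# Illustrative parameters (not from the paper).
beta = 0.0065      # delayed neutron fraction
lam = 0.08         # precursor decay constant (1/s)
Lambda = 1e-4      # neutron generation time (s)
rho = 0.0          # reactivity; zero gives steady state

n = 1.0                           # relative neutron density
C = beta * n / (Lambda * lam)     # equilibrium precursor density

dt = 1e-5
for _ in range(100_000):          # simulate 1 s
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n += dt * dn
    C += dt * dC
```

    At zero reactivity the equilibrium precursor density keeps n constant; a sliding mode observer would reconstruct C from measurements of n instead of assuming it is known, which is the role the S.M.O. plays in the paper.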

  8. Quantitative Analysis of Differential Proteome Expression in Bladder Cancer vs. Normal Bladder Cells Using SILAC Method.

    Directory of Open Access Journals (Sweden)

    Ganglong Yang

    The best way to increase the patient survival rate is to identify patients who are likely to progress to muscle-invasive or metastatic disease upfront and treat them more aggressively. The human cell lines HCV29 (normal bladder epithelia), KK47 (low grade nonmuscle invasive bladder cancer, NMIBC), and YTS1 (metastatic bladder cancer) have been widely used in studies of molecular mechanisms and cell signaling during bladder cancer (BC) progression. However, little attention has been paid to global quantitative proteome analysis of these three cell lines. We labeled HCV29, KK47, and YTS1 cells by the SILAC method using three stable isotopes each of arginine and lysine. Labeled proteins were analyzed by 2D ultrahigh-resolution liquid chromatography LTQ Orbitrap mass spectrometry. Among 3721 unique identified and annotated proteins in KK47 and YTS1 cells, 36 were significantly upregulated and 74 were significantly downregulated with >95% confidence. Differential expression of these proteins was confirmed by western blotting, quantitative RT-PCR, and cell staining with specific antibodies. Gene ontology (GO) term and pathway analysis indicated that the differentially regulated proteins were involved in DNA replication and molecular transport, cell growth and proliferation, cellular movement, immune cell trafficking, and cell death and survival. These proteins and the advanced proteome techniques described here will be useful for further elucidation of molecular mechanisms in BC and other types of cancer.

  9. NDT-Bobath method in normalization of muscle tone in post-stroke patients.

    Science.gov (United States)

    Mikołajewska, Emilia

    2012-01-01

    Ischaemic stroke is responsible for 80-85% of strokes. There is great interest in finding effective methods of rehabilitation for post-stroke patients. The aim of this study was to assess the results of rehabilitation aimed at normalizing upper limb muscle tone in patients, estimated on the Ashworth Scale for Grading Spasticity. The examined group consisted of 60 patients after ischaemic stroke. Ten sessions of NDT-Bobath therapy were provided within 2 weeks (ten days of therapy). Patient examinations using the Ashworth Scale for Grading Spasticity were done twice: first on admission and again after the last session of therapy, to assess rehabilitation effects. Among the patients involved in the study, the results measured on the Ashworth Scale (where possible) were as follows: recovery in 16 cases (26.67%), relapse in 1 case (1.67%), and no measurable changes (or change within the same grade of the scale) in 8 cases (13.33%). Statistically significant changes were observed in the health status of the patients. These changes in muscle tone were favorable and were reflected in the outcomes of the assessment using the Ashworth Scale for Grading Spasticity.

  10. A design method for two-layer beams consisting of normal and fibered high strength concrete

    International Nuclear Information System (INIS)

    Iskhakov, I.; Ribakov, Y.

    2007-01-01

    Two-layer fibered concrete beams can be analyzed using conventional methods for composite elements. The compressed zone of such a beam section is made of high strength concrete (HSC), and the tensile zone of normal strength concrete (NSC). The problems related to this type of beam are revealed and studied, and an appropriate depth for each layer is prescribed. Compatibility conditions between the HSC and NSC layers are found, based on the equality of shear deformations at the layer border in the section with the maximal depth of the compression zone. For the first time, a rigorous definition of HSC is given using a comparative analysis of the deformability and strength characteristics of different concrete classes. According to this definition, HSC has no descending branch in the stress-strain diagram, the stress-strain function has a minimum exponent, the ductility parameter is minimal, and the concrete tensile strength remains constant with an increase in concrete compression strength. The application fields of two-layer concrete beams based on different static schemes and load conditions are identified. It is known that the main disadvantage of HSCs is their low ductility. In order to overcome this problem, fibers are added to the HSC layer. The influence of different fiber volume ratios on structural ductility is discussed, and an upper limit for the required fiber volume ratio is found based on the compatibility equation between the transverse tensile deformations of the concrete and the deformations of the fibers.

  11. Application of multi attribute failure mode analysis of milk production using analytical hierarchy process method

    Science.gov (United States)

    Rucitra, A. L.

    2018-03-01

    Pusat Koperasi Induk Susu (PKIS) Sekar Tanjung, East Java, is one of the modern dairy industries producing Ultra High Temperature (UHT) milk. A problem that often occurs in the production process at PKIS Sekar Tanjung is a mismatch between the production process and the predetermined standard. The purpose of applying the Analytical Hierarchy Process (AHP) was to identify the most potential cause of failure in the milk production process. The Multi Attribute Failure Mode Analysis (MAFMA) method was used to eliminate or reduce the possibility of failure when viewed from the failure causes. This method integrates the severity, occurrence, detection, and expected cost criteria, obtained from an in-depth interview with the head of the production department as an expert. The AHP approach was used to formulate the priority ranking of the causes of failure in the milk production process. At level 1, severity had the highest weight, 0.41 or 41%, compared to the other criteria. At level 2, identifying failure in the UHT milk production process, the most potential cause was an average mixing temperature of more than 70 °C, higher than the standard temperature (≤70 °C). This failure cause contributed a weight of 0.47, or 47%, across all criteria. Therefore, this study suggested that the company control the mixing temperature to minimise or eliminate failure in this process.
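    The AHP weighting step, deriving priority weights from a pairwise comparison matrix via its principal eigenvector, can be sketched as follows; the comparison values are hypothetical, not those elicited in the study:

```python
# AHP sketch: priority weights are the normalized principal eigenvector of
# the pairwise comparison matrix, found here by power iteration.
# The comparison values (Saaty's 1-9 scale) are made up.
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]

w = [1.0 / 3] * 3
for _ in range(100):                  # power iteration
    v = [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(v)
    w = [x / s for x in v]

# Saaty's consistency estimate: lambda_max close to n means the judgments
# are nearly consistent.
lam_max = sum(sum(M[i][j] * w[j] for j in range(3)) / w[i] for i in range(3)) / 3
```

    In MAFMA these weights would then rank the failure causes by combining the severity, occurrence, detection, and expected cost judgments.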

  12. Methods of accounting the hot water consumption modes at the solar installations design

    Directory of Open Access Journals (Sweden)

    Vyacheslav O. Dubkovsky

    2015-06-01

    Peculiarities of high-powered solar systems for hot water heating are considered. The purpose of the work is to develop methods for accounting for the 24-hour hot water consumption mode and for determining the dynamic characteristics of solar systems. The basic solar system schemes are analyzed, along with their shortcomings from the point of view of user satisfaction with solar energy. To improve the dynamic parameters, the use of an operative expense tank is examined: a receptacle with a built-in worm pipe through which all heat carrier from the solar collectors passes before entering the fast heat exchanger that heats the tank-accumulator. The scientific novelty lies in the proof that the principal parameter of this tank is not its volume but the capacity of the built-in exchanger, which is determined by the total thermal power of the solar collector field. As an ecological constituent of operating costs, it is suggested to take into account the cost paid for the emission of combustion products. As a practical example of the method, the optimization of solar collector capacity for a communal enterprise is considered.

  13. Observations of the azimuthal dependence of normal mode coupling below 4 mHz at the South Pole and its nearby stations: Insights into the anisotropy beneath the Transantarctic Mountains

    Science.gov (United States)

    Hu, Xiao Gang

    2016-08-01

    The normal mode coupling pairs 0S26-0T26 and 0S27-0T27 are significantly present at the South Pole station QSPA after the 2011/03/11 Mw9.1 Tohoku earthquake. In an attempt to determine the mechanisms responsible for the coupling pairs, I first investigate mode observations at 43 stations distributed along the polar great-circle path for the earthquake and observations at 32 Antarctic stations. I rule out the effect of Earth's rotation as well as the effect of global large-scale lateral heterogeneity, and argue instead for the effect of small-scale local azimuthal anisotropy over a depth extent of about 300 km. The presence of quasi-Love waveforms at 2-5 mHz at QSPA and its nearby stations confirms this prediction. Secondly, I analyze normal mode observations at the South Pole location after 28 large earthquakes from 1998 to 2015. The result indicates that the presence of the mode coupling is azimuthally dependent, being related to event azimuths between -46° and -18°. I also compare the shear-wave splitting measurements of previous studies with the mode coupling observations of this study, suggesting that their difference can be explained if the anisotropy responsible for the mode coupling is located not directly below the South Pole but below a region close to the Transantarctic Mountains (TAM). Furthermore, additional signals of local azimuthal anisotropy in normal-mode observations at QSPA and SBA, such as the coupling of 0S12-0T11 and the vertical polarization anomaly for 0T10, confirm the existence of deep anisotropy close to the TAM, which may be caused by asthenospheric mantle flow and edge convection around the cratonic keel of the TAM.

  14. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
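    The Gaussian and percentile estimators compared above can be sketched directly; the sample values below are synthetic, not the trout data:

```python
import statistics

# Synthetic "blood chemistry" sample; not the rainbow trout data.
values = [9.2, 9.8, 10.0, 10.1, 10.3, 10.4, 10.6, 10.7, 10.9, 11.5]

# Gaussian 95% normal range: mean +/- 1.96 * SD.
m = statistics.mean(values)
sd = statistics.stdev(values)
gaussian_range = (m - 1.96 * sd, m + 1.96 * sd)

# Percentile estimate: empirical 2.5th and 97.5th percentiles.
qs = statistics.quantiles(values, n=40, method="inclusive")  # cuts every 2.5%
percentile_range = (qs[0], qs[-1])
```

    The percentile estimate makes no distributional assumption, which is why the abstract recommends it when the underlying frequency distribution is unknown.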

  15. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for the heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, performing significantly better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
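    The idea behind SAN, matching subgroup means and standard deviations to a reference under a population-structure adjustment, can be sketched as a simplified within-subgroup z-score rescaling; the subgroup keys, values, and reference moments below are made up, and the published method's exact adjustment is more involved:

```python
import statistics

# One site's lab results split by a subgroup key (e.g., gender).
# All values are synthetic.
site = {"M": [1.2, 1.4, 1.6], "F": [0.8, 0.9, 1.0]}
# Target subgroup means and SDs, e.g. from the pooled network (made up).
reference = {"M": (1.0, 0.2), "F": (0.7, 0.1)}

normalized = {}
for key, vals in site.items():
    m, sd = statistics.mean(vals), statistics.stdev(vals)
    rm, rsd = reference[key]
    # z-score within the subgroup, then rescale to the reference moments
    normalized[key] = [rm + rsd * (v - m) / sd for v in vals]
```

    After this step each subgroup's mean and SD match the reference exactly, which is the property the abstract describes as identical distributions "under population structure-adjusted conditions".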

  16. Effects of Different LiDAR Intensity Normalization Methods on Scotch Pine Forest Leaf Area Index Estimation

    Directory of Open Access Journals (Sweden)

    YOU Haotian

    2018-02-01

    The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process. Effectively quantifying and normalizing the effect of each factor is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR intensity data were normalized by range, by angle of incidence, and by both range and angle of incidence based on the radar equation. Two metrics, canopy intensity sum and ratio of intensity, were then extracted and used to estimate forest LAI, with the aim of quantifying the effects of intensity normalization on forest LAI estimation. It was found that range intensity normalization improved the accuracy of forest LAI estimation, while angle-of-incidence normalization did not improve the accuracy and made the results worse. Although intensity data normalized by both range and incidence angle improved the accuracy, the improvement was less than that obtained with range normalization alone. Meanwhile, the differences between the forest LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metrics but relatively small for the ratio of intensity metrics. The results demonstrated that the effects of intensity normalization on forest LAI estimation depend on the choice of affecting factor, and that the level of influence is closely related to the characteristics of the metrics used. Therefore, the appropriate method of intensity normalization should be chosen according to the characteristics of the metrics used in future research, which could avoid wasted cost and reduced estimation accuracy caused by introducing inappropriate affecting factors into intensity normalization.
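    Range normalization of this kind typically rescales each return to a reference range, since received intensity falls off with range in the radar equation; a minimal sketch, where the exponent of 2, the reference range, and the sample returns are illustrative assumptions:

```python
# Range intensity normalization sketch: I_norm = I_raw * (R / R_ref) ** a,
# with a = 2 assumed from the radar equation. All values are made up.
R_REF = 1000.0   # reference range in metres

def normalize_range(intensity, rng, a=2.0):
    # returns measured farther away are scaled up to the reference range
    return intensity * (rng / R_REF) ** a

returns = [(100.0, 1000.0), (25.0, 2000.0)]   # (raw intensity, range in m)
corrected = [normalize_range(i, r) for i, r in returns]
```

    The second return, measured at twice the range, is scaled by a factor of four, so both returns become comparable before computing canopy metrics.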

  17. Hamiltonian diagonalization in foliable space-times: A method to find the modes

    International Nuclear Information System (INIS)

    Castagnino, M.; Ferraro, R.

    1989-01-01

    A way to obtain modes diagonalizing the canonical Hamiltonian of a minimally coupled scalar quantum field, in a foliable space-time, is shown. The Cauchy data for these modes are found to be the eigenfunctions of a second-order differential operator that could be interpreted as the squared Hamiltonian for the first-quantized relativistic particle in curved space

  18. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex of the surface electrocardiogram (ECG) occurring on an every-other-beat basis. It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard against which to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. A nonparametric hypothesis test, based on Bootstrap resampling, is also used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting, designed to analyze the performance, guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
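    The spectral method used alongside the EMD block looks for power at exactly 0.5 cycles/beat in a beat-to-beat series of T-wave amplitudes; that single DFT bin reduces to an alternating sum, sketched here on a synthetic series rather than the benchmark data:

```python
# TWA leaves an every-other-beat fluctuation, i.e. spectral power at
# 0.5 cycles/beat. That DFT bin is just the alternating sum of the series.
# Synthetic beat series: baseline 100 with a 5-unit alternans component.
beats = [100.0 + (5.0 if k % 2 == 0 else -5.0) for k in range(32)]

n = len(beats)
mean = sum(beats) / n
# magnitude of the 0.5 cycles/beat bin, expressed per beat
alternans_amplitude = abs(sum((-1) ** k * (x - mean) for k, x in enumerate(beats))) / n

no_twa = [100.0] * 32   # series with no alternans for comparison
baseline_amplitude = abs(sum((-1) ** k * (x - 100.0) for k, x in enumerate(no_twa))) / 32
```

    In the full spectral method this bin is compared against the surrounding noise band to decide whether alternans is present; the EMD block in the paper denoises the signal before this stage.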

  19. Efficient analysis of mode profiles in elliptical microcavity using dynamic-thermal electron-quantum medium FDTD method.

    Science.gov (United States)

    Khoo, E H; Ahmed, I; Goh, R S M; Lee, K H; Hung, T G G; Li, E P

    2013-03-11

    The dynamic-thermal electron-quantum medium finite-difference time-domain (DTEQM-FDTD) method is used for efficient analysis of mode profiles in an elliptical microcavity. The resonance peak of the elliptical microcavity is studied by varying the length ratio. It is observed that at some length ratios, a cavity mode is excited instead of a whispering gallery mode, indicating that mode profiles are length-ratio dependent. Through the implementation of the DTEQM-FDTD method on a graphics processing unit (GPU), the simulation time is reduced by 300 times compared to the CPU. This leads to an efficient optimization approach for designing microcavity lasers for a wide range of applications in photonic integrated circuits.
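    While the DTEQM-FDTD solver couples electron and thermal dynamics to the field update, its core is the standard Yee FDTD leapfrog; a minimal 1D vacuum sketch in normalized units, with none of the paper's material model:

```python
import math

# Minimal 1D FDTD sketch (vacuum, normalized units, Courant number 1).
# The DTEQM-FDTD method layers electron and thermal dynamics on top of
# leapfrog updates like these; that material model is not included here.
nx, steps = 200, 150
ez = [0.0] * nx            # electric field
hy = [0.0] * nx            # magnetic field, staggered half a cell

for t in range(steps):
    for i in range(nx - 1):        # update H from the curl of E
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, nx):         # update E from the curl of H
        ez[i] += hy[i] - hy[i - 1]
    # hard Gaussian source imposed at the left edge
    ez[0] = math.exp(-((t - 30.0) ** 2) / 100.0)

peak = max(abs(v) for v in ez)     # the pulse travelling to the right
```

    At the "magic" Courant number of 1 the pulse translates one cell per step without dispersion, which makes this toy case easy to verify.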

  20. MO-DE-207A-09: Low-Dose CT Image Reconstruction Via Learning From Different Patient Normal-Dose Images

    Energy Technology Data Exchange (ETDEWEB)

    Han, H; Xing, L [Stanford University, Palo Alto, CA (United States); Liang, Z [Stony Brook University, Stony Brook, NY (United States)

    2016-06-15

    Purpose: To investigate a novel low-dose CT (LdCT) image reconstruction strategy for lung CT imaging in radiation therapy. Methods: The proposed approach consists of four steps: (1) use the traditional filtered back-projection (FBP) method to reconstruct the LdCT image; (2) calculate the structure similarity (SSIM) index between the FBP-reconstructed LdCT image and a set of normal-dose CT (NdCT) images, and select the NdCT image with the highest SSIM as the learning source; (3) segment the NdCT source image into lung and outside tissue regions via simple thresholding, and adopt multiple linear regression to learn a high-order Markov random field (MRF) pattern for each tissue region in the NdCT source image; (4) segment the FBP-reconstructed LdCT image into lung and outside regions as well, and apply the learnt MRF prior in each tissue region for statistical iterative reconstruction of the LdCT image following the penalized weighted least squares (PWLS) framework. Quantitative evaluation of the reconstructed images was based on the signal-to-noise ratio (SNR), local binary pattern (LBP) and histogram of oriented gradients (HOG) metrics. Results: It was observed that lung and outside tissue regions have different MRF patterns predicted from the NdCT. Visual inspection showed that our method clearly outperformed the traditional FBP method. Compared with the region-smoothing PWLS method, our method shows, on average, a 13% increase in SNR, a 15% decrease in LBP difference, and a 12% decrease in HOG difference from the reference standard for all regions of interest, which indicates the superior performance of the proposed method in terms of image resolution and texture preservation. Conclusion: We proposed a novel LdCT image reconstruction method that learns similar image characteristics from a set of NdCT images; the to-be-learnt NdCT image does not need to be a scan of the same subject. This approach is particularly important for enhancing image quality in radiation therapy.

  1. MO-DE-207A-09: Low-Dose CT Image Reconstruction Via Learning From Different Patient Normal-Dose Images

    International Nuclear Information System (INIS)

    Han, H; Xing, L; Liang, Z

    2016-01-01

    Purpose: To investigate a novel low-dose CT (LdCT) image reconstruction strategy for lung CT imaging in radiation therapy. Methods: The proposed approach consists of four steps: (1) use the traditional filtered back-projection (FBP) method to reconstruct the LdCT image; (2) calculate the structure similarity (SSIM) index between the FBP-reconstructed LdCT image and a set of normal-dose CT (NdCT) images, and select the NdCT image with the highest SSIM as the learning source; (3) segment the NdCT source image into lung and outside tissue regions via simple thresholding, and adopt multiple linear regression to learn a high-order Markov random field (MRF) pattern for each tissue region in the NdCT source image; (4) segment the FBP-reconstructed LdCT image into lung and outside regions as well, and apply the learnt MRF prior in each tissue region for statistical iterative reconstruction of the LdCT image following the penalized weighted least squares (PWLS) framework. Quantitative evaluation of the reconstructed images was based on the signal-to-noise ratio (SNR), local binary pattern (LBP) and histogram of oriented gradients (HOG) metrics. Results: It was observed that lung and outside tissue regions have different MRF patterns predicted from the NdCT. Visual inspection showed that our method clearly outperformed the traditional FBP method. Compared with the region-smoothing PWLS method, our method shows, on average, a 13% increase in SNR, a 15% decrease in LBP difference, and a 12% decrease in HOG difference from the reference standard for all regions of interest, which indicates the superior performance of the proposed method in terms of image resolution and texture preservation. Conclusion: We proposed a novel LdCT image reconstruction method that learns similar image characteristics from a set of NdCT images; the to-be-learnt NdCT image does not need to be a scan of the same subject. This approach is particularly important for enhancing image quality in radiation therapy.

  2. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and identified similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.

  3. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses. Review

    Energy Technology Data Exchange (ETDEWEB)

    Skrzypek, G. [West Australian Biogeochemistry Centre, John de Laeter Centre of Mass Spectrometry, School of Plant Biology, University of Western Australia, Crawley (Australia); Sadler, R. [School of Agricultural and Resource Economics, University of Western Australia, Crawley (Australia); Paul, D. [Department of Civil Engineering (Geosciences), Indian Institute of Technology Kanpur, Kanpur (India); Forizs, I. [Institute for Geochemical Research, Hungarian Academy of Sciences, Budapest (Hungary)

    2013-07-15

    Stable isotope ratio mass spectrometers are highly precise, but not accurate, instruments. Therefore, results have to be normalized to one of the isotope scales (e.g., VSMOW, VPDB) based on well calibrated reference materials. The selection of reference materials, the number of replicates, the δ-values of these reference materials, and the normalization technique have been identified as crucial in determining the uncertainty associated with the final results. The most common normalization techniques and reference materials have been tested using both Monte Carlo simulations and laboratory experiments to investigate aspects of error propagation during the normalization of isotope data. The range of observed differences justifies the need to employ the same sets of standards worldwide for each element and each stable isotope analytical technique. (author)
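    One of the most common normalization techniques referred to here is two-point linear normalization, which maps measured δ-values onto a scale such as VSMOW using two reference materials; the anchor values below are illustrative, not certified reference values:

```python
# Two-point linear normalization: delta_true = a * delta_measured + b,
# with a and b fixed by two reference materials bracketing the samples.
# The (true, measured) anchor pairs below are illustrative numbers.
ref1_true, ref1_meas = 0.0, 0.3      # e.g. a VSMOW-like anchor
ref2_true, ref2_meas = -55.0, -54.1  # e.g. a depleted anchor

a = (ref1_true - ref2_true) / (ref1_meas - ref2_meas)
b = ref1_true - a * ref1_meas

def normalize(delta_measured):
    # map a raw instrument delta-value onto the reference scale
    return a * delta_measured + b

sample = normalize(-10.0)
```

    Error propagation through a and b is exactly what the Monte Carlo simulations in the study quantify: the further a sample lies from the two anchors, the larger the normalization uncertainty.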

  4. UPLC-MS method for quantification of pterostilbene and its application to comparative study of bioavailability and tissue distribution in normal and Lewis lung carcinoma bearing mice.

    Science.gov (United States)

    Deng, Li; Li, Yongzhi; Zhang, Xinshi; Chen, Bo; Deng, Yulin; Li, Yujuan

    2015-10-10

    A UPLC-MS method was developed for the determination of pterostilbene (PTS) in the plasma and tissues of mice. PTS was separated on an Agilent Zorbax XDB-C18 column (50 × 2.1 mm, 1.8 μm) with a gradient mobile phase at a flow rate of 0.2 ml/min. Detection was performed by negative ion electrospray ionization in multiple reaction monitoring mode. The linear calibration curves of PTS in mouse plasma and tissues ranged from 1.0 to 5000 and 0.50 to 500 ng/ml (r² > 0.9979), respectively, with lower limits of quantification (LLOQ) between 0.5 and 2.0 ng/ml. The accuracy and precision of the assay were satisfactory. The validated method was applied to the study of the bioavailability and tissue distribution of PTS in normal and Lewis lung carcinoma (LLC) bearing mice. The bioavailability of PTS (doses of 14, 28 and 56 mg/kg) in normal mice was 11.9%, 13.9% and 26.4%, respectively, and the maximum level (82.1 ± 14.2 μg/g) was found in the stomach (dose 28 mg/kg). The bioavailability, peak concentration (Cmax), and time to peak concentration (Tmax) of PTS in LLC mice were increased compared with normal mice. The results indicated that the UPLC-MS method is reliable and that the bioavailability and tissue distribution of PTS in normal and LLC mice are dramatically different. Copyright © 2015 Elsevier B.V. All rights reserved.
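    Quantification against a linear calibration curve, as used here, amounts to fitting detector response versus concentration by least squares and inverting the fit; the standard responses below are synthetic, not the paper's data:

```python
# Least-squares line through calibration standards (synthetic values),
# then inversion of the fit to quantify an unknown from its response.
concs = [1.0, 10.0, 100.0, 1000.0, 5000.0]   # standard concentrations, ng/ml
resps = [2.1, 20.4, 198.0, 2010.0, 9950.0]   # detector responses (made up)

n = len(concs)
mx, my = sum(concs) / n, sum(resps) / n
slope = sum((x - mx) * (y - my) for x, y in zip(concs, resps)) \
    / sum((x - mx) ** 2 for x in concs)
intercept = my - slope * mx

def quantify(response):
    # concentration estimate for a measured response within the curve range
    return (response - intercept) / slope

unknown = quantify(500.0)
```

    In a validated assay, results are only reported between the LLOQ and the top calibration standard; values outside that range require dilution or a lower-range curve.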

  5. Methods to evaluate normal rainfall for short-term wetland hydrology assessment

    Science.gov (United States)

    Jaclyn Sumner; Michael J. Vepraskas; Randall K. Kolka

    2009-01-01

    Identifying sites meeting wetland hydrology requirements is simple when long-term (>10 years) records are available. Because such data are rare, we hypothesized that a single-year of hydrology data could be used to reach the same conclusion as with long-term data, if the data were obtained during a period of normal or below normal rainfall. Long-term (40-45 years)...

  6. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimated Delta(vap)H(T(b)) is 1.16, which shows that the present method offers a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point compared with conventional group methods.
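The additive core of any group-contribution estimate can be sketched as below. The group parameters are invented for illustration; the paper's method additionally encodes each group's position in the molecule (the "vector space"), which a flat sum like this ignores.

```python
# Hypothetical group parameters (kJ/mol), for illustration only.
GROUP_DHVAP = {"CH3": 4.6, "CH2": 4.9, "OH": 24.0}

def dhvap_tb(groups):
    """Estimate the enthalpy of vaporization at the normal boiling point
    as a plain sum of group contributions (group -> count)."""
    return sum(GROUP_DHVAP[g] * n for g, n in groups.items())

# e.g. 1-propanol decomposed as CH3 + 2*CH2 + OH
est = dhvap_tb({"CH3": 1, "CH2": 2, "OH": 1})
```
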

  7. A Comparative Study of Analog Voltage-mode Control Methods for Ultra-Fast Tracking Power Supplies

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2007-01-01

    This paper presents a theoretical and experimental comparison of the standard PWM/PID voltage-mode control method for single-phase buck converters with two highperformance self-oscillating (a.k.a. sliding mode) control methods. The application considered is ultra-fast tracking power supplies...... (UFTPSs) for RF power amplifiers, where the switching converter needs to track a varying reference voltage precisely and quickly while maintaining low output impedance. The small-signal analyses performed on the different controllers show that the hysteretic-type controller can achieve the highest loop...

  8. Effects of gain-scheduling methods in a classical wind turbine controller on wind turbine aeroservoelastic modes and loads

    DEFF Research Database (Denmark)

    Tibaldi, Carlo; Henriksen, Lars Christian; Hansen, Morten Hartvig

    2014-01-01

    The effects of different gain-scheduling methods for a classical wind turbine controller, operating in full load region, on the wind turbine aeroservoelastic modes and loads are investigated in this work. The different techniques are derived looking at the physical problem to take into account the chan......The effects of different gain-scheduling methods for a classical wind turbine controller, operating in full load region, on the wind turbine aeroservoelastic modes and loads are investigated in this work. The different techniques are derived looking at the physical problem to take into account...

  9. Component mode synthesis methods for 3-D heterogeneous core calculations applied to the mixed-dual finite element solver MINOS

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A.M.; Lautard, J.J.; Van Criekingen, S.

    2007-01-01

    This paper describes a new technique for determining the pin power in heterogeneous three-dimensional calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis (CMS) technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions. In the first one (the CMS method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (factorized CMS method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher-order eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well suited for heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher-order angular approximations (particularly easily to an SPN approximation), the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with uranium dioxide and mixed oxide assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)

  10. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    Directory of Open Access Journals (Sweden)

    Guoliang Zhao

    2013-01-01

    Full Text Available This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while the upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers lies in having a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. The gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.
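A generic adaptive sliding-mode sketch (not the paper's tensor-product-model design) shows the core mechanism: the switching gain grows from zero until it dominates the unknown disturbance, without assuming a known bound on it.

```python
import math

# Plant x' = u + d(t) with unknown bounded disturbance, sliding surface s = x,
# control u = -k(t)*sign(s), adaptation law k' = gamma*|s|.
dt, steps = 1e-4, 60000
x, k, gamma = 1.0, 0.0, 5.0
for i in range(steps):
    d = 0.4 * math.sin(2.0 * i * dt)   # unknown disturbance, |d| <= 0.4
    s = x                              # sliding surface
    sgn = (s > 0) - (s < 0)            # sign(s)
    x += (-k * sgn + d) * dt           # apply control and disturbance
    k += gamma * abs(s) * dt           # adaptive gain dynamics
# The gain stops growing (apart from chattering) once sliding is established
# and x is held near zero despite the disturbance.
```

All constants (gamma, the disturbance, the horizon) are illustrative choices, not values from the paper.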

  11. Mode-field half-widths of Gaussian approximation for the fundamental mode of two kinds of optical waveguides

    International Nuclear Information System (INIS)

    Lian-Huang, Li; Fu-Yuan, Guo

    2009-01-01

    This paper analyzes the characteristics of the matching efficiency between the fundamental mode of two kinds of optical waveguides and its Gaussian approximate field. It then presents a new method in which the mode-field half-width of the Gaussian approximation to the fundamental mode is defined according to the maximal matching efficiency. The relationship between the mode-field half-width of the Gaussian approximate field obtained from the maximal matching efficiency and the normalized frequency is studied; furthermore, two formulas for the mode-field half-width as a function of normalized frequency are proposed
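The maximal-matching-efficiency definition can be sketched numerically: scan the Gaussian half-width and keep the value that maximizes the power overlap with the given radial fundamental-mode field. The test field below is itself Gaussian (half-width 1.5, arbitrary units), so the scan must recover that width with efficiency ≈ 1; a real waveguide mode would simply replace `f`.

```python
import numpy as np

r = np.linspace(0.0, 20.0, 20001)   # radial grid (arbitrary units)
dr = r[1] - r[0]

def matching_efficiency(f, w):
    """Power coupling between a radial field f(r) and the Gaussian exp(-(r/w)^2),
    using the standard normalized overlap integral in cylindrical coordinates."""
    g = np.exp(-(r / w) ** 2)
    num = (np.sum(f * g * r) * dr) ** 2
    den = (np.sum(f * f * r) * dr) * (np.sum(g * g * r) * dr)
    return num / den

f = np.exp(-(r / 1.5) ** 2)             # test field: Gaussian of half-width 1.5
widths = np.linspace(0.5, 4.0, 351)     # scan half-widths on a 0.01 grid
effs = np.array([matching_efficiency(f, w) for w in widths])
w_best = widths[int(np.argmax(effs))]   # maximal-matching-efficiency half-width
```
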

  12. Concepts for measuring maintenance performance and methods for analysing competing failure modes

    DEFF Research Database (Denmark)

    Cooke, R.; Paulsen, J.L.

    1997-01-01

    competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from...

  13. A boundary integral method for a dynamic, transient mode I crack problem with viscoelastic cohesive zone

    KAUST Repository

    Leise, Tanya L.; Walton, Jay R.; Gorb, Yuliya

    2009-01-01

    interpenetration, in contrast to the usual mode I boundary conditions that assume all unloaded crack faces are stress-free. The nonlinear viscoelastic cohesive zone behavior is motivated by dynamic fracture in brittle polymers in which crack propagation

  14. Comprehensive method of common-mode failure analysis for LMFBR safety systems

    International Nuclear Information System (INIS)

    Unione, A.J.; Ritzman, R.L.; Erdmann, R.C.

    1976-01-01

    A technique is demonstrated which allows the systematic treatment of common-mode failures of safety system performance. The technique uses logic analysis in the form of fault and success trees to qualitatively assess the sources of common-mode failure and quantitatively estimate their contribution to the overall risk of system failure. The analysis is applied to the secondary control rod system of an early sized LMFBR

  15. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
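For the normally distributed case, the classical reference change value is RCV = 2^(1/2) · Z · (CV_A² + CV_I²)^(1/2), with CV_A the analytical and CV_I the within-subject biological variation; the ln-normal variants examined above use a different formula. A minimal sketch with illustrative CVs:

```python
import math

def reference_change_value(cv_analytical, cv_within, z=1.96):
    """Classical RCV (%) for normally distributed data: the least difference
    between two consecutive results that is significant at two-sided p < 0.05."""
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical ** 2 + cv_within ** 2)

rcv = reference_change_value(3.0, 5.0)   # illustrative CVs in percent
```

With CV_A = 3% and CV_I = 5%, two consecutive results must differ by roughly 16% before the change is considered significant.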

  16. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: Considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.) [de
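A minimal sketch of the two ingredients combined in the report: the normalized total dose (the LQ-isoeffective dose delivered in 2-Gy fractions) and the Lyman sigmoid for whole-organ irradiation. The Kutcher histogram-reduction step for inhomogeneous dose distributions is omitted, and all parameter values below are illustrative.

```python
import math

def ntd(total_dose, dose_per_fraction, alpha_beta):
    """Normalized total dose: LQ-isoeffective dose in 2-Gy fractions."""
    return total_dose * (alpha_beta + dose_per_fraction) / (alpha_beta + 2.0)

def lyman_ntcp(dose, td50, m):
    """Lyman sigmoid for whole-organ irradiation: NTCP = Phi((D - TD50)/(m*TD50))."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

For example, 60 Gy in 3-Gy fractions with α/β = 3 Gy gives NTD = 72 Gy, illustrating why larger fraction sizes shift the complication probability curves in the report.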

  17. Investigating the Effect of Normalization Norms in Flexible Manufacturing System Selection Using Multi-Criteria Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2014-07-01

    Full Text Available The main objective of this paper is to assess the effect of different normalization norms within multi-criteria decision-making (MCDM) models. Three well-accepted MCDM tools, namely, the preference ranking organization method for enrichment evaluation (PROMETHEE), grey relation analysis (GRA) and the technique for order preference by similarity to ideal solution (TOPSIS), are applied to solve a flexible manufacturing system (FMS) selection problem in a discrete manufacturing environment. Finally, by introducing different normalization norms into the decision algorithms, their effect on the FMS selection problem using these MCDM models is also studied.
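Two of the norms typically compared in such studies can be sketched on a hypothetical decision matrix (rows = alternatives, columns = benefit criteria):

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 2 criteria with very different scales.
X = np.array([[3.0, 200.0],
              [4.0, 100.0],
              [5.0, 400.0]])

# Vector (Euclidean) norm, as used in classical TOPSIS: each column gets unit length.
vector_norm = X / np.sqrt((X ** 2).sum(axis=0))

# Linear (max) norm: each column is divided by its largest entry.
linear_norm = X / X.max(axis=0)
```

Because the two norms rescale the criteria differently, the subsequent distance or preference computations, and hence the final ranking, can change, which is the effect the paper investigates.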

  18. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, Dina

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index (FI)) were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation-40 years. S/T and FI showed similar changes during the whol...

  19. An analysis of normalization methods for Drosophila RNAi genomic screens and development of a robust validation scheme

    Science.gov (United States)

    Wiles, Amy M.; Ravi, Dashnamoorthy; Bhavani, Selvaraj; Bishop, Alexander J.R.

    2010-01-01

    Genome-wide RNAi screening is a powerful, yet relatively immature technology that allows investigation into the role of individual genes in a process of choice. Most RNAi screens identify a large number of genes with a continuous gradient in the assessed phenotype. Screeners must then decide whether to examine just those genes with the most robust phenotype or to examine the full gradient of genes that cause an effect and how to identify the candidate genes to be validated. We have used RNAi in Drosophila cells to examine viability in a 384-well plate format and compare two screens, untreated control and treatment. We compare multiple normalization methods, which take advantage of different features within the data, including quantile normalization, background subtraction, scaling, cellHTS2, and interquartile range measurement. Considering the false-positive potential that arises from RNAi technology, a robust validation method was designed for the purpose of gene selection for future investigations. In a retrospective analysis, we describe the use of validation data to evaluate each normalization method. While no normalization method worked ideally, we found that a combination of two methods, background subtraction followed by quantile normalization and cellHTS2, at different thresholds, captures the most dependable and diverse candidate genes. Thresholds are suggested depending on whether a few candidate genes are desired or a more extensive systems level analysis is sought. In summary, our normalization approaches and experimental design to perform validation experiments are likely to apply to those high-throughput screening systems attempting to identify genes for systems level analysis. PMID:18753689
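Quantile normalization, one of the methods compared above, can be sketched in a few lines: every column (e.g. a plate or replicate) is forced to share the same empirical distribution by replacing each value with the mean of the values holding the same rank. Ties are ignored in this minimal version.

```python
import numpy as np

def quantile_normalize(X):
    """Give every column of X the same empirical distribution:
    each value is replaced by the mean of the equally ranked values."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # per-column ranks
    mean_sorted = np.sort(X, axis=0).mean(axis=1)       # reference distribution
    return mean_sorted[ranks]

X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 6.0]])
Xq = quantile_normalize(X)   # both columns now contain the values {1.5, 3.5, 5.5}
```
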

  20. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K.L.

    2011-01-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  1. Travel path and transport mode identification method using ''less-frequently-detected'' position data

    International Nuclear Information System (INIS)

    Shimizu, T; Yamaguchi, T; Ai, H; Katagiri, Y; Kawase, J

    2014-01-01

    This study seeks a method for travel path and transport mode identification when travellers' positions are detected at low frequency. A survey was conducted in which ten test travellers carrying GPS loggers moved around the Tokyo city centre. Travel path datasets for each traveller, in which position data are selected every five minutes, were processed from the survey data. A coverage index analysis based on buffer analysis using GIS software was conducted. The conditions for, and the possibility of, identifying the path and the transport mode used are discussed

  2. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    Science.gov (United States)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    T-Method is one of the techniques governed by the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using T-Method is possible even with a very limited sample size. The user of T-Method is required to clearly understand the population data trend, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, under which classical methods break down. Robust parameter estimates exist that provide satisfactory results when the data contain outliers as well as when they are free of them; among these are the robust location and scale estimators known as Shamos Bickel (SB) and Hodges Lehman (HL), used here in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of T-Method as well as probe the robustness of T-Method itself. However, the results of the higher-sample-size case study show that T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error difference compared with T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, T-Method performs better, while for a higher sample size with extreme outliers T-Method also shows better prediction compared with the others. For the case studies conducted in this research, normalization by T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB, or the normal mean and standard deviation, into it, since they provide only a minimal change in error percentage. Normalization using T-Method is still considered to carry lower risk towards the effect of outliers.
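The two robust estimators named above have compact definitions: the Hodges-Lehmann location estimate is the median of all pairwise Walsh averages, and the Shamos scale estimate is the median of all pairwise absolute differences (up to a consistency constant, omitted here). A minimal sketch:

```python
import itertools
import statistics

def hodges_lehmann(data):
    """Robust location: median of all pairwise Walsh averages (x_i + x_j)/2, i <= j."""
    walsh = [(a + b) / 2.0 for a, b in itertools.combinations_with_replacement(data, 2)]
    return statistics.median(walsh)

def shamos(data):
    """Robust scale: median of all pairwise absolute differences |x_i - x_j|, i < j.
    (Often multiplied by ~1.048 for consistency at the normal distribution.)"""
    return statistics.median(abs(a - b) for a, b in itertools.combinations(data, 2))
```

For `[1, 2, 3, 100]` the ordinary mean is 26.5 but the Hodges-Lehmann estimate stays near the bulk of the data, illustrating the outlier resistance discussed in the abstract.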

  3. Immobilization method of yeast cells for intermittent contact mode imaging using the atomic force microscope

    International Nuclear Information System (INIS)

    De, Tathagata; Chettoor, Antony M.; Agarwal, Pranav; Salapaka, Murti V.; Nettikadan, Saju

    2010-01-01

    The atomic force microscope (AFM) is widely used for studying the surface morphology and growth of live cells. There are relatively fewer reports on the AFM imaging of yeast cells (Kasas and Ikai, 1995), (Gad and Ikai, 1995). Yeasts have thick and mechanically strong cell walls and are therefore difficult to attach to a solid substrate. In this report, a new immobilization technique for the height mode imaging of living yeast cells in solid media using AFM is presented. The proposed technique allows the cell surface to be almost completely exposed to the environment and studied using AFM. Apart from the new immobilization protocol, for the first time, height mode imaging of live yeast cell surface in intermittent contact mode is presented in this report. Stable and reproducible imaging over a 10-h time span is observed. A significant improvement in operational stability will facilitate the investigation of growth patterns and surface patterns of yeast cells.

  4. On a possible method of experimental investigation of proton decay modes

    International Nuclear Information System (INIS)

    Gulkanyan, H.R.; Pogosov, V.S.; Tamanyan, A.G.

    1982-01-01

    A detector for the experimental investigation of proton decay modes is described. The detector is a multiwire high-pressure gas chamber located in an underground cavity in a rock salt layer, analogous to known underground artificial depositories of fuel gas. It allows identification of decay particles and reaction kinematics with the amount of working gas (several dozen kilotons or more) required for proton decay detection at a half-lifetime tau > 10^33 years and investigation of decay modes at tau ~ 10^33 years. The detector also permits the investigation of other exotic events, such as a search for fractional-charge particles and neutrino oscillations

  5. Equivalent Method of Solving Quantum Efficiency of Reflection-Mode Exponential Doping GaAs Photocathode

    International Nuclear Information System (INIS)

    Jun, Niu; Zhi, Yang; Ben-Kang, Chang

    2009-01-01

    The mathematical expression for the electron diffusion and drift length L DE of an exponential-doping photocathode is deduced. In the quantum efficiency equation of the reflection-mode uniform-doping cathode, substituting L DE for L D yields the equivalent quantum efficiency equation of the reflection-mode exponential-doping cathode. Theoretical simulation and experimental analysis show that results obtained with the equivalent equation agree with those for the exponential-doping cathode. The equivalent equation avoids complicated calculation and thereby simplifies the process of solving the quantum efficiency of an exponential-doping photocathode

  6. Risk assessment of failure modes of gas diffuser liner of V94.2 Siemens gas turbine by FMEA method

    Science.gov (United States)

    Mirzaei Rafsanjani, H.; Rezaei Nasab, A.

    2012-05-01

    Failure of the welding connection between the gas diffuser liner and the exhaust casing is one of the failure modes of V94.2 gas turbines that has occurred in some power plants. This defect is one of the uncertainties customers face when deciding whether to accept the final commissioning of this product. Accordingly, the risk priority of this failure was evaluated by the failure modes and effects analysis (FMEA) method to find out whether the failure is catastrophic for turbine performance and harmful to humans. Using the history of 110 gas turbines of this model operating in several power plants, the severity, occurrence and detection numbers of the failure were determined, and consequently the Risk Priority Number (RPN) of the failure was determined. Finally, a criticality matrix of potential failures was created, illustrating that the failure modes are located in the safe zone.
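The RPN computation at the core of FMEA is simply the product of the three ratings; the example ratings below are hypothetical, not the values determined for the gas diffuser liner.

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA risk priority number; each factor is conventionally rated on a 1-10 scale,
    giving an RPN between 1 and 1000."""
    return severity * occurrence * detection

rpn = risk_priority_number(7, 3, 4)   # hypothetical ratings
```

Teams then rank failure modes by RPN (or place them on a severity/occurrence criticality matrix, as in the paper) and act on those above an agreed threshold.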

  7. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    Science.gov (United States)

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

    The present paper aims to study the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaos theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for infants of 1-2 years was 1.86-1.90 (mean = 1.8863 +/- 0.0085); it kept the invariance of the quantitative value within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated there was no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference between age groups was significant (F = 8.947, P development.
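A standard way to obtain such fractal dimensions from an image is box counting: cover the binary image with boxes of decreasing size s and fit the slope of log N(s) against log(1/s). This generic sketch is not the paper's exact procedure; the test image is a filled region, whose dimension must come out as 2.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a square binary image
    whose side is divisible by every box size in `sizes`."""
    n = img.shape[0]
    counts = []
    for s in sizes:
        # Group pixels into s-by-s blocks and count blocks with any foreground pixel.
        blocks = img[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

square = np.ones((64, 64), dtype=bool)   # a filled plane region
d = box_counting_dimension(square)
```

For brain CT slices, `img` would be a thresholded (binary) version of the image, and the fitted slope falls between 1 and 2, in line with the values reported above.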

  8. Research on a Novel Exciting Method for a Sandwich Transducer Operating in Longitudinal-Bending Hybrid Modes

    Directory of Open Access Journals (Sweden)

    Yingxiang Liu

    2017-06-01

    Full Text Available A novel exciting method for a sandwich type piezoelectric transducer operating in longitudinal-bending hybrid vibration modes is proposed and discussed, in which the piezoelectric elements for the excitations of the longitudinal and bending vibrations share the same axial location, but correspond to different partitions. Whole-piece type piezoelectric plates with three separated partitions are used, in which the center partitions generate the first longitudinal vibration, while the upper and lower partitions produce the second bending vibration. Detailed comparisons between the proposed exciting method and the traditional one were accomplished by finite element method (FEM calculations, which were further verified by experiments. Compared with the traditional exciting method using independent longitudinal ceramics and bending ceramics, the proposed method achieves higher electromechanical coupling factors and larger vibration amplitudes, especially for the bending vibration mode. This novel exciting method for longitudinal-bending hybrid vibrations has not changed the structural dimensions of the sandwich transducer, but markedly improves the mechanical output ability, which makes it very helpful and meaningful in designing new piezoelectric actuators operated in longitudinal-bending hybrid vibration modes.

  9. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  10. Development and application of the analytical energy gradient for the normalized elimination of the small component method

    NARCIS (Netherlands)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2011-01-01

    The analytical energy gradient of the normalized elimination of the small component (NESC) method is derived for the first time and implemented for the routine calculation of NESC geometries and other first order molecular properties. Essential for the derivation is the correct calculation of the

  11. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Qu, H; Yu, N; Stephans, K; Xia, P [Cleveland Clinic, Cleveland, OH (United States)

    2014-06-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations from two phases of quiet breathing and two phases of extreme breathing. For the quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction on a coronal image with the maximum lung cross-section. The ratio of cumulated ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by different breathing patterns, and thus different tidal volumes, can be removed.

  12. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    International Nuclear Information System (INIS)

    Qu, H; Yu, N; Stephans, K; Xia, P

    2014-01-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations from two phases of quiet breathing and two phases of extreme breathing. For the quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction on a coronal image with the maximum lung cross-section. The ratio of cumulated ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by different breathing patterns, and thus different tidal volumes, can be removed

  13. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    Full Text Available MOTIVATION: Reverse phase protein array (RPPA is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i background fluorescence, (ii variation in the total amount of spotted protein and (iii spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  14. An assumed mode method and finite element method investigation of the coupled vibration in a flexible-disk rotor system with lacing wires

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shui-Ting; Huang, Hong-Wu [Hunan University, Changsha (China); Chiu, Yi-Jui; Yu, Guo-Fei [Xiamen University of Technology, Xiamen (China); Yang, Chia-Hao [Taipei Chengshih University of Science and Technology, Taipei (China); Jian, Sheng-Rui [I-Shou University, Kaohsiung (China)

    2017-02-15

    The assumed mode method (AMM) and the finite element method (FEM) were used, and their results compared, to investigate the coupled shaft-torsion, disk-transverse, and blade-bending vibrations in a flexible-disk rotor system. The blades were grouped with a spring. The flexible-disk rotor system was divided into three modes of coupled vibrations: shaft-disk-blade, disk-blade, and blade-blade. Two new modes of coupled vibrations were introduced, namely, lacing wires-blade and lacing wires-disk-blade. The patterns of change of the natural frequencies and mode shapes of the system were discussed. The results showed the following: first, mode shapes and natural frequencies varied, and the results of the AMM and FEM differed; second, numerical calculation results showed three factors influencing the natural frequencies, namely, the lacing wire constant, the lacing wire location, and the flexible disk; lastly, the flexible disk could affect the stability of the system, as reflected in the effect of the rotational speed.
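    The assumed mode method expands the displacement field in a truncated set of admissible shape functions and reduces the continuous structure to mass and stiffness matrices. The sketch below applies it to a simply supported beam, a far simpler structure than the rotor above, purely to show the mechanics; all parameter values are made up, and for this beam the sine shapes happen to be the exact eigenfunctions, so the computed frequencies match the analytic ones:

    ```python
    import numpy as np

    # Assumed-mode (Rayleigh-Ritz) sketch for a simply supported Euler-Bernoulli
    # beam.  Illustrative parameters: steel-like properties, 1 m span.
    E, I, rho, A, L = 210e9, 1e-8, 7850.0, 1e-4, 1.0
    n_modes = 3
    x = np.linspace(0.0, L, 4001)
    dx = x[1] - x[0]

    # Assumed shape functions and their second derivatives.
    phi = np.array([np.sin((i + 1) * np.pi * x / L) for i in range(n_modes)])
    d2phi = np.array([-(((i + 1) * np.pi / L) ** 2) * phi[i] for i in range(n_modes)])

    # Mass and stiffness matrices from the kinetic and strain energy integrals
    # (the shapes vanish at both ends, so a plain Riemann sum is accurate here).
    M = rho * A * (phi @ phi.T) * dx
    K = E * I * (d2phi @ d2phi.T) * dx

    # Natural frequencies from the generalized eigenproblem K q = w^2 M q.
    omega = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real))
    exact = np.array([((i + 1) * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
                      for i in range(n_modes)])
    ```

    For the rotor problem of the paper, the same recipe applies with coupled shaft, disk, blade and lacing-wire coordinates; only the shape functions and energy expressions grow more elaborate.
    
    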

  15. Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method

    Science.gov (United States)

    Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na

    2013-08-01

    After FY-2F was successfully launched on January 13, 2012, the number of in-orbit operating FengYun-2 geostationary meteorological satellites reached three. For accurate and efficient application of multi-satellite observation data, the study of multi-satellite normalization of the visible detectors became urgent. The method was required not to rely on in-orbit calibration, so that it could validate the calibration results before and after launch, calculate the daily updated surface bidirectional reflectance distribution function (BRDF), and at the same time track the long-term decay of the detectors' linearity and responsivity. Based on a study of typical BRDF models, a normalization method was designed, the Median Vertical Plane (MVP) method, which effectively removes the interference of surface directional reflectance characteristics without relying on in-orbit calibration of the visible detectors. The MVP method is based on the symmetry about the principal plane of the directional reflective properties of general surface targets. Two geostationary satellites are taken as the endpoints of a segment; targets on the intersection line of the segment's median vertical plane with the Earth's surface can be used as normalization reference targets (NRT). Observation of an NRT by the two satellites at the moment the sun passes through the MVP yields the same observation zenith and solar zenith angles and opposite relative direction angles. At that time, the linear regression coefficients of the satellite output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and implemented. 
The results showed that the differences in responsivity between satellites could reach 10.1% (FY-2E to FY-2F); the differences in output reflectance calculated with the broadcast calibration look-up table could reach 21.1% (FY-2D to FY-2F); the differences of the output
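    The final step of the MVP method is an ordinary linear regression between the two satellites' outputs over the shared reference target. A sketch of that step with synthetic digital counts (the gain and offset values are illustrative, not the paper's results):

    ```python
    import numpy as np

    # Matched-geometry observations of the same normalization reference target
    # by two satellites; counts are synthetic.
    counts_ref = np.array([112.0, 205.0, 340.0, 498.0, 655.0, 810.0])  # reference sensor
    gain_true, offset_true = 1.101, -4.0        # a ~10%-style responsivity gap
    counts_new = gain_true * counts_ref + offset_true   # second sensor

    # The linear regression coefficients are the normalization coefficients.
    gain, offset = np.polyfit(counts_ref, counts_new, 1)
    normalized = (counts_new - offset) / gain   # map back to the reference scale
    ```

    In practice the fit would run over many NRT observations with noise; here the noise-free counts make the recovery exact.
    
    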

  16. The Analysis and Calculation Method of Urban Rail Transit Carrying Capacity Based on Express-Slow Mode

    Directory of Open Access Journals (Sweden)

    Xiaobing Ding

    2016-01-01

    Full Text Available Urban railway transport that connects suburbs and city areas is characterized by uneven temporal and spatial distribution of passenger flow and underutilized carrying capacity. This paper aims to develop methodologies to measure the carrying capacity of urban railways by introducing the concept of the express-slow mode. We first explore the factors influencing the carrying capacity under the express-slow mode and the interactive relationships among these factors. Then we establish seven different scenarios to measure the carrying capacity by considering the ratio of the number of express trains to slow trains, the station where overtaking takes place, and the number of overtaking maneuvers. Taking Shanghai Metro Line 16 as an empirical study, the proposed methods to measure the carrying capacity under different express-slow modes are shown to be valid. This paper contributes to the literature by modifying the traditional methods of measuring carrying capacity when different express-slow modes are applied to improve the carrying capacity of a suburban railway.

  17. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
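    For a 2-D Gaussian position estimate, thresholding the probability density yields an elliptical obstacle region, and the minimum distance to its boundary can then be found numerically. The sketch below samples the boundary by angle instead of solving the paper's Lagrange-multiplier conditions; all names and numbers are illustrative:

    ```python
    import numpy as np

    def obstacle_region_radius(sigma, p_th):
        """Mahalanobis radius of the region where the 2-D Gaussian pdf >= p_th."""
        r2 = -2.0 * np.log(p_th * 2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
        return np.sqrt(max(r2, 0.0))

    def distance_to_obstacle(robot, mu, sigma, p_th, n=3600):
        """Minimum Euclidean distance from `robot` to the iso-density ellipse.

        A negative return value means the robot is inside the obstacle region.
        The boundary is sampled by angle; this numerical search stands in for
        the analytic Lagrange-multiplier solution."""
        robot = np.asarray(robot, dtype=float)
        r = obstacle_region_radius(sigma, p_th)
        L = np.linalg.cholesky(sigma)                    # Sigma = L @ L.T
        t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        boundary = mu[:, None] + r * (L @ np.vstack([np.cos(t), np.sin(t)]))
        d = np.linalg.norm(boundary - robot[:, None], axis=0).min()
        inside = (robot - mu) @ np.linalg.inv(sigma) @ (robot - mu) <= r ** 2
        return -d if inside else d

    mu = np.array([3.0, 0.0])          # obstacle mean position, m
    sigma = np.diag([0.25, 0.25])      # position covariance (std 0.5 m)
    d = distance_to_obstacle([0.0, 0.0], mu, sigma, p_th=0.01)
    ```

    With an isotropic covariance, the iso-density boundary is a circle, so the sampled minimum agrees with the radial answer; anisotropic covariances are where the numerical (or Lagrange-multiplier) search earns its keep.
    
    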

  18. New determinations of Q quality factors and eigenfrequencies for the whole set of singlets of the Earth's normal modes 0S 0, 0S 2, 0S 3 and 2S 1 using superconducting gravimeter data from the GGP network

    Science.gov (United States)

    Roult, Geneviève; Rosat, Séverine; Clévédé, Eric; Millot-Langet, Raphaële; Hinderer, Jacques

    2006-01-01

    We present new modal Q measurements of the 0S 0 and 0S 2 modes, and first modal frequency and Q measurements of the 2S 1 and 0S 3 modes. The high quality of the GGP (Global Geodynamics Project) superconducting gravimeters (SGs) contributes to the clear observation of seismic normal modes at frequencies lower than 1 mHz and offers a good opportunity for studying the behaviour of these modes. Scientists' interest in the gravest normal modes stems from their contribution to a better knowledge of the density profile of the Earth, which helps constrain Earth models. These modes have been clearly identified after some large recent events recorded with superconducting gravimeters, which are particularly well-suited for low-frequency investigations. The Ms = 8.1 (Mw = 8.4) Peruvian earthquake of June 2001 and the Ms = 9.0 (Mw = 9.3) Sumatra-Andaman earthquake of December 2004 provide us with individual spectra which exhibit a clear splitting of the spheroidal modes 0S 2, 0S 3 and 2S 1, and a clear identification of each of the individual singlets, with a resolution never obtained from broad-band seismometer records. The Q quality factors have been determined from the apparent decrease of the amplitude of each singlet with time, according to a well-suited technique [Roult, G., Clévédé, E., 2000. New refinements in attenuation measurements from free-oscillation and surface-wave observations. Phys. Earth Planet. Inter. 121, 1-37]. The results are compared to the theoretical frequencies and Q quality factors computed for the PREM and 1066A models, taking into account both rotation and ellipticity effects of the Earth. The two observed datasets (frequencies and Q quality factors) exhibit a splitting different from the predicted one. This seems to indicate that some parameters, such as the density or attenuation values used in the theoretical models, do not explain the observations. A new dataset of frequencies and Q quality factors of the whole set of
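    The attenuation measurement rests on the free-decay law A(t) = A0 exp(-π f0 t / Q), so Q follows from a linear fit of the log-amplitude of a singlet against time. A noise-free sketch with made-up numbers (not the paper's measurements):

    ```python
    import numpy as np

    # A singlet's amplitude decays as A(t) = A0 * exp(-pi * f0 * t / Q).
    # Given amplitude measurements over time, Q follows from a linear fit
    # of log A(t).  Values below are synthetic, for illustration only.
    f0 = 0.3094e-3                    # a 0S2-like frequency, Hz (approximate)
    Q_true = 510.0
    t = np.arange(40) * 6 * 3600.0    # one amplitude sample every 6 h, 10 days
    A = 1.0 * np.exp(-np.pi * f0 * t / Q_true)

    slope, intercept = np.polyfit(t, np.log(A), 1)   # log A is linear in t
    Q_est = -np.pi * f0 / slope
    ```

    With real spectra the amplitudes carry noise, so the cited technique refines this fit; the noise-free toy recovers Q exactly.
    
    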

  19. Raman dispersion spectroscopy on the highly saddled nickel(II)-octaethyltetraphenylporphyrin reveals the symmetry of nonplanar distortions and the vibronic coupling strength of normal modes

    International Nuclear Information System (INIS)

    Schweitzer-Stenner, R.; Stichternath, A.; Dreybrodt, W.; Jentzen, W.; Song, X.; Shelnutt, J.A.; Nielsen, O.F.; Medforth, C.J.; Smith, K.M.

    1997-01-01

    We have measured the polarized Raman cross sections and depolarization ratios of 16 fundamental modes of nickel octaethyltetraphenylporphyrin in a CS2 solution, i.e., the A1g-type vibrations ν1, ν2, ν3, ν4, ν5, and φ8, the B1g vibrations ν11 and ν14, the B2g vibrations ν28, ν29, and ν30, and the antisymmetric A2g modes ν19, ν20, ν22, and ν23, as a function of the excitation wavelength. The data cover the entire resonant regions of the Q- and B-bands. They were analyzed by use of a theory which describes intra- and intermolecular coupling in terms of a time-independent nonadiabatic perturbation theory [E. Unger, U. Bobinger, W. Dreybrodt, and R. Schweitzer-Stenner, J. Phys. Chem. 97, 9956 (1993)]. This approach explicitly accounts in a self-consistent way for multimode mixing with all Raman modes investigated. The vibronic coupling parameters obtained from this procedure were then used to successfully fit the vibronic side bands of the absorption spectrum and to calculate the resonance excitation profiles in absolute units. Our results show that the porphyrin macrocycle is subject to B2u (saddling) and B1u (ruffling) distortions which lower its symmetry to S4. Thus, evidence is provided that the porphyrin molecule maintains the nonplanar structure of its crystal phase in an organic solvent. The vibronic coupling parameters indicate a breakdown of the four-orbital model. This notion is corroborated by (ZINDO/S) calculations which reveal that significant configurational interaction occurs between the electronic transitions into |Q⟩- and |1B⟩-states and various porphyrin→porphyrin, metal→porphyrin, and porphyrin→metal transitions. (Abstract Truncated)

  20. Novel Approach to Design Ultra Wideband Microwave Amplifiers: Normalized Gain Function Method

    Directory of Open Access Journals (Sweden)

    R. Kopru

    2013-09-01

    Full Text Available In this work, we propose a novel approach, called the “Normalized Gain Function (NGF)” method, to design low/medium power single stage ultra-wideband microwave amplifiers based on linear S parameters of the active device. The normalized gain function TNGF is defined as the ratio T/|S21|^2, where T is the desired shape or frequency response of the gain function of the amplifier to be designed and |S21|^2 is the shape of the transistor forward gain function. Synthesis of the input/output matching networks (IMN/OMN) of the amplifier requires mathematically generated target gain functions to be tracked in two different nonlinear optimization processes. In this manner, the NGF not only provides a mathematical basis for dividing the amplifier gain function into two such distinct target gain functions, but also allows their precise computation in terms of TNGF=T/|S21|^2 at the very beginning of the design. The particular amplifier presented as the design example operates over 800-5200 MHz to target GSM, UMTS, Wi-Fi and WiMAX applications. An SRFT (Simplified Real Frequency Technique) based design example supported by simulations in MWO (MicroWave Office, from AWR Corporation) is given using a 1400 mW pHEMT transistor, TGF2021-01 from TriQuint Semiconductor.
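    The quantity TNGF = T/|S21|^2 is straightforward to evaluate once a target response and the device's forward gain are known. A sketch with invented sample values (the band matches the paper's 800-5200 MHz example, but the gain numbers are illustrative):

    ```python
    import numpy as np

    # Normalized gain function TNGF = T / |S21|^2 over the design band.
    f = np.linspace(0.8e9, 5.2e9, 9)          # design band, Hz
    T = np.full_like(f, 10 ** (12.0 / 10.0))  # target: flat 12 dB transducer gain
    s21_db = np.linspace(18.0, 8.0, f.size)   # made-up forward gain roll-off, dB
    s21_mag2 = 10 ** (s21_db / 10.0)          # |S21|^2 on a linear scale

    TNGF = T / s21_mag2
    # The matching networks must contribute more where the device gives less,
    # so TNGF rises toward the high end of the band for a roll-off like this.
    ```

    The two target gain functions for the IMN and OMN optimizations would then be derived from this curve; that split is the part the paper's method formalizes.
    
    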

  1. Application of in situ current normalized PIGE method for determination of total boron and its isotopic composition

    International Nuclear Information System (INIS)

    Chhillar, Sumit; Acharya, R.; Sodaye, S.; Pujari, P.K.

    2014-01-01

    A particle induced gamma-ray emission (PIGE) method using a proton beam has been standardized for the determination of the isotopic composition of natural boron and enriched boron samples. Target pellets of boron standards and samples were prepared in a cellulose matrix. The prompt gamma rays of 429 keV, 718 keV and 2125 keV were measured from the 10B(p,αγ)7Be, 10B(p,p'γ)10B and 11B(p,p'γ)11B nuclear reactions, respectively. To normalize the beam current variations, an in situ current normalization method was used. Validation of the method was carried out using synthetic samples of boron carbide, borax, borazine and lithium metaborate in a cellulose matrix. (author)

  2. Staining Methods for Normal and Regenerative Myelin in the Nervous System.

    Science.gov (United States)

    Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano

    2017-01-01

    Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded materials. The first method is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second method is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of the osmication of previously fixed tissues. Osmication is performed prior to the embedding of tissues in paraffin, giving a permanent positive reaction for myelin as well as other lipids present in the tissue.

  3. Investigation of diocotron modes in toroidally trapped electron plasmas using non-destructive method

    Science.gov (United States)

    Lachhvani, Lavkesh; Pahari, Sambaran; Sengupta, Sudip; Yeole, Yogesh G.; Bajpai, Manu; Chattopadhyay, P. K.

    2017-10-01

    Experiments with trapped electron plasmas in a SMall Aspect Ratio Toroidal device (SMARTEX-C) have demonstrated a flute-like mode represented by oscillations on capacitive (wall) probes. Although analogous to the diocotron mode observed in linear electron traps, the mode evolution in toroids can have interesting consequences due to the presence of an inhomogeneous magnetic field. In SMARTEX-C, the probe signals are observed to undergo a transition from small, near-sinusoidal oscillations to large amplitude, nonlinear "double-peaked" oscillations. To interpret the wall probe signal and bring forth the dynamics, an expression for the current induced on the probe by an oscillating charge is derived, utilizing Green's reciprocation theorem. The equilibrium position, poloidal velocity, and charge content of the charge cloud, required to compute the induced current, are estimated from the experiments. The signal through the capacitive probes is thereby computed numerically for possible charge cloud trajectories. In order to correlate with experiments, starting with an intuitive guess of the trajectory, the model is evolved and tweaked to arrive at a signal consistent with experimentally observed probe signals. A possible vortex-like dynamics, hitherto unexplored in toroidal geometries, is predicted for a limited set of experimental observations from SMARTEX-C. Though heuristic, a useful interpretation of capacitive probe data in terms of charge cloud dynamics is obtained.

  4. Study on variance-to-mean method as subcriticality monitor for accelerator driven system operated with pulse-mode

    International Nuclear Information System (INIS)

    Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu

    2003-01-01

    Two types of variance-to-mean methods for a subcritical system driven by a periodic pulsed neutron source were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use in a subcriticality monitor for a future accelerator driven system operated in pulse mode. (author)
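    The conventional variance-to-mean (Feynman-alpha) relation links the gate-width dependence of count statistics to the prompt neutron decay constant alpha: Y(T) = Var/Mean - 1 = Y_inf (1 - (1 - e^{-aT})/(aT)). A sketch of recovering alpha from noise-free synthetic Y(T) values by grid search (all numbers are illustrative, not KUCA measurements, and the pulsed-source corrections of the paper are omitted):

    ```python
    import numpy as np

    def feynman_y(T, a, y_inf):
        """Feynman-Y curve for gate width T, decay constant a, saturation y_inf."""
        return y_inf * (1.0 - (1.0 - np.exp(-a * T)) / (a * T))

    T = np.logspace(-4, -1, 30)        # gate widths, s
    a_true, y_inf_true = 300.0, 2.5    # prompt decay constant (1/s), saturation
    Y = feynman_y(T, a_true, y_inf_true)

    # Recover the decay constant by least squares over a coarse grid.
    alphas = np.linspace(50.0, 600.0, 1101)
    best = min(alphas,
               key=lambda a: np.sum((feynman_y(T, a, y_inf_true) - Y) ** 2))
    ```

    With measured counts one would first compute Var/Mean - 1 per gate width from the data before fitting; here the curve is generated directly from the model.
    
    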

  5. The normalization of surface anisotropy effects present in SEVIRI reflectances by using the MODIS BRDF method

    DEFF Research Database (Denmark)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. The resulting daily nadir BRDF-adjusted reflectance (NBAR) data utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared with the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and are of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations.

  6. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  7. Measurement of plasma histamine: description of an improved method and normal values

    International Nuclear Information System (INIS)

    Dyer, J.; Warren, K.; Merlin, S.; Metcalfe, D.D.; Kaliner, M.

    1982-01-01

    The single isotopic-enzymatic assay of histamine was modified to increase its sensitivity and to facilitate measurement of plasma histamine levels. The modification involved extracting 3H-1-methylhistamine (generated by the enzyme N-methyltransferase acting on histamine in the presence of S-[methyl-3H]-adenosyl-L-methionine) into chloroform and isolating the 3H-1-methylhistamine by thin-layer chromatography (TLC). The TLC was developed in acetone:ammonium hydroxide (95:10), and the methylhistamine spot (Rf = 0.50) was identified with an o-phthalaldehyde spray, scraped from the plate, and assayed in a scintillation counter. The assay in plasma demonstrated a linear relationship from 200 to 5000 pg histamine/ml. Plasma always had higher readings than buffer, and dialysis of plasma returned these values to the same level as buffer, suggesting that the baseline elevations might be attributable to histamine. However, all histamine standard curves were run in dialyzed plasma to negate any additional influences plasma might exert on the assay. The arithmetic mean (+/- SEM) of normal plasma histamine was 318.4 +/- 25 pg/ml (n = 51), and the geometric mean was 280 +/- 35 pg/ml. Plasma histamine was significantly elevated by infusion of histamine at 0.05 to 1.0 micrograms/kg/min or by cold immersion of the hand of a cold-urticaria patient. Therefore this modified isotopic-enzymatic assay of histamine is extremely sensitive, capable of measuring fluctuations in plasma histamine levels within the normal range, and potentially useful in analysis of the role histamine plays in human physiology

  8. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    Science.gov (United States)

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism in short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components obtained through singular value decomposition. Robustness is shown by calculating the proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal; the method is able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is sufficient evidence of determinism. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
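    The SVD step can be sketched by delay-embedding a signal into a trajectory matrix and inspecting its normalized singular spectrum: a deterministic signal concentrates variance in a few components, while white noise spreads it nearly uniformly. Toy signals only, not EEG, and not the paper's specific index:

    ```python
    import numpy as np

    def singular_spectrum(x, window=20):
        """Normalized singular values of the delay-embedded trajectory matrix."""
        traj = np.lib.stride_tricks.sliding_window_view(x, window)
        s = np.linalg.svd(traj, compute_uv=False)
        return s / s.sum()

    rng = np.random.default_rng(1)
    t = np.arange(2000) * 0.01
    deterministic = np.sin(t) + 0.5 * np.sin(3.1 * t)   # rank-4 trajectory matrix
    noise = rng.standard_normal(t.size)                 # full-rank, flat spectrum

    s_det = singular_spectrum(deterministic)
    s_noise = singular_spectrum(noise)
    # A few components carry nearly all of the deterministic signal's variance.
    ```

    Thresholding or otherwise scoring this concentration is one natural way to turn the spectrum into a determinism index.
    
    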

  9. Evaluating new methods for direct measurement of the moderator temperature coefficient in nuclear power plants during normal operation

    International Nuclear Information System (INIS)

    Makai, M.; Kalya, Z.; Nemes, I.; Pos, I.; Por, G.

    2007-01-01

    The moderator temperature coefficient of reactivity is not monitored during fuel cycles in WWER reactors, because it is difficult or impossible to measure without disturbing normal operation. Two new methods were tested in our WWER-type nuclear power plant to try methodologies that enable measurement of this safety-relevant parameter during the fuel cycle. One is based on small perturbations and requires only small changes in operation; the other is based on noise methods and therefore does not interfere with reactor operation. Both methods are new in that they use plant computer (VERONA) data and signals calculated by the C-PORCA diffusion code. (Authors)

  10. Simulation of crack propagation in elastic solids under mixed modes by the dual integral equations method

    OpenAIRE

    Kebir , Hocine; Roelandt , Jean Marc; Gaudin , Jocelyn

    2000-01-01

    International audience; The present paper is concerned with the effective numerical implementation of the two-dimensional dual boundary element method to analyse mixed-mode crack growth. All the boundaries are discretized with discontinuous quadratic boundary elements, and the crack tip is modeled by singular elements that exactly represent the strain field singularity $1/\sqrt{r}$. The stress intensity factors can be computed very accurately from the crack opening displacement at collocat...

  11. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work, an easy solution based directly on linear algebra is proposed to obtain the relation between a spectrum and a spectrum base. This solution is based on the algebraic determination of an unknown spectrum's coordinates with respect to a spectral library base. A comparison of the identification capacity of this algebraic method and the linear correlation method is shown using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method makes it possible to quantitatively detect the existence of a mixture of several substances in a sample and, consequently, to bear impurities in mind for improving the identification
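    The algebraic idea reduces to expressing an unknown spectrum in the coordinates of the library base, e.g. by least squares; a mixture then shows up as nonzero coordinates on several library members. A sketch with made-up spectra (the real method works on experimental LIBS spectra of polymers):

    ```python
    import numpy as np

    # Columns of `library` are the base (reference) spectra; rows are channels.
    library = np.array([
        [1.0, 0.0, 0.1],
        [0.2, 1.0, 0.0],
        [0.0, 0.3, 1.0],
        [0.1, 0.0, 0.4],
    ])

    # An unknown sample that is a 70/30 mixture of the first and third members.
    unknown = 0.7 * library[:, 0] + 0.3 * library[:, 2]

    # Coordinates of the unknown spectrum with respect to the library base.
    coords, *_ = np.linalg.lstsq(library, unknown, rcond=None)
    ```

    A pure substance yields one dominant coordinate; several sizeable coordinates flag a mixture or impurity, which is exactly the extra information plain linear correlation cannot provide.
    
    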

  12. Characteristics and Laser Performance of Yb3+-Doped Silica Large Mode Area Fibers Prepared by Sol–Gel Method

    Directory of Open Access Journals (Sweden)

    Shikai Wang

    2013-12-01

    Full Text Available Large-size 0.1 Yb2O3–1.0 Al2O3–98.9 SiO2 (mol%) core glass was prepared by the sol–gel method, and its optical properties were evaluated. Both a large mode area double cladding fiber (LMA DCF) with a core diameter of 48 µm and a large mode area photonic crystal fiber (LMA PCF) with a core diameter of 90 µm were prepared from this core glass. The transmission loss at 1200 nm is 0.41 dB/m, and the refractive index fluctuation is less than 2 × 10−4. Pumped by a 976 nm laser diode (LD) pigtailed with silica fiber (NA 0.22), a slope efficiency of 54% and a "light-to-light" conversion efficiency of 51% were realized in the large mode area double cladding fiber, and 81 W of laser power with a slope efficiency of 70.8% was achieved in the corresponding large mode area photonic crystal fiber.

  13. A Gauss-Newton method for the integration of spatial normal fields in shape Space

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly
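    The record is truncated, but the core of any Gauss-Newton scheme is the normal-equation update p ← p − (JᵀJ)⁻¹Jᵀr on a residual r with Jacobian J. A flat-space sketch of that iteration on a toy exponential fit (nothing here is the paper's shape-space formulation; the model and all values are made up):

    ```python
    import numpy as np

    # Fit y = exp(a*x) + b to noise-free data by Gauss-Newton.
    def residual(p, x, y):
        a, b = p
        return np.exp(a * x) + b - y

    def jacobian(p, x):
        a, b = p
        # Partial derivatives of the model w.r.t. a and b.
        return np.column_stack([x * np.exp(a * x), np.ones_like(x)])

    x = np.linspace(0.0, 1.0, 50)
    y = np.exp(0.8 * x) + 0.5          # ground truth: a = 0.8, b = 0.5

    p = np.array([0.5, 0.0])           # initial guess
    for _ in range(20):
        r = residual(p, x, y)
        J = jacobian(p, x)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step
    ```

    The paper's contribution lies in posing this same least-squares structure over a space of shapes rather than over a flat parameter vector, precisely to escape the slow convergence of gradient descent.
    
    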

  14. Comparative numerical study of kaolin clay with three drying methods: Convective, convective–microwave and convective infrared modes

    International Nuclear Information System (INIS)

    Hammouda, I.; Mihoubi, D.

    2014-01-01

    Highlights: • Modelling of drying of deformable media. • Theoretical study of kaolin clay with three drying methods: convective, convective–microwave and convective–infrared modes. • The stresses generated during convective, microwave/convective and infrared/convective drying. • Combined drying decreases the intensity of the stresses developed during drying. - Abstract: A mathematical model is developed to simulate the response of a kaolin clay sample subjected to convective, convective–microwave and convective–infrared drying. The model describes heat, mass, and momentum transfer in a viscoelastic medium represented by a Maxwell model with two branches. The combined drying methods were investigated to examine whether they can minimize the cracking that may be generated in the product, and to determine whether the greater enhancement comes from infrared or microwave radiation. The numerical code allowed us to determine, and thus compare, the effect of the drying mode on the evolution of drying rate, temperature, moisture content and mechanical stress during drying. The numerical results show that combined drying decreases the intensity of the stresses developed during drying and that convective–microwave drying is the best method, giving a good quality of dried product

  15. Normal boundary intersection method for suppliers' strategic bidding in electricity markets: An environmental/economic approach

    International Nuclear Information System (INIS)

    Vahidinasab, V.; Jadid, S.

    2010-01-01

    In this paper the problem of developing optimal bidding strategies for the participants of oligopolistic energy markets is studied. Special attention is given to the impact of suppliers' emission of pollutants on their bidding strategies. The proposed methodology employs a supply function equilibrium (SFE) model to represent the strategic behavior of each supplier and a locational marginal pricing mechanism for market clearing. The optimal bidding strategies are developed mathematically using a bilevel optimization problem in which the upper-level subproblem maximizes the individual supplier's payoff and the lower-level subproblem solves the Independent System Operator's market clearing problem. To solve the market clearing mechanism, a multiobjective optimal power flow is used with supplier emission of pollutants as an extra objective, subject to the suppliers' physical constraints. This paper uses the normal boundary intersection (NBI) approach to generate the Pareto optimal set and then fuzzy decision making to select the best compromise solution. The developed algorithm is applied to an IEEE 30-bus test system. Numerical results demonstrate the potential and effectiveness of the proposed multiobjective approach for developing successful bidding strategies in energy markets that minimize generation cost and emission of pollutants simultaneously.
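    NBI generates evenly spread Pareto points by marching from points on the line joining the individual objective minima (the utopia line) along its normal until the boundary of the attainable set is reached. A toy bi-objective sketch, not the paper's market-clearing problem; for this particular pair of quadratics the NBI subproblem happens to have a closed-form solution:

    ```python
    import numpy as np

    # Toy bi-objective problem: f1(x) = (x-1)^2, f2(x) = (x+1)^2, x scalar.
    def F(x):
        return np.array([(x - 1.0) ** 2, (x + 1.0) ** 2])

    anchor1, anchor2 = F(1.0), F(-1.0)   # individual minima map to (0,4), (4,0)
    normal = np.array([-1.0, -1.0]) / np.sqrt(2.0)   # points toward the front

    pareto = []
    for beta in np.linspace(0.0, 1.0, 11):
        p = beta * anchor1 + (1.0 - beta) * anchor2  # point on the utopia line
        # NBI subproblem: maximize t subject to F(x) = p + t * normal.
        # Subtracting the two components of the constraint gives x directly.
        x = 2.0 * beta - 1.0
        t = np.sqrt(2.0) * (p[0] - (x - 1.0) ** 2)
        assert np.allclose(F(x), p + t * normal)     # constraint satisfied
        pareto.append((x, *F(x)))
    ```

    In the paper each subproblem is instead a constrained optimal power flow, and fuzzy decision making then picks one compromise point from the generated front.
    
    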

  16. The Normalization of Surface Anisotropy Effects Present in SEVIRI Reflectances by Using the MODIS BRDF Method

    Science.gov (United States)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal; Fensholt, Rasmus; Rasmussen, Mads Olander; Shisanya, Chris; Mutero, Wycliffe; Mbow, Cheikh; Anyamba, Assaf; Pak, Ed

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. We present early and provisional daily nadir BRDF-adjusted reflectance (NBAR) data in the visible and near-infrared MSG channels. These utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared with the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and are of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations, primarily because of an insufficient spread of angular data due to the fixed sensor location or localized cloud contamination.

  17. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Directory of Open Access Journals (Sweden)

    Statovci Driton

    2006-01-01

    Full Text Available We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.

  18. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Science.gov (United States)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
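
    The iterative water-filling core that the NRIA builds on can be sketched for two users sharing a cable bundle: each user repeatedly water-fills its power budget against the background noise plus the crosstalk from the other user. This is a minimal illustration of the IWFA idea only, not the NRIA itself; the channel gains, crosstalk coefficients, and power budgets below are hypothetical, and the NRIA's bandplan optimization and bitrate targets are omitted.

```python
def waterfill(gains, noise, p_budget, tol=1e-9):
    """Single-user water-filling over subchannels with given effective noise."""
    # inv[k] = noise_k / gain_k : inverse channel quality per subchannel.
    inv = [n / g for g, n in zip(gains, noise)]
    # Bisect on the water level mu so that sum(max(mu - inv, 0)) == p_budget.
    lo, hi = 0.0, max(inv) + p_budget
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if sum(max(mu - v, 0.0) for v in inv) > p_budget:
            hi = mu
        else:
            lo = mu
    return [max(lo - v, 0.0) for v in inv]

def iterative_waterfilling(direct, cross, noise, budgets, iters=50):
    """Two-user IWFA: each user water-fills against the other's interference."""
    K = len(noise)
    p = [[0.0] * K, [0.0] * K]
    for _ in range(iters):
        for u in (0, 1):
            o = 1 - u  # the other user
            eff_noise = [noise[k] + cross[o][k] * p[o][k] for k in range(K)]
            p[u] = waterfill(direct[u], eff_noise, budgets[u])
    return p
```

    In the single-user case the familiar behavior appears directly: with equal gains and noise levels of 1 and 2 on two subchannels and a unit power budget, all power goes to the less noisy subchannel.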

  19. First-order systems of linear partial differential equations: normal forms, canonical systems, transform methods

    Directory of Open Access Journals (Sweden)

    Heinz Toparkus

    2014-04-01

    In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in itself and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is carried out using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. One can then formulate initial value problems in appropriate basic domains and attempt to solve them by means of transform methods.

  20. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    Computing the probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem: an analytical closed-form expression of the Log-normal sum distribution does not exist and remains an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive, especially when dealing with rare events (i.e., events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  1. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Computing the probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem: an analytical closed-form expression of the Log-normal sum distribution does not exist and remains an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive, especially when dealing with rare events (i.e., events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
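
    The crude MC baseline that the IS scheme improves upon can be sketched in a few lines: draw independent log-normal samples, sum them, and count threshold exceedances. This is the naive estimator only, not the paper's hazard rate twisting method; the parameter values in any call are hypothetical. Its weakness is exactly the one noted above: for rare events the hit count is tiny and the relative error of the estimate explodes, which is what motivates importance sampling.

```python
import math
import random

def lognormal_sum_ccdf_mc(mus, sigmas, threshold, n_samples=200_000, seed=1):
    """Crude Monte Carlo estimate of P(sum_i X_i > threshold), where
    X_i ~ LogNormal(mu_i, sigma_i) are independent, not identically
    distributed."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s = sum(math.exp(rng.gauss(mu, sg)) for mu, sg in zip(mus, sigmas))
        if s > threshold:
            hits += 1
    return hits / n_samples
```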

  2. Normal Science and the Paranormal: The Effect of a Scientific Method Course on Students' Beliefs.

    Science.gov (United States)

    Morier, Dean; Keeports, David

    1994-01-01

    A study investigated the effects of an interdisciplinary course on the scientific method on the attitudes of 34 college students toward the paranormal. Results indicated that the course substantially reduced belief in the paranormal, relative to a control group. Student beliefs in their own paranormal powers, however, did not change. (Author/MSE)

  3. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random

    Czech Academy of Sciences Publication Activity Database

    Beres, Michal; Domesová, Simona

    2017-01-01

    Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280

  4. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North...... to carry out the analyses is available from the authors' websites....

  5. Conformational variability of the stationary phase survival protein E from Xylella fastidiosa revealed by X-ray crystallography, small-angle X-ray scattering studies, and normal mode analysis.

    Science.gov (United States)

    Machado, Agnes Thiane Pereira; Fonseca, Emanuella Maria Barreto; Reis, Marcelo Augusto Dos; Saraiva, Antonio Marcos; Santos, Clelton Aparecido Dos; de Toledo, Marcelo Augusto Szymanski; Polikarpov, Igor; de Souza, Anete Pereira; Aparicio, Ricardo; Iulek, Jorge

    2017-10-01

    Xylella fastidiosa is a xylem-limited bacterium that infects a wide variety of plants. Stationary phase survival protein E is classified as a nucleotidase, which is expressed when bacterial cells are in the stationary growth phase and subjected to environmental stresses. Here, we report four refined X-ray structures of this protein from X. fastidiosa in four different crystal forms in the presence and/or absence of the substrate 3'-AMP. In all chains, the conserved loop verified in family members assumes a closed conformation in either condition. Therefore, the enzymatic mechanism of the target protein might differ from that of its homologs. Two crystal forms exhibit two monomers whereas the other two show four monomers in the asymmetric unit. While the biological unit has been characterized as a tetramer, the differences in their sizes and symmetry are remarkable. Four conformers identified by Small-Angle X-ray Scattering (SAXS) in ligand-free solution are related to the low-frequency normal modes of the crystallographic structures, associated with rigid-body-like protomer arrangements responsible for the longitudinal and symmetric adjustments between tetramers. When the substrate is present in solution, only two conformers are selected. The most prominent conformer in each case is associated with a normal mode able to elongate the protein by moving two dimers apart. To our knowledge, this work is the first normal-mode-based investigation of the quaternary structure variability of an enzyme of the SurE family, validated by crystallography and SAXS. The combined results raise new directions to study allosteric features of the XfSurE protein. © 2017 Wiley Periodicals, Inc.

  6. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    Science.gov (United States)

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.
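
    The traditional RPN approach whose shortcomings motivate the paper can be stated in a few lines: each failure mode is scored for severity, occurrence, and detection on a 1-10 scale, and the product ranks the modes. The failure modes and ratings below are hypothetical, and the paper's D numbers fusion model is not reproduced here.

```python
def rpn(severity, occurrence, detection):
    """Traditional risk priority number: RPN = S * O * D, each on a 1-10 scale."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings must lie in [1, 10]")
    return severity * occurrence * detection

# Rank hypothetical failure modes by descending RPN.
modes = {"seal leak": (8, 4, 3), "sensor drift": (5, 6, 7), "weld crack": (9, 2, 2)}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

    Note how the product form already illustrates one criticism of RPN: "sensor drift" (5, 6, 7) outranks the more severe "seal leak" (8, 4, 3) because the three ratings are weighted equally.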

  7. Normalization of shielding structure quality and the method of its studying

    International Nuclear Information System (INIS)

    Bychkov, Ya.A.; Lavdanskij, P.A.

    1987-01-01

    A method for evaluating the quality of nuclear facility radiation shielding is suggested. Two indexes are used as measures of shielding structure quality: the radiation efficiency and the face efficiency of the shielding structure. The first index is related to the radiation dose rate received by personnel behind the shield, and the second to the stresses in the shielding structure. Introducing these indexes makes it possible to evaluate objectively the quality of the design, construction, and operation of nuclear facility shielding structures and to economize labour and material resources.

  8. Low flow measurement for infusion pumps: implementation and uncertainty determination of the normalized method

    International Nuclear Information System (INIS)

    Cebeiro, J; Musacchio, A; Sardá, E Fernández

    2011-01-01

    Intravenous drug delivery is a standard practice in hospitalized patients. As the blood concentration reached depends directly on the infusion rate, it is important to use safe devices that guarantee output accuracy. In pediatric intensive care units, low infusion rates (i.e. lower than 10.0 ml/h) are frequently used. Thus, it is necessary to use control programs to detect deviations in this flow range. We describe the implementation of a gravimetric method to test infusion pumps in low-flow delivery. The procedure recommended by the ISO/IEC 60601-2-24 standard was used, as it is a reasonable option among the methods frequently used in hospitals, such as infusion pump analyzers and volumetric cylinders. The main uncertainty sources affecting this method are reviewed, and a numeric and graphic uncertainty analysis is presented in order to show its dependence on flow. Additionally, the obtained uncertainties are compared to those presented by an automatic flow analyzer. Finally, the results of a series of tests performed on a syringe infusion pump operating at low rates are shown.

  9. Detection of normal plantar fascia thickness in adults via the ultrasonographic method.

    Science.gov (United States)

    Abul, Kadir; Ozer, Devrim; Sakizlioglu, Secil Sezgin; Buyuk, Abdul Fettah; Kaygusuz, Mehmet Akif

    2015-01-01

    Heel pain is a prevalent concern in orthopedic clinics, and there are numerous pathologic abnormalities that can cause heel pain. Plantar fasciitis is the most common cause of heel pain, and the plantar fascia thickens in this process. It has been found that thickening to greater than 4 mm in ultrasonographic measurements can be accepted as meaningful in diagnoses. Herein, we aimed to measure normal plantar fascia thickness in adults using ultrasonography. We used ultrasonography to measure the plantar fascia thickness of 156 healthy adults in both feet between April 1, 2011, and June 30, 2011. These adults had no previous heel pain. The 156 participants comprised 88 women (56.4%) and 68 men (43.6%) (mean age, 37.9 years; range, 18-65 years). The weight, height, and body mass index of the participants were recorded, and statistical analyses were conducted. The mean ± SD (range) plantar fascia thickness measurements for subgroups of the sample were as follows: 3.284 ± 0.56 mm (2.4-5.1 mm) for male right feet, 3.3 ± 0.55 mm (2.5-5.0 mm) for male left feet, 2.842 ± 0.42 mm (1.8-4.1 mm) for female right feet, and 2.8 ± 0.44 mm (1.8-4.3 mm) for female left feet. The overall mean ± SD (range) thickness for the right foot was 3.035 ± 0.53 mm (1.8-5.1 mm) and for the left foot was 3.053 ± 0.54 mm (1.8-5.0 mm). There was a statistically significant and positive correlation between plantar fascia thickness and participant age, weight, height, and body mass index. The plantar fascia thickness of adults without heel pain was measured to be less than 4 mm in most participants (~92%). There was no statistically significant difference between the thickness of the right and left foot plantar fascia.

  10. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    Science.gov (United States)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computer numerical control (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
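
    A generic score-driven allocation of a system failure-rate budget might look as follows. This is an illustrative sketch only: the cubic transform, the scores, and the convention that higher-scored components receive a larger share of the budget are all assumptions for demonstration, not the paper's published functions.

```python
def allocate_failure_rates(system_lambda, fmea_scores, transform=lambda s: s ** 3):
    """Split a system failure-rate budget among components in proportion to
    transformed FMEA scores. The cubic transform here merely stands in for a
    'cubic transformed function'; the direction of the weighting (more budget
    to higher scores) is a chosen convention, not the paper's rule."""
    weights = [transform(s) for s in fmea_scores]
    total = sum(weights)
    return [system_lambda * w / total for w in weights]
```

    Whatever transform is chosen, the allocated rates always sum back to the system budget, which is the invariant any allocation scheme must preserve.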

  11. A multi-mode real-time terrain parameter estimation method for wheeled motion control of mobile robots

    Science.gov (United States)

    Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun

    2018-05-01

    For motion control of wheeled planetary rovers traversing deformable terrain, real-time terrain parameter estimation is critical in modeling the wheel-terrain interaction and compensating for the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online update. In the inner layer, sinkage exponent and internal frictional angle, which have higher sensitivity than the other terrain parameters to wheel-terrain interaction forces, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed at nominal values. The inner layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve estimation accuracy of the result from the inner layer, an outer layer based on the recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the history data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable to flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.

  12. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithm based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application
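
    The series and parallel normalization models that the combined model interpolates between are commonly written as linear and reciprocal scalings of the measured capacitance between its empty-pipe and full-pipe calibration values. In the sketch below the combination coefficient alpha is treated as a free input; the paper deduces its adaptive value by numerical optimization, which is not reproduced here.

```python
def normalize_combined(C, C_empty, C_full, alpha=0.5):
    """Normalized capacitance under a convex combination of the parallel
    (linear) and series (reciprocal) models. alpha = 1 recovers the parallel
    model, alpha = 0 the series model; intermediate values give the combined
    model with an assumed, fixed coefficient."""
    g_par = (C - C_empty) / (C_full - C_empty)
    g_ser = (1.0 / C - 1.0 / C_empty) / (1.0 / C_full - 1.0 / C_empty)
    return alpha * g_par + (1.0 - alpha) * g_ser
```

    Both component models, and therefore any convex combination, map the empty-pipe capacitance to 0 and the full-pipe capacitance to 1, so the combined model stays consistent with the calibration endpoints regardless of alpha.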

  13. Mandibulary dental arch form differences between level four polynomial method and pentamorphic pattern for normal occlusion sample

    Directory of Open Access Journals (Sweden)

    Y. Yuliana

    2011-07-01

    The aim of an orthodontic treatment is to achieve aesthetics, dental health, healthy surrounding tissues, a functional occlusal relationship, and stability. The success of an orthodontic treatment is influenced by many factors, such as the diagnosis and the treatment plan. In order to make a diagnosis and a treatment plan, the medical record, clinical examination, radiographic examination, extraoral and intraoral photographs, as well as study model analysis are needed. The purpose of this study was to evaluate the differences in dental arch form between the level four polynomial method and the pentamorphic arch form, and to determine which one is better suited to a normal occlusion sample. This analytic comparative study was conducted at the Faculty of Dentistry, Universitas Padjadjaran, on 13 models by comparing the dental arch form obtained with the level four polynomial method, based on mathematical calculations, and the pentamorphic arch pattern, with mandibular normal occlusion as a control. The results were tested statistically using Student's t-test. They indicate a significant difference from the normal occlusion mandibular dental arch form for both the level four polynomial method and the pentamorphic arch form, with the level four polynomial method providing the better fit.
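
    A "level four polynomial" arch-form fit reduces to an ordinary least-squares fit of a fourth-degree polynomial to digitized tooth landmarks. The sketch below assumes a symmetric arch so that only even powers appear (three coefficients instead of five); the landmark coordinates in any call are hypothetical, and this is an illustration of the mathematical idea, not the study's exact procedure.

```python
def _solve(M, v):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_even_quartic(xs, ys):
    """Least-squares fit of y = a0 + a2*x^2 + a4*x^4 (a symmetric fourth-degree
    arch) to digitized landmark coordinates, via the normal equations."""
    phi = [[1.0, x * x, x ** 4] for x in xs]
    M = [[sum(p[i] * p[j] for p in phi) for j in range(3)] for i in range(3)]
    v = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(3)]
    return _solve(M, v)
```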

  14. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy in the wavelength range of 30-650 Å has been improved by installing a high frame rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in intensities of EUV emission among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high-resolution 2-D intensity distributions have been obtained from CIV (384.176 Å), CV (2×40.27 Å), CVI (2×33.73 Å) and HeII (303.78 Å). (author)

  15. A novel discrete adaptive sliding-mode-like control method for ionic polymer–metal composite manipulators

    International Nuclear Information System (INIS)

    Sun, Zhiyong; Hao, Lina; Liu, Liqun; Chen, Wenlin; Li, Zhi

    2013-01-01

    Ionic polymer–metal composite (IPMC), also called artificial muscle, is an electroactive polymer (EAP) material which can generate a relatively large deformation with a low driving voltage (generally less than 5 V). Like other EAP materials, IPMC possesses strongly nonlinear properties, which can be described as a hybrid of back-relaxation (BR) and hysteresis characteristics, and which also vary with water content, environmental temperature and even usage. Many control approaches have been developed to tune IPMC actuators, among which adaptive methods show particularly striking performance. To deal with the nonlinearity of IPMCs, this paper presents a robust discrete adaptive inverse (AI) control approach, which employs an on-line identification technique based on a hybrid model estimation method combining the BR operator and the Prandtl–Ishlinskii (PI) hysteresis operator. The newly formed control approach is called discrete adaptive sliding-mode-like control (DASMLC) due to the similarity of its design method to that of a sliding mode controller. The weighted least mean squares (WLMS) identification method was employed to estimate the hybrid IPMC model because of its advantage of insensitivity to environmental noise. Experiments with the DASMLC approach and a conventional PID controller were carried out to compare and demonstrate the proposed controller's better performance. (paper)

  16. Dark current studies on a normal-conducting high-brightness very-high-frequency electron gun operating in continuous wave mode

    Directory of Open Access Journals (Sweden)

    R. Huang

    2015-01-01

    We report on measurements and analysis of field-emitted electron current in the very-high-frequency (VHF) gun, a room temperature rf gun operating at high field and in continuous wave (CW) mode at the Lawrence Berkeley National Laboratory (LBNL). The VHF gun is the core of the Advanced Photo-injector Experiment (APEX) at LBNL, geared toward the development of an injector for driving the next generation of high average power x-ray free electron lasers. High accelerating fields at the cathode are necessary for the high-brightness performance of an electron gun. When coupled with CW operation, such fields can generate a significant amount of field-emitted electrons that can be transported downstream of the accelerator, forming the so-called “dark current.” Elevated levels of dark current can cause radiation damage, increase the heat load in the downstream cryogenic systems, and ultimately limit the overall performance and reliability of the facility. We performed systematic measurements that allowed us to characterize the field emission from the VHF gun, determine the location of the main emitters, and define an effective strategy to reduce and control the level of dark current at APEX. Furthermore, the energy spectra of isolated sources have been measured. A simple model for energy data analysis was developed that allows one to extract information on the emitter from a single energy distribution measurement.

  17. Study by the disco method of critical components of a P.W.R. normal feedwater system

    International Nuclear Information System (INIS)

    Duchemin, B.; Villeneuve, M.J. de; Vallette, F.; Bruna, J.G.

    1983-03-01

    The objective of the DISCO (Determination of Importance Sensitivity of COmponents) method is to rank the components of a system in order to identify those most important to availability. This method uses the fault tree description of the system and the cut set technique. It ranks the components by ordering the importances attributed to each one. The DISCO method was applied to the study of the 900 MWe P.W.R. normal feedwater system with insufficient flow in the steam generator. In order to take account of operating experience, several data banks were used and the results compared. This study allowed us to determine the most critical components (the turbo-pumps) and to propose and quantify modifications of the system in order to improve its availability.

  18. A non-Hertzian method for solving wheel-rail normal contact problem taking into account the effect of yaw

    Science.gov (United States)

    Liu, Binbin; Bruni, Stefano; Vollebregt, Edwin

    2016-09-01

    A novel approach is proposed in this paper to deal with non-Hertzian normal contact in wheel-rail interface, extending the widely used Kik-Piotrowski method. The new approach is able to consider the effect of the yaw angle of the wheelset against the rail on the shape of the contact patch and on pressure distribution. Furthermore, the method considers the variation of profile curvature across the contact patch, enhancing the correspondence to CONTACT for highly non-Hertzian contact conditions. The simulation results show that the proposed method can provide more accurate estimation than the original algorithm compared to Kalker's CONTACT, and that the influence of yaw on the contact results is significant under certain circumstances.

  19. Coupling Neumann development and component mode synthesis methods for stochastic analysis of random structures

    Directory of Open Access Journals (Sweden)

    Driss Sarsri

    2014-05-01

    In this paper, we propose a method to calculate the first two moments (mean and variance) of the structural dynamics response of a structure with uncertain variables and subjected to random excitation. For this, the Newmark method is used to transform the equation of motion of the structure into a quasi-static equilibrium equation in the time domain. The Neumann development method was coupled with Monte Carlo simulations to calculate the statistical values of the random response. The use of modal synthesis methods can reduce the dimensions of the model before integration of the equation of motion. Numerical applications have been developed to highlight the effectiveness of the method in analyzing the stochastic response of large structures.
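
    The Newmark transformation into a quasi-static solve at each time step can be sketched for a single degree of freedom: an effective stiffness absorbs the mass and damping terms, and each step solves an algebraic equation for the new displacement. This is the standard deterministic Newmark-beta scheme only (with the common constant-average-acceleration choice beta = 1/4, gamma = 1/2); the paper's Neumann expansion over uncertain variables and the Monte Carlo loop are not reproduced here.

```python
def newmark_sdof(m, c, k, force, dt, n_steps, u0=0.0, v0=0.0,
                 beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = force(t) for a single
    degree of freedom. Each step reduces to a quasi-static solve
    u_new = f_eff / k_eff with a constant effective stiffness."""
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m          # consistent initial acceleration
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    history = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        # Effective load collects the known state (u, v, a) at the previous step.
        f_eff = (force(t)
                 + m * (u / (beta * dt * dt) + v / (beta * dt)
                        + (0.5 / beta - 1.0) * a)
                 + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                        + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = f_eff / k_eff
        a_new = ((u_new - u - dt * v - dt * dt * (0.5 - beta) * a)
                 / (beta * dt * dt))
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        history.append(u)
    return history
```

    For an undamped unit oscillator released from u = 1, the scheme reproduces cos(t) closely at small time steps, since the average-acceleration variant is unconditionally stable and introduces no numerical damping.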

  20. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that the histone modifications H3K27me3 and H3K4me3 act, respectively, as a repressor and an activator of genes. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.

  1. LONG-TERM MONITORING OF MODE SWITCHING FOR PSR B0329+54

    International Nuclear Information System (INIS)

    Chen, J. L.; Wang, N.; Liu, Z. Y.; Yuan, J. P.; Wang, H. G.; Lyne, A.; Jessner, A.; Kramer, M.

    2011-01-01

    The mode-switching phenomenon of PSR B0329+54 is investigated based on the long-term monitoring from 2003 September to 2009 April made with the Urumqi 25 m radio telescope at 1540 MHz. At that frequency, the change of relative intensity between the leading and trailing components is the predominant feature of mode switching. The intensity ratios between the leading and trailing components are measured for the individual profiles averaged over a few minutes. It is found that the ratios follow normal distributions, where the abnormal mode has a greater typical width than the normal mode, indicating that the abnormal mode is less stable than the normal mode. Our data show that 84.9% of the time for PSR B0329+54 was in the normal mode and 15.1% was in the abnormal mode. From the two passages of eight-day quasi-continuous observations in 2004, supplemented by the daily data observed with the 15 m telescope at 610 MHz at Jodrell Bank Observatory, the intrinsic distributions of mode timescales are constrained with the Bayesian inference method. It is found that the gamma distribution with the shape parameter slightly smaller than 1 is favored over the normal, log-normal, and Pareto distributions. The optimal scale parameters of the gamma distribution are 31.5 minutes for the abnormal mode and 154 minutes for the normal mode. The shape parameters have very similar values, i.e., 0.75 (+0.22, −0.17) for the normal mode and 0.84 (+0.28, −0.22) for the abnormal mode, indicating that the physical mechanisms in both modes may be the same. No long-term modulation of the relative intensity ratios was found for either mode, suggesting that the mode switching was stable. The intrinsic timescale distributions, constrained for this pulsar for the first time, provide valuable information to understand the physics of mode switching.
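
    A simple frequentist stand-in for fitting the timescale distribution is the method of moments for the gamma distribution: the shape is mean²/variance and the scale is variance/mean. This is not the Bayesian inference used in the paper, and the sample below is synthetic, drawn with parameters merely reminiscent of the abnormal-mode values quoted above.

```python
def gamma_moments_fit(samples):
    """Method-of-moments estimates of gamma shape k and scale theta:
    k = mean^2 / var, theta = var / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean * mean / var, var / mean
```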

  2. Methods and data for HTGR fuel performance and radionuclide release modeling during normal operation and accidents for safety analysis

    International Nuclear Information System (INIS)

    Verfondern, K.; Martin, R.C.; Moormann, R.

    1993-01-01

    The previous status report released in 1987 on reference data and calculation models for fission product transport in High-Temperature, Gas-Cooled Reactor (HTGR) safety analyses has been updated to reflect the current state of knowledge in the German HTGR program. The content of the status report has been expanded to include information from other national programs in HTGRs to provide comparative information on methods of analysis and the underlying database for fuel performance and fission product transport. The release and transport of fission products during normal operating conditions and during the accident scenarios of core heatup, water and air ingress, and depressurization are discussed. (orig.)

  3. Method for the determination of the equation of state of advanced fuels based on the properties of normal fluids

    International Nuclear Information System (INIS)

    Hecht, M.J.; Catton, I.; Kastenberg, W.E.

    1976-12-01

    An equation of state for advanced LMFBR fuels can be derived from the properties of normal fluids, the law of rectilinear averages, and the second law of thermodynamics, on the basis of the vapor pressure, the enthalpy of vaporization, the change in heat capacity upon vaporization, and the liquid density at the melting point. The method consists of estimating an equation of state by means of the law of rectilinear averages and the second law of thermodynamics, integrating by means of the second law until an instability is reached, and then extrapolating by means of a self-consistent estimate of the enthalpy of vaporization.
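The law of rectilinear averages states that the mean of the saturated liquid and vapor densities varies linearly with temperature, which allows the liquid branch to be extrapolated once the vapor branch is known. A minimal numeric sketch, with entirely hypothetical density data:

```python
# Minimal sketch of the law of rectilinear averages: fit the linear diameter
# (rho_l + rho_v)/2 = a + b*T (b < 0) and use it to extrapolate the liquid
# density.  All numbers below are invented for illustration.
import numpy as np

T = np.array([1000.0, 1500.0, 2000.0, 2500.0])   # K
rho_liq = np.array([8.6, 8.1, 7.5, 6.8])         # g/cm^3, liquid branch
rho_vap = np.array([1e-4, 5e-3, 5e-2, 0.25])     # g/cm^3, vapor branch

mean_rho = 0.5 * (rho_liq + rho_vap)
b, a = np.polyfit(T, mean_rho, 1)                # slope, intercept of diameter

# Extrapolate the liquid density at a higher temperature, given an estimate
# of the vapor density there (e.g. from the vapor-pressure relation).
T_new, rho_vap_new = 3000.0, 0.8
rho_liq_new = 2 * (a + b * T_new) - rho_vap_new
print(f"extrapolated liquid density at {T_new:.0f} K: {rho_liq_new:.2f} g/cm^3")
```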

  4. The method of normal forms for singularly perturbed systems of Fredholm integro-differential equations with rapidly varying kernels

    Energy Technology Data Exchange (ETDEWEB)

    Bobodzhanov, A A; Safonov, V F [National Research University "Moscow Power Engineering Institute", Moscow (Russian Federation)]

    2013-07-31

    The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems which have not so far been studied, namely those in which the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties. The principal one is to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed. It is applied to systems of the above type and is substantiated for them. Bibliography: 10 titles.

  5. Reduction of system matrices of planar beam in ANCF by component mode synthesis method

    International Nuclear Information System (INIS)

    Kobayashi, Nobuyuki; Wago, Tsubasa; Sugawara, Yoshiki

    2011-01-01

    A method of reducing the system matrices of a planar flexible beam described by the absolute nodal coordinate formulation (ANCF) is presented. In this method, we exploit the fact that the bending stiffness matrix obtained by adopting a continuum mechanics approach to the ANCF beam element is constant when the axial strain is not very large. This feature allows us to apply the Craig–Bampton method to the equations of motion composed of the independent coordinates once the constraint forces are eliminated. Four numerical examples comparing the proposed method with the conventional ANCF are presented to verify the performance and accuracy of the proposed method. From these examples, it is verified that the proposed method can describe large deformation effects, such as dynamic stiffening due to the centrifugal force, as well as the conventional ANCF does. The use of this method also reduces the computing time, while maintaining an acceptable degree of accuracy relative to the conventional ANCF, when the modal truncation number is adequately chosen. This reduction in CPU time is particularly pronounced in the case of a large element number and a small modal truncation number, and it can be verified not only for small deformations but also for fairly large deformations.
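The Craig–Bampton reduction at the heart of the method can be sketched for a generic linear system with constant stiffness. The matrices below are small random stand-ins, not ANCF beam matrices, and the split into boundary and interior degrees of freedom is assumed given:

```python
# A minimal Craig-Bampton sketch: constraint modes plus truncated
# fixed-interface normal modes.  Matrices are random SPD stand-ins.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_b, n_i, n_modes = 2, 8, 3              # boundary DOFs, interior DOFs, kept modes
n = n_b + n_i

A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)              # SPD stiffness stand-in
M = np.eye(n)                            # lumped mass stand-in

Kib, Kii = K[n_b:, :n_b], K[n_b:, n_b:]
Mii = M[n_b:, n_b:]

# Constraint modes: static interior response to unit boundary motion.
Psi = -np.linalg.solve(Kii, Kib)

# Fixed-interface normal modes of the interior partition, truncated.
w2, Phi = eigh(Kii, Mii)
Phi = Phi[:, :n_modes]

# Craig-Bampton transformation and reduced system matrices.
T = np.block([[np.eye(n_b), np.zeros((n_b, n_modes))],
              [Psi,         Phi]])
K_red = T.T @ K @ T
M_red = T.T @ M @ T
print("reduced size:", K_red.shape)
```

By the Rayleigh–Ritz property, the reduced model's natural frequencies bound the full model's from above, converging as the modal truncation number grows.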

  6. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distributions of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed between OSEM and UHD reconstruction for all SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that the SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstruction yielded significantly different SUV and SUL values, and the differences remained constantly high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
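The Bland–Altman agreement analysis used here reduces to computing the mean paired difference (bias) and its 95% limits of agreement. A compact sketch with made-up paired SUVs:

```python
# Bland-Altman sketch for paired SUVs from two reconstructions; the values
# below are invented for illustration, not from the study.
import numpy as np

suv_osem = np.array([4.1, 6.3, 2.8, 9.5, 5.2, 7.7])
suv_uhd = np.array([5.0, 7.9, 3.3, 12.1, 6.6, 9.4])

diff = suv_uhd - suv_osem
bias = diff.mean()                           # mean difference between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
print(f"bias={bias:.2f}, limits of agreement=({loa[0]:.2f}, {loa[1]:.2f})")
```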

  7. Liquid-phase sample preparation method for real-time monitoring of airborne asbestos fibers by dual-mode high-throughput microscopy.

    Science.gov (United States)

    Cho, Myoung-Ock; Kim, Jung Kyung; Han, Hwataik; Lee, Jeonghoon

    2013-01-01

    Asbestos, which had been used widely as a construction material, is a first-level carcinogen recognized by the World Health Organization. It can accumulate in the body through inhalation, causing virulent respiratory diseases including lung cancer. In our previous study, we developed a high-throughput microscopy (HTM) system that can minimize the human intervention required by conventional phase contrast microscopy (PCM) through automated counting of fibrous materials, and thus significantly reduce analysis time and labor. We also attempted selective detection of chrysotile using DksA protein extracted from Escherichia coli through a recombinant protein production technique, and developed a dual-mode HTM (DM-HTM) by upgrading the HTM device. We demonstrated that fluorescently labeled chrysotile asbestos fibers can be identified and enumerated automatically among other types of asbestos fibers or non-asbestos particles in a high-throughput manner through the newly modified HTM system for both reflection and fluorescence imaging. However, the DM-HTM cannot yet be applied to airborne samples collected with the current air sampling method, owing to the difficulty of applying the protein to dried asbestos samples. Here, we developed a technique for preparing liquid-phase asbestos samples using an impinger normally used to collect odor molecules from the air. It should be possible to improve the feasibility of the dual-mode HTM by integrating a sample preparation unit that disperses the collected asbestos samples in a solution. The new technique developed for highly sensitive and automated asbestos detection can be a potential alternative to the conventional manual counting method, and it may be applied on site as a fast and reliable environmental monitoring tool.

  8. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis, the exploitation and utilization of new clean energy is gaining more and more attention. As an important category of renewable energy, wind power has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring the sample entropy of the decomposed modes. Then the base models can be established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
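The weight-adjustment idea can be sketched with a stock optimizer: combine base-model forecasts with weights tuned to minimize error. The snippet uses SciPy's standard differential evolution rather than the paper's self-adaptive multi-strategy variant, and entirely synthetic data:

```python
# Sketch of combining base forecasts with weights tuned by differential
# evolution (SciPy's stock algorithm, not the paper's variant).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
actual = rng.uniform(10, 50, size=48)                  # "observed" wind power
# Three mock base models with increasing noise levels.
base = np.stack([actual + rng.normal(0, s, 48) for s in (2, 4, 6)])

def mae(w):
    w = np.abs(w) / np.abs(w).sum()      # normalize to a convex combination
    combined = w @ base
    return np.mean(np.abs(combined - actual))

res = differential_evolution(mae, bounds=[(0, 1)] * 3, seed=0, tol=1e-8)
w_opt = np.abs(res.x) / np.abs(res.x).sum()
print("optimal weights:", np.round(w_opt, 3), "MAE:", round(res.fun, 3))
```

The combined forecast should beat the noisier base models, illustrating why the paper dynamically adjusts the weight matrix rather than averaging uniformly.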

  9. Fuzzy cross-model cross-mode method and its application to update the finite element model of structures

    International Nuclear Information System (INIS)

    Liu Yang; Xu Dejian; Li Yan; Duan Zhongdong

    2011-01-01

    As a novel updating technique, the cross-model cross-mode (CMCM) method possesses high efficiency and the capability of flexibly selecting updating parameters. However, the success of this method depends on the accuracy of the measured modal shapes. Usually, the measured modal shapes are inaccurate, since many kinds of measurement noise are inevitable. Furthermore, complete testing modal shapes are required by the CMCM method, so calculation errors may be introduced into the measured modal shapes by conducting modal expansion or model reduction techniques. Therefore, this algorithm faces challenges in updating the finite element (FE) models of practical complex structures. In this study, the fuzzy CMCM method is proposed in order to weaken the effect of errors in the measured modal shapes on the updated results. Two simulated examples are then applied to compare the performance of the fuzzy CMCM method with the CMCM method. The test results show that the proposed method is more promising for updating the FE model of practical structures than the CMCM method.

  10. Methods of evaluating the dynamic characteristics of a helicopter with a cargo suspension in hovering mode

    Directory of Open Access Journals (Sweden)

    В.Г. Вовк

    2009-03-01

    Full Text Available A new method for estimating the stochastic parameters of a complex moving object (a helicopter with a cargo suspension) is suggested, both for the structured identification problem of the object and for optimal stabilizing system synthesis.

  11. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    Science.gov (United States)

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are always ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiencies of the normalization process. Furthermore, the potential biomarkers which were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the determination of the normalization method is difficult, because the commonly accepted rules are easy to fulfill but different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
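One of the five compared methods, probabilistic quotient normalization (PQN), is compact enough to sketch directly; rows are samples, columns are metabolite features, and intensities are assumed positive:

```python
# A compact PQN sketch: scale each sample by the median quotient against a
# reference spectrum (here the median spectrum across samples).
import numpy as np

def pqn(X, reference=None):
    """Probabilistic quotient normalization; assumes positive intensities."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)   # median spectrum as reference
    quotients = X / reference              # feature-wise quotients
    factors = np.median(quotients, axis=1) # most probable dilution per sample
    return X / factors[:, None]

X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],        # same profile, diluted by factor 2
              [1.1, 1.9, 3.2, 3.9]])
X_norm = pqn(X)
print(np.round(X_norm, 2))
```

After normalization, the first two rows (the same profile at two dilutions) coincide, which is exactly the dilution effect PQN is designed to remove.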

  12. Using lod-score differences to determine mode of inheritance: A simple, robust method even in the presence of heterogeneity and reduced penetrance

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.A.; Berger, B. [Mount Sinai Medical Center, New York, NY (United States)

    1994-10-01

    Determining the mode of inheritance is often difficult under the best of circumstances, but when segregation analysis is used, the problems of ambiguous ascertainment procedures, reduced penetrance, heterogeneity, and misdiagnosis make mode-of-inheritance determinations even more unreliable. The mode of inheritance can also be determined using a linkage-based method and association-based methods, which can overcome many of these problems. In this work, we determined how much information is necessary to reliably determine the mode of inheritance from linkage data when heterogeneity and reduced penetrance are present in the data set. We generated data sets under both dominant and recessive inheritance with reduced penetrance and with varying fractions of linked and unlinked families. We then analyzed those data sets, assuming reduced penetrance, both dominant and recessive inheritance, and no heterogeneity. We investigated the reliability of two methods for determining the mode of inheritance from the linkage data. The first method examined the difference (Δ) between the maximum lod scores calculated under the two mode-of-inheritance assumptions. We found that if Δ was >1.5, then the higher of the two maximum lod scores reflected the correct mode of inheritance with high reliability and that a Δ of 2.5 appeared to practically guarantee a correct mode-of-inheritance inference. Furthermore, this reliability appeared to be virtually independent of α, the fraction of linked families in the data set. The second method we tested was based on choosing the higher of the two maximum lod scores calculated under the different mode-of-inheritance assumptions. This method became unreliable as α decreased. These results suggest that the mode of inheritance can be inferred from linkage data with high reliability, even in the presence of heterogeneity and reduced penetrance. 12 refs., 3 figs., 2 tabs.
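The first decision rule described above reduces to a simple threshold on the lod-score difference; a tiny sketch with invented lod scores:

```python
# Sketch of the lod-score-difference rule: call the mode of inheritance only
# when the difference between the two maximum lod scores clears a threshold.
def infer_mode(lod_dominant, lod_recessive, threshold=1.5):
    """Return the inferred mode of inheritance, or None if inconclusive."""
    delta = abs(lod_dominant - lod_recessive)
    if delta < threshold:
        return None                      # difference too small to call
    return "dominant" if lod_dominant > lod_recessive else "recessive"

print(infer_mode(6.2, 3.4))   # delta = 2.8 > 1.5: clear dominant call
print(infer_mode(4.0, 3.2))   # delta = 0.8 < 1.5: inconclusive
```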

  13. Probing the effect of human normal sperm morphology rate on cycle outcomes and assisted reproductive methods selection.

    Directory of Open Access Journals (Sweden)

    Bo Li

    Full Text Available Sperm morphology is the best predictor of fertilization potential and provides critical information for selecting assisted reproductive methods. Given its important predictive value and the decline of semen quality in recent years, the threshold of the normal sperm morphology rate (NSMR) has been constantly corrected and remains controversial, from the 4th edition (14%) to the 5th edition (4%). We retrospectively analyzed 4756 cases of infertility patients treated with conventional IVF (c-IVF) or ICSI, which were divided into three groups according to NSMR: ≥14%, 4%-14% and <4%. Here, we demonstrate that, with decreasing NSMR (≥14%, 4%-14%, <4%), in the c-IVF group, the rates of fertilization, normal fertilization, high-quality embryos and multi-pregnancy and the birth weight of twins gradually decreased significantly (P<0.05), while the miscarriage rate was significantly increased (P<0.01); the implantation rate, clinical pregnancy rate, ectopic pregnancy rate, preterm birth rate, live birth rate, sex ratio, and birth weight (singletons) showed no significant change. In the ICSI group, with decreasing NSMR (≥14%, 4%-14%, <4%), the high-quality embryo rate, multi-pregnancy rate and birth weight of twins gradually decreased significantly (P<0.05), while the other parameters showed no significant difference. Considering clinical assisted-method selection, in the NSMR ≥14% group, the normal fertilization rate of c-IVF was significantly higher than in the ICSI group (P<0.05); in the 4%-14% group, the birth weight (twins) of c-IVF was significantly higher than in the ICSI group; in the <4% group, the miscarriage rate of IVF was significantly higher than in the ICSI group. Therefore, we conclude that NSMR is positively related to embryo reproductive potential, and when NSMR<4% (5th edition), ICSI should be considered first, while when NSMR≥4%, c-IVF assisted reproduction might be preferred.

  14. Development of a graphical method for choosing the optimal mode of traffic light

    Science.gov (United States)

    Novikov, A. N.; Katunin, A. A.; Novikov, I. A.; Kravchenko, A. A.; Shevtsova, A. G.

    2018-05-01

    Changing the transportation infrastructure to improve the main characteristics of the traffic flow is the key problem in transportation planning, so the central question is the ability to plan changes in the main indicators over the long term. In this investigation, the city's population was analyzed and the most difficult road segment was identified, together with the main characteristics of its traffic flow. To evaluate these characteristics up to 2025, the available methods for estimating changes in their values were analyzed. Based on the extrapolation method, three scenarios for the development of the transportation system were identified. It was established that the most favorable way of controlling the traffic flow at the entrance to the city is long-term control of the traffic-light system. Building on the investigations of foreign scientists and a mathematical analysis of the changes in traffic intensity on the main routes of the given road, the authors put forward, for the first time, a method for graphically choosing the required control plan. The effectiveness of the proposed organization scheme of the transportation system was evaluated in the Transyt-14 program by analyzing the changes in the main characteristics of the traffic flow.
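The extrapolation step mentioned above can be sketched as a simple trend fit projected forward; the traffic counts below are invented for illustration:

```python
# Minimal sketch of extrapolating observed traffic intensity to 2025.
import numpy as np

years = np.array([2014, 2015, 2016, 2017])
intensity = np.array([1480, 1545, 1610, 1690])      # vehicles per hour (mock)

slope, intercept = np.polyfit(years, intensity, 1)  # linear trend
forecast_2025 = slope * 2025 + intercept
print(f"projected intensity in 2025: {forecast_2025:.0f} veh/h")
```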

  15. X-CT imaging method for large objects using double offset scan mode

    International Nuclear Information System (INIS)

    Fu Jian; Lu Hongnian; Li Bing; Zhang Lei; Sun Jingjing

    2007-01-01

    In X-ray computed tomography (X-CT) inspection, a rotate-only scanner is commonly used because this configuration offers the highest imaging speed and the best utilization of X-ray dose. But it requires that the imaging region of the scanned object fit within the X-ray beam. Another configuration, the transverse-rotate scanner, has a bigger field of view, but it is much more time consuming. In this paper, a two-dimensional X-CT imaging method for large objects is proposed to overcome these disadvantages. The scan principle of this method is described and the reconstruction algorithm is deduced. The results of computer simulations and experiments show the validity of the new method. Analysis shows that the scan field of view of this method is 1.8 times larger than that of rotate-only X-CT. The scan speed of this method is also much quicker than that of transverse-rotate X-CT.

  16. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote the construction of automatic coal mining working faces, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of large size, contact measurement and low identification rates of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. End-point continuation based on practically stored data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlations of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
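The IMF-selection step can be sketched independently of the decomposition itself. Assuming the IMFs are already available as array rows (synthetic stand-ins here, not IEEMD output), the average correlation coefficient acts as the keep/discard threshold:

```python
# Sketch of average-correlation-coefficient IMF selection; the IMFs below
# are synthetic stand-ins for an IEEMD decomposition of the cutting sound.
import numpy as np

def select_imfs(imfs):
    """Keep IMFs whose correlation with the first IMF exceeds the average."""
    corrs = np.array([abs(np.corrcoef(imfs[0], imf)[0, 1]) for imf in imfs[1:]])
    threshold = corrs.mean()               # average correlation coefficient
    return [0] + [i + 1 for i, c in enumerate(corrs) if c >= threshold]

t = np.linspace(0, 1, 400)
rng = np.random.default_rng(3)
imfs = np.stack([
    np.sin(2 * np.pi * 40 * t),                 # first (dominant) IMF
    0.6 * np.sin(2 * np.pi * 40 * t + 0.2),     # strongly related component
    0.5 * np.sin(2 * np.pi * 7 * t),            # unrelated slow mode
    0.1 * rng.standard_normal(400),             # noise-like residue
])
print("kept IMF indices:", select_imfs(imfs))
```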

  17. On the use of rigid body modes in the deflated preconditioned conjugate gradient method

    NARCIS (Netherlands)

    Jönsthövel, T.B.; Van Gijzen, M.B.; Vuik, C.; Scarpas, A.

    2013-01-01

    Large discontinuities in material properties, such as those encountered in composite materials, lead to ill-conditioned systems of linear equations. These discontinuities give rise to small eigenvalues that may negatively affect the convergence of iterative solution methods such as the

  18. On the use of rigid body modes in the deflated preconditioned conjugate gradient method

    NARCIS (Netherlands)

    Jönsthövel, T.B.; Van Gijzen, M.B.; Vuik, C.; Scarpas, A.

    2011-01-01

    Large discontinuities in material properties, such as encountered in composite materials, lead to ill-conditioned systems of linear equations. These discontinuities give rise to small eigenvalues that may negatively affect the convergence of iterative solution methods such as the Preconditioned

  19. Financing modes and methods for nuclear power development in developing countries

    International Nuclear Information System (INIS)

    Su Qun

    1999-02-01

    In financing nuclear power projects in developing countries, governmental support is significant in reducing the risk of the project and improving the financing environment. Issues studied and discussed include financing conditions and methods, export credit and supply. An appropriate solution of the financing problem will play an important role in developing nuclear power.

  20. Cardiac Time Intervals by Tissue Doppler Imaging M-Mode

    DEFF Research Database (Denmark)

    Biering-Sørensen, Tor; Mogelvang, Rasmus; de Knegt, Martina Chantal

    2016-01-01

    PURPOSE: To define normal values of the cardiac time intervals obtained by tissue Doppler imaging (TDI) M-mode through the mitral valve (MV). Furthermore, to evaluate the association of the myocardial performance index (MPI) obtained by TDI M-mode (MPITDI) and the conventional method of obtaining...

  1. Fingerprints start the next generation of payment methods: Fingerprint payment, a new mode of mobile payment

    OpenAIRE

    Wu, Chong

    2016-01-01

    In the era of the mobile internet, fingerprint payment is one of the most popular topics at the moment. China has a big market, and many users use mobile payment methods. A large number of mobile phones are equipped with fingerprint recognition technology. As we know, fingerprint payment brings us more convenience and safety. We do not need to carry many bankcards, and fingerprints also save users the trouble of queuing to pay. However, users send traditional dig...

  2. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA methods for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.

  3. New tuning method of the low-mode asymmetry for ignition capsule implosions

    International Nuclear Information System (INIS)

    Gu, Jianfa; Dai, Zhensheng; Zou, Shiyang; Song, Peng; Ye, Wenhua; Zheng, Wudi; Gu, Peijun

    2015-01-01

    In the deuterium-tritium inertial confinement fusion implosion experiments on the National Ignition Facility, the hot spot and the surrounding main fuel layer show obvious P2 asymmetries. This may be caused by the large positive P2 radiation flux asymmetry during the peak pulse resulting from the poor propagation of the inner laser beams in the gas-filled hohlraum. The symmetry evolution of ignition capsule implosions is investigated by applying P2 radiation flux asymmetries during different time intervals. A series of two-dimensional simulation results show that a positive P2 flux asymmetry during the peak pulse results in a positive P2 shell ρR asymmetry, while an early-time positive P2 flux asymmetry causes a negative P2 in the fuel ρR shape. This opposite evolution behavior of the shell ρR asymmetry is used to develop a new tuning method that corrects the radiation flux asymmetry during the peak pulse by adding a compensating same-phased P2 drive asymmetry during the early time. The significant improvements of the shell ρR symmetry, hot spot shape, hot spot internal energy, and neutron yield indicate that the tuning method is quite effective. A similar tuning method can also be used to control the early-time drive asymmetries.

  4. Method for determining the optimum mode of operation of the chemical water regime in the water-steam-circuit of power plants

    International Nuclear Information System (INIS)

    Sommerfeldt, P.; Reisner, H.; Hartmann, G.; Kulicke, P.

    1988-01-01

    The method aims at increasing the lifetime of secondary coolant circuit components in nuclear power plants by determining the optimum mode of operation of the chemical water regime with the help of radioisotopes.

  5. Assessment of temporal resolution of multi-detector row computed tomography in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Ichikawa, Katsuhiro; Hara, Takanori; Urikura, Atsushi; Takata, Tadanori; Ohashi, Kazuya

    2015-06-01

    The purpose of this study was to propose a method for assessing the temporal resolution (TR) of multi-detector row computed tomography (CT) (MDCT) in the helical acquisition mode using temporal impulse signals generated by a metal ball passing through the acquisition plane. An 11-mm diameter metal ball was shot along the central axis at approximately 5 m/s during a helical acquisition, and the temporal sensitivity profile (TSP) was measured from the streak image intensities in the reconstructed helical CT images. To assess the validity, we compared the measured and theoretical TSPs for the 4-channel modes of two MDCT systems. A 64-channel MDCT system was used to compare TSPs and image quality of a motion phantom for the pitch factors P of 0.6, 0.8, 1.0 and 1.2 with a rotation time R of 0.5 s, and for two R/P combinations of 0.5/1.2 and 0.33/0.8. Moreover, the temporal transfer functions (TFs) were calculated from the obtained TSPs. The measured and theoretical TSPs showed perfect agreement. The TSP narrowed with an increase in the pitch factor. The image sharpness of the 0.33/0.8 combination was inferior to that of the 0.5/1.2 combination, despite their almost identical full width at tenth maximum values. The temporal TFs quantitatively confirmed these differences. The TSP results demonstrated that the TR in the helical acquisition mode significantly depended on the pitch factor as well as the rotation time, and the pitch factor and reconstruction algorithm affected the TSP shape. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
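The final step, deriving a temporal transfer function from the measured TSP, is analogous to computing an MTF from a point spread function. A minimal sketch, with a Gaussian profile standing in for a measured TSP:

```python
# Sketch of computing a temporal transfer function from a TSP by Fourier
# transform; the Gaussian TSP is a stand-in for a measured profile.
import numpy as np

dt = 0.01                                   # s, sampling interval of the TSP
t = np.arange(-1.0, 1.0, dt)
tsp = np.exp(-0.5 * (t / 0.12) ** 2)        # mock temporal sensitivity profile
tsp /= tsp.sum()                            # normalize to unit area

tf = np.abs(np.fft.rfft(tsp))               # temporal transfer function
tf /= tf[0]                                 # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(t.size, d=dt)       # temporal frequencies in Hz

# Temporal frequency at which the response first falls below 10%.
f10 = freqs[np.argmax(tf < 0.1)]
print(f"10% cutoff temporal frequency: {f10:.2f} Hz")
```

A narrower TSP (e.g. from a larger pitch factor) yields a transfer function that stays high out to larger temporal frequencies, matching the study's finding that pitch affects temporal resolution.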

  6. The Influence of Microarc Oxidation Method Modes on the Properties of Coatings

    Directory of Open Access Journals (Sweden)

    N.Y. Dudareva

    2014-07-01

    Full Text Available Experimental studies of the properties of the hardened surface layer developed by the microarc oxidation (MAO) method on the surface of Al-Si ingots of AK12D alloy are presented. The effect of the concentration of the electrolyte components on the properties of the MAO coating, such as microhardness, thickness and porosity, has been studied. Regression equations estimating the influence of the process parameters on the quality of the developed MAO layer have been set up.

  7. New Method to Study the Vibrational Modes of Biomolecules in the Terahertz Range Based on a Single-Stage Raman Spectrometer.

    Science.gov (United States)

    Kalanoor, Basanth S; Ronen, Maria; Oren, Ziv; Gerber, Doron; Tischler, Yaakov R

    2017-03-31

    The low-frequency vibrational (LFV) modes of biomolecules reflect specific intramolecular and intermolecular thermally induced fluctuations that are driven by external perturbations, such as ligand binding, protein interaction, electron transfer, and enzymatic activity. Large efforts have been invested over the years to develop methods to access the LFV modes due to their importance in the studies of the mechanisms and biological functions of biomolecules. Here, we present a method to measure the LFV modes of biomolecules based on Raman spectroscopy that combines volume holographic filters with a single-stage spectrometer, to obtain high signal-to-noise-ratio spectra in short acquisition times. We show that this method enables LFV mode characterization of biomolecules even in a hydrated environment. The measured spectra exhibit distinct features originating from intra- and/or intermolecular collective motion and lattice modes. The observed modes are highly sensitive to the overall structure, size, long-range order, and configuration of the molecules, as well as to their environment. Thus, the LFV Raman spectrum acts as a fingerprint of the molecular structure and conformational state of a biomolecule. The comprehensive method we present here is widely applicable, thus enabling high-throughput study of LFV modes of biomolecules.

  8. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
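
    The arithmetic behind the log-normal approach can be sketched briefly: the product of independent log-normal factors is itself log-normal, with the log-variances adding. The Python sketch below illustrates this under our own assumptions; the function name, the example medians, and the geometric standard deviations are illustrative and not taken from the paper.

```python
import math

def combine_lognormal(factors):
    """Combine independent multiplicative uncertainty factors.

    Each factor is (median, gsd), where gsd is the geometric standard
    deviation. The product of independent log-normal variables is
    log-normal, with log-medians summing and log-variances adding.
    Returns the (median, gsd) of the combined distribution.
    """
    log_median = sum(math.log(m) for m, _ in factors)
    log_var = sum(math.log(g) ** 2 for _, g in factors)
    return math.exp(log_median), math.exp(math.sqrt(log_var))

# Hypothetical example: a scaling factor with a geometric SD of 2.0
# applied to a measured key-nuclide activity with a geometric SD of 1.3.
median, gsd = combine_lognormal([(5.0, 2.0), (10.0, 1.3)])
# An approximate 95% uncertainty interval "that makes sense" (always positive):
lo, hi = median / gsd ** 1.96, median * gsd ** 1.96
```

    Because the interval is multiplicative, it can never extend below zero activity, which is one practical advantage of the log-normal choice for waste characterization.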

  9. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    Science.gov (United States)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on the continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth of the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on both sides of the central negative Slingram anomaly. These two maxima also locate the maximum anomaly-gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
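
    As a rough illustration of the technique (not the author's implementation), the NFG of a profile can be sketched with an FFT-based downward continuation: the full gradient is formed from the horizontal and vertical derivatives of the continued field and normalized by its mean along the profile at each depth. Function and parameter names below are ours; the truncation of the spectrum to low harmonic numbers mirrors the observation in the abstract.

```python
import numpy as np

def nfg_section(profile, dx, depths, n_harmonics):
    """Normalized Full Gradient section of a profile (FFT-based sketch).

    For each trial depth z, the field is downward-continued, the full
    (total) gradient is formed from the horizontal and vertical
    derivatives, and the result is normalized by the mean gradient
    along the profile. Only the first `n_harmonics` harmonics are kept.
    """
    m = len(profile)
    k = 2.0 * np.pi * np.fft.fftfreq(m, d=dx)          # angular wavenumbers
    spec = np.fft.fft(profile)
    spec[np.abs(np.fft.fftfreq(m)) * m > n_harmonics] = 0.0  # truncate series
    section = np.empty((len(depths), m))
    for i, z in enumerate(depths):
        cont = spec * np.exp(np.abs(k) * z)            # downward continuation
        vx = np.real(np.fft.ifft(1j * k * cont))       # horizontal derivative
        vz = np.real(np.fft.ifft(np.abs(k) * cont))    # vertical derivative
        grad = np.hypot(vx, vz)                        # full gradient
        section[i] = grad / grad.mean()                # normalization
    return section

# Illustrative synthetic profile (a smooth anomaly) and three trial depths.
x = np.arange(101) * 1.0
profile = np.exp(-((x - 50.0) / 10.0) ** 2)
section = nfg_section(profile, dx=1.0, depths=[1.0, 2.0, 3.0], n_harmonics=10)
```

    In such a section, the location of the NFG maxima with depth is what would be read off as the estimate of the upper edge of the conductor.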

  10. Determination of Optimal Imaging Mode for Ultrasonographic Detection of Subdermal Contraceptive Rods: Comparison of Spatial Compound, Conventional, and Tissue Harmonic Imaging Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Jin; Seo, Kyung; Song, Ho Taek; Park, Ah Young; Kim, Yaena; Yoon, Choon Sik [Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Suh, Jin Suck; Kim, Ah Hyun [Dept. of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Ryu, Jeong Ah [Dept. of Radiology, Guri Hospital, Hanyang University College of Medicine, Guri (Korea, Republic of); Park, Jeong Seon [Dept. of Radiology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul (Korea, Republic of)

    2012-09-15

    To determine which mode of ultrasonography (US), among the conventional, spatial compound, and tissue harmonic methods, exhibits the best performance for the detection of Implanon with respect to the generation of posterior acoustic shadowing (PAS). A total of 21 patients, referred for localization of impalpable Implanon, underwent US using the three modes with default settings (i.e., wide focal zone). Representative transverse images of the rods, according to each mode, were obtained for all patients. The resulting 63 images were reviewed by four observers. The observers provided a confidence score for the presence of PAS, using a five-point scale ranging from 1 (definitely absent) to 5 (definitely present), with scores of 4 or 5 considered as detection. The average PAS scores obtained from the three modes for each observer were compared using one-way repeated-measures ANOVA. The detection rates were compared using a weighted least squares method. The tissue harmonic mode was statistically significantly superior to the other two modes when comparing the average PAS scores for all observers (p < 0.001). The detection rate was also highest for the tissue harmonic mode (p < 0.001). Tissue harmonic mode in US appears to be the most suitable for detecting subdermal contraceptive implant rods.

  11. Determination of Optimal Imaging Mode for Ultrasonographic Detection of Subdermal Contraceptive Rods: Comparison of Spatial Compound, Conventional, and Tissue Harmonic Imaging Methods

    International Nuclear Information System (INIS)

    Kim, Sung Jin; Seo, Kyung; Song, Ho Taek; Park, Ah Young; Kim, Yaena; Yoon, Choon Sik; Suh, Jin Suck; Kim, Ah Hyun; Ryu, Jeong Ah; Park, Jeong Seon

    2012-01-01

    To determine which mode of ultrasonography (US), among the conventional, spatial compound, and tissue harmonic methods, exhibits the best performance for the detection of Implanon with respect to the generation of posterior acoustic shadowing (PAS). A total of 21 patients, referred for localization of impalpable Implanon, underwent US using the three modes with default settings (i.e., wide focal zone). Representative transverse images of the rods, according to each mode, were obtained for all patients. The resulting 63 images were reviewed by four observers. The observers provided a confidence score for the presence of PAS, using a five-point scale ranging from 1 (definitely absent) to 5 (definitely present), with scores of 4 or 5 considered as detection. The average PAS scores obtained from the three modes for each observer were compared using one-way repeated-measures ANOVA. The detection rates were compared using a weighted least squares method. The tissue harmonic mode was statistically significantly superior to the other two modes when comparing the average PAS scores for all observers (p < 0.001). The detection rate was also highest for the tissue harmonic mode (p < 0.001). Tissue harmonic mode in US appears to be the most suitable for detecting subdermal contraceptive implant rods.

  12. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment-plan verification software that we developed. The quality factor (QF) of our decision-support software was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). We therefore propose a normal tissue complication probability (NTCP) based plan-scoring index for verification of different plans in personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-risk sparing using the treatment-planning decision-support software we developed for prostate cancer.
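
    The paper's exact QF formula is not reproduced in the abstract, but an NTCP-based score can be sketched with the standard Lyman-Kutcher-Burman model. Everything below (function names, the ratio-style quality factor, and the parameter values) is an illustrative assumption, not the authors' method.

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: the probability of complication for an
    organ receiving equivalent uniform dose `eud`, where `td50` is the
    dose giving 50% complication probability and `m` sets the slope."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ntcp_quality_factor(plan_ntcps, reference_ntcps):
    """Hypothetical plan score: ratio of the summed NTCPs of a reference
    plan to those of a candidate plan (lower total NTCP -> higher score)."""
    return sum(reference_ntcps) / max(sum(plan_ntcps), 1e-12)

# Illustrative comparison of two hypothetical plans for one organ at risk.
ntcp_a = lkb_ntcp(eud=65.0, td50=70.0, m=0.15)
ntcp_b = lkb_ntcp(eud=60.0, td50=70.0, m=0.15)
qf = ntcp_quality_factor([ntcp_a], [ntcp_b])
```

    A score of this kind ranks competing plans by predicted organ-at-risk sparing rather than by purely dosimetric indexes, which is the spirit of the index the paper proposes.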

  13. Marine neurotoxins: state of the art, bottlenecks, and perspectives for mode of action based methods of detection in seafood.

    Science.gov (United States)

    Nicolas, Jonathan; Hendriksen, Peter J M; Gerssen, Arjen; Bovee, Toine F H; Rietjens, Ivonne M C M

    2014-01-01

    Marine biotoxins can accumulate in fish and shellfish, representing a possible threat to consumers. Many marine biotoxins affect neuronal function, essentially through their interaction with ion channels or receptors, leading to different symptoms including paralysis and even death. The detection of marine biotoxins in seafood products is therefore a priority. Official methods for control often still rely on in vivo assays, such as the mouse bioassay. This test is considered unethical, and the development of alternative assays is urgently required. Chemical analyses as well as in vitro assays have been developed to detect marine biotoxins in seafood. However, most of the current in vitro alternatives to animal testing present disadvantages: low throughput and a lack of sensitivity, resulting in a high number of false-negative results. Thus, there is an urgent need for new in vitro tests that allow the detection of marine biotoxins in seafood products at low cost and with high throughput, combined with high sensitivity, reproducibility, and predictivity. Mode of action based in vitro bioassays may provide tools that fulfil these requirements. This review covers the current state of the art of such mode of action based alternative assays to detect neurotoxic marine biotoxins in seafood. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A review of output-only structural mode identification literature employing blind source separation methods

    Science.gov (United States)

    Sadhu, A.; Narasimhan, S.; Antoni, J.

    2017-09-01

    Output-only modal identification has seen significant activity in recent years, especially for large-scale structures, where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods that do not require controlled input. They often work satisfactorily provided they satisfy some general, not overly restrictive, assumptions regarding the stochasticity of the input. Hundreds of papers related to the extraction of modal properties from output measurement data appear every year in more than two dozen mechanical, aerospace and civil engineering journals, covering a wide range of applications. In little more than a decade, concepts of blind source separation (BSS) from the field of acoustic signal processing have been adopted by several researchers, who have shown that they can be attractive tools for output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, BSS's mathematical equivalence to problems in linear structural dynamics has since been firmly established. This has enabled many developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over one hundred articles related to the application of BSS and its variants to output-only modal identification. The main contribution of the paper is a literature review of the papers that have appeared on the subject. While a brief treatment of the basic ideas is presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods, from both theoretical and application standpoints, are discussed. Gap areas requiring additional work are summarized, and the paper concludes with possible future trends in this area.
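
    To make the BSS idea concrete, here is a minimal sketch of one classical second-order algorithm (AMUSE) applied to simulated two-channel structural responses. This is our own illustration, not an implementation from any specific reviewed paper; the mode-shape matrix and the damped-sinusoid "modal" signals are synthetic.

```python
import numpy as np

def amuse(x, lag=1):
    """AMUSE, a basic second-order BSS algorithm of the kind applied to
    output-only modal identification: whiten the measurements, then
    diagonalize one symmetrized time-lagged covariance of the whitened data."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the zero-lag covariance.
    c0 = x @ x.T / x.shape[1]
    d, e = np.linalg.eigh(c0)
    w = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = w @ x
    # Symmetrized lagged covariance; its eigenvectors give the rotation.
    c1 = z[:, :-lag] @ z[:, lag:].T / (z.shape[1] - lag)
    c1 = 0.5 * (c1 + c1.T)
    _, v = np.linalg.eigh(c1)
    sources = v.T @ z                  # recovered modal coordinates
    mixing = np.linalg.pinv(v.T @ w)   # estimated mode shapes (columns)
    return sources, mixing

# Two simulated structural modes mixed into two sensor channels.
t = np.linspace(0.0, 10.0, 2000)
s = np.vstack([np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.0 * t),
               np.exp(-0.10 * t) * np.sin(2 * np.pi * 3.0 * t)])
a = np.array([[1.0, 0.6], [0.5, 1.0]])   # hypothetical mode-shape matrix
recovered, shapes = amuse(a @ s)
```

    The recovered rows are the (arbitrarily ordered and scaled) modal coordinates, and the columns of the estimated mixing matrix play the role of mode shapes, which is exactly the structural-dynamics reading of the BSS model discussed in the review.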

  15. Algebraic description of intrinsic modes in nuclei

    International Nuclear Information System (INIS)

    Leviatan, A.

    1989-01-01

    We present a procedure for extracting normal modes in algebraic number-conserving systems of interacting bosons relevant for collective states in even-even nuclei. The Hamiltonian is resolved into intrinsic (bandhead-related) and collective (in-band-related) parts. Shape parameters are introduced through non-spherical boson bases. Intrinsic modes decoupled from the spurious modes are obtained from the intrinsic part of the Hamiltonian in the limit of a large number of bosons. Intrinsic states are constructed and serve to evaluate electromagnetic transition rates. The method is illustrated for systems with one type of boson as well as with proton-neutron bosons. 28 refs., 1 fig

  16. A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals

    Directory of Open Access Journals (Sweden)

    Marcello Manfredi

    2014-07-01

    Full Text Available In this paper we propose a reliable surface imaging method for the non-invasive detection of morphological changes in paintings. Usually, the evaluation and quantification of changes and defects result mostly from an optical and subjective assessment, through the comparison of the previous and subsequent states of conservation and by means of condition reports. Using quantitative Reflectance Transformation Imaging (RTI) we obtain detailed information on the geometry and morphology of the painting surface with a fast, precise and non-invasive method. Accurate and quantitative measurements of deterioration were acquired after the painting experienced artificial damage. Morphological changes were documented using normal-vector images, while the intensity map succeeded in highlighting, quantifying and describing the physical changes. We estimate that the technique can detect morphological damage slightly smaller than 0.3 mm, which would be difficult to detect by eye, considering the painting size. This non-invasive tool could be very useful, for example, to examine paintings and artwork before they travel on loan or during a restoration. The method lends itself to automated analysis of large images and datasets. Quantitative RTI thus eases the transition of extending human vision into the realm of measuring change over time.
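
    The core comparison of normal-vector images can be sketched as a per-pixel angular difference between the "before" and "after" normal maps, thresholded to flag change. The sketch below is our own illustration under stated assumptions (H x W x 3 unit-normal arrays; the 5-degree threshold and the toy 4x4 maps are hypothetical, not the paper's values).

```python
import numpy as np

def damage_map(n_before, n_after, threshold_deg=5.0):
    """Per-pixel angular change between two surface-normal images
    (H x W x 3 arrays of unit normals, e.g. derived from RTI), plus a
    boolean mask of pixels whose normals tilted more than the threshold."""
    dot = np.clip(np.sum(n_before * n_after, axis=-1), -1.0, 1.0)
    angle = np.degrees(np.arccos(dot))
    return angle, angle > threshold_deg

# Hypothetical 4x4 normal maps: a flat surface, one pixel tilted by 10
# degrees after artificial damage.
before = np.zeros((4, 4, 3))
before[..., 2] = 1.0
after = before.copy()
after[2, 2] = [np.sin(np.radians(10.0)), 0.0, np.cos(np.radians(10.0))]
angles, flagged = damage_map(before, after)
```

    Summing or mapping `angles` over the image gives the kind of quantitative intensity map of physical change the paper describes, and the mask supports automated analysis of large image sets.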

  17. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    Science.gov (United States)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened-nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened-nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I), trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  18. Advanced analytical method of nereistoxin using mixed-mode cationic exchange solid-phase extraction and GC/MS.

    Science.gov (United States)

    Park, Yujin; Choe, Sanggil; Lee, Heesang; Jo, Jiyeong; Park, Yonghoon; Kim, Eunmi; Pyo, Jaesung; Jung, Jee H

    2015-07-01

    Nereistoxin (NTX) originates from the marine annelid worm Lumbriconereis heteropoda, and its analogue pesticides, including cartap, bensultap, thiocyclam and thiobensultap, have been commonly used in agriculture because of their low toxicity and high insecticidal activity. However, NTX has been reported to exert inhibitory neurotoxicity in humans and animals by blocking the nicotinic acetylcholine receptor, causing significant neuromuscular toxicity that can result in respiratory failure. We developed a new method to determine NTX in biological fluids. The method involves mixed-mode cationic-exchange solid-phase extraction and gas chromatography/mass spectrometry for final identification and quantitative analysis. The limit of detection and recovery were substantially better than those of other methods using liquid-liquid extraction or headspace solid-phase microextraction. Good recoveries (97±14%) were obtained in blood samples, and calibration curves over the range 0.05-20 mg/L had R2 values greater than 0.99. The developed method was applied to a fatal case of cartap intoxication of a 74-year-old woman who had ingested cartap hydrochloride for suicide. Cartap and NTX were detected in postmortem specimens, and the cause of death was ruled to be nereistoxin intoxication. The concentrations of NTX were 2.58 mg/L, 3.36 mg/L and 1479.7 mg/L in heart blood, femoral blood and stomach liquid content, respectively. The heart blood/femoral blood ratio of NTX was 0.76. Copyright © 2015. Published by Elsevier Ireland Ltd.

  19. Methods for Real-Time Prediction of the Mode of Travel Using Smartphone-Based GPS and Accelerometer Data.

    Science.gov (United States)

    Martin, Bryan D; Addona, Vittorio; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling

    2017-09-08

    We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension-reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, the data we use required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy.
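
    The dimension-reduction-plus-classifier combination described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, with synthetic per-segment features (mean/max speed, mean and variance of accelerometer magnitude); the paper's actual feature set, data, and tuning are richer, and the cluster centers below are invented for the demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
modes = ["walk", "bike", "car", "bus", "rail"]
# Hypothetical feature centers per mode:
# [mean speed (m/s), max speed (m/s), mean |accel| (g), var |accel|]
centers = np.array([[1.4, 2.0, 1.0, 0.30],
                    [4.0, 7.0, 1.0, 0.20],
                    [15.0, 33.0, 1.0, 0.05],
                    [8.0, 20.0, 1.0, 0.10],
                    [20.0, 30.0, 1.0, 0.02]])
X = np.repeat(centers, 200, axis=0) + rng.normal(scale=0.5, size=(1000, 4))
y = np.repeat(modes, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Dimension reduction followed by a random forest, mirroring the
# combination strategy the paper compares.
clf = make_pipeline(PCA(n_components=3), RandomForestClassifier(random_state=0))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

    On real smartphone data the classes overlap far more (bus vs. car in particular), which is why the paper's accuracy/dimensionality trade-off metric is needed to pick among such pipelines.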

  20. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution by the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions of the random variable obtained on the basis of the backward transformation of the standard normal ...
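
    The Johnson SU branch of the system maps a sample to an approximately standard normal variable via z = gamma + delta * asinh((x - xi) / lambda). The sketch below uses scipy's `johnsonsu.fit`, which estimates the parameters by maximum likelihood rather than by the percentile/symmetrical quantile method of the article; the lognormal test sample is our own illustration.

```python
import numpy as np
from scipy import stats

def johnson_su_to_normal(x):
    """Fit a Johnson SU curve to empirical data and transform the sample
    toward standard normality via z = gamma + delta * asinh((x - xi)/lambda).
    (scipy fits by maximum likelihood, not the percentile method.)"""
    gamma, delta, xi, lam = stats.johnsonsu.fit(x)
    z = gamma + delta * np.arcsinh((x - xi) / lam)
    return z, (gamma, delta, xi, lam)

# Illustrative skewed sample: the transformation should remove most skewness.
rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=0.6, size=2000)
z, params = johnson_su_to_normal(skewed)
```

    After the forward transformation, normal-theory tools (control charts, capability indices, tolerance intervals) can be applied to z, and the backward transformation carries the results to the original scale.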