WorldWideScience

Sample records for normal form method

  1. SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    The rapid development of methods of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic determines the need for a more detailed study of the forms of representation of functions of many-valued logic. In particular, the algebraic normal form of Boolean functions, also known as the Zhegalkin polynomial, which captures many of the cryptographic properties of Boolean functions, is widely used. In this article, we formalized the notion of the algebraic normal form for many-valued logic functions. We developed a fast method for the synthesis of the algebraic normal form of 3-functions and 5-functions that works similarly to the Reed-Muller transform for Boolean functions: on the basis of recurrently synthesized transform matrices. We propose a hypothesis that determines the rules for the synthesis of these matrices for the transformation from the truth table to the coefficients of the algebraic normal form, and for the inverse transform, for any given number of variables of 3-functions or 5-functions. The article also introduces the definition of the algebraic degree of nonlinearity of many-valued logic functions and of S-boxes based on the principles of many-valued logic. The methods of synthesis of the algebraic normal form of 3-functions are then applied to the known construction of recurrent synthesis of S-boxes of length N = 3^k, and their algebraic degrees of nonlinearity are computed. The results could be the basis for further theoretical research and practical applications such as the development of new cryptographic primitives, error-correcting codes, data compression algorithms, signal structures, and algorithms of block and stream encryption, all based on the promising principles of many-valued logic. In addition, the fast method of synthesis of the algebraic normal form of many-valued logic functions is the basis for its software and hardware implementation.
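
    The record above concerns 3- and 5-valued functions; as a point of reference, the sketch below (not taken from the article) shows the two-valued case it generalizes: computing the Zhegalkin/ANF coefficients of a Boolean function from its truth table with the fast Möbius (Reed-Muller butterfly) transform and reading off the algebraic degree. Function names are illustrative.

```python
# Sketch of the Boolean (two-valued) case that the article generalizes:
# ANF (Zhegalkin) coefficients from a truth table via the XOR butterfly.
def anf_coefficients(truth_table):
    """Return ANF coefficients of a Boolean function given as a truth table
    of length 2**n (index i encodes the input bits of the function)."""
    n = len(truth_table).bit_length() - 1
    assert len(truth_table) == 1 << n, "truth table length must be a power of two"
    c = list(truth_table)
    step = 1
    while step < len(c):
        for start in range(0, len(c), 2 * step):
            for i in range(start, start + step):
                c[i + step] ^= c[i]   # XOR butterfly, works over GF(2)
        step *= 2
    return c  # c[m] = 1 iff the monomial with variable mask m appears

def algebraic_degree(truth_table):
    """Algebraic degree = largest Hamming weight of a monomial mask with a
    nonzero ANF coefficient."""
    coeffs = anf_coefficients(truth_table)
    return max((bin(m).count("1") for m, a in enumerate(coeffs) if a), default=0)

if __name__ == "__main__":
    # f(x1, x2, x3) = x1*x2 XOR x3  -> monomial masks {0b011, 0b100}, degree 2
    f = [(i & 1) & ((i >> 1) & 1) ^ ((i >> 2) & 1) for i in range(8)]
    print(anf_coefficients(f), algebraic_degree(f))
```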

  2. Nonlinear dynamics exploration through normal forms

    CERN Document Server

    Kahn, Peter B

    2014-01-01

    Geared toward advanced undergraduates and graduate students, this exposition covers the method of normal forms and its application to ordinary differential equations through perturbation analysis. In addition to its emphasis on the freedom inherent in the normal form expansion, the text features numerous examples of equations, the kind of which are encountered in many areas of science and engineering. The treatment begins with an introduction to the basic concepts underlying the normal forms. Coverage then shifts to an investigation of systems with one degree of freedom that model oscillations

  3. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations

    Science.gov (United States)

    DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.

    2008-06-01

    For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ²), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
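
    As a concrete, hedged illustration of the kind of reduced amplitude equation this record discusses (a standard textbook example, not taken from the paper): for the weakly nonlinear van der Pol oscillator x'' + x = ε(1 − x²)x', first-order averaging/RG gives dR/dt = (ε/2)R(1 − R²/4). The sketch below integrates both the full equation and the reduced one and compares them on an O(1/ε) time scale; it assumes NumPy and SciPy are available.

```python
# Illustrative check (not from the article): compare the van der Pol envelope
# with the first-order averaged/RG amplitude equation dR/dt = (eps/2)*R*(1-R**2/4).
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def vdp(t, y):
    x, v = y
    return [v, -x + eps * (1.0 - x**2) * v]

def amplitude(t, r):
    return 0.5 * eps * r * (1.0 - r**2 / 4.0)

t_end = 5.0 / eps                 # O(1/eps) time scale
x0, v0 = 0.5, 0.0                 # initial amplitude R(0) ~ 0.5
full = solve_ivp(vdp, (0, t_end), [x0, v0], max_step=0.05, dense_output=True)
red = solve_ivp(amplitude, (0, t_end), [x0], dense_output=True)

# Compare the slowly varying envelope sqrt(x^2 + v^2) against R(t).
ts = np.linspace(0, t_end, 20)
env = np.hypot(*full.sol(ts))
print(np.c_[ts, env, red.sol(ts)[0]])
```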

  4. Application of normal form methods to the analysis of resonances in particle accelerators

    International Nuclear Information System (INIS)

    Davies, W.G.

    1992-01-01

    The transformation to normal form in a Lie-algebraic framework provides a very powerful method for identifying and analysing non-linear behaviour and resonances in particle accelerators. The basic ideas are presented and illustrated. (author). 4 refs

  5. Normal forms of Hopf-zero singularity

    International Nuclear Information System (INIS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative–nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov–Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov–Takens singularities. Despite this, the normal form computations of Bogdanov–Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative–nonconservative decomposition. Some practical formulas are derived and the results are implemented using Maple. The method has been applied to the Rössler and Kuramoto–Sivashinsky equations to demonstrate the applicability of our results. (paper)

  6. Normal forms of Hopf-zero singularity

    Science.gov (United States)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results are implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  7. Mandibulary dental arch form differences between level four polynomial method and pentamorphic pattern for normal occlusion sample

    Directory of Open Access Journals (Sweden)

    Y. Yuliana

    2011-07-01

    The aim of orthodontic treatment is to achieve aesthetics, dental health of the teeth and surrounding tissues, a functional occlusal relationship, and stability. The success of an orthodontic treatment is influenced by many factors, such as the diagnosis and the treatment plan. In order to make a diagnosis and a treatment plan, the medical record, clinical examination, radiographic examination, extraoral and intraoral photographs, as well as study model analysis are needed. The purpose of this study was to evaluate the differences in dental arch form between the level four polynomial and the pentamorphic arch form and to determine which one is better suited for a normal occlusion sample. This analytic comparative study was conducted at the Faculty of Dentistry, Universitas Padjadjaran, on 13 models by comparing the dental arch form obtained using the level four polynomial method, based on mathematical calculations, with the pentamorphic arch pattern, using mandibular normal occlusion as a control. The results obtained were tested using Student's t-test. The results indicate a significant difference between both the level four polynomial method and the pentamorphic arch form when compared with the mandibular normal occlusion dental arch form. The level four polynomial fits better compared to the pentamorphic arch form.
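
    A minimal sketch of what a "level four polynomial" fit means in practice: a fourth-degree polynomial fitted to arch landmark coordinates. The landmark values below are invented for illustration and are not the study's data; the fitting itself uses numpy.polyfit.

```python
# Minimal sketch of the "level four (fourth-degree) polynomial" idea: fit
# y = a0 + a1*x + ... + a4*x**4 to mandibular arch landmark coordinates.
import numpy as np

# (x, y) positions of tooth landmarks across one arch, in millimetres (hypothetical)
x = np.array([-25.0, -20.0, -14.0, -7.0, 0.0, 7.0, 14.0, 20.0, 25.0])
y = np.array([  0.0,  10.0,  18.0, 23.0, 25.0, 23.0, 18.0, 10.0, 0.0])

coeffs = np.polyfit(x, y, deg=4)          # highest-order coefficient first
arch = np.poly1d(coeffs)

rmse = np.sqrt(np.mean((arch(x) - y) ** 2))
print("coefficients:", np.round(coeffs, 4))
print("RMSE of quartic arch fit (mm):", round(rmse, 3))
```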

  8. Normal form and synchronization of strict-feedback chaotic systems

    International Nuclear Information System (INIS)

    Wang, Feng; Chen, Shihua; Yu Minghai; Wang Changping

    2004-01-01

    This study concerns the normal form and synchronization of strict-feedback chaotic systems. We prove that any strict-feedback chaotic system can be brought into a normal form by an invertible transformation, and a design procedure to synchronize the normal form of a non-autonomous strict-feedback chaotic system is then presented. This approach needs only a scalar driving signal to realize synchronization, no matter how many dimensions the chaotic system contains. Furthermore, the Rössler chaotic system is taken as a concrete example to illustrate the design procedure without transforming a strict-feedback chaotic system into its normal form. Numerical simulations are also provided to show the effectiveness and feasibility of the developed methods

  9. THE METHOD OF CONSTRUCTING A BOOLEAN FORMULA OF A POLYGON IN THE DISJUNCTIVE NORMAL FORM

    Directory of Open Access Journals (Sweden)

    A. A. Butov

    2014-01-01

    The paper focuses on finalizing the method of finding a Boolean formula of a polygon in disjunctive normal form, described in the previous article [1]. The improved method eliminates the drawback associated with the existence of a class of problems for which the solution is only approximate. The proposed method always allows an exact solution to be found. The method can be used, in particular, in systems for computer-aided design of integrated circuit topology.

  10. Normal form for mirror machine Hamiltonians

    International Nuclear Information System (INIS)

    Dragt, A.J.; Finn, J.M.

    1979-01-01

    A systematic algorithm is developed for performing canonical transformations on Hamiltonians which govern particle motion in magnetic mirror machines. These transformations are performed in such a way that the new Hamiltonian has a particularly simple normal form. From this form it is possible to compute analytic expressions for gyro and bounce frequencies. In addition, it is possible to obtain arbitrarily high order terms in the adiabatic magnetic moment expansion. The algorithm makes use of Lie series, is an extension of Birkhoff's normal form method, and has been explicitly implemented by a digital computer programmed to perform the required algebraic manipulations. Application is made to particle motion in a magnetic dipole field and to a simple mirror system. Bounce frequencies and locations of periodic orbits are obtained and compared with numerical computations. Both mirror systems are shown to be insoluble, i.e., trajectories are not confined to analytic hypersurfaces, there is no analytic third integral of motion, and the adiabatic magnetic moment expansion is divergent. It is also expected that the normal form procedure will prove useful in the study of island structure and separatrices associated with periodic orbits, and should facilitate studies of breakdown of adiabaticity and the onset of "stochastic" behavior

  11. Volume-preserving normal forms of Hopf-zero singularity

    International Nuclear Information System (INIS)

    Gazor, Majid; Mokhtari, Fahimeh

    2013-01-01

    A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is whence our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where they can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto–Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple. (paper)

  12. Volume-preserving normal forms of Hopf-zero singularity

    Science.gov (United States)

    Gazor, Majid; Mokhtari, Fahimeh

    2013-10-01

    A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is whence our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where they can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto-Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple.

  13. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    Energy Technology Data Exchange (ETDEWEB)

    Snopok, Pavel [Michigan State Univ., East Lansing, MI (United States)]

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator) then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented

  14. Normal form theory and spectral sequences

    OpenAIRE

    Sanders, Jan A.

    2003-01-01

    The concept of unique normal form is formulated in terms of a spectral sequence. As an illustration of this technique some results of Baider and Churchill concerning the normal form of the anharmonic oscillator are reproduced. The aim of this paper is to show that spectral sequences give us a natural framework in which to formulate normal form theory.

  15. A Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  16. An Algorithm for Higher Order Hopf Normal Forms

    Directory of Open Access Journals (Sweden)

    A.Y.T. Leung

    1995-01-01

    Normal form theory is important for studying the qualitative behavior of nonlinear oscillators. In some cases, higher order normal forms are required to understand the dynamic behavior near an equilibrium or a periodic orbit. However, the computation of high-order normal forms is usually quite complicated. This article provides an explicit formula for the normalization of nonlinear differential equations. The higher order normal form is given explicitly. Illustrative examples include a cubic system, a quadratic system and a Duffing–Van der Pol system. We use exact arithmetic and find that the undamped Duffing equation can be represented by an exact polynomial differential amplitude equation in a finite number of terms.

  17. A New Normal Form for Multidimensional Mode Conversion

    International Nuclear Information System (INIS)

    Tracy, E. R.; Richardson, A. S.; Kaufman, A. N.; Zobin, N.

    2007-01-01

    Linear conversion occurs when two wave types, with distinct polarization and dispersion characteristics, are locally resonant in a nonuniform plasma [1]. In recent work, we have shown how to incorporate a ray-based (WKB) approach to mode conversion in numerical algorithms [2,3]. The method uses the ray geometry in the conversion region to guide the reduction of the full N×N system of wave equations to a 2×2 coupled pair which can be solved and matched to the incoming and outgoing WKB solutions. The algorithm in [2] assumes the ray geometry is hyperbolic and that, in ray phase space, there is an 'avoided crossing', which is the most common type of conversion. Here, we present a new formulation that can deal with more general types of conversion [4]. This formalism is based upon the fact (first proved in [5]) that it is always possible to put the 2×2 wave equation into a 'normal' form, such that the diagonal elements of the dispersion matrix Poisson-commute with the off-diagonals (at leading order). Therefore, if we use the diagonals (rather than the eigenvalues or the determinant) of the dispersion matrix as ray Hamiltonians, the off-diagonals will be conserved quantities. When cast into normal form, the 2×2 dispersion matrix has a very natural physical interpretation: the diagonals are the uncoupled ray Hamiltonians and the off-diagonals are the coupling. We discuss how to incorporate the normal form into ray tracing algorithms

  18. Quantifying Normal Craniofacial Form and Baseline Craniofacial Asymmetry in the Pediatric Population.

    Science.gov (United States)

    Cho, Min-Jeong; Hallac, Rami R; Ramesh, Jananie; Seaward, James R; Hermann, Nuno V; Darvann, Tron A; Lipira, Angelo; Kane, Alex A

    2018-03-01

    Restoring craniofacial symmetry is an important objective in the treatment of many craniofacial conditions. Normal form has been measured using anthropometry, cephalometry, and photography, yet all of these modalities have drawbacks. In this study, the authors define normal pediatric craniofacial form and craniofacial asymmetry using stereophotogrammetric images, which capture a densely sampled set of points on the form. After institutional review board approval, normal, healthy children (n = 533) with no known craniofacial abnormalities were recruited at well-child visits to undergo full head stereophotogrammetric imaging. The children's ages ranged from 0 to 18 years. A symmetric three-dimensional template was registered and scaled to each individual scan using 25 manually placed landmarks. The template was deformed to each subject's three-dimensional scan using a thin-plate spline algorithm and closest point matching. Age-based normal facial models were derived. Mean facial asymmetry and statistical characteristics of the population were calculated. The mean head asymmetry across all pediatric subjects was 1.5 ± 0.5 mm (range, 0.46 to 4.78 mm), and the mean facial asymmetry was 1.2 ± 0.6 mm (range, 0.4 to 5.4 mm). There were no significant differences in the mean head or facial asymmetry with age, sex, or race. Understanding the "normal" form and baseline distribution of asymmetry is an important anthropomorphic foundation. The authors present a method to quantify normal craniofacial form and baseline asymmetry in a large pediatric sample. The authors found that the normal pediatric craniofacial form is asymmetric, and does not change in magnitude with age, sex, or race.
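
    A hedged sketch of one common way to quantify the kind of asymmetry reported above: reflect a point cloud across the midsagittal plane and take the mean closest-point distance back to the original. This is not the authors' registration and thin-plate-spline pipeline, only the basic idea; the point cloud below is synthetic.

```python
# Hedged sketch of a simple mirror-distance asymmetry measure (not the
# authors' exact pipeline).
import numpy as np
from scipy.spatial import cKDTree

def mean_asymmetry(points):
    """points: (N, 3) array of surface points, roughly aligned so the
    midsagittal plane is x = 0.  Returns mean mirror distance (same units)."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect the x coordinate
    dists, _ = cKDTree(mirrored).query(points)        # closest mirrored point
    return float(dists.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.normal(size=(500, 3)) * [30.0, 40.0, 25.0]   # hypothetical points, mm
    face[:, 0] += 0.8                                        # introduce a small shift
    print("mean asymmetry (mm):", round(mean_asymmetry(face), 2))
```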

  19. Automatic identification and normalization of dosage forms in drug monographs

    Science.gov (United States)

    2012-01-01

    Background: Each day, millions of health consumers seek drug-related information on the Web. Despite some efforts in linking related resources, drug information is largely scattered across a wide variety of websites of different quality and credibility. Methods: As a step toward providing users with integrated access to multiple trustworthy drug resources, we aim to develop a method capable of identifying a drug's dosage form information in addition to drug name recognition. We developed rules and patterns for identifying dosage forms from different sections of full-text drug monographs, and subsequently normalized them to standardized RxNorm dosage forms. Results: Our method represents a significant improvement compared with a baseline lookup approach, achieving overall macro-averaged Precision of 80%, Recall of 98%, and F-Measure of 85%. Conclusions: We successfully developed an automatic approach for drug dosage form identification, which is critical for building links between different drug-related resources. PMID:22336431
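
    A toy sketch of the rule-and-pattern idea described in the Methods: regular expressions spot dosage-form mentions in monograph text and map them to a normalized label. The patterns and target labels below are invented for illustration; they are not the authors' rules and not actual RxNorm dosage-form names.

```python
# Toy sketch of rule-based dosage form identification and normalization.
# The pattern list and labels are illustrative only.
import re

NORMALIZATION_RULES = [
    (re.compile(r"\b(tablets?|tabs?)\b", re.I), "Oral Tablet"),
    (re.compile(r"\bcapsules?\b", re.I), "Oral Capsule"),
    (re.compile(r"\b(oral\s+)?(solution|suspension|syrup)\b", re.I), "Oral Liquid"),
    (re.compile(r"\b(ointment|cream|gel)\b", re.I), "Topical Product"),
    (re.compile(r"\b(injections?|injectable)\b", re.I), "Injectable Product"),
]

def identify_dosage_forms(text):
    """Return the set of normalized dosage forms mentioned in free text."""
    found = set()
    for pattern, normalized in NORMALIZATION_RULES:
        if pattern.search(text):
            found.add(normalized)
    return found

print(sorted(identify_dosage_forms(
    "Supplied as 250 mg film-coated tablets and as an oral suspension.")))
# -> ['Oral Liquid', 'Oral Tablet']
```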

  20. Normal forms in Poisson geometry

    NARCIS (Netherlands)

    Marcut, I.T.

    2013-01-01

    The structure of Poisson manifolds is highly nontrivial even locally. The first important result in this direction is Conn's linearization theorem around fixed points. One of the main results of this thesis (Theorem 2) is a normal form theorem in Poisson geometry, which is the Poisson-geometric

  1. Method for construction of normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  2. Diagonalization and Jordan Normal Form--Motivation through "Maple" [R]

    Science.gov (United States)

    Glaister, P.

    2009-01-01

    Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package…
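
    The note motivates Jordan normal form through Maple; the same kind of computation can be sketched with the SymPy computer algebra package (an assumed substitute here, not the note's tool). The matrix below is defective, so its Jordan form contains a single 2x2 Jordan block.

```python
# Sketch with SymPy (assumed substitute for the Maple session in the note).
# A has characteristic polynomial (lambda - 2)^2 but only one eigenvector,
# so it is not diagonalizable and its Jordan form is a 2x2 Jordan block.
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])

P, J = A.jordan_form()      # A = P * J * P**(-1)
sp.pprint(J)                # [[2, 1], [0, 2]]
assert P * J * P.inv() == A
```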

  3. Normal equivariant forms of vector fields

    International Nuclear Information System (INIS)

    Sanchez Bringas, F.

    1992-07-01

    We prove a linearization theorem of Siegel type and a normal form theorem of Poincaré-Dulac type for germs of holomorphic vector fields at the origin of C² which are Γ-equivariant, where Γ is a finite subgroup of GL(2,C). (author). 5 refs

  4. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    Science.gov (United States)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  5. The method of normal forms for singularly perturbed systems of Fredholm integro-differential equations with rapidly varying kernels

    Energy Technology Data Exchange (ETDEWEB)

    Bobodzhanov, A A; Safonov, V F [National Research University "Moscow Power Engineering Institute", Moscow (Russian Federation)]

    2013-07-31

    The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems which have not so far been studied. In these systems the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties. The principal one is to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed. It is applied to systems of the above type and is substantiated for them. Bibliography: 10 titles.

  6. TRASYS form factor matrix normalization

    Science.gov (United States)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries, and in fact, it is primarily intended for use with open geometries. The purpose of this approach is to prevent overly optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and then a process is employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7°C and 3°C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0°C to 5°C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
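
    A hedged sketch of the underlying idea only (not the TRASYS implementation): check that each node's form factor row sum is within a tolerance of unity, then redistribute the deficit proportionally so every row sums to exactly 1. The tolerance value and the sample matrix are illustrative.

```python
# Hedged sketch: proportional redistribution of the row-sum deficit of an
# enclosure form factor matrix (not the TRASYS algorithm itself).
import numpy as np

def normalize_form_factors(F, tol=0.05):
    """F: (N, N) array of enclosure form factors (row i = factors from node i).
    Returns a copy whose rows sum to 1; raises if a row is off by more than tol."""
    F = np.asarray(F, dtype=float).copy()
    sums = F.sum(axis=1)
    if np.any(np.abs(sums - 1.0) > tol):
        raise ValueError(f"row sums deviate from unity by more than {tol}: {sums}")
    return F / sums[:, None]     # spread each row's difference proportionally

F = np.array([[0.00, 0.52, 0.46],
              [0.30, 0.00, 0.68],
              [0.25, 0.71, 0.00]])
print(normalize_form_factors(F).sum(axis=1))   # -> [1. 1. 1.]
```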

  7. AFP Algorithm and a Canonical Normal Form for Horn Formulas

    OpenAIRE

    Majdoddin, Ruhollah

    2014-01-01

    AFP Algorithm is a learning algorithm for Horn formulas. We show that it does not improve the complexity of AFP Algorithm, if after each negative counterexample more that just one refinements are performed. Moreover, a canonical normal form for Horn formulas is presented, and it is proved that the output formula of AFP Algorithm is in this normal form.

  8. Utilizing Nested Normal Form to Design Redundancy Free JSON Schemas

    Directory of Open Access Journals (Sweden)

    Wai Yin Mok

    2016-12-01

    JSON (JavaScript Object Notation) is a lightweight data-interchange format for the Internet. JSON is built on two structures: (1) a collection of name/value pairs and (2) an ordered list of values (http://www.json.org/). Because of this simple approach, JSON is easy to use and has the potential to be the data interchange format of choice for the Internet. Similar to XML, JSON schemas allow nested structures to model hierarchical data. As data interchange over the Internet increases exponentially due to cloud computing or otherwise, redundancy-free JSON data are an attractive form of communication because they improve the quality of data communication through eliminating update anomalies. Nested Normal Form, a normal form for hierarchical data, is a precise characterization of redundancy. A nested table, or a hierarchical schema, is in Nested Normal Form if and only if it is free of redundancy caused by multivalued and functional dependencies. Using Nested Normal Form as a guide, this paper introduces a JSON schema design methodology that begins with UML use case diagrams, communication diagrams, and class diagrams that model a system under study. Based on the use cases' execution frequencies and the data passed between involved parties in the communication diagrams, the proposed methodology selects classes from the class diagrams to be the roots of JSON scheme trees and repeatedly adds classes from the class diagram to the scheme trees as long as the schemas satisfy Nested Normal Form. This process continues until all of the classes in the class diagram have been added to some JSON scheme trees.
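
    A hypothetical micro-example of the redundancy the methodology is meant to eliminate: rather than repeating a customer's attributes on every order record (which invites update anomalies), the orders are nested under the customer so each fact is stored once. The entity names and fields are invented for illustration.

```python
# Hypothetical illustration of a redundancy-free nested JSON instance:
# customer facts appear exactly once, with the one-to-many orders nested inside.
import json

nested_schema_instance = {
    "customer_id": "C042",
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "orders": [                       # one-to-many: customer -> orders
        {"order_id": "O1", "date": "2016-11-02", "total": 120.50},
        {"order_id": "O2", "date": "2016-12-19", "total": 89.99},
    ],
}

print(json.dumps(nested_schema_instance, indent=2))
```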

  9. A normal form approach to the theory of nonlinear betatronic motion

    International Nuclear Information System (INIS)

    Bazzani, A.; Todesco, E.; Turchetti, G.; Servizi, G.

    1994-01-01

    The betatronic motion of a particle in a circular accelerator is analysed using the transfer map description of the magnetic lattice. In the linear case the transfer matrix approach is shown to be equivalent to the Courant-Snyder theory: in the normal coordinates representation the transfer matrix is a pure rotation. When the nonlinear effects due to the multipolar components of the magnetic field are taken into account, a similar procedure is used: a nonlinear change of coordinates provides a normal form representation of the map, which exhibits explicit symmetry properties depending on the absence or presence of resonance relations among the linear tunes. The use of normal forms is illustrated in the simplest but significant model of a cell with a sextupolar nonlinearity, which is described by the quadratic Hénon map. After recalling the basic theoretical results in Hamiltonian dynamics, we show how the normal forms describe the different topological structures of phase space such as KAM tori, chains of islands and chaotic regions; a critical comparison with the usual perturbation theory for Hamilton equations is given. The normal form theory is applied to compute the tune shift and deformation of the orbits for the lattices of the SPS and LHC accelerators, and scaling laws are obtained. Finally, the correction procedure of the multipolar errors of the LHC, based on the analytic minimization of the tune shift computed via the normal forms, is described and the results for a model of the LHC are presented. This application, relevant for the lattice design, focuses on the advantages of normal forms with respect to tracking when parametric dependences have to be explored. (orig.)
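
    The quadratic Hénon map mentioned above can be written as a sextupole-like kick followed by a linear rotation; sign conventions differ between papers, so the version below is a hedged sketch rather than the authors' exact definition. Iterating a few initial amplitudes already exposes the finite dynamic aperture that the normal forms quantify.

```python
# One common writing of the area-preserving quadratic Henon map:
# a sextupole-like kick p -> p + x**2 followed by a rotation by the linear tune.
import math

def henon_map(x, p, omega):
    """One turn of the quadratic Henon map (sign conventions vary by paper)."""
    c, s = math.cos(omega), math.sin(omega)
    kick = p + x * x
    return c * x + s * kick, -s * x + c * kick

def track(x0, p0, omega, turns=1000):
    """Return the orbit, stopping early if the particle escapes."""
    orbit, x, p = [], x0, p0
    for _ in range(turns):
        x, p = henon_map(x, p, omega)
        if abs(x) > 10 or abs(p) > 10:
            break                     # lost particle (outside dynamic aperture)
        orbit.append((x, p))
    return orbit

omega = 2 * math.pi * 0.2114          # illustrative linear tune
for x0 in (0.1, 0.3, 0.5, 0.7):
    print(f"x0={x0:.1f}: survived {len(track(x0, 0.0, omega))} turns")
```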

  10. Normal form of linear systems depending on parameters

    International Nuclear Information System (INIS)

    Nguyen Huynh Phan.

    1995-12-01

    In this paper we completely resolve the problem of finding normal forms of linear systems depending on parameters under the feedback action, which we have previously studied for the special case of controllable linear systems. (author). 24 refs

  11. Normal forms of invariant vector fields under a finite group action

    International Nuclear Information System (INIS)

    Sanchez Bringas, F.

    1992-07-01

    Let Γ be a finite subgroup of GL(n,C). This subgroup acts on the space of germs of holomorphic vector fields vanishing at the origin in Cⁿ. We prove a theorem of invariant conjugation to a normal form and linearization for the subspace of invariant elements, and we give a description of these normal forms in dimension n=2. (author)

  12. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
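
    A hedged sketch of the variance-recovery idea for one of the functions named in the abstract, the normal percentile μ + z_p·σ (the building block of limits of agreement): separate closed-form confidence intervals for the mean (t-based) and for z_p·σ (chi-square-based) are combined with a MOVER-type square-root rule. This follows the general recipe and is not claimed to reproduce the authors' exact formulas.

```python
# Hedged sketch: closed-form CI for mu + z_p*sigma by recovering variance
# information from separate CIs for the mean and the standard deviation.
import numpy as np
from scipy import stats

def percentile_ci(x, p=0.975, conf=0.95):
    x = np.asarray(x, float)
    n, mean, sd = x.size, x.mean(), x.std(ddof=1)
    zp = stats.norm.ppf(p)
    alpha = 1.0 - conf

    # closed-form CI for the mean (t-based)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * sd / np.sqrt(n)
    l1, u1 = mean - half, mean + half

    # closed-form CI for z_p * sigma (chi-square-based)
    l2 = zp * sd * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
    u2 = zp * sd * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))

    # MOVER-type combination of the two interval halves
    est = mean + zp * sd
    lower = est - np.sqrt((mean - l1) ** 2 + (zp * sd - l2) ** 2)
    upper = est + np.sqrt((u1 - mean) ** 2 + (u2 - zp * sd) ** 2)
    return est, lower, upper

rng = np.random.default_rng(1)
print(percentile_ci(rng.normal(10.0, 2.0, size=40)))
```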

  13. On the relationship between LTL normal forms and Büchi automata

    DEFF Research Database (Denmark)

    Li, Jianwen; Pu, Geguang; Zhang, Lijun

    2013-01-01

    In this paper, we revisit the problem of translating LTL formulas to Büchi automata. We first translate the given LTL formula into a special disjunctive normal form (DNF). The formula will be part of the state, and its DNF specifies the atomic properties that should hold immediately...

  14. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  15. Normal Forms for Retarded Functional Differential Equations and Applications to Bogdanov-Takens Singularity

    Science.gov (United States)

    Faria, T.; Magalhaes, L. T.

    The paper addresses, for retarded functional differential equations (FDEs), the computation of normal forms associated with the flow on a finite-dimensional invariant manifold tangent to invariant spaces for the infinitesimal generator of the linearized equation at a singularity. A phase space appropriate to the computation of these normal forms is introduced, and adequate nonresonance conditions for the computation of the normal forms are derived. As an application, the general situation of Bogdanov-Takens singularity and its versal unfolding for scalar retarded FDEs with nondegeneracy at second order is considered, both in the general case and in the case of differential-delay equations of the form ẋ(t) = f(x(t), x(t−1)).

  16. Reconstruction of normal forms by learning informed observation geometries from data.

    Science.gov (United States)

    Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G

    2017-09-19

    The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.

  17. Methods of forming semiconductor devices and devices formed using such methods

    Science.gov (United States)

    Fox, Robert V; Rodriguez, Rene G; Pak, Joshua

    2013-05-21

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  18. Densified waste form and method for forming

    Science.gov (United States)

    Garino, Terry J.; Nenoff, Tina M.; Sava Gallis, Dorina Florentina

    2015-08-25

    Materials and methods of making densified waste forms for temperature sensitive waste material, such as nuclear waste, formed with low temperature processing using metallic powder that forms the matrix that encapsulates the temperature sensitive waste material. The densified waste form includes a temperature sensitive waste material in a physically densified matrix, the matrix is a compacted metallic powder. The method for forming the densified waste form includes mixing a metallic powder and a temperature sensitive waste material to form a waste form precursor. The waste form precursor is compacted with sufficient pressure to densify the waste precursor and encapsulate the temperature sensitive waste material in a physically densified matrix.

  19. Normal Forms for Fuzzy Logics: A Proof-Theoretic Approach

    Czech Academy of Sciences Publication Activity Database

    Cintula, Petr; Metcalfe, G.

    2007-01-01

    Vol. 46, No. 5-6 (2007), pp. 347-363 ISSN 1432-0665 R&D Projects: GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10300504 Keywords: fuzzy logic * normal form * proof theory * hypersequents Subject RIV: BA - General Mathematics Impact factor: 0.620, year: 2007

  20. A New One-Pass Transformation into Monadic Normal Form

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2003-01-01

    We present a translation from the call-by-value λ-calculus to monadic normal forms that includes short-cut boolean evaluation. The translation is higher-order, operates in one pass, duplicates no code, generates no chains of thunks, and is properly tail recursive. It makes crucial use of symbolic...

  1. New method for computing ideal MHD normal modes in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Wysocki, F.; Grimm, R.C.

    1984-11-01

    Analytic elimination of the two magnetic surface components of the displacement vector permits the normal mode ideal MHD equations to be reduced to a scalar form. A Galerkin procedure, similar to that used in the PEST codes, is implemented to determine the normal modes computationally. The method retains the efficient stability capabilities of the PEST 2 energy principle code, while allowing computation of the normal mode frequencies and eigenfunctions, if desired. The procedure is illustrated by comparison with earlier versions of PEST and by application to tilting modes in spheromaks, and to stable discrete Alfvén waves in tokamak geometry

  2. Method of forming aluminum oxynitride material and bodies formed by such methods

    Science.gov (United States)

    Bakas, Michael P [Ammon, ID]; Lillo, Thomas M [Idaho Falls, ID]; Chu, Henry S [Idaho Falls, ID]

    2010-11-16

    Methods of forming aluminum oxynitride (AlON) materials include sintering green bodies comprising aluminum orthophosphate or another sacrificial material therein. Such green bodies may comprise aluminum, oxygen, and nitrogen in addition to the aluminum orthophosphate. For example, the green bodies may include a mixture of aluminum oxide, aluminum nitride, and aluminum orthophosphate or another sacrificial material. Additional methods of forming aluminum oxynitride (AlON) materials include sintering a green body including a sacrificial material therein, using the sacrificial material to form pores in the green body during sintering, and infiltrating the pores formed in the green body with a liquid infiltrant during sintering. Bodies are formed using such methods.

  3. Methods for detecting the environmental coccoid form of Helicobacter pylori

    Directory of Open Access Journals (Sweden)

    Mahnaz Mazaheri Assadi

    2015-05-01

    Helicobacter pylori is recognized as the most common pathogen to cause gastritis, peptic and duodenal ulcers, and gastric cancer. The organism is found in two forms: (1) a spiral-shaped bacillus and (2) a coccoid. The H. pylori coccoid form, generally found in the environment, is the transformed form of the normal spiral-shaped bacillus after exposure to water or to adverse environmental conditions such as sub-inhibitory concentrations of antimicrobial agents. The putative infectious capability and the viability of H. pylori under environmental conditions are controversial. This disagreement is partially due to the lack of methods for detecting the coccoid form of H. pylori in the environment. Accurate and effective detection methods for H. pylori will lead to rapid treatment and disinfection, less harm to human health, and a reduction in health care costs. In this review, we provide a brief introduction to H. pylori environmental coccoid forms, their transmission, and their detection methods. We further discuss the use of these detection methods, including their accuracy and efficiency.

  4. First-order systems of linear partial differential equations: normal forms, canonical systems, transform methods

    Directory of Open Access Journals (Sweden)

    Heinz Toparkus

    2014-04-01

    In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in its own right and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate basic domains, and solutions of these problems can be sought by means of transform methods.

  5. Bioactive form of resveratrol in glioblastoma cells and its safety for normal brain cells

    Directory of Open Access Journals (Sweden)

    Xiao-Hong Shu

    2013-05-01

    Background: Resveratrol, a plant polyphenol found in grapes and many other natural foods, possesses a wide range of biological activities including cancer prevention. It has been recognized that resveratrol is intracellularly biotransformed to different metabolites, but no direct evidence has been available to ascertain its bioactive form because of the difficulty of keeping resveratrol unmetabolized in vivo or in vitro. It would therefore be worthwhile to elucidate the potential therapeutic implications of resveratrol metabolism using reliable resveratrol-sensitive cancer cells. Objective: To identify the real biological form of trans-resveratrol and to evaluate the safety of the effective anticancer dose of resveratrol for normal brain cells. Methods: The samples were prepared from the conditioned media and cell lysates of human glioblastoma U251 cells, and were purified by solid phase extraction (SPE). The samples were subjected to high performance liquid chromatography (HPLC) and liquid chromatography/tandem mass spectrometry (LC/MS) analysis. According to the metabolite(s), trans-resveratrol was biotransformed in vitro by the method described elsewhere, and the resulting solution was used to treat U251 cells. Meanwhile, the responses of U251 cells and primary cultured rat normal brain cells (glial cells and neurons) to 100 μM trans-resveratrol were evaluated by multiple experimental methods. Results: The results revealed that resveratrol monosulfate was the major metabolite in U251 cells. About half of the resveratrol was converted to resveratrol monosulfate in vitro, and this trans-resveratrol and resveratrol monosulfate mixture showed little inhibitory effect on U251 cells. It was also found that rat primary brain cells (PBCs) not only resist 100 μM but also tolerate as high as 200 μM resveratrol treatment. Conclusions: Our study thus demonstrated that trans-resveratrol was the bioactive form in glioblastoma cells and, therefore, the biotransforming

  6. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Comparing with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
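
    A minimal sketch of quantile normalization, one of the two best-performing methods in this comparison: every sample (column) is forced to share the same empirical distribution by replacing each value with the mean of equally ranked values across samples. Ties are handled crudely here; this is an illustration, not the evaluated implementations.

```python
# Minimal quantile normalization sketch for a counts matrix
# (rows = miRNAs, columns = samples).  Ties are broken by argsort order.
import numpy as np

def quantile_normalize(counts):
    counts = np.asarray(counts, dtype=float)
    ranks = np.argsort(np.argsort(counts, axis=0), axis=0)   # 0-based rank per column
    reference = np.sort(counts, axis=0).mean(axis=1)         # mean value at each rank
    return reference[ranks]                                  # same distribution per column

raw = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.0, 6.0],
                [4.0, 2.0, 8.0]])
print(quantile_normalize(raw))
```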

  7. Center manifolds, normal forms and bifurcations of vector fields with application to coupling between periodic and steady motions

    Science.gov (United States)

    Holmes, Philip J.

    1981-06-01

    We study the instabilities known to aeronautical engineers as flutter and divergence. Mathematically, these states correspond to bifurcations to limit cycles and multiple equilibrium points in a differential equation. Making use of the center manifold and normal form theorems, we concentrate on the situation in which flutter and divergence become coupled, and show that there are essentially two ways in which this is likely to occur. In the first case the system can be reduced to an essential model which takes the form of a single degree of freedom nonlinear oscillator. This system, which may be analyzed by conventional phase-plane techniques, captures all the qualitative features of the full system. We discuss the reduction and show how the nonlinear terms may be simplified and put into normal form. Invariant manifold theory and the normal form theorem play a major role in this work and this paper serves as an introduction to their application in mechanics. Repeating the approach in the second case, we show that the essential model is now three dimensional and that far more complex behavior is possible, including nonperiodic and ‘chaotic’ motions. Throughout, we take a two degree of freedom system as an example, but the general methods are applicable to multi- and even infinite degree of freedom problems.

  8. Fast Bitwise Implementation of the Algebraic Normal Form Transform

    OpenAIRE

    Bakoev, Valentin

    2017-01-01

    The representation of Boolean functions by their algebraic normal forms (ANFs) is very important for cryptography, coding theory and other scientific areas. The ANFs are used in computing the algebraic degree of S-boxes, some other cryptographic criteria and parameters of error-correcting codes. Their applications require these criteria and parameters to be computed by fast algorithms. Hence the corresponding ANFs should also be obtained by fast algorithms. Here we continue o...
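
    A bit-parallel variant of the ANF transform sketched under record 1 of this section: the whole truth table is packed into one integer (bit i = f(i)) and the XOR butterfly is applied with shifted masks, which is the kind of bitwise implementation this record is about. The exact loop structure of the paper's algorithm is not reproduced here; this is an illustrative sketch.

```python
# Bit-parallel Moebius (ANF) transform on a truth table packed into an int.
def anf_transform_packed(tt, n):
    """tt: truth table packed into an int (bit i = f on input i), n variables.
    Returns the packed ANF coefficient vector."""
    full = (1 << (1 << n)) - 1                 # mask covering all 2**n table bits
    for b in range(n):
        step = 1 << b
        # mask selecting the 'lower' half of every block of size 2*step
        low = 0
        for start in range(0, 1 << n, 2 * step):
            low |= ((1 << step) - 1) << start
        tt ^= (tt & low) << step               # XOR lower halves onto upper halves
        tt &= full
    return tt

# f(x1,x2,x3) = x1*x2 XOR x3 from record 1: packed truth table 0b01111000
print(bin(anf_transform_packed(0b01111000, 3)))   # -> 0b11000 (monomials x1*x2 and x3)
```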

  9. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James A.; Heinemann, Klaus [New Mexico Univ., Albuquerque, NM (United States). Dept. of Mathematics and Statistics; Vogt, Mathias [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Gooden, Matthew [North Carolina State Univ., Raleigh, NC (United States). Dept. of Physics

    2013-03-15

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wave length λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in

  10. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    International Nuclear Information System (INIS)

    Ellison, James A.; Heinemann, Klaus; Gooden, Matthew

    2013-03-01

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wave length λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in the

  11. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    Directory of Open Access Journals (Sweden)

    Hisham Qasrawi

    2016-01-01

    The workability-dispersion-cohesion method is a new proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within standard specifications, different water to cement ratios, and various degrees of workability. Simple linear relationships were developed for variables encountered in the mix design and were presented in graphical forms. The method can be used in countries where the grading or fineness of the available materials is different from the common international specifications (such as ASTM or BS). Results were compared to the ACI and British methods of mix design. The method can be extended to cover all types of concrete.

  12. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  13. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighing approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  14. Pre-form ceramic matrix composite cavity and method of forming and method of forming a ceramic matrix composite component

    Science.gov (United States)

    Monaghan, Philip Harold; Delvaux, John McConnell; Taxacher, Glenn Curtis

    2015-06-09

    A pre-form CMC cavity and method of forming pre-form CMC cavity for a ceramic matrix component includes providing a mandrel, applying a base ply to the mandrel, laying-up at least one CMC ply on the base ply, removing the mandrel, and densifying the base ply and the at least one CMC ply. The remaining densified base ply and at least one CMC ply form a ceramic matrix component having a desired geometry and a cavity formed therein. Also provided is a method of forming a CMC component.

  15. On the construction of the Kolmogorov normal form for the Trojan asteroids

    CERN Document Server

    Gabern, F; Locatelli, U

    2004-01-01

    In this paper we focus on the stability of the Trojan asteroids for the planar Restricted Three-Body Problem (RTBP), by extending the usual techniques for the neighbourhood of an elliptic point to derive results in a larger vicinity. Our approach is based on the numerical determination of the frequencies of the asteroid and the effective computation of the Kolmogorov normal form for the corresponding torus. This procedure has been applied to the first 34 Trojan asteroids of the IAU Asteroid Catalog, and it has worked successfully for 23 of them. The construction of this normal form allows for computer-assisted proofs of stability. To show it, we have implemented a proof of existence of families of invariant tori close to a given asteroid, for a high order expansion of the Hamiltonian. This proof has been successfully applied to three Trojan asteroids.

  16. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization also performed systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.

  17. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.; Spoto, F.; Scollo, Giuseppe; Nijholt, Antinus

    2003-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq 1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with

  18. Generating all permutations by context-free grammars in Chomsky normal form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2006-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq1}$, satisfying $L(G_n)=L_n$ for $n\geq1$, with

  19. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2004-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with

  20. Method for forming ammonia

    Science.gov (United States)

    Kong, Peter C.; Pink, Robert J.; Zuck, Larry D.

    2008-08-19

    A method for forming ammonia is disclosed and which includes the steps of forming a plasma; providing a source of metal particles, and supplying the metal particles to the plasma to form metal nitride particles; and providing a substance, and reacting the metal nitride particles with the substance to produce ammonia, and an oxide byproduct.

  1. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers-an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  2. On some hypersurfaces with time like normal bundle in pseudo Riemannian space forms

    International Nuclear Information System (INIS)

    Kashani, S.M.B.

    1995-12-01

    In this work we classify immersed hypersurfaces with constant sectional curvature in pseudo Riemannian space forms if the normal bundle is time like and the mean curvature is constant. (author). 9 refs

  3. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has been sometimes ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-01-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transporation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance

  5. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results is presented: The radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach tests methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  6. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors and intermediate products formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2012-12-04

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  7. Normal form of particle motion under the influence of an ac dipole

    Directory of Open Access Journals (Sweden)

    R. Tomás

    2002-05-01

    ac dipoles in accelerators are used to excite coherent betatron oscillations at a drive frequency close to the tune. These beam oscillations may last arbitrarily long and, in principle, there is no significant emittance growth if the ac dipole is adiabatically turned on and off. Therefore the ac dipole seems to be an adequate tool for nonlinear diagnostics provided the particle motion is well described in the presence of the ac dipole and nonlinearities. Normal forms and Lie algebra are powerful tools to study the nonlinear content of an accelerator lattice. In this article a way to obtain the normal form of the Hamiltonian of an accelerator with an ac dipole is described. The particle motion to first order in the nonlinearities is derived using Lie algebra techniques. The dependence of the Hamiltonian terms on the longitudinal coordinate is studied showing that they vary differently depending on the ac dipole parameters. The relation is given between the lines of the Fourier spectrum of the turn-by-turn motion and the Hamiltonian terms.

  8. Methods for forming particles

    Science.gov (United States)

    Fox, Robert V.; Zhang, Fengyan; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin

    2016-06-21

    Single source precursors or pre-copolymers of single source precursors are subjected to microwave radiation to form particles of a I-III-VI₂ material. Such particles may be formed in a wurtzite phase and may be converted to a chalcopyrite phase by, for example, exposure to heat. The particles in the wurtzite phase may have a substantially hexagonal shape that enables stacking into ordered layers. The particles in the wurtzite phase may be mixed with particles in the chalcopyrite phase (i.e., chalcopyrite nanoparticles) that may fill voids within the ordered layers of the particles in the wurtzite phase, thus producing films with good coverage. In some embodiments, the methods are used to form layers of semiconductor materials comprising a I-III-VI₂ material. Devices such as, for example, thin-film solar cells may be fabricated using such methods.

  9. Method for forming materials

    Science.gov (United States)

    Tolle, Charles R [Idaho Falls, ID; Clark, Denis E [Idaho Falls, ID; Smartt, Herschel B [Idaho Falls, ID; Miller, Karen S [Idaho Falls, ID

    2009-10-06

    A material-forming tool and a method for forming a material are described including a shank portion; a shoulder portion that releasably engages the shank portion; a pin that releasably engages the shoulder portion, wherein the pin defines a passageway; and a source of a material coupled in material flowing relation relative to the pin and wherein the material-forming tool is utilized in methodology that includes providing a first material; providing a second material, and placing the second material into contact with the first material; and locally plastically deforming the first material with the material-forming tool so as to mix the first material and second material together to form a resulting material having characteristics different from the respective first and second materials.

  10. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, which include slope correction, text padding, skew correction, and straightening of the writing line. In this respect, text normalization has an important role in many procedures such as text segmentation, feature extraction and character recognition. In the present article, a new method for text baseline detection, straightening, and slant correction for Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is done, followed by component thinning; then, the direction features of the skeletons are extracted, and the candidate baseline regions are determined. After that, the correct baseline region is selected and, finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.
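
    As a rough, generic illustration of baseline detection (not the skeleton-direction method proposed in the article), the writing line of a binarized text-line image can be estimated from its horizontal projection; the synthetic image below is an assumption.

```python
import numpy as np

# Generic horizontal-projection baseline estimate for a binarized text-line image
# (1 = ink, 0 = background). Illustrative only; the article's method instead uses
# component skeletons and their direction features.
def estimate_baseline_row(binary_image):
    row_density = binary_image.sum(axis=1)  # ink pixels per row
    return int(np.argmax(row_density))      # densest row approximates the writing line

line = np.zeros((8, 20), dtype=int)
line[4, 2:18] = 1   # synthetic horizontal stroke along row 4
line[2:4, 5] = 1    # an ascender
print(estimate_baseline_row(line))  # -> 4
```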

  11. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
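
    The record above contrasts crude Monte Carlo with the proposed importance-sampling estimator. The sketch below implements only the crude-MC baseline for the CCDF of a sum of independent, not identically distributed log-normals, with illustrative parameters chosen here; it shows the estimation problem, not the hazard-rate-twisting IS method.

```python
import numpy as np

# Crude Monte Carlo baseline (not the paper's IS estimator) for the CCDF
# P(sum_i X_i > gamma), where X_i = exp(mu_i + sigma_i * Z_i) with independent Z_i ~ N(0,1).
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5, 1.0])     # assumed log-means
sigma = np.array([1.0, 0.8, 1.2])  # assumed log-standard deviations
gamma = 30.0                       # threshold defining the (possibly rare) event
n_samples = 10**6

z = rng.standard_normal((n_samples, mu.size))
sums = np.exp(mu + sigma * z).sum(axis=1)
ccdf_estimate = (sums > gamma).mean()
rel_std_error = np.sqrt((1.0 - ccdf_estimate) / (ccdf_estimate * n_samples))

print(f"CCDF estimate: {ccdf_estimate:.3e}  relative std. error: {rel_std_error:.2%}")
# For rarer events (larger gamma) the relative error of crude MC grows rapidly,
# which is exactly the regime the hazard-rate-twisting IS method targets.
```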

  12. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  13. Child in a Form: The Definition of Normality and Production of Expertise in Teacher Statement Forms--The Case of Northern Finland, 1951-1990

    Science.gov (United States)

    Koskela, Anne; Vehkalahti, Kaisa

    2017-01-01

    This article shows the importance of paying attention to the role of professional devices, such as standardised forms, as producers of normality and deviance in the history of education. Our case study focused on the standardised forms used by teachers during child guidance clinic referrals and transfers to special education in northern Finland,…

  14. Article and method of forming an article

    Science.gov (United States)

    Lacy, Benjamin Paul; Kottilingam, Srikanth Chandrudu; Dutta, Sandip; Schick, David Edward

    2017-12-26

    Provided are an article and a method of forming an article. The method includes providing a metallic powder, heating the metallic powder to a temperature sufficient to join at least a portion of the metallic powder to form an initial layer, sequentially forming additional layers in a build direction by providing a distributed layer of the metallic powder over the initial layer and heating the distributed layer of the metallic powder, repeating the steps of sequentially forming the additional layers in the build direction to form a portion of the article having a hollow space formed in the build direction, and forming an overhang feature extending into the hollow space. The article includes an article formed by the method described herein.

  15. Application of Power Geometry and Normal Form Methods to the Study of Nonlinear ODEs

    Science.gov (United States)

    Edneral, Victor

    2018-02-01

    This paper describes power transformations of degenerate autonomous polynomial systems of ordinary differential equations which reduce such systems to a non-degenerate form. An example of creating exact first integrals of motion of a planar degenerate system in closed form is given.

  16. Application of Power Geometry and Normal Form Methods to the Study of Nonlinear ODEs

    Directory of Open Access Journals (Sweden)

    Edneral Victor

    2018-01-01

    This paper describes power transformations of degenerate autonomous polynomial systems of ordinary differential equations which reduce such systems to a non-degenerate form. An example of creating exact first integrals of motion of a planar degenerate system in closed form is given.

  17. A Mathematical Framework for Critical Transitions: Normal Forms, Variance and Applications

    Science.gov (United States)

    Kuehn, Christian

    2013-06-01

    Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise.

  18. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
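
    qsmooth itself is distributed as the R package linked above. For contrast, the sketch below implements ordinary quantile normalization, the method that qsmooth generalizes; it is illustrative only and is not the qsmooth algorithm.

```python
import numpy as np

# Ordinary quantile normalization (the method qsmooth generalizes), illustrative sketch.
# Rows are features, columns are samples; ties are broken arbitrarily.
def quantile_normalize(x):
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # per-column rank of each value
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)    # average across samples at each rank
    return mean_quantiles[ranks]                        # map ranks back to averaged values

data = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])
print(quantile_normalize(data))  # every column now shares the same empirical distribution
```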

  19. The Impact of Normalization Methods on RNA-Seq Data Analysis

    Science.gov (United States)

    Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.

    2015-01-01

    High-throughput sequencing technologies, such as the Illumina Hi-seq, are powerful new tools for investigating a wide range of biological and medical problems. Massive and complex data sets produced by the sequencers create a need for development of statistical and computational methods that can tackle the analysis and management of data. The data normalization is one of the most crucial steps of data processing and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014

  20. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Background: Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results: With the aim to remove unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion: Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.

  1. Generating All Circular Shifts by Context-Free Grammars in Greibach Normal Form

    NARCIS (Netherlands)

    Asveld, Peter R.J.

    2007-01-01

    For each alphabet $\Sigma_n = \{a_1, a_2, \ldots, a_n\}$, linearly ordered by $a_1 < a_2 < \cdots < a_n$, let $C_n$ be the language of circular or cyclic shifts over $\Sigma_n$, i.e., $C_n = \{a_1a_2 \cdots a_{n-1}a_n,\ a_2a_3 \cdots a_na_1,\ \ldots,\ a_na_1 \cdots a_{n-2}a_{n-1}\}$. We study a few families of context-free grammars $G_n$ ($n \geq 1$) in Greibach normal form such that $G_n$ generates
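
    For concreteness, the language $C_n$ itself (not the grammars $G_n$ studied in the paper) can be enumerated directly:

```python
# Enumerate the language C_n of circular (cyclic) shifts of a_1 a_2 ... a_n.
def circular_shifts(symbols):
    n = len(symbols)
    return ["".join(symbols[i:] + symbols[:i]) for i in range(n)]

print(circular_shifts(list("abcd")))  # ['abcd', 'bcda', 'cdab', 'dabc']
```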

  2. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. In order to demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical methods, the statistical relation factor R², and the average deviation. The results show that the CCSNM was the best of the normalization methods examined for estimating the effect of the trainer.
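
    Two of the baseline normalizations the CCSNM was compared against, min-max and z-score, are sketched below with made-up feature values (not the study's EMG data); the CCSNM itself is not reproduced here.

```python
import numpy as np

# Baseline normalizations used for comparison in the study (illustrative sketch only).
def min_max(x):
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    return (x - x.mean()) / x.std(ddof=1)

emg_feature = np.array([2.1, 3.4, 2.9, 4.8, 3.1])  # assumed feature values
print(min_max(emg_feature))
print(z_score(emg_feature))
```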

  3. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses - Review

    International Nuclear Information System (INIS)

    Skrzypek, G.; Sadler, R.; Paul, D.; Forizs, I.

    2011-01-01

    A stable isotope analyst has to make a number of important decisions regarding how to best determine the 'true' stable isotope composition of analysed samples in reference to an international scale. It has to be decided which reference materials should be used, the number of reference materials and how many repetitions of each standard are most appropriate for a desired level of precision, and what normalization procedure should be selected. In this paper we summarise what is known about the propagation of uncertainties associated with normalization procedures and the propagation of uncertainties associated with reference materials used as anchors for the determination of 'true' values for δ¹³C and δ¹⁸O. Normalization methods: Several normalization methods transforming the 'raw' value obtained from mass spectrometers to one of the internationally recognized scales have been developed. However, as summarised by Paul et al., different normalization transforms alone may lead to inconsistencies between laboratories. The most common normalization procedures are: single-point anchoring (versus working gas and certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multipoint linear normalization methods. The accuracy of these various normalization methods has been compared using analytical laboratory data by Paul et al., with the single-point and normalization-versus-tank calibrations resulting in the largest normalization errors, which also exceed the analytical uncertainty recommended for δ¹³C. The normalization error depends greatly on the relative differences between the stable isotope composition of the reference material and the sample. On the other hand, the normalization methods using two or more certified reference standards produce a smaller normalization error, if the reference materials are bracketing the whole range of
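
    As a point of reference, the two-point (stretch-and-shift) linear normalization mentioned in the record maps a raw delta value onto the reference scale using two certified anchors. The sketch below uses purely illustrative raw and certified values, not data from the review.

```python
# Two-point linear normalization of a measured delta value onto the international
# scale anchored by two certified reference materials (illustrative sketch; all
# numerical values below are assumptions, not certified data).
def two_point_normalize(delta_raw, raw_std1, raw_std2, true_std1, true_std2):
    slope = (true_std2 - true_std1) / (raw_std2 - raw_std1)
    return true_std1 + slope * (delta_raw - raw_std1)

print(two_point_normalize(delta_raw=-25.3,
                          raw_std1=-29.9, raw_std2=-11.9,     # measured anchor values
                          true_std1=-30.0, true_std2=-11.8))  # assumed certified values
```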

  4. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk, Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness ≤ 1.5).
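
    For reference, the conventional normality-based indices discussed in the record are Cp = (USL - LSL)/(6σ) and Cpk = min((USL - μ)/(3σ), (μ - LSL)/(3σ)). The sketch below uses assumed specification limits and simulated in-control data, not the wafer-resistivity data of the study, and does not implement the Clements or Burr percentile variants.

```python
import numpy as np

# Textbook Cp / Cpk under the normality assumption (illustrative sketch only).
def cp_cpk(data, lsl, usl):
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))
    return cp, cpk

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=0.5, size=200)  # assumed in-control process data
print(cp_cpk(sample, lsl=8.0, usl=12.0))
```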

  5. Method of forming an HTS article

    Science.gov (United States)

    Bhattacharya, Raghu N.; Zhang, Xun; Selvamanickam, Venkat

    2014-08-19

    A method of forming a superconducting article includes providing a substrate tape, forming a superconducting layer overlying the substrate tape, and depositing a capping layer overlying the superconducting layer. The capping layer includes a noble metal and has a thickness not greater than about 1.0 micron. The method further includes electrodepositing a stabilizer layer overlying the capping layer using a solution that is non-reactive to the superconducting layer. The superconducting layer has an as-formed critical current I_C(AF) and a post-stabilized critical current I_C(PS). The I_C(PS) is at least about 95% of the I_C(AF).

  6. Methods for forming particles from single source precursors

    Science.gov (United States)

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  7. A structure-preserving approach to normal form analysis of power systems; Una propuesta de preservacion de estructura al analisis de su forma normal en sistemas de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Carrillo, Irma

    2008-01-15

    Power system dynamic behavior is inherently nonlinear and is driven by different processes at different time scales. The size and complexity of these mechanisms have stimulated the search for methods that reduce the original dimension but retain a certain degree of accuracy. In this dissertation, a novel nonlinear dynamical analysis method for the analysis of large amplitude oscillations that embraces ideas from normal form theory and singular perturbation techniques is proposed. This approach allows the full potential of the normal form method to be reached, and is suitably general for application to a wide variety of nonlinear systems. Drawing on the formal theory of dynamical systems, a structure-preserving model of the system is developed that preserves network and load characteristics. By exploiting the separation of fast and slow time scales of the model, an efficient approach, based on singular perturbation techniques, is then derived for constructing a nonlinear power system representation that accurately preserves network structure. The method requires no reduction of the constraint equations and therefore gives information about the effect of network and load characteristics on system behavior. Analytical expressions are then developed that provide approximate solutions to system performance near a singularity and techniques for interpreting these solutions in terms of modal functions are given. New insights into the nature of nonlinear oscillations are also offered and criteria for characterizing network effects on nonlinear system behavior are proposed. Theoretical insight into the behavior of dynamic coupling of differential-algebraic equations and the origin of nonlinearity is given, and implications for the analysis, design, and placement of power system controllers in complex nonlinear systems are discussed. The extent of applicability of the proposed procedure is demonstrated by analyzing nonlinear behavior in two realistic test power systems.

  8. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    Science.gov (United States)

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead
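
    As a point of reference, one of the two methods recommended above, RLE, rests on median-of-ratios size factors. The sketch below is an illustrative simplification with a toy count matrix, not the DESeq/edgeR implementation evaluated in the study.

```python
import numpy as np

# RLE-style (median-of-ratios) size factors, illustrative sketch.
# counts: genes x samples matrix of raw counts.
def rle_size_factors(counts):
    with np.errstate(divide="ignore"):
        log_counts = np.log(counts)
    log_ref = log_counts.mean(axis=1)             # log geometric-mean reference profile
    finite = np.isfinite(log_ref)                 # drop genes with a zero count in any sample
    log_ratios = log_counts[finite] - log_ref[finite, None]
    return np.exp(np.median(log_ratios, axis=0))  # one scaling factor per sample

counts = np.array([[10.0, 20.0, 12.0],
                   [100.0, 210.0, 95.0],
                   [35.0, 60.0, 40.0],
                   [0.0, 5.0, 3.0]])
factors = rle_size_factors(counts)
print(factors)
print(counts / factors)  # normalized abundances
```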

  10. Combining Illumination Normalization Methods for Better Face Recognition

    NARCIS (Netherlands)

    Boom, B.J.; Tao, Q.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs a local preprocessing, where they correct a pixel value based on a local neighborhood in the images. The second

  11. Theory and praxis of map analysis in CHEF part 1: Linear normal form

    Energy Technology Data Exchange (ETDEWEB)

    Michelotti, Leo; /Fermilab

    2008-10-01

    This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
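
    As a small, self-contained illustration of the linear analysis described in this record, the following sketch extracts the tune and Courant-Snyder parameters from a stable 2x2 symplectic one-turn matrix, whose normal form is a pure rotation by the phase advance. It is not CHEF library code, and the example matrix is an assumption.

```python
import numpy as np

# Illustrative sketch (not CHEF code): linear normal form of a stable 2x2 symplectic
# one-turn matrix M. In normalized coordinates M becomes a rotation by mu = 2*pi*nu.
def linear_normal_form_2d(m):
    cos_mu = 0.5 * np.trace(m)
    if abs(cos_mu) >= 1.0:
        raise ValueError("one-turn matrix is not stable")
    sin_mu = np.sign(m[0, 1]) * np.sqrt(1.0 - cos_mu**2)
    beta = m[0, 1] / sin_mu
    alpha = (m[0, 0] - m[1, 1]) / (2.0 * sin_mu)
    tune = np.arccos(cos_mu) / (2.0 * np.pi)
    if m[0, 1] < 0:
        tune = 1.0 - tune
    return tune, beta, alpha

# Example: build a one-turn matrix as a rotation conjugated by assumed Twiss parameters.
mu = 2.0 * np.pi * 0.28
beta0 = 12.0
a = np.array([[np.sqrt(beta0), 0.0], [0.0, 1.0 / np.sqrt(beta0)]])
rot = np.array([[np.cos(mu), np.sin(mu)], [-np.sin(mu), np.cos(mu)]])
m = a @ rot @ np.linalg.inv(a)
print(linear_normal_form_2d(m))  # approximately (0.28, 12.0, 0.0)
```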

  12. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suited approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts as was the case when using all data as 1 dataset for normalization. PMID:27428217

  13. Theory and praxis of map analysis in CHEF part 2: Nonlinear normal form

    International Nuclear Information System (INIS)

    Michelotti, Leo

    2009-01-01

    This is the second of three memos describing how normal form map analysis is implemented in CHEF. The first (1) explained the manipulations required to assure that initial, linear transformations preserved Poincare invariants, thereby confirming correct normalization of action-angle coordinates. In this one, the transformation will be extended to nonlinear terms. The third, describing how the algorithms were implemented within the software of CHEF's libraries, most likely will never be written. The first section, Section 2, quickly lays out preliminary concepts and relationships. In Section 3, we shall review the perturbation theory - an iterative sequence of transformations that converts a nonlinear mapping into its normal form - and examine the equation which moves calculations from one step to the next. Following that is a section titled 'Interpretation', which identifies connections between the normalized mappings and idealized, integrable, fictitious Hamiltonian models. A final section contains closing comments, some of which may - but probably will not - preview work to be done later. My reasons for writing this memo and its predecessor have already been expressed. (1) To them can be added this: 'black box code' encourages users to proceed with little or no understanding of what it does or how it operates. So far, CHEF has avoided this trap admirably by failing to attract potential users. However, we reached a watershed last year: even I now have difficulty following the software through its maze of operations. Extensions to CHEF's physics functionalities, software upgrades, and even simple maintenance are becoming more difficult than they should. I hope these memos will mark parts of the maze for easier navigation in the future. Despite appearances to the contrary, I tried to include no (or very little) more than the minimum needed to understand what CHEF's nonlinear analysis modules do. As with the first memo, material has been lifted - and modified - from

  14. Theory and praxis of map analysis in CHEF part 2: Nonlinear normal form

    Energy Technology Data Exchange (ETDEWEB)

    Michelotti, Leo; /FERMILAB

    2009-04-01

    This is the second of three memos describing how normal form map analysis is implemented in CHEF. The first [1] explained the manipulations required to assure that initial, linear transformations preserved Poincare invariants, thereby confirming correct normalization of action-angle coordinates. In this one, the transformation will be extended to nonlinear terms. The third, describing how the algorithms were implemented within the software of CHEF's libraries, most likely will never be written. The first section, Section 2, quickly lays out preliminary concepts and relationships. In Section 3, we shall review the perturbation theory - an iterative sequence of transformations that converts a nonlinear mapping into its normal form - and examine the equation which moves calculations from one step to the next. Following that is a section titled 'Interpretation', which identifies connections between the normalized mappings and idealized, integrable, fictitious Hamiltonian models. A final section contains closing comments, some of which may - but probably will not - preview work to be done later. My reasons for writing this memo and its predecessor have already been expressed. [1] To them can be added this: 'black box code' encourages users to proceed with little or no understanding of what it does or how it operates. So far, CHEF has avoided this trap admirably by failing to attract potential users. However, we reached a watershed last year: even I now have difficulty following the software through its maze of operations. Extensions to CHEF's physics functionalities, software upgrades, and even simple maintenance are becoming more difficult than they should. I hope these memos will mark parts of the maze for easier navigation in the future. Despite appearances to the contrary, I tried to include no (or very little) more than the minimum needed to understand what CHEF's nonlinear analysis modules do.1 As with the first memo, material

  15. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian system by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for
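
    For orientation, a hedged sketch of the form such a normalization produces: near an elliptic equilibrium with frequencies ω_1, ..., ω_n satisfying no low-order resonances, the Birkhoff procedure brings the Hamiltonian to

      H = \sum_{i=1}^{n} \omega_i I_i + \tfrac{1}{2} \sum_{i,j=1}^{n} A_{ij} I_i I_j + O\!\left(\lVert (q,p) \rVert^{6}\right),
      \qquad I_i = \tfrac{1}{2}\left(q_i^{2} + p_i^{2}\right),

    so that the truncated normal form depends only on the actions I_i and is therefore integrable.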

  16. Slab edge insulating form system and methods

    Science.gov (United States)

    Lee, Brian E [Corral de Tierra, CA; Barsun, Stephan K [Davis, CA; Bourne, Richard C [Davis, CA; Hoeschele, Marc A [Davis, CA; Springer, David A [Winters, CA

    2009-10-06

    A method of forming an insulated concrete foundation is provided comprising constructing a foundation frame, the frame comprising an insulating form having an opening, inserting a pocket former into the opening; placing concrete inside the foundation frame; and removing the pocket former after the placed concrete has set, wherein the concrete forms a pocket in the placed concrete that is accessible through the opening. The method may further comprise sealing the opening by placing a sealing plug or sealing material in the opening. A system for forming an insulated concrete foundation is provided comprising a plurality of interconnected insulating forms, the insulating forms having a rigid outer member protecting and encasing an insulating material, and at least one gripping lip extending outwardly from the outer member to provide a pest barrier. At least one insulating form has an opening into which a removable pocket former is inserted. The system may also provide a tension anchor positioned in the pocket former and a tendon connected to the tension anchor.

  17. Nanofiber electrode and method of forming same

    Energy Technology Data Exchange (ETDEWEB)

    Pintauro, Peter N.; Zhang, Wenjing

    2018-02-27

    In one aspect, a method of forming an electrode for an electrochemical device is disclosed. In one embodiment, the method includes the steps of mixing at least a first amount of a catalyst and a second amount of an ionomer or uncharged polymer to form a solution and delivering the solution into a metallic needle having a needle tip. The method further includes the steps of applying a voltage between the needle tip and a collector substrate positioned at a distance from the needle tip, and extruding the solution from the needle tip at a flow rate such as to generate electrospun fibers and deposit the generated fibers on the collector substrate to form a mat with a porous network of fibers. Each fiber in the porous network of the mat has distributed particles of the catalyst. The method also includes the step of pressing the mat onto a membrane.

  18. Normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1997-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library.

  19. Three forms of relativity

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1992-01-01

    The physical sense of three forms of relativity is discussed. The first - the instant form - corresponds in fact to the traditional approach based on the concept of instant distance. The normal form corresponds to the radar formulation, which is based on light (retarded) distances. The front form in the special case is characterized by 'observable' variables, and the known method of the k-coefficient is its obvious expression. 16 refs

  20. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    Science.gov (United States)

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization. There is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalizing method during gait compared with the conventional MVIC method. Lower limb muscles EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) were recorded bilaterally in nine healthy participants (five males, aged 29.7±6.2 years, BMI 22.7±3.3 kg·m⁻²) giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with an EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of normalization tasks were similar for isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method with no special equipment needed and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Photovoltaic cell module and method of forming

    Science.gov (United States)

    Howell, Malinda; Juen, Donnie; Ketola, Barry; Tomalia, Mary Kay

    2017-12-12

    A photovoltaic cell module, a photovoltaic array including at least two modules, and a method of forming the module are provided. The module includes a first outermost layer and a photovoltaic cell disposed on the first outermost layer. The module also includes a second outermost layer disposed on the photovoltaic cell and sandwiching the photovoltaic cell between the second outermost layer and the first outermost layer. The method of forming the module includes the steps of disposing the photovoltaic cell on the first outermost layer, disposing a silicone composition on the photovoltaic cell, and compressing the first outermost layer, the photovoltaic cell, and the second layer to form the photovoltaic cell module.

  2. Normal form analysis of linear beam dynamics in a coupled storage ring

    International Nuclear Information System (INIS)

    Wolski, Andrzej; Woodley, Mark D.

    2004-01-01

    The techniques of normal form analysis, well known in the literature, can be used to provide a straightforward characterization of linear betatron dynamics in a coupled lattice. Here, we consider both the beam distribution and the betatron oscillations in a storage ring. We find that the beta functions for uncoupled motion generalize in a simple way to the coupled case. Defined in the way that we propose, the beta functions remain well behaved (positive and finite) under all circumstances, and have essentially the same physical significance for the beam size and betatron oscillation amplitude as in the uncoupled case. Application of this analysis to the online modeling of the PEP-II rings is also discussed

  3. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    Directory of Open Access Journals (Sweden)

    Alexander I. Zhdanov

    2016-12-01

    Full Text Available This article focuses on a modification of the block variant of the Gauss–Seidel method for normal systems of equations, which is a sufficiently effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal equations systems is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate of convergence of iterative methods based on normal equations systems. To increase the speed of convergence of iterative methods based on normal equations systems when solving ill-conditioned problems, different preconditioners are currently used to reduce the condition number of the original system of equations. However, a universal preconditioner for all applications does not exist. One of the effective approaches that improve the speed of convergence of the iterative Gauss–Seidel method for normal systems of equations is to use its block version. The disadvantage of the block Gauss–Seidel method for such systems is the fact that a pseudoinverse matrix must be calculated at each iteration, and finding the pseudoinverse is known to be a computationally difficult procedure. In this paper, we propose a procedure that replaces the pseudoinverse computation with the solution of small normal systems of equations by the Cholesky method. The normal equations arising at each iteration of the Gauss–Seidel method have a relatively low dimension compared to the original system. The results of numerical experiments demonstrating the effectiveness of the proposed approach are given.
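
    A minimal sketch of the idea described above (an illustrative implementation, not the authors' code): the columns of A are split into blocks and, in each sweep, every block of unknowns is updated by solving its small normal system with a Cholesky factorization instead of forming a pseudoinverse.

      import numpy as np

      def block_gauss_seidel_normal(A, b, block_size=2, sweeps=50):
          """Approximate the least-squares solution of Ax = b by block Gauss-Seidel
          applied to the normal equations A^T A x = A^T b; each block update solves a
          small normal system via Cholesky rather than a pseudoinverse."""
          m, n = A.shape
          x = np.zeros(n)
          blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
          for _ in range(sweeps):
              for idx in blocks:
                  Ai = A[:, idx]
                  r = b - A @ x + Ai @ x[idx]          # residual without this block's contribution
                  G = Ai.T @ Ai                        # small Gram matrix of the block
                  L = np.linalg.cholesky(G)
                  x[idx] = np.linalg.solve(L.T, np.linalg.solve(L, Ai.T @ r))
          return x

      # Usage on a hypothetical overdetermined system.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(100, 10))
      b = rng.normal(size=100)
      x_bgs = block_gauss_seidel_normal(A, b)
      x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.linalg.norm(A @ x_bgs - b), np.linalg.norm(A @ x_ref - b))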

  4. Normalized Excited Squeezed Vacuum State and Its Applications

    International Nuclear Information System (INIS)

    Meng Xiangguo; Wang Jisuo; Liang Baolong

    2007-01-01

    By using the intermediate coordinate-momentum representation in quantum optics and the generating function for the normalization of the excited squeezed vacuum state (ESVS), the normalized ESVS is obtained. We find that the normalization constants obtained via two new methods agree with each other and take a new form, different from the result obtained by Zhang and Fan [Phys. Lett. A 165 (1992) 14]. By virtue of the normalization constant of the ESVS and the intermediate coordinate-momentum representation, the tomogram of the normalized ESVS and some useful formulae are derived.

  5. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    OpenAIRE

    Qasrawi, Hisham

    2016-01-01

    The workability-dispersion-cohesion method is a new proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within sta...

  6. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
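
    For context, the classical symmetric reference change value alluded to above is commonly written as (quoted here as the textbook formula; the five compared methods are variants whose exact expressions are not reproduced):

      RCV = \sqrt{2}\, Z \,\sqrt{CV_A^{2} + CV_I^{2}},

    where CV_A is the analytical and CV_I the within-subject biological coefficient of variation, and Z is the standard normal quantile (for example, 1.96 for a two-sided 5% false-positive rate).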

  7. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decrease in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for the selection of the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and simulation reads. Reads were mapped with human genome obtained from UCSC Genome Browser Database. For precise evaluation, we investigated Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that out of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation results, but for RNA-Seq of a 76-nucleotide sequence, least correlation was observed than the other methods. ERPKM did not improve results than RPKM. Between two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than that with RSEM, which was better than without using abundance estimation methods. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to without applying abundance estimation methods, and were much better than with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers, but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ
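
    As a small, self-contained illustration of two of the simpler methods named above (hypothetical counts; TMM, DESeq, RSEM, and Sailfish involve more machinery and are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(0)
      counts = rng.poisson(lam=20, size=(1000, 4)).astype(float)   # genes x samples
      lengths = rng.integers(500, 5000, size=1000).astype(float)   # gene lengths (bp)

      def upper_quartile(counts):
          """UQ normalization: scale each sample by the 75th percentile of its non-zero counts."""
          uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])
          return counts / uq * uq.mean()

      def rpkm(counts, lengths):
          """Reads Per Kilobase of transcript per Million mapped reads."""
          per_million = counts.sum(axis=0) / 1e6     # library sizes in millions
          per_kb = lengths[:, None] / 1e3            # gene lengths in kilobases
          return counts / per_million[None, :] / per_kb

      uq_norm = upper_quartile(counts)
      rpkm_norm = rpkm(counts, lengths)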

  8. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game

    Directory of Open Access Journals (Sweden)

    Adam Karbowski

    2017-09-01

    Full Text Available The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants’ attributions of susceptibility to errors or non-self-interested motivation to the opponents.

  9. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game.

    Science.gov (United States)

    Karbowski, Adam; Ramsza, Michał

    2017-01-01

    The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants' attributions of susceptibility to errors or non-self-interested motivation to the opponents.
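
    A hypothetical payoff bimatrix with the structure described above (not the game actually used in the study), together with a brute-force pure-strategy Nash equilibrium check:

      import numpy as np
      from itertools import product

      # Row strategies: A (equilibrium), B (safe). Column strategies: L (equilibrium), R (dominated).
      # The row player can lose money (-2) only by playing A while the opponent plays R.
      row_payoff = np.array([[5, -2],
                             [4,  0]])
      col_payoff = np.array([[5,  1],
                             [3,  2]])

      def pure_nash(row_payoff, col_payoff):
          """Return all pure-strategy Nash equilibria as (row, column) index pairs."""
          equilibria = []
          for i, j in product(range(row_payoff.shape[0]), range(row_payoff.shape[1])):
              row_best = row_payoff[i, j] >= row_payoff[:, j].max()
              col_best = col_payoff[i, j] >= col_payoff[i, :].max()
              if row_best and col_best:
                  equilibria.append((i, j))
          return equilibria

      print(pure_nash(row_payoff, col_payoff))   # [(0, 0)]: (A, L) is the unique equilibrium

    With payoffs of this shape, a participant who expects the opponent to err may prefer the safe strategy B even though (A, L) is the unique equilibrium, which is exactly the tension the study examines.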

  10. A simple global representation for second-order normal forms of Hamiltonian systems relative to periodic flows

    International Nuclear Information System (INIS)

    Avendaño-Camacho, M; Vallejo, J A; Vorobjev, Yu

    2013-01-01

    We study the determination of the second-order normal form for perturbed Hamiltonians relative to the periodic flow of the unperturbed Hamiltonian H_0. The formalism presented here is global, and can be easily implemented in any computer algebra system. We illustrate it by means of two examples: the Hénon–Heiles and the elastic pendulum Hamiltonians. (paper)
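
    For reference, the first of the two examples mentioned, the Hénon–Heiles system, is usually written as

      H(q_1, q_2, p_1, p_2) = \tfrac{1}{2}\left(p_1^{2} + p_2^{2}\right) + \tfrac{1}{2}\left(q_1^{2} + q_2^{2}\right) + q_1^{2} q_2 - \tfrac{1}{3} q_2^{3},

    whose quadratic part plays the role of the unperturbed Hamiltonian H_0 with periodic flow; the elastic pendulum is treated analogously.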

  11. Methods of forming aluminum oxynitride-comprising bodies, including methods of forming a sheet of transparent armor

    Science.gov (United States)

    Chu, Henry Shiu-Hung [Idaho Falls, ID; Lillo, Thomas Martin [Idaho Falls, ID

    2008-12-02

    The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al2O3, "B" is AlN, and "C" is a total of one or more of B2O3, SiO2, Si-Al-O-N, and TiO2. The mixture is sintered at a temperature of at least 1,600 °C at a pressure of no greater than 500 psia effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% maximum theoretical density.

  12. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of 4 monitors and a program were developed to eliminate, in the process of normalization of cross sections, the dependence on the alignment of the equipment and on the centering of the beam. A series of experiments was carried out with the systems 27 Al + 70, 72, 74, 76 Ge, 35 Cl + 58 Ni, 37 Cl + 58, 60, 62, 64 Ni and ( 81 Br, 109 Rh) + 60 Ni. For these experiments a typical precision of 1% was obtained in the normalization. The advantage of this method over those that use 1 or 2 monitors is demonstrated theoretically and experimentally. (Author)

  13. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  14. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double
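
    A minimal sketch of the evaluation end-point described above, using scikit-learn (illustrative only; the specific normalization strategies such as IGLOESS-SLLOESS are not reimplemented here):

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      def loocv_knn_error(X, y, k=3):
          """Leave-one-out cross-validation error of a k-NN classifier, used as the
          quantitative end-point for comparing normalization methods."""
          scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=LeaveOneOut())
          return 1.0 - scores.mean()

      # Hypothetical example: compare two normalizations of the same expression matrix.
      rng = np.random.default_rng(0)
      X_raw = rng.normal(size=(60, 200))                       # 60 arrays x 200 genes
      y = rng.integers(0, 2, size=60)                          # two tumour classes
      X_centered = X_raw - X_raw.mean(axis=1, keepdims=True)   # a simple per-array centering

      print(loocv_knn_error(X_raw, y), loocv_knn_error(X_centered, y))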

  15. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Full Text Available Abstract Background Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and their combination necessary for their optimal use. Results We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures as those used on microarray data for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, on the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method. Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a

  16. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
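
    A generic sketch of cyclic-loess normalization of a features-by-samples intensity matrix (a simplified stand-in, not the authors' exact implementation; the lowess smoother from statsmodels is assumed):

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      def cyclic_loess(intensities, frac=0.3, iterations=2):
          """For every pair of samples, fit a loess curve to the M (log-ratio) versus
          A (mean log-intensity) relation and remove half of the fitted trend from
          each sample, cycling over all pairs."""
          logx = np.log2(np.asarray(intensities, dtype=float))
          n = logx.shape[1]
          for _ in range(iterations):
              for i in range(n):
                  for j in range(i + 1, n):
                      m = logx[:, i] - logx[:, j]
                      a = 0.5 * (logx[:, i] + logx[:, j])
                      fit = lowess(m, a, frac=frac, return_sorted=False)
                      logx[:, i] -= fit / 2.0
                      logx[:, j] += fit / 2.0
          return 2.0 ** logx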

  17. The morphological classification of normal and abnormal red blood cell using Self Organizing Map

    Science.gov (United States)

    Rahmat, R. F.; Wulandari, F. S.; Faza, S.; Muchtar, M. A.; Siregar, I.

    2018-02-01

    Blood is an essential component of living creatures in the vascular space. Possible diseases can be identified through a blood test, one indicator of which is the shape of the red blood cells. The normal and abnormal morphology of a patient's red blood cells is very helpful to doctors in detecting a disease. Advances in digital image processing technology can be used to identify normal and abnormal blood cells of a patient. This research used the self-organizing map method to classify the normal and abnormal forms of red blood cells in digital images. The self-organizing map neural network method classified the normal and abnormal forms of red blood cells in the input images with 93.78% testing accuracy.
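
    A minimal self-organizing-map sketch using the MiniSom package (the feature values and names below are hypothetical; the original work trains on features extracted from segmented red-blood-cell images):

      import numpy as np
      from minisom import MiniSom

      # Hypothetical morphological features per cell: [area, perimeter, circularity].
      rng = np.random.default_rng(0)
      normal_cells = rng.normal([1.0, 1.0, 0.9], 0.05, size=(100, 3))
      abnormal_cells = rng.normal([0.7, 1.3, 0.5], 0.05, size=(100, 3))
      X = np.vstack([normal_cells, abnormal_cells])
      labels = np.array([0] * 100 + [1] * 100)            # 0 = normal, 1 = abnormal

      som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(X, 1000)

      # Label each map node by majority vote of the training cells mapped to it,
      # then classify a new cell by the label of its winning node.
      votes = {}
      for x, y in zip(X, labels):
          votes.setdefault(som.winner(x), []).append(y)
      node_label = {node: max(set(v), key=v.count) for node, v in votes.items()}

      def classify(cell):
          return node_label.get(som.winner(cell), 0)

      print(classify(np.array([0.72, 1.28, 0.52])))       # expected: 1 (abnormal)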

  18. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann , Alexandre; Grudinin , Sergei

    2017-01-01

    We present a new conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  19. Thermoelectric generator and method of forming same

    International Nuclear Information System (INIS)

    Wilson, K.T.

    1981-01-01

    A thermoelectric device is disclosed which comprises the formation of a multiplicity of thermocouples on a substrate in a narrow strip form, the thermocouples being formed by printing with first and second inks formed of suitable different powdered metals with a proper binder or flux. The thermocouples are formed in series and the opposed coupled areas are melted to form an intermingling of the two metals and the strips may be formed in substantial lengths and rolled onto a reel, or in relatively short strip form and disposed in a side-by-side abutting relationship in substantial numbers to define a generally rectangular panel form with opposed ends in electrical connection. The method of forming the panels includes the steps of feeding a suitable substrate, either in a continuous roll or sheet form, through first and second printers to form the series connected multiplicity of thermocouples thereon. From the printers the sheet or strip passes through a melter such as an induction furnace and from the furnace it passes through a sheeter, if the strip is in roll form. The sheets are then slit into narrow strips relative to the thermocouples, printed thereon and the strips are then formed into a bundle. A predetermined number of bundles are assembled into a panel form

  20. High molecular gas fractions in normal massive star-forming galaxies in the young Universe.

    Science.gov (United States)

    Tacconi, L J; Genzel, R; Neri, R; Cox, P; Cooper, M C; Shapiro, K; Bolatto, A; Bouché, N; Bournaud, F; Burkert, A; Combes, F; Comerford, J; Davis, M; Schreiber, N M Förster; Garcia-Burillo, S; Gracia-Carpio, J; Lutz, D; Naab, T; Omont, A; Shapley, A; Sternberg, A; Weiner, B

    2010-02-11

    Stars form from cold molecular interstellar gas. As this is relatively rare in the local Universe, galaxies like the Milky Way form only a few new stars per year. Typical massive galaxies in the distant Universe formed stars an order of magnitude more rapidly. Unless star formation was significantly more efficient, this difference suggests that young galaxies were much more molecular-gas rich. Molecular gas observations in the distant Universe have so far largely been restricted to very luminous, rare objects, including mergers and quasars, and accordingly we do not yet have a clear idea about the gas content of more normal (albeit massive) galaxies. Here we report the results of a survey of molecular gas in samples of typical massive-star-forming galaxies at mean redshifts of about 1.2 and 2.3, when the Universe was respectively 40% and 24% of its current age. Our measurements reveal that distant star forming galaxies were indeed gas rich, and that the star formation efficiency is not strongly dependent on cosmic epoch. The average fraction of cold gas relative to total galaxy baryonic mass at z = 2.3 and z = 1.2 is respectively about 44% and 34%, three to ten times higher than in today's massive spiral galaxies. The slow decrease between z approximately 2 and z approximately 1 probably requires a mechanism of semi-continuous replenishment of fresh gas to the young galaxies.

  1. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    OpenAIRE

    Alexander I. Zhdanov; Ekaterina Yu. Bogdanova

    2016-01-01

    This article focuses on a modification of the block variant of the Gauss–Seidel method for normal systems of equations, which is a sufficiently effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal equations systems is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate o...

  2. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present a non-normal linear stability analysis of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed when α [...]. Work supported under Award No. URF/1/1394-01.

  3. Investigation of reliability, validity and normality Persian version of the California Critical Thinking Skills Test; Form B (CCTST

    Directory of Open Access Journals (Sweden)

    Khallli H

    2003-04-01

    Full Text Available Background: To evaluate the effectiveness of present educational programs in terms of students' achieving problem solving, decision making and critical thinking skills, reliable, valid and standard instruments are needed. Purposes: To investigate the reliability, validity and norms of the CCTST Form B. The California Critical Thinking Skills Test contains 34 multiple-choice questions, each with one correct answer, in the five Critical Thinking (CT) cognitive skills domains. Methods: The translated CCTST Form B was given to 405 BSN nursing students of the Nursing Faculties located in Tehran (Tehran, Iran and Shahid Beheshti Universities) that were selected through random sampling. In order to determine the face and content validity, the test was translated and edited by Persian and English language professors and researchers; it was also confirmed by the judgments of a panel of medical education experts and psychology professors. CCTST reliability was determined with internal consistency using KR-20. The construct validity of the test was investigated with factor analysis, internal consistency and group difference. Results: The test reliability coefficient was 0.62. Factor analysis indicated that the CCTST is formed from 5 factors (elements), namely: Analysis, Evaluation, Inference, Inductive and Deductive Reasoning. The internal consistency method showed that all subscales had high and positive correlations with the total test score. The group difference method between nursing and philosophy students (n=50) indicated that there is a meaningful difference between nursing and philosophy students' scores (t=-4.95, p=0.0001). Percentile norms also show that the 50th percentile corresponds to a raw score of 11, and the 95th and 5th percentiles correspond to raw scores of 17 and 6, respectively. Conclusions: The results revealed that the test is sufficiently reliable as a research tool, and all subscales measure a single construct (Critical Thinking) and are able to distinguish the

  4. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

    , recent studies suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread...... metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM), white matter (WM) reference regions and Yakushev...... normalization. RESULTS: Compared to GM normalization, WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group with extensive involvement of frontal and parieto-temporal-occipital cortices, and several subcortical structures. Furthermore...
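
    A generic sketch of the reference-region normalization being compared above (division of every voxel by the mean uptake in a reference mask; the Yakushev procedure itself is more involved and is not reproduced here):

      import numpy as np

      def reference_region_normalize(pet, reference_mask):
          """Divide every voxel by the mean uptake inside a reference region
          (e.g. a gray-matter or white-matter mask), yielding relative uptake values."""
          return pet / pet[reference_mask].mean()

      # Hypothetical volumes: a 3-D FDG image and a boolean white-matter mask.
      rng = np.random.default_rng(0)
      pet = rng.gamma(shape=5.0, scale=1.0, size=(64, 64, 40))
      wm_mask = rng.random((64, 64, 40)) > 0.8

      pet_wm_normalized = reference_region_normalize(pet, wm_mask)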

  5. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
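
    A minimal comparison of two of the four procedures named above on hypothetical, distinctly non-normal samples, using SciPy (the Welch and Brown-Forsythe variants are omitted from this sketch):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Three hypothetical groups drawn from a skewed (lognormal) population.
      groups = [rng.lognormal(mean=m, sigma=1.0, size=30) for m in (0.0, 0.0, 0.5)]

      f_stat, p_anova = stats.f_oneway(*groups)     # classical one-way ANOVA
      h_stat, p_kw = stats.kruskal(*groups)         # Kruskal-Wallis rank-based test

      print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")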

  6. Standard test method for static leaching of monolithic waste forms for disposal of radioactive waste

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method provides a measure of the chemical durability of a simulated or radioactive monolithic waste form, such as a glass, ceramic, cement (grout), or cermet, in a test solution at temperatures <100°C under low specimen surface- area-to-leachant volume (S/V) ratio conditions. 1.2 This test method can be used to characterize the dissolution or leaching behaviors of various simulated or radioactive waste forms in various leachants under the specific conditions of the test based on analysis of the test solution. Data from this test are used to calculate normalized elemental mass loss values from specimens exposed to aqueous solutions at temperatures <100°C. 1.3 The test is conducted under static conditions in a constant solution volume and at a constant temperature. The reactivity of the test specimen is determined from the amounts of components released and accumulated in the solution over the test duration. A wide range of test conditions can be used to study material behavior, includin...
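
    The normalized elemental mass loss referred to above is conventionally computed as (quoted here as the generic definition for orientation, not from the text of the standard):

      NL_i = \frac{c_i\, V}{f_i\, S} = \frac{c_i}{f_i\,(S/V)},

    where c_i is the concentration of element i in the test solution, V the solution volume, S the specimen surface area, and f_i the mass fraction of element i in the unleached waste form; NL_i is then reported in g/m^2.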

  7. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose geometric mean of pairwise ratios-a simple but effective normalization method-for zero-inflated sequencing data such as microbiome data. Simulation studies and real datasets analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
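
    A compact sketch of the geometric-mean-of-pairwise-ratios idea as described in the abstract (illustrative; see the published method for the exact definition and its handling of edge cases):

      import numpy as np

      def gmpr_size_factors(counts):
          """Size factor of each sample = geometric mean, over all other samples, of the
          median count ratio computed on taxa observed (non-zero) in both samples."""
          counts = np.asarray(counts, dtype=float)          # taxa x samples
          n = counts.shape[1]
          factors = np.ones(n)
          for i in range(n):
              ratios = []
              for j in range(n):
                  if i == j:
                      continue
                  shared = (counts[:, i] > 0) & (counts[:, j] > 0)
                  if shared.any():
                      ratios.append(np.median(counts[shared, i] / counts[shared, j]))
              factors[i] = np.exp(np.mean(np.log(ratios)))
          return factors

      # Hypothetical zero-inflated microbiome counts.
      rng = np.random.default_rng(0)
      counts = rng.negative_binomial(2, 0.1, size=(200, 6)) * rng.binomial(1, 0.6, size=(200, 6))
      normalized = counts / gmpr_size_factors(counts)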

  8. A novel mean-centering method for normalizing microRNA expression from high-throughput RT-qPCR data

    Directory of Open Access Journals (Sweden)

    Wylie Dennis

    2011-12-01

    Full Text Available Abstract Background Normalization is critical for accurate gene expression analysis. A significant challenge in the quantitation of gene expression from biofluids samples is the inability to quantify RNA concentration prior to analysis, underscoring the need for robust normalization tools for this sample type. In this investigation, we evaluated various methods of normalization to determine the optimal approach for quantifying microRNA (miRNA) expression from biofluids and tissue samples when using the TaqMan® Megaplex™ high-throughput RT-qPCR platform with low RNA inputs. Findings We compared seven normalization methods in the analysis of variation of miRNA expression from biofluid and tissue samples. We developed a novel variant of the common mean-centering normalization strategy, herein referred to as mean-centering restricted (MCR) normalization, which is adapted to the TaqMan Megaplex RT-qPCR platform, but is likely applicable to other high-throughput RT-qPCR-based platforms. Our results indicate that MCR normalization performs comparably to or better than both standard mean-centering and other normalization methods. We also propose an extension of this method to be used when migrating biomarker signatures from Megaplex to singleplex RT-qPCR platforms, based on the identification of a small number of normalizer miRNAs that closely track the mean of expressed miRNAs. Conclusions We developed the MCR method for normalizing miRNA expression from biofluids samples when using the TaqMan Megaplex RT-qPCR platform. Our results suggest that normalization based on the mean of all fully observed (fully detected) miRNAs minimizes technical variance in normalized expression values, and that a small number of normalizer miRNAs can be selected when migrating from Megaplex to singleplex assays. In our study, we find that normalization methods that focus on a restricted set of miRNAs tend to perform better than methods that focus on all miRNAs, including
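
    A schematic of the mean-centering idea on Cq (cycle threshold) values, restricted to fully observed miRNAs as the abstract describes (detectability cut-offs and platform details are simplified and hypothetical):

      import numpy as np
      import pandas as pd

      def mean_center_restricted(cq, max_cq=35.0):
          """Normalize a miRNAs x samples table of Cq values by subtracting, per sample,
          the mean Cq of the miRNAs detected (Cq < max_cq) in every sample."""
          fully_observed = (cq < max_cq).all(axis=1)
          per_sample_mean = cq.loc[fully_observed].mean(axis=0)
          return cq - per_sample_mean                        # delta-Cq per sample

      # Hypothetical Megaplex-style data: 40 miRNAs x 6 biofluid samples.
      rng = np.random.default_rng(0)
      cq = pd.DataFrame(rng.normal(28, 3, size=(40, 6)),
                        index=[f"miR-{i}" for i in range(40)],
                        columns=[f"sample_{j}" for j in range(6)])
      normalized = mean_center_restricted(cq)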

  9. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

    Full Text Available Purpose: Numerous companies expect their knowledge management (KM) to be performed effectively in order to leverage and transform knowledge into competitive advantages. However, this raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for ranking and selecting a number of externally determined alternatives through distance measures. The entropy method is often used for assessing weights in the TOPSIS method. Entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To reduce employees' resistance to implementing a new strategy, it seems necessary to take all managers' opinions into account. The normal distribution, considered the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that, considering 6 criteria for evaluating the alternatives, the most appropriate KM strategy to implement in our company was ''Personalization''. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the approach, such as the normal distribution of the sample and community. These assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the presented solution, the opinions of all managers are gathered and normalized by using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; Normal distribution; entropy
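
    An illustrative TOPSIS sketch with entropy-derived weights and a statistical (z-score) normalization of the decision matrix, in the spirit of the approach described; the criteria, alternatives, and scores are hypothetical:

      import numpy as np

      def entropy_weights(X):
          """Entropy weighting of criteria from an (alternatives x criteria) score matrix."""
          P = X / X.sum(axis=0)
          E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
          d = 1.0 - E                                    # degree of diversification
          return d / d.sum()

      def topsis(X, weights, benefit):
          """Closeness of each alternative to the ideal solution (larger is better)."""
          Z = (X - X.mean(axis=0)) / X.std(axis=0)       # statistical (z-score) normalization
          Z = Z - Z.min(axis=0) + 1e-9                   # shift to positive values
          V = Z * weights
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_plus = np.linalg.norm(V - ideal, axis=1)
          d_minus = np.linalg.norm(V - anti, axis=1)
          return d_minus / (d_plus + d_minus)

      # Hypothetical averaged managers' scores for three KM strategies on 6 criteria.
      scores = np.array([[7., 6., 8., 5., 6., 7.],       # codification
                         [8., 7., 6., 7., 8., 6.],       # personalization
                         [6., 6., 7., 6., 5., 5.]])      # mixed
      w = entropy_weights(scores)
      print(topsis(scores, w, benefit=np.array([True] * 6)))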

  10. Method of forming a dianhydrosugar alcohol

    Science.gov (United States)

    Holladay, Johnathan E [Kennewick, WA; Hu, Jianli [Kennewick, WA; Wang, Yong [Richland, WA; Werpy, Todd A [West Richland, WA; Zhang, Xinjie [Burlington, MA

    2010-01-19

    The invention includes methods of producing dianhydrosugars. A polyol is reacted in the presence of a first catalyst to form a monocyclic sugar. The monocyclic sugar is transferred to a second reactor where it is converted to a dianhydrosugar alcohol in the presence of a second catalyst. The invention includes a process of forming isosorbide. An initial reaction is conducted at a first temperature in the presence of a solid acid catalyst. The initial reaction involves reacting sorbitol to produce 1,4-sorbitan, 3,6-sorbitan, 2,5-mannitan and 2,5-iditan. Utilizing a second temperature, the 1,4-sorbitan and 3,6-sorbitan are converted to isosorbide. The invention includes a method of purifying isosorbide from a mixture containing isosorbide and at least one additional component. A first distillation removes a first portion of the isosorbide from the mixture. A second distillation is then conducted at a higher temperature to remove a second portion of isosorbide from the mixture.

  11. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

  12. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
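
    A rough sketch of the two ICHE ingredients, intensity centering followed by contrast-limited adaptive histogram equalization, using OpenCV (a simplified grayscale stand-in, not the authors' implementation; the file name is hypothetical):

      import cv2
      import numpy as np

      def iche_like_normalize(gray, target_center=128, clip_limit=2.0, tile=8):
          """Shift the image's mean intensity to a common point, then apply CLAHE.
          A simplified stand-in for intensity centering and histogram equalization."""
          gray = gray.astype(np.float32)
          centered = np.clip(gray + (target_center - gray.mean()), 0, 255).astype(np.uint8)
          clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
          return clahe.apply(centered)

      # Usage on a hypothetical H&E slide tile converted to grayscale.
      tile_bgr = cv2.imread("he_tile.png")
      tile_gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
      normalized_tile = iche_like_normalize(tile_gray)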

  13. Principal Typings in a Restricted Intersection Type System for Beta Normal Forms with De Bruijn Indices

    Directory of Open Access Journals (Sweden)

    Daniel Ventura

    2010-01-01

    Full Text Available The lambda-calculus with de Bruijn indices assembles each alpha-class of lambda-terms in a unique term, using indices instead of variable names. Intersection types provide finitary type polymorphism and can characterise normalisable lambda-terms through the property that a term is normalisable if and only if it is typeable. To be closer to computations and to simplify the formalisation of the atomic operations involved in beta-contractions, several calculi of explicit substitution were developed mostly with de Bruijn indices. Versions of explicit substitutions calculi without types and with simple type systems are well investigated in contrast to versions with more elaborate type systems such as intersection types. In previous work, we introduced a de Bruijn version of the lambda-calculus with an intersection type system and proved that it preserves subject reduction, a basic property of type systems. In this paper a version with de Bruijn indices of an intersection type system originally introduced to characterise principal typings for beta-normal forms is presented. We present the characterisation in this new system and the corresponding versions for the type inference and the reconstruction of normal forms from principal typings algorithms. We briefly discuss the failure of the subject reduction property and some possible solutions for it.

  14. A statistical analysis of count normalization methods used in positron-emission tomography

    International Nuclear Information System (INIS)

    Holmes, T.J.; Ficke, D.C.; Snyder, D.L.

    1984-01-01

    As part of the Positron-Emission Tomography (PET) reconstruction process, annihilation counts are normalized for photon absorption, detector efficiency and detector-pair duty-cycle. Several normalization methods of time-of-flight and conventional systems are analyzed mathematically for count bias and variance. The results of the study have some implications on hardware and software complexity and on image noise and distortion

  15. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for reducing the fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization process of the Doppler signals was performed to reduce this error using an additional laser beam to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10⁻³.

  16. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image.

    Science.gov (United States)

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images both by using volume-of-interest (VOI) template and voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with PET/CT scanner for the spatial normalization for [18F]-N-3-fluoropropyl-2-betacarboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization method in healthy controls and patients with Parkinson's disease (PD). We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of MR images was done with a conventional spatial normalization tool (cvMR) and with DARTEL toolbox (dtMR) in statistical parametric mapping software. The CT images were modified in two ways, skull-stripping (ssCT) and intensity transformation (itCT). We normalized PET images with cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on original PET images were also used as a gold standard for comparison. The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI (P ...). The CT-guided normalization methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable.

  17. A new method locating good glass-forming compositions

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dechuan [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Geng, Yan [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Li, Zhengkun [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Liu, Dingming [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Fu, Huameng; Zhu, Zhengwang [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Qi, Yang, E-mail: qiyang@imp.neu.edu.cn [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Zhang, Haifeng, E-mail: hfzhang@imr.ac.cn [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China)

    2015-10-15

    A new method was proposed to pinpoint the compositions with good glass forming ability (GFA) by combining atomic clusters and mixing entropy. The clusters were confirmed by analyzing competing crystalline phases. The method was applied to the Zr–Al–Ni–Cu–Ag alloy system. A series of glass formers with diameter up to 20 mm were quickly detected in this system. The good glass formers were located only after trying 5 compositions around the calculated composition. The method was also effective in other multi-component systems. This method might provide a new way to understand glass formation and to quickly pinpoint compositions with high GFA. - Highlights: • A new method was proposed to quickly design glass formers with high glass forming ability. • The method of designing pentabasic Zr–Al–Ni–Cu–Ag alloys was applied. • A series of new Zr-based bulk metallic glasses with critical diameter of 20 mm were discovered.

  18. A new method locating good glass-forming compositions

    International Nuclear Information System (INIS)

    Yu, Dechuan; Geng, Yan; Li, Zhengkun; Liu, Dingming; Fu, Huameng; Zhu, Zhengwang; Qi, Yang; Zhang, Haifeng

    2015-01-01

    A new method was proposed to pinpoint the compositions with good glass forming ability (GFA) by combining atomic clusters and mixing entropy. The clusters were confirmed by analyzing competing crystalline phases. The method was applied to the Zr–Al–Ni–Cu–Ag alloy system. A series of glass formers with diameter up to 20 mm were quickly detected in this system. The good glass formers were located only after trying 5 compositions around the calculated composition. The method was also effective in other multi-component systems. This method might provide a new way to understand glass formation and to quickly pinpoint compositions with high GFA. - Highlights: • A new method was proposed to quickly design glass formers with high glass forming ability. • The method of designing pentabasic Zr–Al–Ni–Cu–Ag alloys was applied. • A series of new Zr-based bulk metallic glasses with critical diameter of 20 mm were discovered

  19. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Full Text Available Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
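
    A minimal sketch of the pairwise-ratio idea described above, under the assumption that the size factor of each sample is the geometric mean of the medians of count ratios against every other sample, computed over taxa observed in both samples. The function and variable names are illustrative and not taken from the GMPR software.

```python
import numpy as np

def gmpr_size_factors(counts):
    """counts: (taxa x samples) array of non-negative counts."""
    n_samples = counts.shape[1]
    size_factors = np.zeros(n_samples)
    for i in range(n_samples):
        pairwise_medians = []
        for j in range(n_samples):
            if i == j:
                continue
            shared = (counts[:, i] > 0) & (counts[:, j] > 0)  # taxa observed in both samples
            if shared.any():
                pairwise_medians.append(np.median(counts[shared, i] / counts[shared, j]))
        # geometric mean of the pairwise medians gives the size factor of sample i
        size_factors[i] = np.exp(np.mean(np.log(pairwise_medians)))
    return size_factors

# usage: divide each sample (column) by its size factor
# normalized = counts / gmpr_size_factors(counts)
```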

  20. Algorithms for finding Chomsky and Greibach normal forms for a fuzzy context-free grammar using an algebraic approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, E.T.

    1983-01-01

    Algorithms for the construction of the Chomsky and Greibach normal forms for a fuzzy context-free grammar using the algebraic approach are presented and illustrated by examples. The results obtained in this paper may have useful applications in fuzzy languages, pattern recognition, information storage and retrieval, artificial intelligence, database and pictorial information systems. 16 references.

  1. Die singulation method and package formed thereby

    Science.gov (United States)

    Anderson, Robert C [Tucson, AZ; Shul, Randy J [Albuquerque, NM; Clews, Peggy J [Tijeras, NM; Baker, Michael S [Albuquerque, NM; De Boer, Maarten P [Albuquerque, NM

    2012-08-07

    A method is disclosed for singulating die from a substrate having a sacrificial layer and one or more device layers, with a retainer being formed in the device layer(s) and anchored to the substrate. Deep Reactive Ion Etching (DRIE) of a trench through the substrate from the bottom side defines a shape for each die. A handle wafer is then attached to the bottom side of the substrate, and the sacrificial layer is etched to singulate the die and to form a frame from the retainer and the substrate. The frame and handle wafer, which retain the singulated die in place, can be attached together with a clamp or a clip to form a package for the singulated die. One or more stops can be formed from the device layer(s) to limit a sliding motion of the singulated die.

  2. An automatic method to discriminate malignant masses from normal tissue in digital mammograms

    International Nuclear Information System (INIS)

    Brake, Guido M. te; Karssemeijer, Nico; Hendriks, Jan H.C.L.

    2000-01-01

    Specificity levels of automatic mass detection methods in mammography are generally rather low, because suspicious-looking normal tissue is often hard to discriminate from real malignant masses. In this work a number of features were defined that are related to image characteristics that radiologists use to discriminate real lesions from normal tissue. An artificial neural network was used to map the computed features to a measure of suspiciousness for each region that was found suspicious by a mass detection method. Two data sets were used to test the method. The first set of 72 malignant cases (132 films) was a consecutive series taken from the Nijmegen screening programme; 208 normal films were added to improve the estimation of the specificity of the method. The second set was part of the new DDSM data set from the University of South Florida. A total of 193 cases (772 films) with 372 annotated malignancies was used. The measure of suspiciousness that was computed using the image characteristics was successful in discriminating tumours from false positive detections. Approximately 75% of all cancers were detected in at least one view at a specificity level of 0.1 false positive per image. (author)

  3. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. Image reconstruction for ECT is an inverse problem of finding the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized against the high- and low-permittivity calibrations for image reconstruction. The parallel normalization model is commonly used during the normalization process; it assumes that the materials are distributed in parallel, so the normalized capacitance is a linear function of the measured capacitance. A more recently used model is the series normalization model, which makes the normalized capacitance a nonlinear function of the measured capacitance. The most recently presented model is based on electrical field centre lines (EFCL) and is a mixture of the two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model
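
    As a sketch of the two baseline models contrasted above, the snippet below writes the parallel (linear) and series (nonlinear) normalization in the form they are commonly given, with Cm the measured capacitance and Cl, Ch the calibration capacitances at low and high permittivity; the EFCL-based mixture model of the paper is not reproduced here.

```python
def normalize_parallel(Cm, Cl, Ch):
    # parallel model: normalized capacitance is a linear function of Cm
    return (Cm - Cl) / (Ch - Cl)

def normalize_series(Cm, Cl, Ch):
    # series model: normalized capacitance is a nonlinear function of Cm
    return (1.0 / Cl - 1.0 / Cm) / (1.0 / Cl - 1.0 / Ch)
```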

  4. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    Science.gov (United States)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consists of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.

  5. Normalization Of Thermal-Radiation Form-Factor Matrix

    Science.gov (United States)

    Tsuyuki, Glenn T.

    1994-01-01

    Report describes an algorithm that adjusts the form-factor matrix in the TRASYS computer program, which calculates intra-spacecraft radiative interchange among various surfaces and environmental heat loading from sources such as the Sun.
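
    The report's specific adjustment algorithm is not reproduced here; the sketch below only illustrates the generic constraint such a normalization enforces, namely that in a closed enclosure every row of the form-factor matrix sums to one, so each computed row can be rescaled by its sum. Names are illustrative.

```python
import numpy as np

def normalize_form_factors(F):
    """F: (n x n) matrix of computed form factors F[i, j] from surface i to surface j."""
    row_sums = F.sum(axis=1, keepdims=True)
    return F / row_sums  # each corrected row now sums to exactly 1.0
```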

  6. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
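
    A minimal sketch of one standard way to draw correlated multivariate normal and log-normal samples, assuming the mean vector and covariance matrix of the underlying normal distribution are given; the paper's own scheme for reaching a requested accuracy and its application to resonance parameters are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, 2.0])                 # mean of the underlying normal variables
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])              # covariance of the underlying normal variables

L = np.linalg.cholesky(cov)                 # factor the covariance matrix
z = rng.standard_normal((10000, 2))         # independent standard normal samples
normal_samples = mean + z @ L.T             # correlated multivariate normal samples
lognormal_samples = np.exp(normal_samples)  # correlated log-normal samples
```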

  7. Algebraic method for analysis of nonlinear systems with a normal matrix

    International Nuclear Information System (INIS)

    Konyaev, Yu.A.; Salimova, A.F.

    2014-01-01

    A promising method has been proposed for analyzing a class of quasilinear nonautonomous systems of differential equations whose matrix can be represented as a sum of nonlinear normal matrices, which makes it possible to analyze stability without using the Lyapunov functions

  8. Post-UV colony-forming ability of normal fibroblast strains and of the xeroderma pigmentosum group G strain

    International Nuclear Information System (INIS)

    Barrett, S.F.; Tarone, R.E.; Moshell, A.N.; Ganges, M.B.; Robbins, J.H.

    1981-01-01

    In xeroderma pigmentosum, an inherited disorder of defective DNA repair, post-uv colony-forming ability of fibroblasts from patients in complementation groups A through F correlates with the patients' neurological status. The first xeroderma pigmentosum patient assigned to the recently discovered group G had the neurological abnormalities of XP. Researchers have determined the post-uv colony-forming ability of cultured fibroblasts from this patient and from 5 more control donors. Log-phase fibroblasts were irradiated with 254 nm uv light from a germicidal lamp, trypsinized, and replated at known densities. After 2 to 4 weeks' incubation the cells were fixed, stained and scored for colony formation. The strains' post-uv colony-forming ability curves were obtained by plotting the log of the percent remaining post-uv colony-forming ability as a function of the uv dose. The post-uv colony-forming ability of 2 of the 5 new normal strains was in the previously defined control donor zone, but that of the other 3 extended down to the level of the most resistant xeroderma pigmentosum strain. The post-uv colony-forming ability curve of the group G fibroblasts was not significantly different from the curves of the group D fibroblast strains from patients with clinical histories similar to that of the group G patient

  9. Experimental Method for Characterizing Electrical Steel Sheets in the Normal Direction

    Directory of Open Access Journals (Sweden)

    Thierry Belgrand

    2010-10-01

    Full Text Available This paper proposes an experimental method to characterise magnetic laminations in the direction normal to the sheet plane. The principle, which is based on a static excitation to avoid planar eddy currents, is explained and specific test benches are proposed. Measurements of the flux density are made with a sensor moving in and out of an air-gap. A simple analytical model is derived in order to determine the permeability in the normal direction. The experimental results for grain oriented steel sheets are presented and a comparison is provided with values obtained from literature.

  10. Bicervical normal uterus with normal vagina | Okeke | Annals of ...

    African Journals Online (AJOL)

    To the best of our knowledge, only few cases of bicervical normal uterus with normal vagina exist in the literature; one of the cases had an anterior‑posterior disposition. This form of uterine abnormality is not explicable by the existing classical theory of mullerian anomalies and suggests that a complex interplay of events ...

  11. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    Science.gov (United States)

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent workflow of data analysis more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers that were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that choosing a normalization method is difficult because the commonly accepted rules are easy to fulfill yet different normalization methods have unforeseen influences on both the kind and the number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
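
    A minimal sketch of one of the five methods named above, probabilistic quotient normalization (PQN), assuming an intensity matrix with samples in rows and metabolite features in columns and using the median spectrum as the reference; the remaining methods and the biomarker-screening workflow are not reproduced.

```python
import numpy as np

def pqn(X):
    """X: (samples x features) matrix of strictly positive intensities."""
    reference = np.median(X, axis=0)          # median spectrum across samples as reference
    quotients = X / reference                 # feature-wise quotients for every sample
    dilution = np.median(quotients, axis=1)   # one dilution factor per sample
    return X / dilution[:, None]
```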

  12. Hollow fiber membranes and methods for forming same

    Science.gov (United States)

    Bhandari, Dhaval Ajit; McCloskey, Patrick Joseph; Howson, Paul Edward; Narang, Kristi Jean; Koros, William

    2016-03-22

    The invention provides improved hollow fiber membranes having at least two layers, and methods for forming the same. The methods include co-extruding a first composition, a second composition, and a third composition to form a dual layer hollow fiber membrane. The first composition includes a glassy polymer; the second composition includes a polysiloxane; and the third composition includes a bore fluid. The dual layer hollow fiber membranes include a first layer and a second layer, the first layer being a porous layer which includes the glassy polymer of the first composition, and the second layer being a polysiloxane layer which includes the polysiloxane of the second composition.

  13. Static and Vibrational Analysis of Partially Composite Beams Using the Weak-Form Quadrature Element Method

    Directory of Open Access Journals (Sweden)

    Zhiqiang Shen

    2012-01-01

    Full Text Available Deformation of partially composite beams under distributed loading and free vibrations of partially composite beams under various boundary conditions are examined in this paper. The weak-form quadrature element method, which is characterized by direct evaluation of the integrals involved in the variational description of a problem, is used. One quadrature element is normally sufficient for a partially composite beam regardless of the magnitude of the shear connection stiffness. The number of integration points in a quadrature element is adjustable in accordance with convergence requirement. Results are compared with those of various finite element formulations. It is shown that the weak form quadrature element solution for partially composite beams is free of slip locking, and high computational accuracy is achieved with smaller number of degrees of freedom. Besides, it is found that longitudinal inertia of motion cannot be simply neglected in assessment of dynamic behavior of partially composite beams.

  14. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process

  15. Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method

    Science.gov (United States)

    Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na

    2013-08-01

    After FY-2F was successfully launched on January 13, 2012, the number of FengYun-2 geostationary meteorological satellites operating in orbit reached three. For accurate and efficient application of multi-satellite observation data, a study of multi-satellite normalization of the visible detectors became urgent. The method was required not to rely on in-orbit calibration, so that it could validate the calibration results before and after launch, calculate the daily updated surface bidirectional reflectance distribution function (BRDF), and at the same time track the long-term decay of the detectors' linearity and responsivity. Based on a study of typical BRDF models, a normalization method was designed that effectively removes the interference of directional surface reflectance characteristics and does not rely on in-orbit calibration of the visible detectors: the Median Vertical Plane (MVP) method. The MVP method is based on the symmetry about the principal plane of the directional reflective properties of general surface targets. Two geostationary satellites are taken as the endpoints of a segment; targets on the intersection line of the segment's median vertical plane with the earth's surface can be used as a normalization reference target (NRT). Observation of the NRT by the two satellites at the moment the sun passes through the MVP gives the same observation zenith and solar zenith angles and opposite relative direction angles. At that moment, the linear regression coefficients of the satellite output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and implemented. The results showed that the differences in responsivity between satellites could be up to 10.1% (FY-2E to FY-2F); the differences in the output reflectance calculated with the broadcast calibration look-up table could be up to 21.1% (FY-2D to FY-2F); the differences of the output
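
    A minimal sketch of the final step described above: once matched outputs over the normalization reference target are available from two satellites at the MVP crossing times, the normalization coefficients are simply the coefficients of a linear fit between the two output series. Variable names and the toy numbers are illustrative.

```python
import numpy as np

def normalization_coefficients(dn_reference_sat, dn_target_sat):
    """Least-squares fit: dn_target ~= gain * dn_reference + offset."""
    gain, offset = np.polyfit(dn_reference_sat, dn_target_sat, deg=1)
    return gain, offset

# usage with matched NRT observations from two FengYun-2 satellites (toy numbers):
# gain, offset = normalization_coefficients(np.array([110., 150., 220.]),
#                                           np.array([118., 163., 240.]))
```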

  16. The research on AP1000 nuclear main pumps’ complete characteristics and the normalization method

    International Nuclear Information System (INIS)

    Zhu, Rongsheng; Liu, Yong; Wang, Xiuli; Fu, Qiang; Yang, Ailing; Long, Yun

    2017-01-01

    Highlights: • The complete characteristics of the main pump are investigated. • Head and torque show a quadratic character under some operating conditions. • The characteristics tend to be the same under certain conditions. • The normalization method gives proper estimates of the external characteristics. • The normalization method can efficiently improve security computing. - Abstract: The paper summarizes the complete characteristics of nuclear main pumps based on experimental results, makes a detailed study, and draws a series of important conclusions: over the overall flow range, the runaway and zero-revolving-speed operating conditions of nuclear main pumps both have quadratic characteristics; toward infinite flow, the braking and zero-revolving-speed operating conditions show consistent external characteristics. To remedy the shortcoming of the traditional complete-characteristic expression, which describes only limited flow sections at specific revolving speeds, the paper proposes a normalization method. As an important boundary condition for security computing of the unstable transient processes of the primary reactor coolant pump and the nuclear island primary and secondary circuits, the precision of the complete-characteristic data and curves affects the precision of the security computing. A normalization curve obtained by applying the normalization method to the complete-characteristic data can correctly, completely and precisely express the complete characteristics of the primary reactor coolant pump at any rotational speed and over the full flow range, and it gives proper estimates of the external characteristics for flows outside the test range and even toward infinite flow. These advantages are of great significance for improving the security computing of transient processes of the primary reactor coolant pump and the circuit system.

  17. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method when three-dimensional laser scanning technology is applied to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud was proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of the point cloud, determined from the normal vector of the local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information on the monitored object quickly and comprehensively and accurately reflect the deformation of the datum feature.
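
    A minimal sketch of the per-point normal estimation that the method above builds on, assuming the common approach of fitting a local plane to the k nearest neighbours and taking the direction of least variance as the normal; the datum detection and B-spline fitting steps of the paper are not reproduced. scipy's cKDTree is used for the neighbour search.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """points: (n x 3) array of scanned coordinates; returns (n x 3) unit normals."""
    tree = cKDTree(points)                          # kd-tree for the topological relation
    normals = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)                 # k nearest neighbours of point i
        nbrs = points[idx] - points[idx].mean(axis=0)
        # the normal is the direction of least variance of the local neighbourhood
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```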

  18. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    Science.gov (United States)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques within the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method are required to understand the population data trend clearly, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and the classical methods then break down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers as well as when they are free of them. Among these are the robust estimates of location and scale called Shamos Bickel (SB) and Hodges Lehman (HL), which can be used in place of the classical mean and standard deviation. Embedding them in the normalization stage of the T-Method may help enhance the accuracy of the T-Method as well as reveal the robustness of the T-Method itself. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with only a minimal difference in error compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where the risk from outliers is low, the T-Method performs better, while for a higher sample size with extreme outliers the T-Method also shows better prediction than the others. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB, or the normal mean and standard deviation, into it, since doing so changes the percentage errors only minimally.
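
    A minimal sketch of the two robust estimators named above, assuming their usual definitions: the Hodges-Lehmann location estimate is the median of the pairwise (Walsh) averages, and the Shamos scale estimate is the median of the pairwise absolute differences. How they are embedded in the T-Method normalization stage is not reproduced here.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Robust location: median of all pairwise means (including the values themselves)."""
    walsh = [(x[i] + x[j]) / 2.0 for i, j in combinations(range(len(x)), 2)] + list(x)
    return np.median(walsh)

def shamos(x):
    """Robust scale: median of all pairwise absolute differences."""
    diffs = [abs(x[i] - x[j]) for i, j in combinations(range(len(x)), 2)]
    return np.median(diffs)  # multiply by ~1.048 if consistency with the normal sigma is needed
```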

  19. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distributions of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box-plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed after OSEM and UHD reconstruction for the SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that the SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstructions yielded significantly different SUV and SUL values, and the differences remained consistently high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
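
    A minimal sketch of the quantities being compared above, assuming the standard body-weight SUV definition and a liver-normalized ratio; the reconstruction settings themselves (OSEM vs. UHD) are scanner-side and are not illustrated here.

```python
def suv_body_weight(tissue_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Standardized uptake value corrected for body weight (assumes tissue density ~1 g/ml)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return tissue_activity_kbq_per_ml / (dose_kbq / weight_g)

def liver_normalized_ratio(lesion_suv, liver_suv):
    """Uptake of the hottest lesion normalized to liver uptake."""
    return lesion_suv / liver_suv
```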

  20. Development of standard testing methods for nuclear-waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Nelson, R.D.

    1981-11-01

    Standard test methods for waste package component development and design, safety analyses, and licensing are being developed for the Nuclear Waste Materials Handbook. This paper describes mainly the testing methods for obtaining waste form materials data

  1. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures of Nonconducting Specimens

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 This test method describes an accurate technique for measuring the normal spectral emittance of electrically nonconducting materials in the temperature range from 1000 to 1800 K, and at wavelengths from 1 to 35 μm. It is particularly suitable for measuring the normal spectral emittance of materials such as ceramic oxides, which have relatively low thermal conductivity and are translucent to appreciable depths (several millimetres) below the surface, but which become essentially opaque at thicknesses of 10 mm or less. 1.2 This test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is particularly suitable for research laboratories, where the highest precision and accuracy are desired, and is not recommended for routine production or acceptance testing. Because of its high accuracy, this test method may be used as a reference method to be applied to production and acceptance testing in case of dispute. 1.3 This test metho...

  2. Method of forming a ceramic matrix composite and a ceramic matrix component

    Science.gov (United States)

    de Diego, Peter; Zhang, James

    2017-05-30

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity, filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  3. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes consist of functions for estimating doses not only under normal conditions but also in the case of accidents, when radioactive nuclides may leak and spread into the environment by air diffusion, the user needs to have special knowledge and experience. In this presentation, we describe how, with a view to preparing a method by which a person in charge of transportation can calculate doses under normal conditions, the main parameters upon which the value of doses depends were extracted and the dose for a unit of transportation was estimated. (J.P.N.)

  4. Flux form Semi-Lagrangian methods for parabolic problems

    Directory of Open Access Journals (Sweden)

    Bonaventura Luca

    2016-09-01

    Full Text Available A semi-Lagrangian method for parabolic problems is proposed, that extends previous work by the authors to achieve a fully conservative, flux-form discretization of linear and nonlinear diffusion equations. A basic consistency and stability analysis is proposed. Numerical examples validate the proposed method and display its potential for consistent semi-Lagrangian discretization of advection diffusion and nonlinear parabolic problems.
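
    As a small illustration of what a flux-form (conservative) discretization of a diffusion equation looks like, the sketch below advances 1-D linear diffusion one explicit step using interface fluxes; this is a generic finite-volume example under simple assumptions (uniform grid, zero-flux boundaries), not the semi-Lagrangian scheme of the paper.

```python
import numpy as np

def flux_form_diffusion_step(u, D, dx, dt):
    """u: cell averages; D: diffusivity; zero-flux boundaries for simplicity."""
    flux = np.zeros(len(u) + 1)
    flux[1:-1] = -D * (u[1:] - u[:-1]) / dx        # F_{i+1/2} = -D du/dx at interfaces
    # cell averages change only through the divergence of the interface fluxes,
    # so the sum of u (total mass) is conserved up to the boundary fluxes
    return u - dt / dx * (flux[1:] - flux[:-1])
```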

  5. Shear Stress-Normal Stress (Pressure) Ratio Decides Forming Callus in Patients with Diabetic Neuropathy

    Science.gov (United States)

    Noguchi, Hiroshi; Takehara, Kimie; Ohashi, Yumiko; Suzuki, Ryo; Yamauchi, Toshimasa; Kadowaki, Takashi; Sanada, Hiromi

    2016-01-01

    Aim. Callus is a risk factor, leading to severe diabetic foot ulcer; thus, prevention of callus formation is important. However, normal stress (pressure) and shear stress associated with callus have not been clarified. Additionally, as a new variable, a shear stress-normal stress (pressure) ratio (SPR) was examined. The purpose was to clarify the external force associated with callus formation in patients with diabetic neuropathy. Methods. The external force of the 1st, 2nd, and 5th metatarsal head (MTH) as callus predilection regions was measured. The SPR was calculated by dividing shear stress by normal stress (pressure), concretely, peak values (SPR-p) and time integral values (SPR-i). The optimal cut-off point was determined. Results. Callus formation regions of the 1st and 2nd MTH had higher SPR-i than noncallus formation regions. The cut-off value of the 1st MTH was 0.60 and that of the 2nd MTH was 0.50. For the 5th MTH, variables pertaining to the external forces could not be determined to be indicators of callus formation because of low accuracy. Conclusions. The callus formation cut-off values of the 1st and 2nd MTH were clarified. In the future, it will be necessary to confirm the effect of using appropriate footwear and gait training on lowering SPR-i. PMID:28050567
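
    A minimal sketch of the two derived quantities defined above, reading SPR-p as the ratio of the peak values and SPR-i as the ratio of the time-integral values of shear stress and plantar pressure over a stance phase; this reading and all array names are assumptions made here for illustration only.

```python
import numpy as np

def spr_features(shear_stress, pressure, time):
    """All inputs are 1-D arrays sampled at the same instants during one stance phase."""
    spr_p = shear_stress.max() / pressure.max()                      # ratio of peak values
    spr_i = np.trapz(shear_stress, time) / np.trapz(pressure, time)  # ratio of time-integral values
    return spr_p, spr_i
```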

  6. Shear Stress-Normal Stress (Pressure) Ratio Decides Forming Callus in Patients with Diabetic Neuropathy

    Directory of Open Access Journals (Sweden)

    Ayumi Amemiya

    2016-01-01

    Full Text Available Aim. Callus is a risk factor, leading to severe diabetic foot ulcer; thus, prevention of callus formation is important. However, normal stress (pressure) and shear stress associated with callus have not been clarified. Additionally, as a new variable, a shear stress-normal stress (pressure) ratio (SPR) was examined. The purpose was to clarify the external force associated with callus formation in patients with diabetic neuropathy. Methods. The external force of the 1st, 2nd, and 5th metatarsal head (MTH) as callus predilection regions was measured. The SPR was calculated by dividing shear stress by normal stress (pressure), concretely, peak values (SPR-p) and time integral values (SPR-i). The optimal cut-off point was determined. Results. Callus formation regions of the 1st and 2nd MTH had higher SPR-i than noncallus formation regions. The cut-off value of the 1st MTH was 0.60 and that of the 2nd MTH was 0.50. For the 5th MTH, variables pertaining to the external forces could not be determined to be indicators of callus formation because of low accuracy. Conclusions. The callus formation cut-off values of the 1st and 2nd MTH were clarified. In the future, it will be necessary to confirm the effect of using appropriate footwear and gait training on lowering SPR-i.

  7. Evaluation of four methods for separation of lymphocytes from normal individuals and patients with cancer and tuberculosis.

    Science.gov (United States)

    Patrick, C C; Graber, C D; Loadholt, C B

    1976-01-01

    An optimal technique was sought for lymphocyte recovery from normal and chronically diseased individuals. Lymphocytes were separated by four techniques: Plasmagel, Ficoll-Hypaque, a commercial semiautomatic method, and simple centrifugation, using blood drawn from ten normal individuals, ten cancer patients, and ten tuberculosis patients. The lymphocyte mixture obtained with each method was analyzed for percent recovery, amount of contamination by erythrocytes and neutrophils, and percent viability. The results show that the semiautomatic method yielded the best percent recovery of lymphocytes for normal individuals, while the simple centrifugation method gave the highest percent recovery for cancer and tuberculosis patients. The Ficoll-Hypaque method gave the lowest erythrocyte contamination for all three types of individuals tested, while the Plasmagel method gave the lowest neutrophil contamination for all three types of individuals. The simple centrifugation method yielded all viable lymphocytes and thus gave the highest percent viability.

  8. Denotational Aspects of Untyped Normalization by Evaluation

    DEFF Research Database (Denmark)

    Filinski, Andrzej; Rohde, Henning Korsholm

    2005-01-01

    ... of soundness (the output term, if any, is in normal form and β-equivalent to the input term); identification (β-equivalent terms are mapped to the same result); and completeness (the function is defined for all terms that do have normal forms). We also show how the semantic construction enables a simple yet ... formal correctness proof for the normalization algorithm, expressed as a functional program in an ML-like, call-by-value language. Finally, we generalize the construction to produce an infinitary variant of normal forms, namely Böhm trees. We show that the three-part characterization of correctness ...

  9. Effects Of Combinations Of Patternmaking Methods And Dress Forms On Garment Appearance

    Directory of Open Access Journals (Sweden)

    Fujii Chinami

    2017-09-01

    Full Text Available We investigated the effects of the combinations of patternmaking methods and dress forms on the appearance of a garment. Six upper garments were made using three patternmaking methods used in France, Italy, and Japan, and two dress forms made in Japan and France. The patterns and the appearances of the garments were compared using geometrical measurements. Sensory evaluations of the differences in garment appearance and fit on each dress form were also carried out. In the patterns, the positions of bust and waist darts were different. The waist dart length, bust dart length, and positions of the bust top were different depending on the patternmaking method, even when the same dress form was used. This was a result of differences in the measurements used and the calculation methods employed for other dimensions. This was because the ideal body shape was different for each patternmaking method. Even for garments produced for the same dress form, the appearances of the shoulder, bust, and waist from the front, side, and back views were different depending on the patternmaking method. As a result of the sensory evaluation, it was also found that the bust and waist shapes of the garments were different depending on the combination of patternmaking method and dress form. Therefore, to obtain a garment with better appearance, it is necessary to understand the effects of the combinations of patternmaking methods and body shapes.

  10. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    Science.gov (United States)

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high throughput technologies are subject to various sources of noise and accordingly the raw data are pre-processed before formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as cell-cycle, circadian clock, etc., the choice of the normalization method may substantially impact the determination of a gene to be rhythmic. Thus rhythmicity of a gene can purely be an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of a normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate the proposed methodology using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. Thus it suggests that the proposed measure is robust to the choice of a normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset. Availability: A user friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.

  11. FORMED: Bringing Formal Methods to the Engineering Desktop

    Science.gov (United States)

    2016-02-01

    FORMED: Bringing Formal Methods to the Engineering Desktop. BAE Systems, February 2016, final technical report (contract FA8750-14-C-0024, program element 63781D). Approved for public release. This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government's ...

  12. Machine learning methods for clinical forms analysis in mental health.

    Science.gov (United States)

    Strauss, John; Peguero, Arturo Martinez; Hirst, Graeme

    2013-01-01

    In preparation for a clinical information system implementation, the Centre for Addiction and Mental Health (CAMH) Clinical Information Transformation project completed multiple preparation steps. An automated process was desired to supplement the onerous task of manual analysis of clinical forms. We used natural language processing (NLP) and machine learning (ML) methods for a series of 266 separate clinical forms. For the investigation, documents were represented by feature vectors. We used four ML algorithms for our examination of the forms: cluster analysis, k-nearest neighbours (kNN), decision trees and support vector machines (SVM). Parameters for each algorithm were optimized. SVM had the best performance with a precision of 64.6%. Though we did not find any method sufficiently accurate for practical use, to our knowledge this approach to forms has not been used previously in mental health.
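
    A minimal sketch of the kind of pipeline described above: forms represented as feature vectors and classified with a support vector machine. The scikit-learn functions are real, but the tiny example corpus, labels, and feature choice are purely illustrative and are not the CAMH data or features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# toy stand-ins for clinical form texts and their categories (illustrative only)
form_texts = ["intake assessment mood sleep appetite risk",
              "discharge summary medications follow-up plan"]
form_labels = ["assessment", "discharge"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())  # feature vectors -> SVM classifier
model.fit(form_texts, form_labels)
print(model.predict(["medication plan at discharge"]))
```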

  13. Analysis of Voltage Forming Methods for Multiphase Inverters

    Directory of Open Access Journals (Sweden)

    Tadas Lipinskis

    2013-05-01

    Full Text Available The article discusses the advantages of the multiphase AC induction motor over motors with three or fewer phases. It presents possible stator winding configurations for a multiphase induction motor. Various fault control strategies for the phases feeding the motor were reviewed. The authors propose a method for quality evaluation of the voltage forming algorithm in the inverter. A six-phase voltage source inverter, in which voltage is formed using a simple SPWM control algorithm, was simulated in Matlab Simulink. Simulation results were evaluated using the proposed method. The inverter's power stage was powered by a 400 V DC source. The spectrum of the output currents was analysed, and the magnitude of the main frequency component was at least 12 times greater than that of the next largest component. The value of the rectified inverter voltage was 373 V. Article in Lithuanian

  14. An analysis of normalization methods for Drosophila RNAi genomic screens and development of a robust validation scheme

    Science.gov (United States)

    Wiles, Amy M.; Ravi, Dashnamoorthy; Bhavani, Selvaraj; Bishop, Alexander J.R.

    2010-01-01

    Genome-wide RNAi screening is a powerful, yet relatively immature technology that allows investigation into the role of individual genes in a process of choice. Most RNAi screens identify a large number of genes with a continuous gradient in the assessed phenotype. Screeners must then decide whether to examine just those genes with the most robust phenotype or to examine the full gradient of genes that cause an effect and how to identify the candidate genes to be validated. We have used RNAi in Drosophila cells to examine viability in a 384-well plate format and compare two screens, untreated control and treatment. We compare multiple normalization methods, which take advantage of different features within the data, including quantile normalization, background subtraction, scaling, cellHTS2, and interquartile range measurement. Considering the false-positive potential that arises from RNAi technology, a robust validation method was designed for the purpose of gene selection for future investigations. In a retrospective analysis, we describe the use of validation data to evaluate each normalization method. While no normalization method worked ideally, we found that a combination of two methods, background subtraction followed by quantile normalization and cellHTS2, at different thresholds, captures the most dependable and diverse candidate genes. Thresholds are suggested depending on whether a few candidate genes are desired or a more extensive systems level analysis is sought. In summary, our normalization approaches and experimental design to perform validation experiments are likely to apply to those high-throughput screening systems attempting to identify genes for systems level analysis. PMID:18753689
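
    A minimal sketch of one of the normalization methods compared above, quantile normalization, assuming a matrix with wells or genes in rows and replicate screens in columns; background subtraction, scaling, and the cellHTS2 pipeline are not reproduced.

```python
import numpy as np

def quantile_normalize(X):
    """X: (features x samples). Forces every column onto the same (average) distribution."""
    ranks = X.argsort(axis=0).argsort(axis=0)        # rank of each value within its column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)    # mean of the k-th smallest values across columns
    return mean_sorted[ranks]                        # replace each value by the mean value of its rank
```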

  15. Accelerated in-vitro release testing methods for extended-release parenteral dosage forms.

    Science.gov (United States)

    Shen, Jie; Burgess, Diane J

    2012-07-01

    This review highlights current methods and strategies for accelerated in-vitro drug release testing of extended-release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in-situ depot-forming systems and implants. Extended-release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, 'real-time' in-vitro release tests for these dosage forms are often run over a long time period. Accelerated in-vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in-vitro release methods using United States Pharmacopeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended-release parenteral dosage forms, along with the accelerated in-vitro release testing methods currently employed are discussed. Accelerated in-vitro release testing methods with good discriminatory ability are critical for quality control of extended-release parenteral products. Methods that can be used in the development of in-vitro-in-vivo correlation (IVIVC) are desirable; however, for complex parenteral products this may not always be achievable. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.

  16. Accelerated in vitro release testing methods for extended release parenteral dosage forms

    Science.gov (United States)

    Shen, Jie; Burgess, Diane J.

    2012-01-01

    Objectives This review highlights current methods and strategies for accelerated in vitro drug release testing of extended release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in situ depot-forming systems, and implants. Key findings Extended release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, “real-time” in vitro release tests for these dosage forms are often run over a long time period. Accelerated in vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in vitro release methods using United States Pharmacopoeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended release parenteral dosage forms, along with the accelerated in vitro release testing methods currently employed are discussed. Conclusions Accelerated in vitro release testing methods with good discriminatory ability are critical for quality control of extended release parenteral products. Methods that can be used in the development of in vitro-in vivo correlation (IVIVC) are desirable, however for complex parenteral products this may not always be achievable. PMID:22686344

  17. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K.L.

    2011-01-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We are showing the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  18. The pathophysiology of the aqueduct stroke volume in normal pressure hydrocephalus: can co-morbidity with other forms of dementia be excluded?

    International Nuclear Information System (INIS)

    Bateman, Grant A.; Levi, Christopher R.; Wang, Yang; Lovett, Elizabeth C.; Schofield, Peter

    2005-01-01

    Variable results are obtained from the treatment of normal pressure hydrocephalus (NPH) by shunt insertion. There is a high correlation between NPH and the pathology of Alzheimer's disease (AD) on brain biopsy. There is an overlap between AD and vascular dementia (VaD), suggesting that a correlation exists between NPH and other forms of dementia. This study seeks to (1) understand the physiological factors behind, and (2) define the ability of, the aqueduct stroke volume to exclude dementia co-morbidity. Twenty-four patients from a dementia clinic were classified as having either early AD or VaD on the basis of clinical features, Hachinski score and neuropsychological testing. They were compared with 16 subjects with classical clinical findings of NPH and 12 age-matched non-cognitively impaired subjects. MRI flow quantification was used to measure aqueduct stroke volume and arterial pulse volume. An arterio-cerebral compliance ratio was calculated from the two volumes in each patient. The aqueduct stroke volume was elevated in all three forms of dementia, with no significant difference noted between the groups. The arterial pulse volume was elevated by 24% in VaD and reduced by 35% in NPH, compared to normal (P=0.05 and P=0.002, respectively), and was normal in AD. There was a spectrum of relative compliance with normal compliance in VaD and reduced compliance in AD and NPH. The aqueduct stroke volume depends on the arterial pulse volume and the relative compliance between the arterial tree and brain. The aqueduct stroke volume cannot exclude significant co-morbidity in NPH. (orig.)

  19. The pathophysiology of the aqueduct stroke volume in normal pressure hydrocephalus: can co-morbidity with other forms of dementia be excluded?

    Energy Technology Data Exchange (ETDEWEB)

    Bateman, Grant A. [John Hunter Hospital, Department of Medical Imaging, Newcastle (Australia); Levi, Christopher R.; Wang, Yang; Lovett, Elizabeth C. [Hunter Medical Research Institute, Clinical Neurosciences Program, Newcastle (Australia); Schofield, Peter [James Fletcher Hospital, Neuropsychiatry Unit, Newcastle (Australia)

    2005-10-01

    Variable results are obtained from the treatment of normal pressure hydrocephalus (NPH) by shunt insertion. There is a high correlation between NPH and the pathology of Alzheimer's disease (AD) on brain biopsy. There is an overlap between AD and vascular dementia (VaD), suggesting that a correlation exists between NPH and other forms of dementia. This study seeks to (1) understand the physiological factors behind, and (2) define the ability of, the aqueduct stroke volume to exclude dementia co-morbidity. Twenty-four patients from a dementia clinic were classified as having either early AD or VaD on the basis of clinical features, Hachinski score and neuropsychological testing. They were compared with 16 subjects with classical clinical findings of NPH and 12 age-matched non-cognitively impaired subjects. MRI flow quantification was used to measure aqueduct stroke volume and arterial pulse volume. An arterio-cerebral compliance ratio was calculated from the two volumes in each patient. The aqueduct stroke volume was elevated in all three forms of dementia, with no significant difference noted between the groups. The arterial pulse volume was elevated by 24% in VaD and reduced by 35% in NPH, compared to normal (P=0.05 and P=0.002, respectively), and was normal in AD. There was a spectrum of relative compliance with normal compliance in VaD and reduced compliance in AD and NPH. The aqueduct stroke volume depends on the arterial pulse volume and the relative compliance between the arterial tree and brain. The aqueduct stroke volume cannot exclude significant co-morbidity in NPH. (orig.)

  20. Cognitive Factors in the Choice of Syntactic Form by Aphasic and Normal Speakers of English and Japanese: The Speaker's Impulse.

    Science.gov (United States)

    Menn, Lise; And Others

    This study examined the role of empathy in the choice of syntactic form and the degree of independence of pragmatic and syntactic abilities in a range of aphasic patients. Study 1 involved 9 English-speaking and 9 Japanese-speaking aphasic subjects with 10 English-speaking and 4 Japanese normal controls. Study 2 involved 14 English- and 6…

  1. Normal modes and continuous spectra

    International Nuclear Information System (INIS)

    Balmforth, N.J.; Morrison, P.J.

    1994-12-01

    The authors consider stability problems arising in fluids, plasmas and stellar systems that contain singularities resulting from wave-mean flow or wave-particle resonances. Such resonances lead to singularities in the differential equations determining the normal modes at the so-called critical points or layers. The locations of the singularities are determined by the eigenvalue of the problem, and as a result, the spectrum of eigenvalues forms a continuum. They outline a method to construct the singular eigenfunctions comprising the continuum for a variety of problems

  2. The Effect of Normal Force on Tribocorrosion Behaviour of Ti-10Zr Alloy and Porous TiO2-ZrO2 Thin Film Electrochemical Formed

    Science.gov (United States)

    Dănăilă, E.; Benea, L.

    2017-06-01

    The tribocorrosion behaviour of Ti-10Zr alloy and a porous TiO2-ZrO2 thin film electrochemically formed on Ti-10Zr alloy was evaluated in Fusayama-Mayer artificial saliva solution. Tribocorrosion experiments were performed using a unidirectional pin-on-disc experimental set-up which was mechanically and electrochemically instrumented, under various solicitation conditions. The effect of applied normal force on the tribocorrosion performance of the tested materials was determined. Open circuit potential (OCP) measurements performed before, during and after sliding tests were applied in order to determine the tribocorrosion degradation. The applied normal force was found to greatly affect the potential during tribocorrosion experiments, an increase in the normal force inducing a decrease in potential and accelerating the depassivation of the materials studied. The results show a decrease in friction coefficient with gradually increasing normal load. It was proved that the porous TiO2-ZrO2 thin film electrochemically formed on Ti-10Zr alloy led to an improvement in tribocorrosion resistance compared to the non-anodized Ti-10Zr alloy intended for biomedical applications.

  3. Application of specific gravity method for normalization of urinary excretion rates of radionuclides

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2015-01-01

    In vitro bioassay monitoring is based on the determination of activity concentration in biological samples excreted from the body and is most suitable for alpha and beta emitters. For occupational workers handling actinides in reprocessing facilities, the possibility of internal exposure exists, and urine assay is the preferred method for monitoring such exposure. A urine sample collected over a 24 h duration is the true representative bioassay sample; hence, in the case of insufficient collection time, a specific-gravity-based method of normalizing the urine sample is used. The present study reports specific gravity data generated for a control group of the Indian population by the use of a densitometer, and its application in urinary sample activity normalization. The average specific gravity value obtained for the control group was 1.008±0.005 gm/ml. (author)

  4. Article, component, and method of forming an article

    Science.gov (United States)

    Lacy, Benjamin Paul; Itzel, Gary Michael; Kottilingam, Srikanth Chandrudu; Dutta, Sandip; Schick, David Edward

    2018-05-22

    An article and method of forming an article are provided. The article includes a body portion separating an inner region and an outer region, an aperture in the body portion, the aperture fluidly connecting the inner region to the outer region, and a conduit extending from an outer surface of the body portion at the aperture and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The method includes providing a body portion separating an inner region and an outer region, providing an aperture in the body portion, and forming a conduit over the aperture, the conduit extending from an outer surface of the body portion and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The article is arranged and disposed for insertion within a hot gas path component.

  5. Method for forming H2-permselective oxide membranes

    Science.gov (United States)

    Gavalas, G.R.; Nam, S.W.; Tsapatsis, M.; Kim, S.

    1995-09-26

    Methods are disclosed for forming permselective oxide membranes that are highly selective to permeation of hydrogen by chemical deposition of reactants in the pores of porous tubes, such as Vycor™ glass or Al2O3 tubes. The porous tubes have pores extending through the tube wall. The process involves forming a stream containing a first reactant of the formula RXn, wherein R is silicon, titanium, boron or aluminum, X is chlorine, bromine or iodine, and n is a number which is equal to the valence of R; and forming another stream containing water vapor as the second reactant. Both of the reactant streams are passed along either the outside or the inside surface of a porous tube and the streams react in the pores of the porous tube to form a nonporous layer of R-oxide in the pores. The membranes are formed by the hydrolysis of the respective halides. In another embodiment, the first reactant stream contains a first reactant having the formula SiHnCl4-n where n is 1, 2 or 3; and the second reactant stream contains water vapor and oxygen. In still another embodiment the first reactant stream contains a first reactant selected from the group consisting of Cl3SiOSiCl3, Cl3SiOSiCl2OSiCl3, and mixtures thereof, and the second reactant stream contains water vapor. In still another embodiment, membrane formation is carried out by an alternating flow deposition method. This involves a sequence of cycles, each cycle comprising introduction of the halide-containing stream and allowance of a specific time for reaction followed by purge and flow of the water vapor containing stream for a specific length of time. In all embodiments the nonporous layers formed are selectively permeable to hydrogen. 11 figs.

  6. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
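
    As a rough illustration of the two range estimators compared above, the sketch below (not the authors' code; the simulated data and the 95% central range are illustrative assumptions) contrasts a Gaussian mean ± 1.96 SD range with an empirical percentile estimate.

        # Illustrative only: Gaussian vs. percentile-estimate normal ranges on simulated data.
        import numpy as np

        rng = np.random.default_rng(0)
        hematocrit = rng.normal(loc=40.0, scale=3.0, size=200)   # simulated blood-chemistry values

        mean, sd = hematocrit.mean(), hematocrit.std(ddof=1)
        gaussian_range = (mean - 1.96 * sd, mean + 1.96 * sd)     # Gaussian method
        percentile_range = (np.percentile(hematocrit, 2.5),       # percentile-estimate method
                            np.percentile(hematocrit, 97.5))

        print("Gaussian range:  ", gaussian_range)
        print("Percentile range:", percentile_range)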

  7. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in reliability definition complies with normal distribution, the conjugate prior of its distributing parameters (μ, h) is of normal-gamma distribution. With the help of maximum entropy and the moments-equivalence principles, the subjective information of the parameter and the sampling data of its independent variables are transformed to a Bayesian prior of (μ,h). The desired estimates are obtained from either the prior or the posterior which is formed by combining the prior and sampling data. Computing methods are described and examples are presented to give demonstrations
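
    A minimal sketch of a normal-gamma conjugate update is given below, assuming the usual (mu0, kappa0, alpha0, beta0) hyperparameterization with h the precision; the hyperparameter names and the sample values are illustrative, not taken from the paper.

        # Hedged sketch of a normal-gamma conjugate update for (mu, h), h = precision.
        import numpy as np

        def normal_gamma_update(x, mu0, kappa0, alpha0, beta0):
            x = np.asarray(x, dtype=float)
            n, xbar = x.size, x.mean()
            ss = np.sum((x - xbar) ** 2)
            kappa_n = kappa0 + n
            mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
            alpha_n = alpha0 + n / 2.0
            beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
            return mu_n, kappa_n, alpha_n, beta_n

        # Posterior mean of mu and of the precision h (illustrative data):
        mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update([9.8, 10.1, 10.4], 10.0, 1.0, 2.0, 1.0)
        print(mu_n, alpha_n / beta_n)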

  8. The Case Method as a Form of Communication.

    Science.gov (United States)

    Kingsley, Lawrence

    1982-01-01

    Questions the wisdom of obscurantism as a basis for case writing. Contends that in its present state the case method, for most students, is an inefficient way of learning. Calls for a consensus that cases should be as well-written as other forms of scholarship. (PD)

  9. Standard test method for splitting tensile strength for brittle nuclear waste forms

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1989-01-01

    1.1 This test method is used to measure the static splitting tensile strength of cylindrical specimens of brittle nuclear waste forms. It provides splitting tensile-strength data that can be used to compare the strength of waste forms when tests are done on one size of specimen. 1.2 The test method is applicable to glass, ceramic, and concrete waste forms that are sufficiently homogeneous (Note 1) but not to coated-particle, metal-matrix, bituminous, or plastic waste forms, or concretes with large-scale heterogeneities. Cementitious waste forms with heterogeneities >1 to 2 mm and 5 mm can be tested using this procedure provided the specimen size is increased from the reference size of 12.7 mm diameter by 6 mm length, to 51 mm diameter by 100 mm length, as recommended in Test Method C 496 and Practice C 192. Note 1—Generally, the specimen structural or microstructural heterogeneities must be less than about one-tenth the diameter of the specimen. 1.3 This test method can be used as a quality control chec...

  10. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests; SAN normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
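
    The sketch below is a rough, assumed reading of the subgroup-adjusted idea (not the published SAN algorithm): within each age/gender subgroup, a source site's values are standardized and rescaled to the reference site's subgroup mean and standard deviation. Column names and the reference-site choice are hypothetical.

        # Assumed SAN-like normalization sketch; not the published implementation.
        import pandas as pd

        def san_like_normalize(df, value_col, site_col, ref_site, group_cols):
            """Rescale each site's subgroup distribution to the reference site's mean/SD."""
            ref = (df.loc[df[site_col] == ref_site]
                     .groupby(group_cols)[value_col]
                     .agg(ref_mean='mean', ref_std='std'))
            merged = df.join(ref, on=group_cols)
            grp = merged.groupby([site_col] + group_cols)[value_col]
            z = (merged[value_col] - grp.transform('mean')) / grp.transform('std')
            return z * merged['ref_std'] + merged['ref_mean']

        # Hypothetical usage: san_like_normalize(lab, 'creatinine', 'site', 'A', ['age_band', 'gender'])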

  11. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  12. Emission computer tomographic orthopan display of the jaws - method and normal values

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.; Biersack, H.J.; Wahl, G.

    1990-01-01

    A tomoscintigraphic method is described to create orthopan-like projections of the jaws from SPECT bone scans using cylinder projection. On the basis of this projection a numerical analysis of the dental regions is performed in the same computer code. For each dental region the activity relative to the contralateral region and relative to the average activity of the corresponding jaw is calculated. Using this method, a set of normal activity relations has been established by investigation of 24 patients. (orig.) [de

  13. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
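
    For the log-normal combination idea, a hedged Monte Carlo sketch is shown below; the package activities and geometric standard deviations are invented numbers used only to illustrate how a total-activity uncertainty interval could be propagated.

        # Illustrative only: propagating several log-normal activity uncertainties to a total.
        import numpy as np

        rng = np.random.default_rng(1)
        # Each package: best-estimate activity (Bq) and a geometric standard deviation (assumed values).
        best_estimate = np.array([5.0e6, 1.2e7, 8.0e5])
        geo_sd = np.array([2.0, 1.5, 3.0])

        samples = rng.lognormal(mean=np.log(best_estimate), sigma=np.log(geo_sd), size=(100000, 3))
        total = samples.sum(axis=1)                       # total activity per Monte Carlo draw

        lo, hi = np.percentile(total, [2.5, 97.5])
        print(f"total activity 95% interval: {lo:.3e} - {hi:.3e} Bq")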

  14. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1972-01-01

    1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this stan...

  15. Possibilities of Particle Finite Element Methods in Industrial Forming Processes

    Science.gov (United States)

    Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.

    2007-04-01

    The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and new boundaries generation. The description of the most distinguishing aspects of the PFEM, and its application to simulation of representative forming processes, illustrate the proposed methodology.

  16. Imaging the corpus callosum, septum pellucidum and fornix in children: normal anatomy and variations of normality

    International Nuclear Information System (INIS)

    Griffiths, Paul D.; Batty, Ruth; Connolly, Dan J.A.; Reeves, Michael J.

    2009-01-01

    The midline structures of the supra-tentorial brain are important landmarks for judging if the brain has formed correctly. In this article, we consider the normal appearances of the corpus callosum, septum pellucidum and fornix as shown on MR imaging in normal and near-normal states. (orig.)

  17. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    Science.gov (United States)

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of wave and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and the environment noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationship between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. At the end, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments, comparing different fault severities and different turbulence intensities, show that the proposed method is robust against turbulence. In comparison with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Detection of a normal zone in the MFTF magnets

    International Nuclear Information System (INIS)

    Owen, E.W.

    1979-01-01

    A method is described for the electrical detection of a normal zone in inductively coupled superconducting coils. Measurements are made with two kinds of bridges, mutual inductance bridges and self-inductance bridges. The bridge outputs are combined with other measured voltages to form a detector that can be realized with either analog circuits or a computer algorithm. The detection of a normal zone in a pair of coupled coils, each with taps, is discussed in detail. It is also shown that the method applies to a pair of coils when one has no taps and to a pair when one coil is superconducting and the other is not. The method is extended, in principle, to a number of coils. A description is given of a technique for balancing the bridges at currents near the operating currents of the coils.

  19. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  20. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  1. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem | K - λM | = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕ^T M ϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M, that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, and without needing to obtain either all of the modes or for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but which has in turn interesting theoretical implications.
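
    The point that mass-normalized modes come "for free" can be checked numerically: scipy's generalized symmetric eigensolver already returns modes with phi^T M phi = I, and the residue of the resolvent (K - lambda*M)^(-1) near an eigenvalue recovers the outer product of the corresponding normalized mode. The small sketch below uses an arbitrary 3x3 example, not anything from the paper.

        # Illustrative 3-DOF check: eigh(K, M) yields mass-normalized modes, and the resolvent
        # residue near an eigenvalue reproduces phi_i phi_i^T.
        import numpy as np
        from scipy.linalg import eigh

        K = np.array([[4.0, -2.0, 0.0], [-2.0, 4.0, -2.0], [0.0, -2.0, 4.0]])
        M = np.diag([2.0, 1.0, 3.0])

        lam, phi = eigh(K, M)                          # columns satisfy phi.T @ M @ phi = I
        print(np.allclose(phi.T @ M @ phi, np.eye(3)))

        i, eps = 1, 1e-6
        resolvent = np.linalg.inv(K - (lam[i] + eps) * M)
        print(np.allclose(-eps * resolvent, np.outer(phi[:, i], phi[:, i]), atol=1e-4))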

  2. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
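
    The MSP statistic itself is not reproduced here; as a hedged illustration of composite-normality testing, the sketch below contrasts the Jarque-Bera test with a Lilliefors-type ECDF test whose p-value is obtained by a small parametric bootstrap (sample and bootstrap sizes are arbitrary).

        # Illustrative comparison of two composite-normality checks (not the paper's MSP test).
        import numpy as np
        from scipy import stats

        x = np.random.default_rng(0).gamma(shape=3.0, size=300)    # asymmetric alternative

        jb_stat, jb_p = stats.jarque_bera(x)
        print("Jarque-Bera p-value:", jb_p)

        def lilliefors_boot(x, n_boot=2000, rng=np.random.default_rng(1)):
            # KS statistic with estimated parameters; p-value via parametric bootstrap because
            # the plain KS p-value is not valid when parameters are estimated from the sample.
            d_obs = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1))).statistic
            d_boot = np.empty(n_boot)
            for b in range(n_boot):
                y = rng.normal(x.mean(), x.std(ddof=1), size=x.size)
                d_boot[b] = stats.kstest(y, 'norm', args=(y.mean(), y.std(ddof=1))).statistic
            return d_obs, np.mean(d_boot >= d_obs)

        print("Lilliefors-type (bootstrap) p-value:", lilliefors_boot(x)[1])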

  3. Electrodynamics, Differential Forms and the Method of Images

    Science.gov (United States)

    Low, Robert J.

    2011-01-01

    This paper gives a brief description of how Maxwell's equations are expressed in the language of differential forms and use this to provide an elegant demonstration of how the method of images (well known in electrostatics) also works for electrodynamics in the presence of an infinite plane conducting boundary. The paper should be accessible to an…

  4. Different methods of measuring ADC values in normal human brain

    International Nuclear Information System (INIS)

    Wei Youping; Sheng Junkang; Zhang Caiyuan

    2009-01-01

    Objective: To investigate a better method of measuring ADC values of the normal brain, and provide a reference for further research. Methods: MR imaging of twenty healthy people was reviewed. All of them underwent routine MRI scans and echo-planar diffusion-weighted imaging (DWI), and ADC maps were reconstructed on a workstation. Six regions of interest (ROI) were selected for each subject, and the mean ADC values were obtained for each position on the DWI and ADC maps, respectively. Results: In the hypothalamus, the ADC_M, ADC_P and ADC_S values calculated on the anisotropic DWI map showed no significant difference (P>0.05), whereas in the frontal white matter and the hindlimb of the internal capsule there was a significant difference; the ADC_ave value differed significantly from the direct measurement on the anisotropic (isotropic) ADC map (P<0.001). Conclusion: Diffusion of water in the frontal white matter and internal capsule is anisotropic, but it is isotropic in the hypothalamus; the different quantitative methods of measuring the four ADC values differ significantly, but ADC values calculated through the DWI map are more accurate, and quantitative diffusion studies of brain tissue should also consider the diffusion measurement method. (authors)

  5. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, there are no inverse matrix steps required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element FE(3D) method. The comparison between the NTM method and the finite element method results shows that the modal percentage deviation is increased when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.

  6. Methods of forming and realization of assortment policy of retail business enterprises

    Directory of Open Access Journals (Sweden)

    Kudenko Kiril

    2016-07-01

    This article systematises the methods of forming and realising the assortment policy of retail business enterprises. Recommendations are developed concerning the priority use of particular methods of forming and realising assortment policy for different purposes, taking into account their content, advantages and disadvantages.

  7. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
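
    A hedged sketch of the max-norm idea is given below using scikit-learn (the paper's own code is in Matlab): the columns of the basis matrix W are divided by their column maxima, H is rescaled to compensate, and samples are assigned to the dominant component. The toy data and component count are illustrative.

        # Illustrative NMF with max-norm rescaling of the factor matrices.
        import numpy as np
        from sklearn.decomposition import NMF

        X = np.abs(np.random.default_rng(0).normal(size=(50, 200)))   # samples x genes (toy data)
        model = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
        W = model.fit_transform(X)            # samples x components
        H = model.components_                 # components x genes

        scale = W.max(axis=0)                 # max norm of each column of W
        W_norm = W / scale                    # normalize columns of W ...
        H_norm = H * scale[:, None]           # ... and compensate in H so W @ H is unchanged

        cluster = W_norm.argmax(axis=1)       # cluster assignment per sample
        print(np.allclose(W @ H, W_norm @ H_norm), cluster[:10])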

  8. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta_vap H(T_b) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta_vap H(T_b) is 1.16, which shows that the present method demonstrates significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with the conventional group methods.
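
    Ignoring the positional (vector-space) refinement that distinguishes the authors' method, the additive core of any group-contribution estimate looks like the sketch below; the group parameters and the 1-propanol decomposition are made up for illustration only.

        # Illustrative group-contribution sum; parameter values are hypothetical, not the published ones.
        GROUP_PARAMS_KJ_MOL = {"CH3-": 2.9, "-CH2-": 2.2, "-OH": 20.0}   # made-up contributions

        def delta_vap_h_tb(group_counts):
            """Sum n_i * contribution_i over all groups in the molecule (kJ/mol)."""
            return sum(n * GROUP_PARAMS_KJ_MOL[g] for g, n in group_counts.items())

        # 1-propanol decomposed as CH3- + 2 x -CH2- + -OH (purely illustrative):
        print(delta_vap_h_tb({"CH3-": 1, "-CH2-": 2, "-OH": 1}))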

  9. Delivery Device and Method for Forming the Same

    Science.gov (United States)

    Ma, Peter X. (Inventor); Liu, Xiaohua (Inventor); McCauley, Laurie (Inventor)

    2014-01-01

    A delivery device includes a hollow container, and a plurality of biodegradable and/or erodible polymeric layers established in the container. A layer including a predetermined substance is established between each of the plurality of polymeric layers, whereby degradation of the polymeric layer and release of the predetermined substance occur intermittently. Methods for forming the device are also disclosed herein.

  10. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  11. Carbon nanotubes and methods of forming same at low temperature

    Science.gov (United States)

    Biris, Alexandru S.; Dervishi, Enkeleda

    2017-05-02

    In one aspect of the invention, a method for growth of carbon nanotubes includes providing a graphitic composite, decorating the graphitic composite with metal nanostructures to form graphene-contained powders, and heating the graphene-contained powders at a target temperature to form the carbon nanotubes in an argon/hydrogen environment that is devoid of a hydrocarbon source. In one embodiment, the target temperature can be as low as about 150°C (±5°C).

  12. Form gene clustering method about pan-ethnic-group products based on emotional semantic

    Science.gov (United States)

    Chen, Dengkai; Ding, Jingjing; Gao, Minzhuo; Ma, Danping; Liu, Donghui

    2016-09-01

    The use of pan-ethnic-group product form knowledge primarily depends on a designer's subjective experience without user participation. The majority of studies primarily focus on the detection of the perceptual demands of consumers from the target product category. A pan-ethnic-group product form gene clustering method based on emotional semantics is constructed. Consumers' perceptual images of the pan-ethnic-group products are obtained by means of product form gene extraction and coding and computer-aided product form clustering technology. A case of form gene clustering for typical pan-ethnic-group products is investigated, which indicates that the method is feasible. This paper opens up a new direction for the future development of product form design, which improves the agility of the product design process in the era of Industry 4.0.

  13. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For a simultaneous adjustment of a large geodetic triangulation system, a semiiterative technique is modified and used successfully. In this semiiterative technique, known as the conjugate gradient (CG) method, original observation equations are used, and thus the explicit formation of normal equations is avoided, 'huge' computer storage space being saved in the case of triangulation systems. This method is suitable even for very poorly conditioned systems where solution is obtained only after more iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was done that also considered constraint equations with observation equations. It was programmed and tested on systems as small as two unknowns and three equations up to those as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
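
    A compact CGLS-style sketch of the same idea is shown below: conjugate gradients are applied to min ||Ax - b|| using only products with A and its transpose, so the normal-equation matrix A^T A is never formed explicitly. The random test system merely mimics the 1397-equation, 804-unknown size quoted above; it is not the geodetic data.

        # Minimal CGLS sketch (illustrative, not the 1974 code).
        import numpy as np

        def cgls(A, b, n_iter=100, tol=1e-10):
            x = np.zeros(A.shape[1])
            r = b - A @ x
            s = A.T @ r                      # gradient of the least-squares objective
            p, norm_s_old = s.copy(), s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = norm_s_old / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                norm_s_new = s @ s
                if np.sqrt(norm_s_new) < tol:
                    break
                p = s + (norm_s_new / norm_s_old) * p
                norm_s_old = norm_s_new
            return x

        A = np.random.default_rng(0).normal(size=(1397, 804))    # observation equations (random stand-in)
        b = np.random.default_rng(1).normal(size=1397)
        x = cgls(A, b)
        print(np.linalg.norm(A.T @ (A @ x - b)))                 # residual gradient should be small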

  14. Method of forming components for a high-temperature secondary electrochemical cell

    Science.gov (United States)

    Mrazek, Franklin C.; Battles, James E.

    1983-01-01

    A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum, and an electrically insulating porous separator between said electrodes. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing same. Electrodes and separators can be thus formed.

  15. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management field. We derive a full set of numerical characteristics and variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approach in the literature, which only uses centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  16. Simultaneous sound velocity and thickness measurement by the ultrasonic pitch-catch method for corrosion-layer-forming polymeric materials.

    Science.gov (United States)

    Kusano, Masahiro; Takizawa, Shota; Sakai, Tetsuya; Arao, Yoshihiko; Kubouchi, Masatoshi

    2018-01-01

    Since thermosetting resins have excellent resistance to chemicals, fiber reinforced plastics composed of such resins and reinforcement fibers are widely used as construction materials for equipment in chemical plants. Such equipment is usually used for several decades under severe corrosive conditions so that failure due to degradation may result. One of the degradation behaviors in thermosetting resins under chemical solutions is "corrosion-layer-forming" degradation. In this type of degradation, surface resins in contact with a solution corrode, and some of them remain as a corrosion layer on the pristine part. It is difficult to precisely measure the thickness of the pristine part of such degradation type materials by conventional pulse-echo ultrasonic testing, because the sound velocity depends on the degree of corrosion of the polymeric material. In addition, the ultrasonic reflection interface between the pristine part and the corrosion layer is obscure. Thus, we propose a pitch-catch method using a pair of normal and angle probes to measure four parameters: the thicknesses of the pristine part and the corrosion layer, and their respective sound velocities. The validity of the proposed method was confirmed by measuring a two-layer sample and a sample including corroded parts. The results demonstrate that the pitch-catch method can successfully measure the four parameters and evaluate the residual thickness of the pristine part in the corrosion-layer-forming sample. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Normal Values of Tissue-Muscle Perfusion Indexes of Lower Limbs Obtained with a Scintigraphic Method.

    Science.gov (United States)

    Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti

    2017-09-01

    Introduction Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining the muscle perfusion. Materials and method The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease and all had normal findings on Doppler ultrasonography and pedo-brachial index of lower extremity (PBI). 99mTc-MIBI tissue muscle perfusion scintigraphy of lower limbs evaluates tissue perfusion in resting condition "rest study" and after workload "stress study", through quantitative parameters: Inter-extremity index (for both studies), left thigh/right thigh (LT/RT) left calf/right calf (LC/RC) and perfusion reserve (PR) for both thighs and calves. Results In our investigated group we assessed the normal values of quantitative parameters of perfusion indexes. Indexes ranged for LT/RT in rest study 0.91-1.05, in stress study 0.92-1.04. LC/RC in rest 0.93-1.07 and in stress study 0.93-1.09. The examinees older than 50 years had insignificantly lower perfusion reserve of these parameters compared with those younger than 50, LC (p=0.98), and RC (p=0.6). Conclusion This non-invasive scintigraphic method allows in individuals without peripheral artery disease to determine the range of normal values of muscle perfusion at rest and stress condition and to clinically implement them in evaluation of patients with peripheral artery disease for differentiating patients with normal from those with impaired lower limbs circulation.

  18. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  19. Application of in situ current normalized PIGE method for determination of total boron and its isotopic composition

    International Nuclear Information System (INIS)

    Chhillar, Sumit; Acharya, R.; Sodaye, S.; Pujari, P.K.

    2014-01-01

    A particle induced gamma-ray emission (PIGE) method using a proton beam has been standardized for the determination of the isotopic composition of natural boron and enriched boron samples. Target pellets of the boron standard and samples were prepared in a cellulose matrix. The prompt gamma rays of 429 keV, 718 keV and 2125 keV were measured from the 10B(p,αγ)7Be, 10B(p,p'γ)10B and 11B(p,p'γ)11B nuclear reactions, respectively. For normalizing the beam current variations, an in situ current normalization method was used. Validation of the method was carried out using synthetic samples of boron carbide, borax, borazine and lithium metaborate in a cellulose matrix. (author)

  20. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic...... image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance...... for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  1. METHODS OF FORMING THE STRUCTURE OF KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    Tatyana A. Snegiryova

    2015-01-01

    The aim of the study is to describe a method of forming the structure of knowledge of students on the basis of an integrated approach (expert, taxonomy and thesaurus) and to present the results of its use in the study of medical and biological physics at the Izhevsk State Medical Academy. Methods. The methods used in the work involve: an integrated approach that includes the group expert method developed by V. S. Cherepanov; taxonomy and thesaurus approaches when creating a model of the taxonomic structure of knowledge, as well as models of the formation of the knowledge structure. Results. The algorithm, stages and procedures of forming the knowledge structure of trainees are considered in detail; a model of the given process is created; a technology for selecting the content of teaching material within the fixed time allotted to studying a concrete discipline is shown. Scientific novelty and practical significance. The advantage of the proposed method and model of forming students' knowledge structure consists in their flexibility: with suitable adaptation they can be used for training in any discipline, regardless of its specificity or the educational institution. Observance of all stages of the presented technology for selecting the content of teaching material on the basis of expert estimation will promote a substantial increase in the quality of training and make it possible to develop a unified method uniting the various points of view of teachers on the formation of trainees' knowledge.

  2. Scintigraphy for the detection of myocardial damage in the indeterminate form of Chagas disease

    International Nuclear Information System (INIS)

    Pedroso, Enio Roberto Pietra; Rezende, Nilton Alves de

    2010-01-01

    Background: non-invasive cardiological methods have been used for the identification of myocardial damage in Chagas disease. Objective: to verify whether the rest/stress myocardial perfusion scintigraphy is able to identify early myocardial damage in the indeterminate form of Chagas disease. Methods: eighteen patients with the indeterminate form of Chagas Disease and the same number of normal controls, paired by sex and age, underwent rest/stress myocardial scintigraphy using sestamibi-99mTc, aiming at detecting early cardiac damage. Results: the results did not show perfusion or ventricular function defects in patients at the indeterminate phase of Chagas disease and in the normal controls, except for a patient who presented signs of ventricular dysfunction in the myocardial perfusion scintigraphy with electrocardiographic gating. Conclusion: the results of this study, considering the small sample size, showed that the rest/stress myocardial scintigraphy using sestamibi-99mTc is not an effective method to detect early myocardial alterations in the indeterminate form of Chagas disease (author)

  3. Scintigraphy for the detection of myocardial damage in the indeterminate form of Chagas disease

    Energy Technology Data Exchange (ETDEWEB)

    Pedroso, Enio Roberto Pietra; Rezende, Nilton Alves de, E-mail: narezende@terra.com.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Faculdade de Medicina; Abuhid, Ivana Moura [Instituto de Medicina Nuclear e Diagnostico Molecular, Belo Horizonte, MG (Brazil)

    2010-07-15

    Background: non-invasive cardiological methods have been used for the identification of myocardial damage in Chagas disease. Objective: to verify whether the rest/stress myocardial perfusion scintigraphy is able to identify early myocardial damage in the indeterminate form of Chagas disease. Methods: eighteen patients with the indeterminate form of Chagas Disease and the same number of normal controls, paired by sex and age, underwent rest/stress myocardial scintigraphy using sestamibi-99mTc, aiming at detecting early cardiac damage. Results: the results did not show perfusion or ventricular function defects in patients at the indeterminate phase of Chagas disease and in the normal controls, except for a patient who presented signs of ventricular dysfunction in the myocardial perfusion scintigraphy with electrocardiographic gating. Conclusion: the results of this study, considering the small sample size, showed that the rest/stress myocardial scintigraphy using sestamibi-99mTc is not an effective method to detect early myocardial alterations in the indeterminate form of Chagas disease (author)

  4. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: Considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.) [de
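
    The standard building blocks named in the abstract can be sketched as follows (parameter values are illustrative, not the paper's fits): the NTD (2 Gy-equivalent dose) from the linear-quadratic model, the Kutcher-Burman effective-volume reduction of a DVH, and the Lyman normal tissue complication probability.

        # Hedged sketch of NTD, effective volume and Lyman NTCP; all numbers are illustrative.
        import numpy as np
        from scipy.stats import norm

        def ntd(total_dose, dose_per_fraction, alpha_beta):
            """Normalized total dose (2 Gy-equivalent) from the linear-quadratic model."""
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

        def effective_volume(doses, volumes, n):
            """Kutcher-Burman reduction of a DVH to an effective volume at the maximum dose."""
            doses, volumes = np.asarray(doses, float), np.asarray(volumes, float)
            return np.sum(volumes * (doses / doses.max()) ** (1.0 / n))

        def lyman_ntcp(dose, v_eff, td50_1, m, n):
            td50_v = td50_1 / v_eff ** n               # volume dependence of the tolerance dose
            return norm.cdf((dose - td50_v) / (m * td50_v))

        # 20 fractions of 1.5 Gy to part of the organ, alpha/beta = 4.1 Gy (illustrative parameters).
        doses = np.array([ntd(30.0, 1.5, 4.1), ntd(15.0, 1.5, 4.1)])
        v_eff = effective_volume(doses, volumes=[0.4, 0.6], n=0.7)
        print(lyman_ntcp(doses.max(), v_eff, td50_1=28.0, m=0.1, n=0.7))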

  5. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming Analysis, Simulation and Engineering Applications

    CERN Document Server

    Hu, Ping; Liu, Li-zhong; Zhu, Yi-guo

    2013-01-01

    Over the last 15 years, the application of innovative steel concepts in the automotive industry has increased steadily. Numerical simulation technology of hot forming of high-strength steel allows engineers to modify the formability of hot forming steel metals and to optimize die design schemes. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming focuses on hot and cold forming theories, numerical methods, relative simulation and experiment techniques for high-strength steel forming and die design in the automobile industry. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming introduces the general theories of cold forming, then expands upon advanced hot forming theories and simulation methods, including: • the forming process, • constitutive equations, • hot boundary constraint treatment, and • hot forming equipment and experiments. Various calculation methods of cold and hot forming, based on the authors’ experience in commercial CAE software f...

  6. Manufacturing technology for practical Josephson voltage normals

    International Nuclear Information System (INIS)

    Kohlmann, Johannes; Kieler, Oliver

    2016-01-01

    In this contribution we present the manufacturing technology for the fabrication of integrated superconducting Josephson series circuits for voltage normals. First we summarize some foundations for Josephson voltage normals and sketch the concept and the setup of the circuits, before we describe the manufacturing technology for modern practical Josephson voltage normals.

  7. Perhitungan Iuran Normal Program Pensiun dengan Asumsi Suku Bunga Mengikuti Model Vasicek

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Labor has a very important role in national development. One way to optimize workers' productivity is to guarantee an income after retirement. Therefore the government and the private sector must have a program that can ensure the sustainability of this financial support. One option is a pension plan. The purpose of this study is to calculate the normal cost with the interest rate assumed to follow the Vasicek model, and to analyze the normal contribution of the pension program participants. The Vasicek model is used to better match actual conditions. The methods used in this research are the Projected Unit Credit method and the Entry Age Normal method. The data source of this research is lecturers of FMIPA Unud. In addition, secondary data are also used in the form of the interest rate of Bank Indonesia for the period January 2006-December 2015. The results of this study indicate that the older the age of the participant when starting the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. The normal cost with a constant interest rate is greater than the normal cost with the Vasicek interest rate; this occurs because the Vasicek model predicts interest rates between 4.8879% and 6.8384%, while the constant interest rate is only 4.25%. In addition, using a normal cost proportional to salary, it is found that the older the age of the participant, the greater the proportion of salary needed for the normal cost.
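
    As a hedged illustration of the interest-rate assumption, the sketch below simulates a Vasicek short-rate path dr = a(b - r)dt + sigma dW with invented parameters (only the 4.25% starting level echoes the abstract); discounting pension cash flows with such a path is left out.

        # Illustrative Vasicek short-rate simulation; parameters are assumptions, not the paper's fit.
        import numpy as np

        def vasicek_path(r0, a, b, sigma, dt, n_steps, rng):
            r = np.empty(n_steps + 1)
            r[0] = r0
            for t in range(n_steps):
                dw = rng.normal(scale=np.sqrt(dt))                 # Brownian increment
                r[t + 1] = r[t] + a * (b - r[t]) * dt + sigma * dw
            return r

        rng = np.random.default_rng(42)
        path = vasicek_path(r0=0.0425, a=0.5, b=0.06, sigma=0.01, dt=1/12, n_steps=120, rng=rng)
        print(path.min(), path.max())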

  8. Investigating the Effect of Normalization Norms in Flexible Manufacturing Sytem Selection Using Multi-Criteria Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2014-07-01

    The main objective of this paper is to assess the effect of different normalization norms within multi-criteria decision-making (MCDM) models. Three well accepted MCDM tools, namely, the preference ranking organization method for enrichment evaluation (PROMETHEE), grey relation analysis (GRA) and the technique for order preference by similarity to ideal solution (TOPSIS), are applied for solving a flexible manufacturing system (FMS) selection problem in a discrete manufacturing environment. Finally, by introducing different normalization norms into the decision algorithms, their effect on the FMS selection problem using these MCDM models is also studied.
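
    The sketch below illustrates, for TOPSIS only, how swapping the normalization norm can change the closeness scores; the decision matrix, weights and the benefit-criteria assumption are illustrative and unrelated to the paper's FMS data.

        # Illustrative TOPSIS with two normalization norms (all criteria treated as benefits).
        import numpy as np

        def normalize(X, norm="vector"):
            if norm == "vector":                       # Euclidean (vector) normalization
                return X / np.sqrt((X ** 2).sum(axis=0))
            if norm == "linear_max":                   # linear max normalization
                return X / X.max(axis=0)
            raise ValueError(norm)

        def topsis_scores(X, weights, norm="vector"):
            V = normalize(X, norm) * weights
            ideal, anti = V.max(axis=0), V.min(axis=0)
            d_plus = np.linalg.norm(V - ideal, axis=1)
            d_minus = np.linalg.norm(V - anti, axis=1)
            return d_minus / (d_plus + d_minus)        # closeness to the ideal solution

        X = np.array([[7.0, 80.0, 3.5], [9.0, 60.0, 4.0], [8.0, 70.0, 2.5]])   # hypothetical alternatives
        w = np.array([0.5, 0.3, 0.2])
        print(topsis_scores(X, w, "vector").round(3), topsis_scores(X, w, "linear_max").round(3))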

  9. Towards adapting a normal patient database for SPECT brain perfusion imaging

    International Nuclear Information System (INIS)

    Smith, N D; Soleimani, M; Mitchell, C N; Holmes, R B; Evans, M J; Cade, S C

    2012-01-01

    Single-photon emission computerized tomography (SPECT) is a tool which can be used to image perfusion in the brain. Clinicians can use such images to help diagnose dementias such as Alzheimer's disease. Due to the intrinsic stochasticity in the photon imaging system, some form of statistical comparison of an individual image with a 'normal' patient database gives a clinician additional confidence in interpreting the image. Due to the variations between SPECT camera systems, ideally a normal patient database is required for each individual system. However, cost or ethical considerations often prohibit the collection of such a database for each new camera system. Some method of adapting existing normal patient databases to new camera systems would be beneficial. This paper introduces a method which may be regarded as a 'first-pass' attempt based on 2-norm regularization and a codebook of discrete spatially stationary convolutional kernels. Some preliminary illustrative results are presented, together with discussion on limitations and possible improvements

  10. Method of forming a ceramic to ceramic joint

    Science.gov (United States)

    Cutler, Raymond Ashton; Hutchings, Kent Neal; Kleinlein, Brian Paul; Carolan, Michael Francis

    2010-04-13

    A method of joining at least two sintered bodies to form a composite structure, includes: providing a joint material between joining surfaces of first and second sintered bodies; applying pressure from 1 kPa to less than 5 MPa to provide an assembly; heating the assembly to a conforming temperature sufficient to allow the joint material to conform to the joining surfaces; and further heating the assembly to a joining temperature below a minimum sintering temperature of the first and second sintered bodies. The joint material includes organic component(s) and ceramic particles. The ceramic particles constitute 40-75 vol. % of the joint material, and include at least one element of the first and/or second sintered bodies. Composite structures produced by the method are also disclosed.

  11. Numerical form-finding method for large mesh reflectors with elastic rim trusses

    Science.gov (United States)

    Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli

    2018-06-01

    Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Due to the large aperture, light weight and high accuracy required of spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. Then, the form-finding method is embedded in an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the form-finding of the cable net with boundary conditions updated from the rim truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical rim truss.

  12. Method for forming permanent magnets with different polarities for use in microelectromechanical devices

    Science.gov (United States)

    Roesler, Alexander W [Tijeras, NM; Christenson, Todd R [Albuquerque, NM

    2007-04-24

    Methods are provided for forming a plurality of permanent magnets with two different north-south magnetic pole alignments for use in microelectromechanical (MEM) devices. These methods are based on initially magnetizing the permanent magnets all in the same direction, and then utilizing a combination of heating and a magnetic field to switch the polarity of a portion of the permanent magnets while not switching the remaining permanent magnets. The permanent magnets, in some instances, can all have the same rare-earth composition (e.g. NdFeB) or can be formed of two different rare-earth materials (e.g. NdFeB and SmCo). The methods can be used to form a plurality of permanent magnets side-by-side on or within a substrate with an alternating polarity, or to form a two-dimensional array of permanent magnets in which the polarity of every other row of the array is alternated.

  13. Effects of Different LiDAR Intensity Normalization Methods on Scotch Pine Forest Leaf Area Index Estimation

    Directory of Open Access Journals (Sweden)

    YOU Haotian

    2018-02-01

    Full Text Available The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process. Effective quantification and normalization of these effects is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR data were normalized for range, for angle of incidence, and for both range and angle of incidence based on the radar equation. Two metrics, canopy intensity sum and intensity ratio, were then extracted and used to estimate forest LAI, with the aim of quantifying the effects of intensity normalization on forest LAI estimation. It was found that range normalization improved the accuracy of forest LAI estimation, whereas normalization for the angle of incidence alone did not improve the accuracy and made the results worse. Although normalizing for both range and incidence angle improved the accuracy, the improvement was smaller than that obtained with range normalization alone. Meanwhile, the differences between LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metric, but relatively small for the intensity ratio metric. The results demonstrate that the effect of intensity normalization on forest LAI estimation depends on the affecting factor chosen, and its magnitude is closely related to the characteristics of the metrics used. Therefore, the appropriate normalization method should be chosen according to the characteristics of the metrics used in future research, which would avoid wasted cost and reduced estimation accuracy caused by introducing inappropriate affecting factors into intensity normalization.
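
    A minimal sketch of the kind of radar-equation-based correction described above is shown below: raw intensities are rescaled by the squared range ratio and, optionally, by the cosine of the incidence angle. The reference range and the sample values are hypothetical.

```python
import numpy as np

def normalize_intensity(intensity, range_m, incidence_rad=None, ref_range_m=1000.0):
    """Range (and optionally incidence-angle) normalization of LiDAR intensity.

    Follows the radar-equation idea that received power falls off with the
    square of range and with the cosine of the incidence angle.
    """
    corrected = intensity * (range_m / ref_range_m) ** 2
    if incidence_rad is not None:
        corrected = corrected / np.cos(incidence_rad)
    return corrected

# Hypothetical returns: raw intensity, range in metres, incidence angle in radians.
raw = np.array([120.0, 95.0, 60.0])
ranges = np.array([900.0, 1100.0, 1300.0])
angles = np.radians([5.0, 15.0, 25.0])
print(normalize_intensity(raw, ranges))            # range-only normalization
print(normalize_intensity(raw, ranges, angles))    # range + incidence angle
```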

  14. A method for named entity normalization in biomedical articles: application to diseases and plants.

    Science.gov (United States)

    Cho, Hyejin; Choi, Wonjun; Lee, Hyunju

    2017-10-13

    In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data and showed that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. The proposed approach shows robust
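
    As a rough illustration of embedding-based normalization (not the authors' actual model), the sketch below assigns a recognized mention to the dictionary concept whose word vector is most similar under cosine similarity. The toy vectors stand in for embeddings trained on PubMed abstracts.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def normalize_mention(mention_vec, concept_vectors):
    """Return the standard concept whose embedding is closest to the mention."""
    return max(concept_vectors.items(), key=lambda kv: cosine(mention_vec, kv[1]))[0]

# Toy embeddings standing in for vectors learned from unlabeled abstracts.
concepts = {
    "Alzheimer disease": np.array([0.9, 0.1, 0.0]),
    "Parkinson disease": np.array([0.1, 0.9, 0.0]),
}
mention = np.array([0.85, 0.20, 0.05])   # e.g. the recognized string "Alzheimer's"
print(normalize_mention(mention, concepts))
```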

  15. Worthwhile optical method for free-form mirrors qualification

    Science.gov (United States)

    Sironi, G.; Canestrari, R.; Toso, G.; Pareschi, G.

    2013-09-01

    We present an optical method for free-form mirror qualification developed by the Italian National Institute for Astrophysics (INAF) in the context of the ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) Project, which includes, among its items, the design, development and installation of a dual-mirror telescope prototype for the Cherenkov Telescope Array (CTA) observatory. The primary mirror panels of the telescope prototype are free-form concave mirrors with a shape error accuracy requirement of a few microns. The developed technique is based on the synergy between a Ronchi-like optical test performed on the reflecting surface and the image that a perfect optic would generate in the same configuration, obtained by means of the proprietary TraceIT ray-tracing code. This deflectometry test allows the reconstruction of the slope error map, which the TraceIT code can process to evaluate the measured mirror's optical performance at the telescope focus. The advantage of the proposed method is that it replaces the use of a 3D coordinate measuring machine, reducing production time and costs and offering the possibility of evaluating the mirror image quality at the focus on site. In this paper we report the measuring concept and compare the obtained results to similar ones obtained by processing the shape error acquired with a 3D coordinate measuring machine.

  16. Visual attention and flexible normalization pools

    Science.gov (United States)

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
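
    The sketch below shows the generic form of divisive normalization with an optional attentional gain applied before pooling, in the spirit of the model described above; the pool membership matrix, gain values and the simple quadratic drives are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def divisive_normalization(drives, pool_mask, sigma=1.0, attention_gain=None):
    """Normalize each unit's drive by the pooled activity of the units in its pool.

    drives: raw (e.g. squared filter) responses, one per unit
    pool_mask: pool_mask[i, j] is True if unit j is in unit i's normalization pool
    attention_gain: optional per-unit multiplicative gain applied before pooling
    """
    d = drives if attention_gain is None else drives * attention_gain
    pooled = pool_mask.astype(float) @ d          # sum over each unit's pool
    return d / (sigma ** 2 + pooled)

drives = np.array([4.0, 1.0, 1.0])                # center, near surround, far surround
pool = np.array([[True, True, False],             # surround pooled only if "dependent"
                 [True, True, False],
                 [False, False, True]])
gain = np.array([2.0, 1.0, 1.0])                  # attention accentuates the center unit
print(divisive_normalization(drives, pool))
print(divisive_normalization(drives, pool, attention_gain=gain))
```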

  17. Composite materials and bodies including silicon carbide and titanium diboride and methods of forming same

    Science.gov (United States)

    Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek

    2013-01-22

    Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.

  18. 1H MR spectroscopy of the normal human brains : comparison of automated prescan method with manual method

    International Nuclear Information System (INIS)

    Lim, Myung Kwan; Suh, Chang Hae; Cho, Young Kook; Kim, Jin Hee

    1998-01-01

    The purpose of this paper is to evaluate regional differences in relative metabolite ratios in the normal human brain by 1H MR spectroscopy (MRS), and to compare the spectral quality obtained by the automated prescan method (PROBE) and the manual method. A total of 61 reliable spectra were obtained by PROBE (28/34 = 82% success) and by the manual method (33/33 = 100% success). Regional differences in the spectral patterns of the five regions were clearly demonstrated by both PROBE and the manual method. For prescanning, the manual method took slightly longer than PROBE (3-5 min and 2 min, respectively). There were no significant differences in spectral patterns or relative metabolite ratios between the two methods. However, auto-prescan by PROBE seemed to be very vulnerable to slight patient movement, and in three cases an acceptable spectrum was thus not obtained. PROBE is a highly practical and reliable method for single-voxel 1H MRS of the human brain; the two prescanning methods do not result in significantly different spectral patterns or relative metabolite ratios. PROBE, however, is vulnerable to slight patient movement, and if the success rate for obtaining quality spectra is to be increased, regardless of the patient's condition and the region of the brain, it must be used in conjunction with the manual method. (author). 23 refs., 2 tabs., 3 figs

  19. Normalization in Lie algebras via mould calculus and applications

    Science.gov (United States)

    Paul, Thierry; Sauzin, David

    2017-11-01

    We establish Écalle's mould calculus in an abstract Lie-theoretic setting and use it to solve a normalization problem, which covers several formal normal form problems in the theory of dynamical systems. The mould formalism allows us to reduce the Lie-theoretic problem to a mould equation, the solutions of which are remarkably explicit and can be fully described by means of a gauge transformation group. The dynamical applications include the construction of Poincaré-Dulac formal normal forms for a vector field around an equilibrium point, a formal infinite-order multiphase averaging procedure for vector fields with fast angular variables (Hamiltonian or not), or the construction of Birkhoff normal forms both in classical and quantum situations. As a by-product we obtain, in the case of harmonic oscillators, the convergence of the quantum Birkhoff form to the classical one, without any Diophantine hypothesis on the frequencies of the unperturbed Hamiltonians.

  20. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  1. Numerical Methods for Plate Forming by Line Heating

    DEFF Research Database (Denmark)

    Clausen, Henrik Bisgaard

    2000-01-01

    Line heating is the process of forming originally flat plates into a desired shape by means of heat treatment. Parameter studies are carried out on a finite element model to provide knowledge of how the process behaves with varying heating conditions. For verification purposes, experiments are carried out; one set of experiments investigates the actual heat flux distribution from a gas torch and another verifies the validity of the FE calculations. Finally, a method to predict the heating pattern is described.

  2. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  3. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.

  4. Basic sculpturing methods as innovatory incentives in the development of aesthetic form concepts

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2009-01-01

    Many project teams grapple for a long time with developing ideas for the form concept because they lack methods to solve the many form problems they face in sketching. They also have difficulty in translating the project requirements for product proportions or volumes into an aesthetic form...

  5. Forms and Methods of Agricultural Sector Innovative Activity Improvement

    Directory of Open Access Journals (Sweden)

    Aisha S. Ablyaeva

    2013-01-01

    Full Text Available The article is focused on basic forms and methods to improve the efficiency of innovative activity in the agricultural sector of Ukraine. It was determined that the development of agriculture in Ukraine is affected by a number of factors that must be considered to design innovative models of entrepreneurship development and ways to improve the efficiency of innovative entrepreneurship activity.

  6. Method of forming composite fiber blends

    Science.gov (United States)

    McMahon, Paul E. (Inventor); Chung, Tai-Shung (Inventor); Ying, Lincoln (Inventor)

    1989-01-01

    The instant invention involves a process used in preparing fibrous tows which may be formed into polymeric plastic composites. The process involves the steps of (a) forming a tow of strong filamentary materials; (b) forming a thermoplastic polymeric fiber; (c) intermixing the two tows; and (d) withdrawing the intermixed tow for further use.

  7. Method for forming thermally stable nanoparticles on supports

    Science.gov (United States)

    Roldan Cuenya, Beatriz; Naitabdi, Ahmed R.; Behafarid, Farzad

    2013-08-20

    An inverse micelle-based method for forming nanoparticles on supports includes dissolving a polymeric material in a solvent to provide a micelle solution. A nanoparticle source is dissolved in the micelle solution. A plurality of micelles having a nanoparticle in their core and an outer polymeric coating layer are formed in the micelle solution. The micelles are applied to a support. The polymeric coating layer is then removed from the micelles to expose the nanoparticles. A supported catalyst includes a nanocrystalline powder, thin film, or single crystal support. Metal nanoparticles having a median size from 0.5 nm to 25 nm and a size distribution with a standard deviation ≤ 0.1 of their median size are on or embedded in the support. The plurality of metal nanoparticles are dispersed and in a periodic arrangement. The metal nanoparticles maintain their periodic arrangement and size distribution following heat treatments of at least 1,000 °C.

  8. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Qu, H; Yu, N; Stephans, K; Xia, P [Cleveland Clinic, Cleveland, OH (United States)

    2014-06-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations, one from two phases of quiet breathing and one from two phases of extreme breathing. For quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction at a coronal image with the maximum lung cross section. The ratio of cumulative ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns, and thus different tidal volumes, can be removed.
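
    A simplified sketch of the regional normalization idea (expressing cumulative ventilation of the top third of the lung relative to the middle third) is given below. The toy volumes, the choice of axis and the way the thirds are delimited are illustrative assumptions rather than the clinical workflow.

```python
import numpy as np

def regional_ratio(ventilation_map, lung_mask):
    """Ratio of cumulative ventilation in the top third to the middle third of the
    lung, with slices taken along the longitudinal (first) axis."""
    z = np.where(lung_mask.any(axis=(1, 2)))[0]    # slices that contain lung
    thirds = np.array_split(z, 3)

    def cumulative(idx):
        return ventilation_map[idx][lung_mask[idx]].sum()

    return cumulative(thirds[0]) / cumulative(thirds[1])

# Toy volumes standing in for a CT-derived ventilation map and a lung mask.
rng = np.random.default_rng(0)
vent = rng.random((30, 16, 16))
mask = np.ones((30, 16, 16), dtype=bool)
print(regional_ratio(vent, mask))
```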

  9. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    International Nuclear Information System (INIS)

    Qu, H; Yu, N; Stephans, K; Xia, P

    2014-01-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations, one from two phases of quiet breathing and one from two phases of extreme breathing. For quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction at a coronal image with the maximum lung cross section. The ratio of cumulative ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns, and thus different tidal volumes, can be removed.

  10. Capacitor assembly and related method of forming

    Science.gov (United States)

    Zhang, Lili; Tan, Daniel Qi; Sullivan, Jeffrey S.

    2017-12-19

    A capacitor assembly is disclosed. The capacitor assembly includes a housing. The capacitor assembly further includes a plurality of capacitors disposed within the housing. Furthermore, the capacitor assembly includes a thermally conductive article disposed about at least a portion of a capacitor body of the capacitors, and in thermal contact with the capacitor body. Moreover, the capacitor assembly also includes a heat sink disposed within the housing and in thermal contact with at least a portion of the housing and the thermally conductive article such that the heat sink is configured to remove heat from the capacitor in a radial direction of the capacitor assembly. Further, a method of forming the capacitor assembly is also presented.

  11. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data differences in sparse and noisy dimensions occupy a large proportion of the similarity measure, making any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude greater than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
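
    A rough sketch of the interval-based idea is given below: each dimension's range is divided into equal intervals, each component is mapped to an interval index, and only dimensions whose two indices fall in the same or an adjacent interval contribute to the similarity. The number of intervals and the 0-1 scoring rule are assumptions for illustration.

```python
import numpy as np

def lattice_similarity(x, y, lower, upper, n_intervals=10):
    """Similarity in [0, 1]: the fraction of dimensions whose components of x and y
    fall into the same or an adjacent interval of a per-dimension uniform grid."""
    width = (upper - lower) / n_intervals
    ix = np.clip(((x - lower) / width).astype(int), 0, n_intervals - 1)
    iy = np.clip(((y - lower) / width).astype(int), 0, n_intervals - 1)
    return float(np.mean(np.abs(ix - iy) <= 1))

rng = np.random.default_rng(2)
dim = 200
lower, upper = np.zeros(dim), np.ones(dim)
a, b = rng.random(dim), rng.random(dim)
print(lattice_similarity(a, b, lower, upper))
print(lattice_similarity(a, a + 0.02, lower, upper))   # a near-duplicate scores higher
```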

  12. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, D

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index, FI) were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation to 40 years. S/T and FI showed similar changes during the whole period, and were minimal between 1 and 4 years. The number of spermatogonia per testis (S/testis) and the number of spermatogonia per cm3 of testis tissue (S/cm3) were estimated by stereological methods in the same testes. S/T and FI, respectively, were significantly correlated both to S/testis and S/cm3. So...

  13. Data-driven intensity normalization of PET group comparison studies is superior to global mean normalization

    DEFF Research Database (Denmark)

    Borghammer, Per; Aanerud, Joel; Gjedde, Albert

    2009-01-01

    BACKGROUND: Global mean (GM) normalization is one of the most commonly used methods of normalization in PET and SPECT group comparison studies of neurodegenerative disorders. It requires that no between-group GM difference is present, which may be strongly violated in neurodegenerative disorders. Importantly, such GM differences often elude detection due to the large intrinsic variance in absolute values of cerebral blood flow or glucose consumption. Alternative methods of normalization are needed for this type of data. MATERIALS AND METHODS: Two types of simulation were performed using CBF images...

  14. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20.

    Science.gov (United States)

    Michael, J Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism [Stewart (1976). Acta Cryst. A32, 565-574; Hansen & Coppens (1978). Acta Cryst. A34, 909-921] in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens [Acta Cryst. (1988), A44, 6-7]. It was shown that the analytical form for the normalization coefficients is available primarily for l ≤ 4 [Hansen & Coppens, 1978; Paturle & Coppens, 1988; Coppens (1992). International Tables for Crystallography, Vol. B, Reciprocal space, 1st ed., edited by U. Shmueli, ch. 1.2. Dordrecht: Kluwer Academic Publishers; Coppens (1997). X-ray Charge Densities and Chemical Bonding. New York: Oxford University Press]. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for l > 4; for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens (Paturle & Coppens, 1988) method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate a Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.

  15. Bilinear nodal transport method in weighted diamond difference form

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Nodal methods have been developed and implemented for the numerical solution of the discrete ordinates neutron transport equation. Numerical testing of these methods and comparison of their results to those obtained by conventional methods have established the high accuracy of nodal methods. Furthermore, it has been suggested that the linear-linear approximation is the most computationally efficient, practical nodal approximation. Indeed, this claim has been substantiated by comparing the accuracy in the solution, and the CPU time required to achieve convergence to that solution by several nodal approximations, as well as the diamond difference scheme. Two types of linear-linear nodal methods have been developed in the literature: analytic linear-linear (NLL) methods, in which the transverse-leakage terms are derived analytically, and approximate linear-linear (PLL) methods, in which these terms are approximated. In spite of their higher accuracy, NLL methods result in very complicated discrete-variable equations that exhibit a high degree of coupling, thus requiring special solution algorithms. On the other hand, the sacrificed accuracy in PLL methods is compensated for by the simple discrete-variable equations and diamond-difference-like solution algorithm. In this paper the authors outline the development of an NLL nodal method, the bilinear method, which can be written in a weighted diamond difference form with one spatial weight per dimension that is analytically derived rather than preassigned in an ad hoc fashion

  16. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    Science.gov (United States)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk. It means that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests in the field of financial data, which typically exhibit remote data points and additional types of deviations from
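
    For reference, both omnibus tests named above are available in SciPy; the sketch below runs them on simulated returns with and without a single extreme outlier, illustrating the sensitivity discussed here. A robust variant would replace the moment-based skewness and kurtosis with robust estimators, which is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=500)       # simulated "random walk" returns
contaminated = returns.copy()
contaminated[0] = 0.25                          # a single extreme outlier

for name, sample in [("clean", returns), ("one outlier", contaminated)]:
    sw_stat, sw_p = stats.shapiro(sample)       # Shapiro-Wilk omnibus test
    jb_stat, jb_p = stats.jarque_bera(sample)   # Jarque-Bera (skewness/kurtosis) test
    print(f"{name}: Shapiro-Wilk p={sw_p:.3f}, Jarque-Bera p={jb_p:.3g}")
```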

  17. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    Science.gov (United States)

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data
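
    The core idea of spike-in-based normalization, rescaling each array so that its spike-in control probes line up with a common reference, can be sketched as follows. The additive log-scale shift computed from the median of the spike-in deviations is an illustrative rule, not the exact algorithm assessed in the study.

```python
import numpy as np

def spike_in_normalize(log_intensities, spike_in_rows):
    """Shift each array (column) so its spike-in controls line up with the
    across-array mean of those controls (additive shift on the log2 scale)."""
    spikes = log_intensities[spike_in_rows, :]
    reference = spikes.mean(axis=1, keepdims=True)      # per-probe reference level
    offsets = np.median(spikes - reference, axis=0)     # one offset per array
    return log_intensities - offsets

# Toy data: 100 probes x 4 arrays; the last 5 probes act as spike-in controls,
# and each array carries its own additive bias on the log2 scale.
rng = np.random.default_rng(7)
data = rng.normal(8.0, 1.0, size=(100, 4)) + np.array([0.0, 0.5, -0.3, 1.0])
spike_rows = np.arange(95, 100)
normalized = spike_in_normalize(data, spike_rows)
print(np.median(normalized[spike_rows], axis=0))        # spike-ins now comparable
```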

  18. Method of predicting surface deformation in the form of sinkholes

    Energy Technology Data Exchange (ETDEWEB)

    Chudek, M.; Arkuszewski, J.

    1980-06-01

    Proposes a method for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences and the size of individual funnels. The following factors which influence sudden subsidence of the surface in the form of funnels are analyzed: geologic structure of the strata between mine workings and the surface, mining depth, time factor, and geologic dislocations. Sudden surface subsidence is observed only in the case of workings situated up to a few dozen meters from the surface. The use of the proposed method is explained with some examples. It is suggested that the method produces correct results which can be used in coal mining and in ore mining. (1 ref.) (In Polish)

  19. Strange mesons and kaon-to-pion transition form factors from holography

    International Nuclear Information System (INIS)

    Abidin, Zainul; Carlson, Carl E.

    2009-01-01

    We present a calculation of the K_l3 transition form factors using the AdS/QCD correspondence. We also solidify and extend our ability to calculate quantities in the flavor-broken versions of AdS/QCD. The normalization of the form factors is a crucial ingredient for extracting |V_us| from data, and the results obtained here agree well with results from chiral perturbation theory and lattice gauge theory. The slopes and curvature of the form factors agree well with the data, and with what results are available from other methods of calculation.

  20. Neutron absorbers and methods of forming at least a portion of a neutron absorber

    Energy Technology Data Exchange (ETDEWEB)

    Guillen, Donna P; Porter, Douglas L; Swank, W David; Erickson, Arnold W

    2014-12-02

    Methods of forming at least a portion of a neutron absorber include combining a first material and a second material to form a compound, reducing the compound into a plurality of particles, mixing the plurality of particles with a third material, and pressing the mixture of the plurality of particles and the third material. One or more components of neutron absorbers may be formed by such methods. Neutron absorbers may include a composite material including an intermetallic compound comprising hafnium aluminide and a matrix material comprising pure aluminum.

  1. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner , Jérémie; Celisse , Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...

  2. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    Science.gov (United States)

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
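
    The coefficient normalization step for a channel correlation (or covariance) matrix is typically the diagonal rescaling R_norm = D^(-1/2) R D^(-1/2), after which quantities such as the condition number can be read off. The sketch below shows that operation on a made-up 2x2 matrix; it is only meant to convey the normalization itself, not the GBSBCM channel model.

```python
import numpy as np

def normalize_correlation(R):
    """Rescale a covariance/correlation matrix by its diagonal, D^{-1/2} R D^{-1/2},
    so that every diagonal entry of the result equals 1."""
    d = 1.0 / np.sqrt(np.diag(R))
    return R * np.outer(d, d)

# Hypothetical 2x2 channel covariance with unequal branch powers.
R = np.array([[4.0, 1.2],
              [1.2, 1.0]])
Rn = normalize_correlation(R)
print(Rn)
print("condition number:", np.linalg.cond(Rn))
```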

  3. Group normalization for genomic data.

    Science.gov (United States)

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
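
    A much-simplified sketch of the reference-probe idea (not the published Group Normalization algorithm): for each probe, a set of probes with a similar control-channel response is found, and their mean treatment signal is subtracted, removing both global shifts and probe-specific bias. The array sizes, the windowing rule and the simulated probe effect are illustrative.

```python
import numpy as np

def group_normalize(treatment, control, n_reference=50):
    """For each probe, subtract the mean treatment signal of the n_reference probes
    whose control signal is closest to that probe's control signal."""
    order = np.argsort(control)          # probes sorted by control response
    ranks = np.argsort(order)            # rank of each probe in that ordering
    normalized = np.empty_like(treatment)
    for i, r in enumerate(ranks):
        start = max(0, min(r - n_reference // 2, len(control) - n_reference))
        reference = order[start:start + n_reference]
        normalized[i] = treatment[i] - treatment[reference].mean()
    return normalized

# Simulated data: a probe-specific bias shared by both channels plus noise.
rng = np.random.default_rng(3)
control = rng.normal(size=1000)
probe_effect = 0.5 * control
treatment = probe_effect + rng.normal(scale=0.1, size=1000)
print(np.abs(group_normalize(treatment, control)).mean())   # bias largely removed
```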

  4. Discovery of small molecules binding to the normal conformation of prion by combining virtual screening and multiple biological activity evaluation methods

    Science.gov (United States)

    Li, Lanlan; Wei, Wei; Jia, Wen-Juan; Zhu, Yongchang; Zhang, Yan; Chen, Jiang-Huai; Tian, Jiaqi; Liu, Huanxiang; He, Yong-Xing; Yao, Xiaojun

    2017-12-01

    Conformational conversion of the normal cellular prion protein, PrPC, into the misfolded isoform, PrPSc, is considered to be a central event in the development of fatal neurodegenerative diseases. Stabilization of the prion protein in its normal cellular form (PrPC) with small molecules is a rational and efficient strategy for the treatment of prion-related diseases. However, few compounds have been identified as potent prion inhibitors that bind to the normal conformation of prion. In this work, to rationally screen inhibitors capable of stabilizing the cellular form of the prion protein, multiple approaches combining docking-based virtual screening, steady-state fluorescence quenching, surface plasmon resonance and thioflavin T fluorescence assays were used to discover new compounds interrupting the PrPC to PrPSc conversion. Compound 3253-0207, which can bind to PrPC with micromolar affinity and inhibit prion fibrillation, was identified from small molecule databases. Molecular dynamics simulation indicated that compound 3253-0207 can bind to the hotspot residues in the binding pocket composed of β1, β2 and α2, which are significant structural moieties in the conversion from PrPC to PrPSc.

  5. 10 CFR 71.71 - Normal conditions of transport.

    Science.gov (United States)

    2010-01-01

    10 CFR 71.71 (Package, Special Form, and LSA-III Tests) — Normal conditions of transport. (a) Evaluation. Evaluation of each package design under normal conditions of transport must include a determination of the effect on...

  6. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plate of ships. However, the processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of the automated local line rolling forming system for producing complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and this could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for the typical complex curvature plate of ships. Research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  7. Method of forming a ceramic superconducting composite wire using a molten pool

    International Nuclear Information System (INIS)

    Geballe, T.H.; Feigelson, R.S.; Gazit, D.

    1991-01-01

    This paper describes a method for making a flexible superconductive composite wire. It comprises: drawing a wire of noble metal through a molten material, formed by melting a solid formed by pressing powdered Bi2O3, CaCO3, SrCO3 and CuO in a ratio of components necessary for forming a Bi-Sr-Ca-Cu-O superconductor into the solid and sintering at a temperature in the range of 750-800 degrees C for 10-20 hours, whereby the wire is coated by the molten material; and cooling the coated wire to solidify the molten material to form the superconductive flexible composite wire without need of further annealing.

  8. Development and Validation of a HPLC Method for the Determination of Lacidipine in Pure Form and in Pharmaceutical Dosage Form

    International Nuclear Information System (INIS)

    Vinodh, M.; Vinayak, M.; Rahul, K.; Pankaj, P.

    2012-01-01

    A simple and reliable high-performance liquid chromatography (HPLC) method was developed and validated for lacidipine in pure form and in pharmaceutical dosage form. The method was developed on an XBridge C-18 column (150 mm x 4.6 mm, 5 μm) with a mobile phase gradient system of ammonium acetate and acetonitrile. The effluent was monitored by a PDA detector at 240 nm. The calibration curve was linear over the concentration range of 50-250 μg/ml. For intra-day and inter-day precision, % RSD values were found to be 0.83% and 0.41%, respectively. Recovery of lacidipine was found to be in the range of 99.78-101.76%. The limits of detection (LOD) and quantification (LOQ) were 1.0 and 7.3 μg/ml, respectively. The developed RP-HPLC method was successfully applied for the quantitative determination of lacidipine in pharmaceutical dosage forms. (author)

  9. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent

  10. Normal-zone detectors for the MFTF-B coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed

  11. Future of the Learning Activities in Teenage School: Content, Methods, and Forms

    Directory of Open Access Journals (Sweden)

    Vorontsov A.B.

    2015-11-01

    Full Text Available In the early 1990s, their scientific research results were incorporated into the educational system and began to be used in general primary school. However, despite the widespread use of developmental education in elementary school, further studies on the age-related capacities of adolescents and the content of their education were not completed. Targeted research was organized again under the leadership of B.D. Elkonin only in 2000. Designing of the teenage school within the principles and ideology of this system started at the same time at the Psychological Institute of the Russian Academy of Education and many other educational institutions. The article presents hypothetical ideas about the content, forms and methods of organization of the educational process in the second stage of schooling. Particular attention is paid to the fate of learning activity in the teenage school, as well as to the methods and forms of organization of other activities in the adolescent school.

  12. Acoustic wave spread in superconducting-normal-superconducting sandwich

    International Nuclear Information System (INIS)

    Urushadze, G.I.

    2004-01-01

    The spread of acoustic waves perpendicular to the boundaries between superconducting and normal metals in a superconducting-normal-superconducting (SNS) sandwich has been considered. The sound-induced alternating current flow has been found by the Green function method, and the coefficient of acoustic wave transmission through the junction, γ = (S1 - S2)/S1 (where S1 and S2 are the average energy flows formed on the first and second boundaries), has been investigated as a function of the phase difference between the superconductors. It is shown that while the SNS sandwich is almost transparent for acoustic waves (γ 0/τ), n = 0, 1, 2, ... (where τ0/τ is the ratio of the broadening of the quasiparticle energy levels in the impure normal metal due to scattering of the carriers by impurities, 1/τ, to the spacing between energy levels, 1/τ0), γ = 2 (S2 = -S1), which corresponds to full reflection of the acoustic wave from the SNS sandwich. This result is valid in the limit of a pure normal metal, but in the mainly impure case there are two amplification and reflection regions for acoustic waves. The result obtained shows promise for the SNS sandwich as an ideal mirror for acoustic wave reflection.

  13. A methodology for generating normal and pathological brain perfusion SPECT images for evaluation of MRI/SPECT fusion methods: application in epilepsy

    Energy Technology Data Exchange (ETDEWEB)

    Grova, C [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Jannin, P [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Biraben, A [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Buvat, I [INSERM U494, CHU Pitie Salpetriere, Paris (France); Benali, H [INSERM U494, CHU Pitie Salpetriere, Paris (France); Bernard, A M [Service de Medecine Nucleaire, Centre Eugene Marquis, Rennes (France); Scarabin, J M [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Gibaud, B [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France)

    2003-12-21

    Quantitative evaluation of brain MRI/SPECT fusion methods for normal and in particular pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists in generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration method accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. Inter-hemispheric asymmetry coefficients measured on simulated data were

  14. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the perturbation effect of the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied and the critical point characterisation of the convex program, associated with the optimum of a multiresponse surface, is also analysed. Finally, the asymptotic normality of the optimal solution is derived by standard methods.

  15. Application of the moving frame method to deformed Willmore surfaces in space forms

    Science.gov (United States)

    Paragoda, Thanuja

    2018-06-01

    The main goal of this paper is to use the theory of exterior differential forms in deriving variations of the deformed Willmore energy in space forms and to study the minimizers of the deformed Willmore energy in space forms. We derive both the first and second order variations of the deformed Willmore energy in space forms explicitly using the moving frame method. We prove that the second order variation of the deformed Willmore energy depends on the intrinsic Laplace-Beltrami operator, the sectional curvature and some special operators, along with the mean and Gauss curvatures of the surface embedded in the space form, while the first order variation depends on the extrinsic Laplace-Beltrami operator.
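    For orientation, the classical (undeformed) Willmore energy of a closed surface in Euclidean 3-space and its well-known Euler-Lagrange equation are recalled below; the deformed energy treated in the paper is a modification of this functional, and the space-form versions carry the additional curvature terms described in the abstract.

      W(\Sigma) = \int_\Sigma H^2 \, dA ,
      \qquad
      \Delta H + 2H\,(H^2 - K) = 0 ,

    where H and K are the mean and Gauss curvatures, dA is the area element and \Delta is the Laplace-Beltrami operator of the induced metric.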

  16. 48 CFR 215.404-70 - DD Form 1547, Record of Weighted Guidelines Method Application.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false DD Form 1547, Record of... TYPES CONTRACTING BY NEGOTIATION Contract Pricing 215.404-70 DD Form 1547, Record of Weighted Guidelines Method Application. Follow the procedures at PGI 215.404-70 for use of DD Form 1547 whenever a structured...

  17. Radial arrays of nano-electrospray ionization emitters and methods of forming electrosprays

    Science.gov (United States)

    Kelly, Ryan T [West Richland, WA; Tang, Keqi [Richland, WA; Smith, Richard D [Richland, WA

    2010-10-19

    Electrospray ionization emitter arrays, as well as methods for forming electrosprays, are described. The arrays are characterized by a radial configuration of three or more nano-electrospray ionization emitters without an extractor electrode. The methods are characterized by distributing fluid flow of the liquid sample among three or more nano-electrospray ionization emitters, forming an electrospray at outlets of the emitters without utilizing an extractor electrode, and directing the electrosprays into an entrance to a mass spectrometry device. Each of the nano-electrospray ionization emitters can have a discrete channel for fluid flow. The nano-electrospray ionization emitters are circularly arranged such that each is shielded substantially equally from an electrospray-inducing electric field.

  18. Spectrophotometric methods for the determination of benazepril hydrochloride in its single and multi-component dosage forms.

    Science.gov (United States)

    El-Yazbi, F A; Abdine, H H; Shaalan, R A

    1999-06-01

    Three sensitive and accurate methods are presented for the determination of benazepril in its dosage forms. The first method uses derivative spectrophotometry to resolve the interference due to the formulation matrix. The second method depends on the color formed by the reaction of the drug with bromocresol green (BCG). The third utilizes the reaction of benazepril, after alkaline hydrolysis, with 3-methyl-2-benzothiazolinone hydrazone (MBTH), where the color produced is measured at 593 nm. The latter method was extended to develop a stability-indicating method for this drug. Moreover, the derivative method was applied to the determination of benazepril in its combination with hydrochlorothiazide. The proposed methods were applied to the analysis of benazepril in the pure form and in tablets. The coefficient of variation was less than 2%.

  19. A strand specific high resolution normalization method for chip-sequencing data employing multiple experimental control measurements

    DEFF Research Database (Denmark)

    Enroth, Stefan; Andersson, Claes; Andersson, Robin

    2012-01-01

    High-throughput sequencing is becoming the standard tool for investigating protein-DNA interactions or epigenetic modifications. However, the data generated will always contain noise due to e.g. repetitive regions or non-specific antibody interactions. The noise will appear in the form of a background......, the background is only used to adjust peak calling and not as a pre-processing step that aims at discerning the signal from the background noise. A normalization procedure that extracts the signal of interest would be of universal use when investigating genomic patterns.

  20. Group normalization for genomic data.

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghandi

    Full Text Available Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
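    The core idea described above, normalizing each probe against a set of reference probes with similar responses, can be sketched as follows. The neighbourhood size, the use of a control channel to define similarity, and the ratio-based correction are illustrative assumptions, not the published algorithm.

      import numpy as np

      def group_normalize(signal, control, k=50):
          # Toy reference-set normalization in the spirit of Group Normalization:
          # for each probe, take the k probes whose control responses are most
          # similar and divide the probe signal by their mean signal.
          signal = np.asarray(signal, dtype=float)
          control = np.asarray(control, dtype=float)
          normalized = np.empty_like(signal)
          for i, c in enumerate(control):
              ref = np.argsort(np.abs(control - c))[:k]   # nearest reference probes
              normalized[i] = signal[i] / signal[ref].mean()
          return normalized

      rng = np.random.default_rng(0)
      control = rng.gamma(2.0, 1.0, size=1000)                # probe-specific sensitivity proxy
      signal = control * rng.lognormal(0.0, 0.3, size=1000)   # treatment channel with probe effect
      print(group_normalize(signal, control)[:5])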

  1. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations, For the normal mode analysis, the methods are based on the assumption that in low frequency modes, domain motions can be well approximated by

  2. An approach to normal forms of Kuramoto model with distributed delays and the effect of minimal delay

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Ben, E-mail: niubenhit@163.com [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Guo, Yuxiao [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Jiang, Weihua [Department of Mathematics, Harbin Institute of Technology, Harbin 150001 (China)

    2015-09-25

    Heterogeneous delays with a positive lower bound (gap) are taken into consideration in the Kuramoto model. On the Ott-Antonsen manifold, the dynamical transition from incoherence to coherence is mediated by a Hopf bifurcation. We establish a perturbation technique on the complex domain, by which universal normal forms, stability and criticality of the Hopf bifurcation are obtained. Theoretically, a hysteresis loop is found near the subcritically bifurcated coherent state. With respect to a Gamma distributed delay with fixed mean and variance, we find that a large gap decreases the Hopf bifurcation value, induces supercritical bifurcations, avoids the hysteresis loop and significantly increases the number of coexisting coherent states. The effect of the gap is finally interpreted from the viewpoint of the excess kurtosis of the Gamma distribution. - Highlights: • Heterogeneously delay-coupled Kuramoto model with minimal delay is considered. • Perturbation technique on complex domain is established for bifurcation analysis. • Hysteresis phenomenon is investigated in a theoretical way. • The effect of excess kurtosis of distributed delays is discussed.
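    For reference, a delay-coupled Kuramoto model of the kind studied here can be written in the schematic form below; the mean-field coupling and the way the gap enters the delay distribution are assumptions made for illustration.

      \dot\theta_i(t) = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin\bigl(\theta_j(t-\tau_{ij}) - \theta_i(t)\bigr) ,
      \qquad \tau_{ij} \sim g(\tau), \quad g(\tau) = 0 \ \text{for}\ \tau < \tau_{\min} ,

    where g is, for example, a shifted Gamma density with fixed mean and variance, and the gap \tau_{\min} > 0 is the minimal delay.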

  3. Weak convergence and uniform normalization in infinitary rewriting

    DEFF Research Database (Denmark)

    Simonsen, Jakob Grue

    2010-01-01

    the starkly surprising result that for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence. As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly...

  4. Random Generators and Normal Numbers

    OpenAIRE

    Bailey, David H.; Crandall, Richard E.

    2002-01-01

    Pursuant to the authors' previous chaotic-dynamical model for random digits of fundamental constants, we investigate a complementary, statistical picture in which pseudorandom number generators (PRNGs) are central. Some rigorous results are achieved: We establish b-normality for constants of the form $\sum_i 1/(b^{m_i} c^{n_i})$ for certain sequences $(m_i), (n_i)$ of integers. This work unifies and extends previously known classes of explicit normals. We prove that for coprime $b,c>1$ the...

  5. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Ren, M J; Cheung, C F; Kong, L B

    2012-01-01

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of the associated uncertainty in the characterization results which may result from the characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with a specific sampling strategy. Three factors are considered in this study: measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement.

  6. 29 CFR 1904.29 - Forms.

    Science.gov (United States)

    2010-07-01

    ... OSHA 300 Log. Instead, enter “privacy case” in the space normally used for the employee's name. This...) Basic requirement. You must use OSHA 300, 300-A, and 301 forms, or equivalent forms, for recordable injuries and illnesses. The OSHA 300 form is called the Log of Work-Related Injuries and Illnesses, the 300...

  7. Composite media for fluid stream processing, a method of forming the composite media, and a related method of processing a fluid stream

    Science.gov (United States)

    Garn, Troy G; Law, Jack D; Greenhalgh, Mitchell R; Tranter, Rhonda

    2014-04-01

    A composite media including at least one crystalline aluminosilicate material in polyacrylonitrile. A method of forming a composite media is also disclosed. The method comprises dissolving polyacrylonitrile in an organic solvent to form a matrix solution. At least one crystalline aluminosilicate material is combined with the matrix solution to form a composite media solution. The organic solvent present in the composite media solution is diluted. The composite media solution is solidified. In addition, a method of processing a fluid stream is disclosed. The method comprises providing beads of a composite media comprising at least one crystalline aluminosilicate material dispersed in a polyacrylonitrile matrix. The beads of the composite media are contacted with a fluid stream comprising at least one constituent. The at least one constituent is substantially removed from the fluid stream.

  8. Masturbation, sexuality, and adaptation: normalization in adolescence.

    Science.gov (United States)

    Shapiro, Theodore

    2008-03-01

    During adolescence the central masturbation fantasy that is formulated during childhood takes its final form and paradoxically must now be directed outward for appropriate object finding and pair matching in the service of procreative aims. This is a step in adaptation that requires a further developmental landmark that I have called normalization. The path toward airing these private fantasies is facilitated by chumship relationships as a step toward further exposure to the social surround. Hartmann's structuring application of adaptation within psychoanalysis is used as a framework for understanding the process that simultaneously serves intrapsychic and social demands and permits goals that follow evolutionary principles. Variations in the normalization process from masturbatory isolation to a variety of forms of sexual socialization are examined in sociological data concerning current adolescent sexual behavior and in case examples that indicate some routes to normalized experience and practice.

  9. RP-HPLC Method for the Estimation of Nebivolol in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    M. K. Sahoo

    2009-01-01

    Full Text Available A reverse phase HPLC method is described for the determination of nebivolol in tablet dosage form. Chromatography was carried out on a Hypersil ODS C18 column using a mixture of methanol and water (80:20 v/v) as the mobile phase at a flow rate of 1.0 mL/min with detection at 282 nm. Chlorzoxazone was used as the internal standard. The retention times were 3.175 min and 4.158 min for nebivolol and chlorzoxazone, respectively. The detector response was linear over the concentration range of 1-400 μg/mL. The limit of detection and limit of quantification were 0.0779 and 0.2361 μg/mL, respectively. The percentage assay of nebivolol was 99.974%. The method was validated by determining its sensitivity, accuracy and precision. The proposed method is simple, fast, accurate and precise and hence can be applied for routine quality control of nebivolol in bulk and tablet dosage form.
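    Calibration figures such as the linear range and the detection/quantification limits quoted above are commonly derived from a least-squares calibration line. The sketch below uses the ICH-style estimates LOD = 3.3*sigma/S and LOQ = 10*sigma/S (sigma = residual standard deviation, S = slope); this is an assumed procedure with hypothetical data, not the calculation reported in the paper.

      import numpy as np

      # Hypothetical calibration data: concentration (ug/mL) vs. peak-area ratio
      conc = np.array([1, 5, 25, 50, 100, 200, 400], dtype=float)
      resp = np.array([0.021, 0.10, 0.52, 1.05, 2.08, 4.15, 8.30])

      slope, intercept = np.polyfit(conc, resp, 1)
      residuals = resp - (slope * conc + intercept)
      sigma = residuals.std(ddof=2)        # residual standard deviation

      lod = 3.3 * sigma / slope            # ICH-style limit of detection
      loq = 10.0 * sigma / slope           # ICH-style limit of quantification
      print(f"slope={slope:.4f}, LOD={lod:.3f} ug/mL, LOQ={loq:.3f} ug/mL")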

  10. Platinum catalyst formed on carbon nanotube by the in-liquid plasma method for fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Show, Yoshiyuki; Hirai, Akira; Almowarai, Anas; Ueno, Yutaro

    2015-12-01

    In-liquid plasma was generated in a carbon nanotube (CNT) dispersion fluid using platinum electrodes. The generated plasma sputtered the surface of the platinum electrodes and dispersed platinum particles into the CNT dispersion. As a result, platinum nanoparticles were successfully formed on the CNT surface in the dispersion. The platinum nanoparticles were applied to a proton exchange membrane fuel cell (PEMFC) as a catalyst. An electrical power density of 108 mW/cm² was observed from the fuel cell which was assembled with the platinum catalyst formed on the CNT by the in-liquid plasma method. - Highlights: • The platinum catalyst was successfully formed on the CNT surface in the dispersion by the in-liquid plasma method. • An electrical power density of 108 mW/cm² was observed from the fuel cell which was assembled with the platinum catalyst formed on the CNT by the in-liquid plasma method.

  11. Review of friction modeling in metal forming processes

    DEFF Research Database (Denmark)

    Nielsen, C.V.; Bay, N.

    2018-01-01

    In metal forming processes, friction between tool and workpiece is an important parameter influencing the material flow, surface quality and tool life. Theoretical models of friction in metal forming are based on analysis of the real contact area in tool-workpiece interfaces. Several research groups have studied and modeled the asperity flattening of workpiece material against the tool surface in dry contact or in contact interfaces with only thin layers of lubrication, with the aim of improving the understanding of friction in metal forming. This paper aims at giving a review of the most...... conditions, normal pressure, sliding length and speed, temperature changes, friction on the flattened plateaus and deformation of the underlying material. The review illustrates the development in the understanding of asperity flattening and the methods of analysis.
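    Two classical friction laws that such reviews commonly take as a starting point are the Coulomb model and the constant (Tresca) friction model, quoted below for context; the review itself goes beyond these to real-contact-area and asperity-flattening models.

      \tau = \mu\, p \qquad \text{(Coulomb: friction stress proportional to the normal pressure } p\text{)} ,
      \qquad
      \tau = m\, k, \quad 0 \le m \le 1 \qquad \text{(Tresca: fraction } m \text{ of the shear flow stress } k\text{)} .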

  12. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment: they do not apply to acute releases resulting from accidents or emergency situations

  13. Method of normal coordinates in the formulation of a system with dissipation: The harmonic oscillator

    International Nuclear Information System (INIS)

    Mshelia, E.D.

    1994-07-01

    The method of normal coordinates of the theory of vibrations is used in decoupling the motion of n oscillators (1 ≤ n ≤4) representing intrinsic degrees of freedom coupled to collective motion in a quantum mechanical model that allows the determination of the probability for energy transfer from collective to intrinsic excitations in a dissipative system. (author). 21 refs

  14. A Simple and Effective Image Normalization Method to Monitor Boreal Forest Change in a Siberian Burn Chronosequence across Sensors and across Time

    Science.gov (United States)

    Chen, X.; Vierling, L. A.; Deering, D. W.

    2004-12-01

    Satellite data offer unique perspectives for monitoring and quantifying land cover change; however, the radiometric consistency among co-located multi-temporal images is difficult to maintain due to variations in sensors and atmosphere. To detect accurate landscape change using multi-temporal images, we developed a new relative radiometric normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on 9 June 1990 (Landsat 4), 20 June 2000, and 26 August 2001 (Landsat 7) for analyses over boreal forests near the Siberian city of Krasnoyarsk. The Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Reduced Simple Ratio (RSR) were investigated in the normalization study. The temporally invariant cluster (TIC) centers were identified through a point density map of the base image and the target image, and a normalization regression line was created through all TIC centers. The target image digital data were then converted using the regression function so that the two images could be compared using the resulting common radiometric scale. We found that EVI was very sensitive to vegetation structure and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. NDVI was a very effective vegetation index for reducing the influence of shadow, while EVI was very sensitive to shadowing. After normalization, correlations of NDVI and EVI with field collected total Leaf Area Index (LAI) data in 2000 and 2001 were significantly improved; the r-square values in these regressions increased from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect", where EVI was positively related to understory greenness but negatively related to forest canopy coverage, was evident across a post-fire chronosequence. These findings indicate that the TIC method provides a simple, effective and repeatable method to create radiometrically comparable data sets for remote detection of
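    A minimal sketch of the TIC idea described above is given below: a 2-D histogram of base versus target index values serves as the point density map, the densest bins are taken as TIC centres, and a regression line through them maps the target image onto the radiometric scale of the base image. The bin count, the fraction of bins kept and the simulated data are illustrative assumptions.

      import numpy as np

      def tic_normalize(base, target, bins=100, top_fraction=0.02):
          # Relative radiometric normalization via temporally invariant clusters (sketch).
          # base, target: 1-D arrays of the same vegetation index for the same pixels
          # in two acquisitions; returns the target rescaled to the base.
          base = np.ravel(base)
          target = np.ravel(target)
          hist, xedges, yedges = np.histogram2d(base, target, bins=bins)

          # Densest bins of the point density map act as TIC centres
          n_keep = max(1, int(top_fraction * hist.size))
          idx = np.argsort(hist, axis=None)[-n_keep:]
          ix, iy = np.unravel_index(idx, hist.shape)
          xc = 0.5 * (xedges[ix] + xedges[ix + 1])
          yc = 0.5 * (yedges[iy] + yedges[iy + 1])

          # Regression line through the TIC centres maps target values onto the base scale
          slope, intercept = np.polyfit(yc, xc, 1)
          return slope * target + intercept

      rng = np.random.default_rng(1)
      base = rng.uniform(0.1, 0.8, 10000)
      target = 0.9 * base + 0.05 + rng.normal(0, 0.02, base.size)   # simulated sensor/atmosphere shift
      print(tic_normalize(base, target)[:5])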

  15. Indomethacin nanocrystals prepared by different laboratory scale methods: effect on crystalline form and dissolution behavior

    Energy Technology Data Exchange (ETDEWEB)

    Martena, Valentina; Censi, Roberta [University of Camerino, School of Pharmacy (Italy); Hoti, Ela; Malaj, Ledjan [University of Tirana, Department of Pharmacy (Albania); Di Martino, Piera, E-mail: piera.dimartino@unicam.it [University of Camerino, School of Pharmacy (Italy)

    2012-12-15

    The objective of this study was to select very simple and well-known laboratory scale methods able to reduce the particle size of indomethacin down to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (IDM; native crystals are in the γ form) were obtained by three laboratory scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of the α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and a different tendency toward decreased crystallinity was observed, with a more considerable decrease in crystalline degree for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) revealed a higher dissolution rate for Batches A and C-40-N, due to the higher IDR of the α form compared with the γ form for Batch A, and the lower crystallinity degree for both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications in the solid physical state that may occur using different particle size reduction treatments have to be taken into consideration during the scale up and industrial development of new solid dosage forms.

  16. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    Science.gov (United States)

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  17. MODEL OF METHODS OF FORMING BIOLOGICAL PICTURE OF THE WORLD OF SECONDARY SCHOOL PUPILS

    Directory of Open Access Journals (Sweden)

    Mikhail A. Yakunchev

    2016-12-01

    Full Text Available Introduction: the problem of development of a model of methods of forming the biological picture of the world of pupils as a multicomponent and integrative expression of the complete educational process is considered in the article. It is stated that the results of the study have theoretical and practical importance for effective subject preparation of senior pupils based on acquiring of systematic and generalized knowledge about wildlife. The correspondence of the main idea of the article to the scientific profile of the journal “Integration of Education” determines the choice of the periodical for publication. Materials and Methods: the results of the analysis of materials on modeling of the educational process, on specific models of the formation of a complete comprehension of the scientific picture of the world and its biological component make it possible to suggest a lack of elaboration of the aspect of pedagogical research under study. Therefore, the search for methods to overcome these gaps and to substantiate a particular model, relevant for its practical application by a teacher, is important. The study was based on the use of methods of theoretical level, including the analysis of pedagogical and methodological literature, modeling and generalized expression of the model of forming the biological picture of the world of secondary school senior pupils, which were of higher priority. Results: the use of models of organization of subject preparation of secondary school pupils takes a priority position, as they help to achieve the desired results of training, education and development. The model of methods of forming a biological picture of the world is represented as a theoretical construct in the unity of objective, substantive, procedural, diagnostic and effective blocks. Discussion and Conclusions: in a generalized form the article expresses the model of methods of forming the biological picture of the world of secondary school

  18. Materials interactions test methods to measure radionuclide release from waste forms under repository-relevant conditions

    International Nuclear Information System (INIS)

    Strickert, R.G.; Erikson, R.L.; Shade, J.W.

    1984-10-01

    At the request of the Basalt Waste Isolation Project, the Materials Characterization Center has collected and developed a set of procedures into a waste form compliance test method (MCC-14.4). The purpose of the test is to measure the steady-state concentrations of specified radionuclides in solutions contacting a waste form material. The test method uses a crushed waste form and basalt material suspended in a synthetic basalt groundwater and agitated for up to three months at 150 °C under anoxic conditions. Elemental and radioisotopic analyses are made on filtered and unfiltered aliquots of the solution. Replicate experiments are performed and simultaneous tests are conducted with an approved test material (ATM) to help ensure precise and reliable data for the actual waste form material. Various features of the test method, equipment, and test conditions are reviewed. Experimental testing using actinide-doped borosilicate glasses is also discussed. 9 references, 2 tables

  19. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed.
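    In symbols, the two measurement sets described above can be modelled schematically as follows (simplified notation; the report's actual detector equations include the bridge network details):

      V_i = L_{ii}\,\dot I_i + \sum_{j\neq i} M_{ij}\,\dot I_j + u_i ,
      \qquad
      b_i = V_i - L_{ii}\,\dot I_i \approx \sum_{j\neq i} M_{ij}\,\dot I_j + u_i ,

    where V_i is the voltage across coil i, b_i the bridge output with the self-induced term balanced out, and u_i the normal-zone voltage. Each hypothesised normal-zone location fixes which u_i are nonzero, and the combined equations are solved for the hypothesis that is consistent, yielding both the location and the size of the zone.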

  20. Normalization of satellite imagery

    Science.gov (United States)

    Kim, Hongsuk H.; Elman, Gregory C.

    1990-01-01

    Sets of Thematic Mapper (TM) imagery taken over the Washington, DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal color changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance and tonal signature of multi-band color imagery can be directly interpreted for quantitative information of the target.

  1. Automated Quantification of Optic Nerve Axons in Primate Glaucomatous and Normal Eyes—Method and Comparison to Semi-Automated Manual Quantification

    Science.gov (United States)

    Reynaud, Juan; Cull, Grant; Wang, Lin; Fortune, Brad; Gardiner, Stuart; Burgoyne, Claude F; Cioffi, George A

    2012-01-01

    Purpose. To describe an algorithm and software application (APP) for 100% optic nerve axon counting and to compare its performance with a semi-automated manual (SAM) method in optic nerve cross-section images (images) from normal and experimental glaucoma (EG) nonhuman primate (NHP) eyes. Methods. ON cross sections from eight EG eyes from eight NHPs, five EG and five normal eyes from five NHPs, and 12 normal eyes from 12 NHPs were imaged at 100×. Calibration (n = 500) and validation (n = 50) image sets ranging from normal to end-stage damage were assembled. Correlation between APP and SAM axon counts was assessed by Deming regression within the calibration set and a compensation formula was generated to account for the subtle, systematic differences. Then, compensated APP counts for each validation image were compared with the mean and 95% confidence interval of five SAM counts of the validation set performed by a single observer. Results. Calibration set APP counts linearly correlated to SAM counts (APP = 10.77 + 1.03 [SAM]; R2 = 0.94, P < 0.0001) in normal to end-stage damage images. In the validation set, compensated APP counts fell within the 95% confidence interval of the SAM counts in 42 of the 50 images and were within 12 axons of the confidence intervals in six of the eight remaining images. Uncompensated axon density maps for the normal and EG eyes of a representative NHP were generated. Conclusions. An APP for 100% ON axon counts has been calibrated and validated relative to SAM counts in normal and EG NHP eyes. PMID:22467571

  2. Characterization of a Stabilized Form of Microplasmin for the Induction of Posterior Vitreous Detachment

    NARCIS (Netherlands)

    Gad Elkareem, Ashraf M.; Willekens, Ben; Vanhove, Marc; Noppen, Bernard; Stassen, Jean Marie; de Smet, Marc D.

    2010-01-01

    Purpose: To investigate the stability and safety of a diluted acidified form of microplasmin and its ability to induce a posterior vitreous detachment (PVD) following intravitreal injection in postmortem porcine eyes. Methods: Microplasmin diluted in normal saline (NS) and balanced salt solution

  3. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
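    A GEE fit of the kind described above can be set up with standard software; the sketch below uses statsmodels with a Poisson mean-variance specification and an exchangeable within-animal correlation structure. The data frame, column names and simulated covariates are hypothetical.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      df = pd.DataFrame({
          "animal": np.repeat(np.arange(50), 4),          # repeated litters per female
          "parity": np.tile([1, 2, 3, 4], 50),
          "qtl_genotype": rng.integers(0, 2, 200),        # hypothetical marker genotype
      })
      rate = np.exp(1.8 + 0.15 * df["qtl_genotype"] + 0.05 * df["parity"])
      df["litter_size"] = rng.poisson(rate)

      model = sm.GEE.from_formula(
          "litter_size ~ parity + qtl_genotype",
          groups="animal",
          data=df,
          family=sm.families.Poisson(),
          cov_struct=sm.cov_struct.Exchangeable(),
      )
      print(model.fit().summary())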

  4. A high pressure liquid chromatography method for separation of prolactin forms.

    Science.gov (United States)

    Bell, Damon A; Hoad, Kirsten; Leong, Lillian; Bakar, Juwaini Abu; Sheehan, Paul; Vasikaran, Samuel D

    2012-05-01

    Prolactin has multiple forms and macroprolactin, which is thought not to be bioavailable, can cause a raised serum prolactin concentration. Gel filtration chromatography (GFC) is currently the gold standard method for separating macroprolactin, but is labour-intensive. Polyethylene glycol (PEG) precipitation is suitable for routine use but may not always be accurate. We developed a high pressure liquid chromatography (HPLC) assay for macroprolactin measurement. Chromatography was carried out using an Agilent Zorbax GF-250 (9.4 × 250 mm, 4 μm) size exclusion column and 50 mmol/L Tris buffer with 0.15 mmol/L NaCl at pH 7.2 as mobile phase, with a flow rate of 1 mL/min. Serum or plasma was diluted 1:1 with mobile phase and filtered and 100 μL injected. Fractions of 155 μL were collected for prolactin measurement and elution profile plotted. The area under the curve of each prolactin peak was calculated to quantify each prolactin form, and compared with GFC. Clear separation of monomeric-, big- and macroprolactin forms was achieved. Quantification was comparable to GFC and precision was acceptable. Total time from injection to collection of the final fraction was 16 min. We have developed an HPLC method for quantification of macroprolactin, which is rapid and easy to perform and therefore can be used for routine measurement.

  5. Valuation of Normal Range of Ankle Systolic Blood Pressure in Subjects with Normal Arm Systolic Blood Pressure.

    Science.gov (United States)

    Gong, Yi; Cao, Kai-wu; Xu, Jin-song; Li, Ju-xiang; Hong, Kui; Cheng, Xiao-shu; Su, Hai

    2015-01-01

    This study aimed to establish a normal range for ankle systolic blood pressure (SBP). A total of 948 subjects who had normal brachial SBP (90-139 mmHg) at investigation were enrolled. Supine BP of all four limbs was measured simultaneously using four automatic BP measurement devices. The ankle-arm difference (An-a) in SBP on both sides was calculated. Two methods were used to establish a normal range of ankle SBP: the 99% method was based on the 99% reference range of the measured ankle BP, and the An-a method added the An-a difference to the lower and upper limits of the normal arm SBP range (90-139 mmHg). On both the right and left sides, the ankle SBP was significantly higher than the arm SBP (right: 137.1 ± 16.9 vs 119.7 ± 11.4 mmHg, P<0.05). Based on the 99% method, the normal range of ankle SBP was 94-181 mmHg for the total population, 84-166 mmHg for the young (18-44 y), 107-176 mmHg for the middle-aged (45-59 y) and 113-179 mmHg for the elderly (≥ 60 y) group. As the An-a difference in SBP was 13 mmHg in the young group and 20 mmHg in both the middle-aged and elderly groups, the normal range of ankle SBP by the An-a method was 103-153 mmHg for young subjects and 110-160 mmHg for middle-aged and elderly subjects. A primary reference for normal ankle SBP was suggested as 100-165 mmHg in the young and 110-170 mmHg in the middle-aged and elderly subjects.
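    The An-a method reduces to simple arithmetic: shift the limits of the normal arm range by the observed ankle-arm difference. A minimal sketch using the figures quoted above (13 mmHg for the young, 20 mmHg for the middle-aged and elderly); the reported upper limits differ by about 1 mmHg because of rounding in the paper.

      ARM_NORMAL = (90, 139)    # normal brachial SBP range, mmHg

      def ana_range(an_a_diff, arm_range=ARM_NORMAL):
          # Ankle SBP range by the An-a method: arm limits shifted by the ankle-arm difference
          low, high = arm_range
          return low + an_a_diff, high + an_a_diff

      print("young:", ana_range(13))            # (103, 152) vs reported 103-153 mmHg
      print("middle-elderly:", ana_range(20))   # (110, 159) vs reported 110-160 mmHg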

  6. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  7. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
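    As an illustration of the LASSO approach favoured above, an L1-penalized logistic NTCP model evaluated with repeated cross-validation might be set up as follows. The simulated predictors, the endpoint and the use of scikit-learn are assumptions for the sketch, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)
      n = 300
      X = rng.normal(size=(n, 6))                     # hypothetical dose/clinical predictors
      logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] - 0.5
      y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))   # binary xerostomia endpoint

      lasso_ntcp = make_pipeline(
          StandardScaler(),
          LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
      )
      cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
      auc = cross_val_score(lasso_ntcp, X, y, cv=cv, scoring="roc_auc")
      print(f"repeated-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")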

  8. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  9. The normal and pathological language

    OpenAIRE

    Espejo, Luis D.

    2014-01-01

    The extraordinary development that normal and pathological psychology has achieved in recent decades, thanks to the dual method of objective observation and oral inquiry, has enabled the inquiring spirit of the neuro-psychiatrist to penetrate the intimate mechanism of the nervous system, whose supreme manifestation is thought. It is normal psychology that explains the complicated interplay of perceptions: their pathways of transmission, their centers of projection, their transformations and their synthesis to construct ...

  10. Anomalous normal mode oscillations in semiconductor microcavities

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H. [Univ. of Oregon, Eugene, OR (United States). Dept. of Physics; Hou, H.Q.; Hammons, B.E. [Sandia National Labs., Albuquerque, NM (United States)

    1997-04-01

    Semiconductor microcavities as a composite exciton-cavity system can be characterized by two normal modes. Under impulsive excitation by a short laser pulse, the optical polarizations associated with the two normal modes have a π phase difference. The total induced optical polarization is then expected to exhibit a sin²(Ωt)-like oscillation, where 2Ω is the normal mode splitting, reflecting a coherent energy exchange between the exciton and the cavity. In this paper the authors present experimental studies of normal mode oscillations using three-pulse transient four wave mixing (FWM). The result reveals, surprisingly, that when the cavity is tuned far below the exciton resonance, the normal mode oscillation in the polarization is cos²(Ωt)-like, in contrast to what is expected from the simple normal mode model. This anomalous normal mode oscillation reflects the important role of virtual excitation of electronic states in semiconductor microcavities.

  11. Fraud adversely affecting the budget of the European Union: the forms, methods and causes

    Directory of Open Access Journals (Sweden)

    Zlata Đurđević

    2006-09-01

    Full Text Available The paper analyses the forms, methods and causes of fraud that are perpetrated to the detriment of the budget of the European Union. The forms in which EU fraud appears are classified according to the kind of budgetary resource affected. Crime affecting the budgetary revenue of the EU tends to appear in the form of customs duty evasion and false declarations concerning the customs-relevant information about goods. Crime adversely affecting the expenditure side of the EU budget appears in the form of subsidy fraud in the area of the Common Agricultural Policy, and subsidy fraud in the area of the structural policies. The methods of EU fraud considered in the paper are document forgery, concealment of goods, corruption, violence, fictitious business and evasion of the law. In conclusion, an explanation is given of the main exogenous criminogenic factors that lead to the EU frauds commonly perpetrated.

  12. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Comparison of different methods of spatial normalization of FDG-PET brain images in the voxel-wise analysis of MCI patients and controls

    International Nuclear Information System (INIS)

    Martino, M.E.; Villoria, J.G. de; Lacalle-Aurioles, M.; Olazaran, J.; Navarro, E.; Desco, M.; Cruz, I.; Garcia-Vazquez, V.; Carreras, J.L.

    2013-01-01

    One of the most interesting clinical applications of 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) imaging in neurodegenerative pathologies is that of establishing the prognosis of patients with mild cognitive impairment (MCI), some of whom have a high risk of progressing to Alzheimer's disease (AD). One method of analyzing these images is to perform statistical parametric mapping (SPM) analysis. Spatial normalization is a critical step in such an analysis. The purpose of this study was to assess the effect of using different methods of spatial normalization on the results of SPM analysis of 18F-FDG PET images by comparing patients with MCI and controls. We evaluated the results of three spatial normalization methods in an SPM analysis comparing patients diagnosed with MCI with a group of control subjects. We tested three methods of spatial normalization: MRI-diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) and MRI-SPM8, which combine structural and functional images, and FDG-SPM8, which is based on the functional images only. The results obtained with the three methods were consistent in terms of the main pattern of functional alterations detected; namely, a bilateral reduction in glucose metabolism in the frontal and parietal cortices in the patient group. However, MRI-SPM8 also revealed differences in the left temporal cortex, and MRI-DARTEL revealed further differences in the left temporal cortex, precuneus, and left posterior cingulate. The results obtained with MRI-DARTEL were the most consistent with the pattern of changes in AD. When we compared our observations with those of previous reports, MRI-SPM8 and FDG-SPM8 seemed to show an incomplete pattern. Our results suggest that basing the spatial normalization method on functional images only can considerably impair the results of SPM analysis of 18F-FDG PET studies. (author)

  14. How far is the root apex of a unilateral impacted canine from the root apices' arch form?

    Science.gov (United States)

    Kim, Sung-Hun; Kim, You-Min; Oh, Sewoong; Kim, Seong-Sik; Park, Soo-Byung; Son, Woo-Sung; Kim, Yong-Il

    2017-02-01

    The purpose of this study was to determine the arch form of the root apices of normally erupting teeth and then determine the differences in the location of the apex of impacted canines relative to normally erupting canines. In addition, we sought to determine whether the labiopalatal position of the impacted canines influences the position of the apices. The study included 21 patients with unerupted canines that subsequently had a normal eruption, 21 patients with palatally impacted canines, 27 patients with labially impacted canines, and 17 patients with midalveolus impacted canines. Images were obtained using cone beam computed tomography, and the x, y, and z coordinates of the root apices were determined using Ondemand3D software (Cybermed Co., Seoul, Korea). Two-dimensional coordinates were converted from acquired 3-dimensional coordinates via projection on a palatal plane, and the Procrustes method was used to process the converted 2-dimensional coordinates and to draw the arch forms of the root apices. Finally, we measured the extent of root apex deviation from the arch forms of the root apices. Normally erupting canines showed that even though calcifications may be immature, their positions were aligned with a normal arch form. The root apices of the impacted canines were an average of 6.572 mm away from the root apices' arch form, whereas those of the contralateral nonimpacted canines were an average distance of 2.221 mm away, a statistically significant difference. The palatally impacted canines' root apices distribution tended toward the first premolar root apices. Incompletely calcified, unerupted teeth with a subsequent normal eruption showed a normal arch form of the root apices. The root apices of impacted canines were farther from the arch forms than were the nonimpacted canines. Also, the root apices of impacted canines in the palatal area showed distributions different from those of the other impacted canine groups. Copyright © 2017 American
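    The Procrustes step described above (superimposing each subject's configuration of projected apex coordinates before averaging an arch form and measuring deviations) can be reproduced with SciPy; the landmark coordinates below are hypothetical 2-D apex positions after projection onto the palatal plane.

      import numpy as np
      from scipy.spatial import procrustes

      rng = np.random.default_rng(4)

      # Hypothetical projected (x, y) root-apex coordinates for 14 teeth in two subjects
      x = np.linspace(-30, 30, 14)
      reference = np.column_stack([x, -0.02 * x ** 2 + 25])
      rot = np.array([[np.cos(0.1), -np.sin(0.1)],
                      [np.sin(0.1),  np.cos(0.1)]])
      subject = 1.1 * reference @ rot + rng.normal(0, 0.5, reference.shape) + 4.0

      # Procrustes superimposition removes translation, scaling and rotation
      mtx1, mtx2, disparity = procrustes(reference, subject)
      print(f"Procrustes disparity: {disparity:.4f}")
      # Deviation of each aligned apex from the reference configuration:
      print(np.linalg.norm(mtx2 - mtx1, axis=1)[:5])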

  15. Normal radiographic findings. 4. act. ed.

    International Nuclear Information System (INIS)

    Moeller, T.B.

    2003-01-01

    This book can serve the reader in three ways: First, it presents normal findings for all radiographic techniques including KM. Important data which are criteria of normal findings are indicated directly in the pictures and are also explained in full text and in summary form. Secondly, it teaches the systematics of interpreting a picture - how to look at it, what structures to regard in what order, and for what to look in particular. Checklists are presented in each case. Thirdly, findings are formulated in accordance with the image analysis procedure. All criteria of normal findings are defined in these formulations, which make them an important didactic element. (orig.)

  16. Method of forming a package for MEMS-based fuel cell

    Science.gov (United States)

    Morse, Jeffrey D; Jankowski, Alan F

    2013-05-21

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  17. Optimization of instruction and training process through content, form and methods

    International Nuclear Information System (INIS)

    Rozinek, P.

    1983-01-01

    The content orientation and the development of forms and methods of nuclear power plant personnel training are described. The subject matter consisted of two units: a group unit and a professional unit. The professional unit was divided into specialized sub-units: primary circuit, secondary circuit, electrical, chemistry and dosimetry. The system of final examinations is described. (J.P.)

  18. Development and application of the analytical energy gradient for the normalized elimination of the small component method

    NARCIS (Netherlands)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2011-01-01

    The analytical energy gradient of the normalized elimination of the small component (NESC) method is derived for the first time and implemented for the routine calculation of NESC geometries and other first order molecular properties. Essential for the derivation is the correct calculation of the

  19. Interaction between droplets in a ternary microemulsion evaluated by the relative form factor method

    International Nuclear Information System (INIS)

    Nagao, Michihiro; Seto, Hideki; Yamada, Norifumi L.

    2007-01-01

    This paper describes the concentration dependence of the interaction between water droplets coated by a surfactant monolayer, using the contrast variation small-angle neutron scattering technique. In the first part, we explain how to extract a relatively model-free structure factor from the scattering data, which is called the relative form factor method. In the second part, the experimental results for the shape of the droplets (form factor) are described. In the third part the relatively model-free structure factor is shown, and finally the concentration dependence of the interaction potential between droplets is discussed. The result indicates the validity of the relative form factor method, and the importance of the estimation of a model-free structure factor for discussing the nature of structure formation in microemulsion systems.

  20. Determination of the main solid-state form of albendazole in bulk drug, employing Raman spectroscopy coupled to multivariate analysis.

    Science.gov (United States)

    Calvo, Natalia L; Arias, Juan M; Altabef, Aída Ben; Maggio, Rubén M; Kaufman, Teodoro S

    2016-09-10

    Albendazole (ALB) is a broad-spectrum anthelmintic which exhibits two solid-state forms (Forms I and II). Form I is the metastable crystal at room temperature, while Form II is the stable one. Because the drug has poor aqueous solubility and Form II is less soluble than Form I, it is desirable to have a method to assess the solid-state form of the drug employed for manufacturing purposes. Therefore, a Partial Least Squares (PLS) model was developed for the determination of Form I of ALB in its mixtures with Form II. For model development, both solid-state forms of ALB were prepared and characterized by microscopic (optical, with normal and polarized light), thermal (DSC) and spectroscopic (ATR-FTIR, Raman) techniques. Mixtures of the solids in different ratios were prepared by weighing and mechanical mixing of the components. Their Raman spectra were acquired and subjected to peak smoothing, normalization, standard normal variate correction and de-trending before the PLS calculations were performed. The optimal spectral region (1396-1280 cm(-1)) and number of latent variables (LV=3) were obtained employing a moving window of variable size strategy. The method was internally validated by means of the leave-one-out procedure, providing satisfactory statistics (r(2)=0.9729 and RMSD=5.6%) and figures of merit (LOD=9.4% and MDDC=1.4). Furthermore, the method's performance was also evaluated by analysis of two validation sets. Validation set I was used for assessment of linearity and range, and Validation set II to demonstrate accuracy and precision (Recovery=101.4% and RSD=2.8%). Additionally, a third set of spiked commercial samples was evaluated, exhibiting excellent recoveries (94.2±6.4%). The results suggest that the combination of Raman spectroscopy with multivariate analysis could be applied to the assessment of the main crystal form and its quantitation in samples of ALB bulk drug in the routine quality control laboratory. Copyright © 2016 Elsevier B.V. All
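    The chemometric pipeline described above (standard normal variate correction followed by a three-latent-variable PLS model on a selected spectral window, validated leave-one-out) might be sketched as below. The synthetic spectra and band positions are illustrative, not the published calibration.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      def snv(spectra):
          # Standard normal variate correction: centre and scale each spectrum
          spectra = np.asarray(spectra, dtype=float)
          return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

      rng = np.random.default_rng(5)
      n_samples, n_points = 30, 200                  # hypothetical window, e.g. 1396-1280 cm-1
      frac_form1 = rng.uniform(0, 1, n_samples)      # mass fraction of Form I in binary mixtures
      grid = np.arange(n_points)
      band1 = np.exp(-0.5 * ((grid - 60) / 6.0) ** 2)
      band2 = np.exp(-0.5 * ((grid - 140) / 6.0) ** 2)
      X = np.outer(frac_form1, band1) + np.outer(1 - frac_form1, band2)
      X = X + rng.normal(0, 0.01, X.shape)

      pls = PLSRegression(n_components=3)            # three latent variables, as in the abstract
      y_loo = cross_val_predict(pls, snv(X), frac_form1, cv=LeaveOneOut()).ravel()
      rmsd = np.sqrt(np.mean((y_loo - frac_form1) ** 2))
      print(f"leave-one-out RMSD: {100 * rmsd:.1f}% Form I")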

  1. Geometric Methods in the Algebraic Theory of Quadratic Forms : Summer School

    CERN Document Server

    2004-01-01

    The geometric approach to the algebraic theory of quadratic forms is the study of projective quadrics over arbitrary fields. Function fields of quadrics have been central to the proofs of fundamental results since the renewal of the theory by Pfister in the 1960's. Recently, more refined geometric tools have been brought to bear on this topic, such as Chow groups and motives, and have produced remarkable advances on a number of outstanding problems. Several aspects of these new methods are addressed in this volume, which includes - an introduction to motives of quadrics by Alexander Vishik, with various applications, notably to the splitting patterns of quadratic forms under base field extensions; - papers by Oleg Izhboldin and Nikita Karpenko on Chow groups of quadrics and their stable birational equivalence, with application to the construction of fields which carry anisotropic quadratic forms of dimension 9, but none of higher dimension; - a contribution in French by Bruno Kahn which lays out a general fra...

  2. Normal and Abnormal Behavior in Early Childhood

    OpenAIRE

    Spinner, Miriam R.

    1981-01-01

    Evaluation of normal and abnormal behavior in the period to three years of age involves many variables. Parental attitudes, determined by many factors such as previous childrearing experience, the bonding process, parental psychological status and parental temperament, often influence the labeling of behavior as normal or abnormal. This article describes the forms of crying, sleep and wakefulness, and affective responses from infancy to three years of age.

  3. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  4. Formulae for the determination of the elements of the Eötvös matrix of the Earth's normal gravity field and a relation between normal and actual Gaussian curvature

    OpenAIRE

    Manoussakis, G.; Delikaraoglou, D.

    2011-01-01

    In this paper we form relations for the determination of the elements of the Eötvös matrix of the Earth's normal gravity field. In addition, a relation between the Gauss curvature of the normal equipotential surface and the Gauss curvature of the actual equipotential surface, both passing through the point P, is presented. For this purpose we use a global Cartesian system (X, Y, Z) and use the variables X and Y to form a local parameterization of a normal equipotential surface to describe its ...

  5. Lubricant Test Methods for Sheet Metal Forming

    DEFF Research Database (Denmark)

    Bay, Niels; Olsson, David Dam; Andreasen, Jan Lasson

    2008-01-01

    Sheet metal forming of tribologically difficult materials such as stainless steel, Al-alloys and Ti-alloys or forming in tribologically difficult operations like ironing, punching or deep drawing of thick plate often requires use of environmentally hazardous lubricants such as chlorinated paraffin oils in order to avoid galling. The present paper describes a systematic research in the development of new, environmentally harmless lubricants focusing on the lubricant testing aspects. A system of laboratory tests has been developed to study the lubricant performance under the very varied conditions appearing in different sheet forming operations such as stretch forming, deep drawing, ironing and punching. The laboratory tests have been especially designed to model the conditions in industrial production. Application of the tests for evaluating new lubricants before introducing them in production has ...

  6. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed
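
    The balancing idea can be conveyed with a toy calculation: if the inductance matrix of the coupled coils is known, the inductively induced voltages can be predicted from the measured current derivatives and subtracted from the measured coil voltages, and any residual voltage points to a resistive (normal) zone. This is only a simplified sketch of the principle; the paper's detector additionally uses bridge outputs and a per-location set of equations, which are not reproduced here, and all numbers below are invented.

    import numpy as np

    # Hypothetical 3-coil example: M is the self/mutual inductance matrix (H),
    # dI_dt the measured current derivatives (A/s), V the measured coil voltages (V).
    M = np.array([[2.0, 0.4, 0.1],
                  [0.4, 2.5, 0.3],
                  [0.1, 0.3, 1.8]])
    dI_dt = np.array([5.0, -2.0, 1.0])
    V = M @ dI_dt + np.array([0.0, 0.12, 0.0])   # a 0.12 V normal zone hidden in coil 2

    residual = V - M @ dI_dt                     # subtract the induced voltages
    threshold = 0.05                             # detection threshold in volts (assumed)
    suspect = np.flatnonzero(np.abs(residual) > threshold)
    print("normal zone voltages:", residual)
    print("coils with suspected normal zones:", suspect)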

  7. Effects of Foveal Ablation on Emmetropization and Form-Deprivation Myopia

    Science.gov (United States)

    Smith, Earl L.; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang; Huang, Juan; Kee, Chea-su; Coats, David; Paysse, Evelyn

    2009-01-01

    Purpose Because of the prominence of central vision in primates, it has generally been assumed that signals from the fovea dominate refractive development. To test this assumption, the authors determined whether an intact fovea was essential for either normal emmetropization or the vision-induced myopic errors produced by form deprivation. Methods In 13 rhesus monkeys at 3 weeks of age, the fovea and most of the perifovea in one eye were ablated by laser photocoagulation. Five of these animals were subsequently allowed unrestricted vision. For the other eight monkeys with foveal ablations, a diffuser lens was secured in front of the treated eyes to produce form deprivation. Refractive development was assessed along the pupillary axis by retinoscopy, keratometry, and A-scan ultrasonography. Control data were obtained from 21 normal monkeys and three infants reared with plano lenses in front of both eyes. Results Foveal ablations had no apparent effect on emmetropization. Refractive errors for both eyes of the treated infants that were allowed unrestricted vision were within the control range throughout the observation period, and there were no systematic interocular differences in refractive error or axial length. In addition, foveal ablation did not prevent form deprivation myopia; six of the eight infants that experienced monocular form deprivation developed myopic axial anisometropias outside the control range. Conclusions Visual signals from the fovea are not essential for normal refractive development or the vision-induced alterations in ocular growth produced by form deprivation. Conversely, the peripheral retina, in isolation, can regulate emmetropizing responses and produce anomalous refractive errors in response to abnormal visual experience. These results indicate that peripheral vision should be considered when assessing the effects of visual experience on refractive development. PMID:17724167

  8. Methods to evaluate normal rainfall for short-term wetland hydrology assessment

    Science.gov (United States)

    Jaclyn Sumner; Michael J. Vepraskas; Randall K. Kolka

    2009-01-01

    Identifying sites meeting wetland hydrology requirements is simple when long-term (>10 years) records are available. Because such data are rare, we hypothesized that a single-year of hydrology data could be used to reach the same conclusion as with long-term data, if the data were obtained during a period of normal or below normal rainfall. Long-term (40-45 years)...

  9. Elevated temperature forming method and preheater apparatus

    Science.gov (United States)

    Krajewski, Paul E; Hammar, Richard Harry; Singh, Jugraj; Cedar, Dennis; Friedman, Peter A; Luo, Yingbing

    2013-06-11

    An elevated temperature forming system in which a sheet metal workpiece is provided in a first stage position of a multi-stage pre-heater, is heated to a first stage temperature lower than a desired pre-heat temperature, is moved to a final stage position where it is heated to a desired final stage temperature, is transferred to a forming press, and is formed by the forming press. The preheater includes upper and lower platens that transfer heat into workpieces disposed between the platens. A shim spaces the upper platen from the lower platen by a distance greater than a thickness of the workpieces to be heated by the platens and less than a distance at which the upper platen would require an undesirably high input of energy to effectively heat the workpiece without being pressed into contact with the workpiece.

  10. Method and Apparatus for Forming Nanodroplets

    Science.gov (United States)

    Ackley, Donald; Forster, Anita

    2011-01-01

    This innovation uses partially miscible fluids to form nano- and microdroplets in a microfluidic droplet generator system. Droplet generators fabricated in PDMS (polydimethylsiloxane) are currently being used to fabricate engineered nanoparticles and microparticles. These droplet generators were first demonstrated in a T-junction configuration, followed by a cross-flow configuration. All of these generating devices have used immiscible fluids, such as oil and water. This immiscible fluid system can produce mono-dispersed distributions of droplets and particles with sizes ranging from a few hundred nanometers to a few hundred microns. For applications such as drug delivery, the ability to encapsulate aqueous solutions of drugs within particles formed from the droplets is desirable. Of particular interest are non-polar solvents that can dissolve lipids for the formation of liposomes in the droplet generators. Such fluids include ether, cyclohexane, butanol, and ethyl acetate. Ethyl acetate is of particular interest for two reasons. It is relatively nontoxic and it is formed from ethanol and acetic acid, and may be broken down into its constituents at relatively low concentrations.

  11. A Denotational Account of Untyped Normalization by Evaluation

    DEFF Research Database (Denmark)

    Filinski, Andrzej; Rohde, Henning Korsholm

    2004-01-01

    Abstract. We show that the standard normalization-by-evaluation construction for the simply-typed λβη-calculus has a natural counterpart for the untyped λβ-calculus, with the central type-indexed logical relation replaced by a “recursively defined” invariant relation, in the style of Pitts. In fact......, the construction can be seen as generalizing a computational adequacy argument for an untyped, call-by-name language to normalization instead of evaluation. In the untyped setting, not all terms have normal forms, so the normalization function is necessarily partial. We establish its correctness in the senses...

  12. Metacognition and Reading: Comparing Three Forms of Metacognition in Normally Developing Readers and Readers with Dyslexia.

    Science.gov (United States)

    Furnes, Bjarte; Norman, Elisabeth

    2015-08-01

    Metacognition refers to 'cognition about cognition' and includes metacognitive knowledge, strategies and experiences (Efklides, 2008; Flavell, 1979). Research on reading has shown that better readers demonstrate more metacognitive knowledge than poor readers (Baker & Beall, 2009), and that reading ability improves through strategy instruction (Gersten, Fuchs, Williams, & Baker, 2001). The current study is the first to specifically compare the three forms of metacognition in dyslexic (N = 22) versus normally developing readers (N = 22). Participants read two factual texts, with learning outcome measured by a memory task. Metacognitive knowledge and skills were assessed by self-report. Metacognitive experiences were measured by predictions of performance and judgments of learning. Individuals with dyslexia showed insight into their reading problems, but less general knowledge of how to approach text reading. They more often reported lack of available reading strategies, but groups did not differ in the use of deep and surface strategies. Learning outcome and mean ratings of predictions of performance and judgments of learning were lower in dyslexic readers, but not the accuracy with which metacognitive experiences predicted learning. Overall, the results indicate that dyslexic reading and spelling problems are not generally associated with lower levels of metacognitive knowledge, metacognitive strategies or sensitivity to metacognitive experiences in reading situations. 2015 The Authors. Dyslexia Published by John Wiley & Sons Ltd.

  13. Solitary-wave families of the Ostrovsky equation: An approach via reversible systems theory and normal forms

    International Nuclear Information System (INIS)

    Roy Choudhury, S.

    2007-01-01

    The Ostrovsky equation is an important canonical model for the unidirectional propagation of weakly nonlinear long surface and internal waves in a rotating, inviscid and incompressible fluid. Limited functional analytic results exist for the occurrence of one family of solitary-wave solutions of this equation, as well as their approach to the well-known solitons of the famous Korteweg-de Vries equation in the limit as the rotation becomes vanishingly small. Since solitary-wave solutions often play a central role in the long-time evolution of an initial disturbance, we consider such solutions here (via the normal form approach) within the framework of reversible systems theory. Besides confirming the existence of the known family of solitary waves and its reduction to the KdV limit, we find a second family of multihumped (or N-pulse) solutions, as well as a continuum of delocalized solitary waves (or homoclinics to small-amplitude periodic orbits). On isolated curves in the relevant parameter region, the delocalized waves reduce to genuine embedded solitons. The second and third families of solutions occur in regions of parameter space distinct from the known solitary-wave solutions and are thus entirely new. Directions for future work are also mentioned
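
    For orientation, a commonly quoted form of the Ostrovsky equation (the exact coefficients depend on the scaling convention and are not taken from this record) is

    \[ \left( u_t + c\,u_x + \alpha\, u\,u_x + \beta\, u_{xxx} \right)_x = \gamma\, u , \]

    where \gamma represents the background rotation; in the limit \gamma \to 0 the equation reduces to the Korteweg-de Vries equation mentioned in the abstract.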

  14. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    Segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map only by using photopeak and scatter images without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% of lower side of the photopeak window) energy windows. Myocardial perfusion databases by the SSPAC method and non-AC (NC) were created from 15 female and 17 male subjects with low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of paired t test. AC average myocardial perfusion count was significantly higher than that in NC in the septal and inferior regions (P<0.02). On the contrary, AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  15. CNN-based ranking for biomedical entity normalization.

    Science.gov (United States)

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  16. TumorBoost: Normalization of allele-specific tumor copy numbers from a single pair of tumor-normal genotyping microarrays

    Directory of Open Access Journals (Sweden)

    Neuvial Pierre

    2010-05-01

    Full Text Available Abstract Background High-throughput genotyping microarrays assess both total DNA copy number and allelic composition, which makes them a tool of choice for copy number studies in cancer, including total copy number and loss of heterozygosity (LOH) analyses. Even after state of the art preprocessing methods, allelic signal estimates from genotyping arrays still suffer from systematic effects that make them difficult to use effectively for such downstream analyses. Results We propose a method, TumorBoost, for normalizing allelic estimates of one tumor sample based on estimates from a single matched normal. The method applies to any paired tumor-normal estimates from any microarray-based technology, combined with any preprocessing method. We demonstrate that it increases the signal-to-noise ratio of allelic signals, making it significantly easier to detect allelic imbalances. Conclusions TumorBoost increases the power to detect somatic copy-number events (including copy-neutral LOH) in the tumor from allelic signals of Affymetrix or Illumina origin. We also conclude that high-precision allelic estimates can be obtained from a single pair of tumor-normal hybridizations, if TumorBoost is combined with single-array preprocessing methods such as (allele-specific) CRMA v2 for Affymetrix or BeadStudio's (proprietary) XY-normalization method for Illumina. A bounded-memory implementation is available in the open-source and cross-platform R package aroma.cn, which is part of the Aroma Project (http://www.aroma-project.org/).
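
    The general idea of paired tumor-normal normalization can be sketched as follows: the matched normal's deviation from its expected genotype value is treated as a SNP-specific technical artefact and removed from the tumor signal. This is only a simplified illustration of that idea, not the published TumorBoost estimator (which is implemented in the aroma.cn R package mentioned above); the genotype thresholds and data are assumptions.

    import numpy as np

    def normalize_tumor_baf(baf_tumor, baf_normal):
        # Call the germline genotype from the normal (expected BAF of 0, 0.5 or 1),
        # take the normal's deviation from it as a SNP-specific artefact, and
        # subtract that deviation from the tumor B-allele fraction.
        mu_normal = np.select(
            [baf_normal < 1/3, baf_normal > 2/3], [0.0, 1.0], default=0.5)
        delta = baf_normal - mu_normal
        return np.clip(baf_tumor - delta, 0.0, 1.0)

    # Hypothetical B-allele fractions for a few SNPs
    baf_n = np.array([0.02, 0.48, 0.55, 0.97])
    baf_t = np.array([0.05, 0.30, 0.72, 0.93])
    print(normalize_tumor_baf(baf_t, baf_n))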

  17. NNWSI waste form test method for unsaturated disposal conditions

    International Nuclear Information System (INIS)

    Bates, J.K.; Gerding, T.J.

    1985-03-01

    A test method has been developed to measure the release of radionuclides from the waste package under simulated NNWSI repository conditions, and to provide information concerning materials interactions that may occur in the repository. Data are presented from Unsaturated testing of simulated Savannah River Laboratory 165 glass completed through 26 weeks. The relationship between these results and those from parametric and analog testing are described. The data indicate that the waste form test is capable of producing consistent, reproducible results that will be useful in evaluating the role of the waste package in the long-term performance of the repository. 6 refs., 7 figs., 5 tabs

  18. New spectrofluorimetric method for the determination of nizatidine in bulk form and in pharmaceutical preparations

    Science.gov (United States)

    Karasakal, Ayça; Ulu, Sevgi Tatar

    2013-08-01

    A simple, accurate and highly sensitive spectrofluorimetric method has been developed for determination of nizatidine in pure form and in pharmaceutical dosage forms. The method is based on the reaction between nizatidine and 1-dimethylaminonaphthalene-5-sulphonyl chloride in carbonate buffer, pH 10.5, to yield a highly fluorescent derivative peaking at 513 nm after excitation at 367 nm. Various factors affecting the fluorescence intensity of nizatidin-dansyl derivative were studied and conditions were optimized. The method was validated as per ICH guidelines. The fluorescence concentration plot was rectilinear over the range of 25-300 ng/mL. Limit of detection and limit of quantification were calculated as 11.71 and 35.73 ng/mL, respectively. The proposed method was successfully applied to pharmaceutical preparations.

  19. A Validated RP-HPLC Method for the Determination of Atazanavir in Pharmaceutical Dosage Form

    Directory of Open Access Journals (Sweden)

    K. Srinivasu

    2011-01-01

    Full Text Available A validated RP-HPLC method was developed for the estimation of atazanavir in capsule dosage form on a YMC ODS (150 × 4.6 mm, 5 μm) column using a mobile phase composition of ammonium dihydrogen phosphate buffer (pH 2.5) with acetonitrile (55:45 v/v). Flow rate was maintained at 1.5 mL/min with 288 nm UV detection. The retention time obtained for atazanavir was at 4.7 min. The detector response was linear in the concentration range of 30 - 600 μg/mL. This method has been validated and shown to be specific, sensitive, precise, linear, accurate, rugged, robust and fast. Hence, this method can be applied for routine quality control of atazanavir in capsule dosage forms as well as in bulk drug.

  20. Score Normalization using Logistic Regression with Expected Parameters

    NARCIS (Netherlands)

    Aly, Robin

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collection indicate that our method is
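
    As a rough illustration of score normalization with logistic regression, raw retrieval scores can be mapped to calibrated probabilities of relevance; the record's actual contribution, estimating the regression parameters by their expected values rather than fitting them per query, is not reproduced here, and the scores and labels below are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical retrieval scores and relevance labels for one query
    scores = np.array([[2.1], [1.7], [0.9], [0.4], [-0.3], [-1.2]])
    relevant = np.array([1, 1, 0, 1, 0, 0])

    lr = LogisticRegression().fit(scores, relevant)
    normalized = lr.predict_proba(scores)[:, 1]   # P(relevant | score)
    print(normalized)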

  1. Ophthalmic Drug Dosage Forms: Characterisation and Research Methods

    OpenAIRE

    Baranowski, Przemysław; Karolewicz, Bożena; Gajda, Maciej; Pluta, Janusz

    2014-01-01

    This paper describes hitherto developed drug forms for topical ocular administration, that is, eye drops, ointments, in situ gels, inserts, multicompartment drug delivery systems, and ophthalmic drug forms with bioadhesive properties. Heretofore, many studies have demonstrated that new and more complex ophthalmic drug forms exhibit advantage over traditional ones and are able to increase the bioavailability of the active substance by, among others, reducing the susceptibility of drug forms to...

  2. Transforming high-dimensional potential energy surfaces into sum-of-products form using Monte Carlo methods

    Science.gov (United States)

    Schröder, Markus; Meyer, Hans-Dieter

    2017-08-01

    We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
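
    The target representation described here, written in generic notation (not the paper's own symbols), is the sum-of-products / Tucker form

    \[ V(q_1,\ldots,q_f) \;\approx\; \sum_{j_1=1}^{m_1} \cdots \sum_{j_f=1}^{m_f} C_{j_1 \ldots j_f}\; v_{j_1}^{(1)}(q_1)\,\cdots\, v_{j_f}^{(f)}(q_f) , \]

    where the coefficient tensor C and the one-dimensional basis functions v are determined variationally by minimizing the squared deviation from the potential on the grid, with the exact sums over all grid points replaced by Monte Carlo sums over randomly sampled points.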

  3. Method of forming a nanocluster comprising dielectric layer and device comprising such a layer

    NARCIS (Netherlands)

    2009-01-01

    A method of forming a dielectric layer (330) on a further layer (114, 320) of a semiconductor device (300) is disclosed. The method comprises depositing a dielectric precursor compound and a further precursor compound over the further layer (114, 320), the dielectric precursor compound comprising a

  4. Confectionery-based dose forms.

    Science.gov (United States)

    Tangso, Kristian J; Ho, Quy Phuong; Boyd, Ben J

    2015-01-01

    Conventional dosage forms such as tablets, capsules and syrups are prescribed in the normal course of practice. However, concerns about patient preferences and market demands have given rise to the exploration of novel unconventional dosage forms. Among these, confectionery-based dose forms have strong potential to overcome compliance problems. This report will review the availability of these unconventional dose forms used in treating the oral cavity and for systemic drug delivery, with a focus on medicated chewing gums, medicated lollipops, and oral bioadhesive devices. The aim is to stimulate increased interest in the opportunities for innovative new products that are available to formulators in this field, particularly for atypical patient populations.

  5. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties.

    Science.gov (United States)

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).
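
    The functional dependence described here, an estimate of the vaporization enthalpy at the normal boiling point from Tb, Tc and Pc, can be illustrated with the long-established Riedel correlation; note this is not the new equation proposed in the record, only a well-known example of the same type.

    import math

    R = 8.314  # J/(mol K)

    def riedel_dhvap(tb, tc, pc_bar):
        # Riedel correlation for the enthalpy of vaporization at the normal
        # boiling point (J/mol); pc_bar is the critical pressure in bar.
        tbr = tb / tc
        return 1.093 * R * tc * tbr * (math.log(pc_bar) - 1.013) / (0.930 - tbr)

    # Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar
    print(riedel_dhvap(373.15, 647.1, 220.6) / 1000, "kJ/mol")  # ~42 vs. ~40.7 measured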

  6. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Full Text Available Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same “pivot” name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
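
    A stripped-down version of the clustering step might look like the sketch below, which groups spelling variants by plain string similarity and maps each to a canonical form. The actual method additionally uses word alignment to a pivot language and translation/transliteration probabilities, none of which are reproduced here; the names and threshold are illustrative.

    from difflib import SequenceMatcher

    def similar(a, b, threshold=0.7):
        return SequenceMatcher(None, a, b).ratio() >= threshold

    # Hypothetical transliteration variants observed in a corpus
    variants = ["Mohammed", "Muhammad", "Mohamad", "Mohammad", "Smith"]

    clusters = []
    for name in variants:
        for cluster in clusters:
            if similar(name, cluster[0]):
                cluster.append(name)
                break
        else:
            clusters.append([name])

    # Normalize every member of a cluster to one canonical spelling (here: the first seen)
    canonical = {name: cluster[0] for cluster in clusters for name in cluster}
    print(canonical)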

  7. Evaluation of the standard normal variate method for Laser-Induced Breakdown Spectroscopy data treatment applied to the discrimination of painting layers

    Science.gov (United States)

    Syvilay, D.; Wilkie-Chancellier, N.; Trichereau, B.; Texier, A.; Martinez, L.; Serfaty, S.; Detalle, V.

    2015-12-01

    Nowadays, Laser-Induced Breakdown Spectroscopy (LIBS) is frequently used for in situ analyses to identify pigments from mural paintings. Nonetheless, in situ analyses require robust instrumentation able to cope with harsh experimental conditions. This may imply variations in fluence and thus in the LIBS signal, which degrades the spectra and hence the results. Usually, to overcome these experimental errors, the LIBS signal is processed. The most commonly used signal processing methods are baseline subtraction and normalization to a spectral line. However, the latter assumes that the chosen element is a constant component of the material, which may not be the case in paint layers organized in stratigraphic layers. For this reason, it is sometimes difficult to apply this normalization. In this study, another normalization is carried out to remove these signal variations. Standard normal variate (SNV) is a normalization designed for these conditions. It is sometimes implemented in Diffuse Reflectance Infrared Fourier Transform Spectroscopy and in Raman Spectroscopy but rarely in LIBS. The SNV transformation is not newly applied to LIBS data, but for the first time the effect of SNV on LIBS spectra was evaluated in detail (laser energy, shot-by-shot behavior, quantification). The aim of this paper is the quick visualization of the different layers of a stratigraphic painting sample by simple data representations (3D or 2D) after SNV normalization. In this investigation, we showed the potential power of the SNV transformation to overcome undesired LIBS signal variations but also its limits of application. This method appears to be a promising way to normalize LIBS data, which may be interesting for in situ depth analyses.
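
    The SNV transformation itself is a one-line operation: each spectrum is centred on its own mean and divided by its own standard deviation, so shot-to-shot intensity changes cancel without reference to any particular emission line. The toy example below (synthetic spectra, assumed multiplicative/additive distortion) illustrates this; it is not the paper's data.

    import numpy as np

    def snv(spectrum):
        # Standard normal variate: per-spectrum centring and scaling
        return (spectrum - spectrum.mean()) / spectrum.std()

    # Two synthetic LIBS spectra of the same layer recorded at different fluences
    rng = np.random.default_rng(1)
    base = rng.random(300)
    low_energy = 0.6 * base + 0.05
    high_energy = 1.4 * base + 0.20
    print(np.allclose(snv(low_energy), snv(high_energy)))  # True: the variation is removed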

  8. Proximity effect in normal-superconductor hybrids for quasiparticle traps

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinkhani, Amin [Peter Grunberg Institute (PGI-2), Forschungszentrum Julich, D-52425 Julich (Germany); JARA-Institute for Quantum Information, RWTH Aachen University, D-52056 Aachen (Germany)

    2016-07-01

    Coherent transport of charges in the form of Cooper pairs is the main feature of Josephson junctions which plays a central role in superconducting qubits. However, the presence of quasiparticles in superconducting devices may lead to incoherent charge transfer and limit the coherence time of superconducting qubits. A way around this so-called ''quasiparticle poisoning'' might be using a normal-metal island to trap quasiparticles; this has motivated us to revisit the proximity effect in normal-superconductor hybrids. Using the semiclassical Usadel equations, we study the density of states (DoS) both within and away from the trap. We find that in the superconducting layer the DoS quickly approaches the BCS form; this indicates that normal-metal traps should be effective at localizing quasiparticles.

  9. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article, a decoupling algorithm to reduce computing time in Finite Element Analyses of incremental forming processes is investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a Finite Element Model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure through model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  10. Flexible barrier film, method of forming same, and organic electronic device including same

    Science.gov (United States)

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  11. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
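
    One common way of expressing such an orientation error with normalized quaternions is to take the vector part of the error quaternion between the desired and current orientations, as sketched below; the report compares several related formulations (Ickes; Luh, Walker and Paul; Rodrigues parameters), whose specific forms are not reproduced here.

    import numpy as np

    def quat_mul(q, p):
        # Hamilton product; quaternions stored as [w, x, y, z]
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = p
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def orientation_error(q_desired, q_current):
        q_conj = np.array([q_current[0], -q_current[1], -q_current[2], -q_current[3]])
        q_err = quat_mul(q_desired, q_conj)
        return q_err[1:]                      # zero vector when the orientations coincide

    q_d = np.array([0.9659258, 0.0, 0.2588190, 0.0])   # 30 deg about y
    q_c = np.array([0.9961947, 0.0, 0.0871557, 0.0])   # 10 deg about y
    print(orientation_error(q_d, q_c))                 # ~[0, sin(10 deg), 0]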

  12. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    International Nuclear Information System (INIS)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.

  13. Method of forming buried oxide layers in silicon

    Science.gov (United States)

    Sadana, Devendra Kumar; Holland, Orin Wayne

    2000-01-01

    A process for forming Silicon-On-Insulator is described incorporating the steps of ion implantation of oxygen into a silicon substrate at elevated temperature, ion implanting oxygen at a temperature below 200.degree. C. at a lower dose to form an amorphous silicon layer, and annealing steps to form a mixture of defective single crystal silicon and polycrystalline silicon or polycrystalline silicon alone and then silicon oxide from the amorphous silicon layer to form a continuous silicon oxide layer below the surface of the silicon substrate to provide an isolated superficial layer of silicon. The invention overcomes the problem of buried isolated islands of silicon oxide forming a discontinuous buried oxide layer.

  14. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    Science.gov (United States)

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ(trade mark) System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in a near complete automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the

  15. Ophthalmic Drug Dosage Forms: Characterisation and Research Methods

    Directory of Open Access Journals (Sweden)

    Przemysław Baranowski

    2014-01-01

    Full Text Available This paper describes hitherto developed drug forms for topical ocular administration, that is, eye drops, ointments, in situ gels, inserts, multicompartment drug delivery systems, and ophthalmic drug forms with bioadhesive properties. Heretofore, many studies have demonstrated that new and more complex ophthalmic drug forms exhibit advantage over traditional ones and are able to increase the bioavailability of the active substance by, among others, reducing the susceptibility of drug forms to defense mechanisms of the human eye, extending contact time of drug with the cornea, increasing the penetration through the complex anatomical structure of the eye, and providing controlled release of drugs into the eye tissues, which allows reducing the drug application frequency. The rest of the paper describes recommended in vitro and in vivo studies to be performed for various ophthalmic drugs forms in order to assess whether the form is acceptable from the perspective of desired properties and patient’s compliance.

  16. Study by the disco method of critical components of a P.W.R. normal feedwater system

    International Nuclear Information System (INIS)

    Duchemin, B.; Villeneuve, M.J. de; Vallette, F.; Bruna, J.G.

    1983-03-01

    The objective of the DISCO (Determination of Importance Sensitivity of COmponents) method is to rank the components of a system in order to identify the most important ones with respect to availability. This method uses the fault tree description of the system and the cut set technique. It ranks the components by ordering the importances attributed to each one. The DISCO method was applied to the study of the 900 MWe P.W.R. normal feedwater system with insufficient flow in the steam generator. In order to take account of operating experience, several data banks were used and the results compared. This study made it possible to determine the most critical component (the turbo-pumps) and to propose and quantify modifications of the system in order to improve its availability.
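
    The abstract does not spell out DISCO's own importance measure, but the flavour of a cut-set-based ranking can be conveyed with the standard Fussell-Vesely importance (the fraction of system unavailability that involves a given component), computed here on an invented fault tree for illustration only.

    # Minimal cut sets and component unavailabilities are hypothetical.
    unavailability = {"pump_A": 1e-2, "pump_B": 1e-2, "valve": 5e-3, "sensor": 1e-3}
    cut_sets = [{"pump_A", "pump_B"}, {"valve"}, {"pump_A", "sensor"}]

    def cut_set_prob(cs):
        p = 1.0
        for comp in cs:
            p *= unavailability[comp]
        return p

    # Rare-event approximation of the system unavailability
    q_system = sum(cut_set_prob(cs) for cs in cut_sets)

    fussell_vesely = {
        comp: sum(cut_set_prob(cs) for cs in cut_sets if comp in cs) / q_system
        for comp in unavailability
    }
    for comp, fv in sorted(fussell_vesely.items(), key=lambda kv: -kv[1]):
        print(f"{comp:8s} FV = {fv:.3f}")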

  17. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units. These masonry units are developed into a monolithic structure by a plastering process with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, pointing and plastering operations. The rheology of mortar includes flowable, passing and filling properties, which are analogous to the behaviour of self compacting concrete. In self compacting (SC) mortar cubes, the cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (with an increment of 5%), metakaolin (MK) from 10% to 30% (with an increment of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (with an increment of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self compacting mortar mixes. Accelerated curing, namely electric oven curing with a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained from both the normal and electric oven methods of curing was higher for self compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS gave higher strength under both curing conditions.

  18. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of the optimal portfolio is a method that can help investors to minimize risks and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for constructing the asset weighting model. The BL model has two fundamental problems: the assumption of normality, and the estimation of the parameters of the Bayesian market prior framework when returns do not come from a normal distribution. This study provides an alternative solution in which the BL model is built from stock returns and investor views that follow a non-normal distribution.
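
    For reference, the standard (Gaussian) Black-Litterman posterior mean that the non-normal variant modifies is, in the usual notation (not taken from this record),

    \[ \mu_{BL} = \left[ (\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P \right]^{-1} \left[ (\tau\Sigma)^{-1}\Pi + P^{\top}\Omega^{-1}Q \right] , \]

    where \Sigma is the return covariance, \Pi the implied equilibrium returns, P and Q encode the investor views, \Omega the view uncertainty, and \tau a scaling constant; portfolio weights are then obtained from \mu_{BL} and \Sigma in the usual mean-variance way.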

  19. Rare Earth Oxide Fluoride Nanoparticles And Hydrothermal Method For Forming Nanoparticles

    Science.gov (United States)

    Fulton, John L.; Hoffmann, Markus M.

    2003-12-23

    A hydrothermal method for forming nanoparticles of a rare earth element, oxygen and fluorine has been discovered. Nanoparticles comprising a rare earth element, oxygen and fluorine are also described. These nanoparticles can exhibit excellent refractory properties as well as remarkable stability in hydrothermal conditions. The nanoparticles can exhibit excellent properties for numerous applications including fiber reinforcement of ceramic composites, catalyst supports, and corrosion resistant coatings for high-temperature aqueous solutions.

  20. Rhythm-based heartbeat duration normalization for atrial fibrillation detection.

    Science.gov (United States)

    Islam, Md Saiful; Ammour, Nassim; Alajlan, Naif; Aboalsamh, Hatim

    2016-05-01

    Screening of atrial fibrillation (AF) for high-risk patients including all patients aged 65 years and older is important for prevention of risk of stroke. Different technologies such as modified blood pressure monitor, single lead ECG-based finger-probe, and smart phone using plethysmogram signal have been emerging for this purpose. All these technologies use irregularity of heartbeat duration as a feature for AF detection. We have investigated a normalization method of heartbeat duration for improved AF detection. AF is an arrhythmia in which heartbeat duration generally becomes irregularly irregular. From a window of heartbeat duration, we estimate the possible rhythm of the majority of heartbeats and normalize duration of all heartbeats in the window based on the rhythm so that we can measure the irregularity of heartbeats for both AF and non-AF rhythms in the same scale. Irregularity is measured by the entropy of distribution of the normalized duration. Then we classify a window of heartbeats as AF or non-AF by thresholding the measured irregularity. The effect of this normalization is evaluated by comparing AF detection performances using duration with the normalization, without normalization, and with other existing normalizations. Sensitivity and specificity of AF detection using normalized heartbeat duration were tested on two landmark databases available online and compared with results of other methods (with/without normalization) by receiver operating characteristic (ROC) curves. ROC analysis showed that the normalization was able to improve the performance of AF detection and it was consistent for a wide range of sensitivity and specificity for use of different thresholds. Detection accuracy was also computed for equal rates of sensitivity and specificity for different methods. Using normalized heartbeat duration, we obtained 96.38% accuracy which is more than 4% improvement compared to AF detection without normalization. The proposed normalization
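
    A much-simplified reading of the described pipeline is sketched below: normalize each RR interval in a window by the window's dominant rhythm (here just the median), measure irregularity as the entropy of the normalized distribution, and threshold it. The bin layout, threshold and example intervals are assumptions for illustration and do not reproduce the paper's estimator.

    import numpy as np

    def irregularity(rr_intervals, bins=8):
        rr = np.asarray(rr_intervals, dtype=float)
        normalized = rr / np.median(rr)              # rhythm-based normalization
        hist, _ = np.histogram(normalized, bins=bins, range=(0.5, 1.5))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))               # Shannon entropy in bits

    regular = [0.80, 0.82, 0.81, 0.79, 0.80, 0.81, 0.80, 0.82]
    irregular = [0.60, 1.10, 0.75, 0.95, 0.55, 1.25, 0.70, 1.00]
    threshold = 1.5                                  # bits, assumed
    for rr in (regular, irregular):
        s = irregularity(rr)
        print(f"entropy = {s:.2f} -> {'AF suspected' if s > threshold else 'non-AF'}")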

  1. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    Science.gov (United States)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into different elements by space points, and the mass and external forces of the suspension cable are lumped at the space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with the experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. This method has high computational precision and meets the requirements of engineering application.
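
    The time-integration kernel referred to here, explicit central differences on lumped "space points", can be illustrated with a single point mass hanging on an elastic segment; this toy problem stands in for the full VFIFE cable discretization, and all parameter values are assumed.

    # One lumped mass m on an elastic segment of stiffness k with light damping c,
    # integrated by the central difference method (toy stand-in for the cable model).
    m, k, c, g = 10.0, 4.0e3, 50.0, 9.81
    dt, steps = 1.0e-3, 2000

    x_prev = x = 0.0                       # displacement from the unstretched position
    for _ in range(steps):
        v = (x - x_prev) / dt              # backward-difference velocity for damping
        force = m * g - k * x - c * v      # external load + element internal force
        x_next = 2.0 * x - x_prev + (dt ** 2 / m) * force
        x_prev, x = x, x_next

    print(f"displacement after {steps * dt:.1f} s: {x:.4f} m "
          f"(static solution: {m * g / k:.4f} m)")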

  2. Plasma spraying method for forming diamond and diamond-like coatings

    Science.gov (United States)

    Holcombe, Cressie E.; Seals, Roland D.; Price, R. Eugene

    1997-01-01

    A method and composition for the deposition of a thick layer (10) of diamond or diamond-like material. The method includes high temperature processing wherein a selected composition (12) including at least glassy carbon is heated in a direct current plasma arc device to a selected temperature above the softening point, in an inert atmosphere, and is propelled onto a selected substrate (20), where it is quickly quenched. The softened or molten composition (18) crystallizes on the substrate (20) to form a thick deposition layer (10) comprising at least a diamond or diamond-like material. The selected composition (12) includes at least glassy carbon as a primary constituent (14) and may include at least one secondary constituent (16). Preferably, the secondary constituents (16) are selected from the group consisting of at least diamond powder, boron carbide (B₄C) powder and mixtures thereof.

  3. Normal radiographic findings. 4. act. ed.; Roentgennormalbefunde

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B. [Gemeinschaftspraxis fuer Radiologie und Nuklearmedizin, Dillingen (Germany)

    2003-07-01

    This book can serve the reader in three ways: First, it presents normal findings for all radiographic techniques, including contrast-enhanced (KM) studies. Important data which are criteria of normal findings are indicated directly in the pictures and are also explained in full text and in summary form. Secondly, it teaches the systematics of interpreting a picture - how to look at it, what structures to regard in what order, and what to look for in particular. Checklists are presented in each case. Thirdly, findings are formulated in accordance with the image analysis procedure. All criteria of normal findings are defined in these formulations, which makes them an important didactic element. (orig.)

  4. Analysis of the nonlinear dynamic behavior of power systems using normal forms of superior order; Analisis del comportamiento dinamico no lineal de sistemas de potencia usando formas normales de orden superior

    Energy Technology Data Exchange (ETDEWEB)

    Marinez Carrillo, Irma

    2003-08-01

    This thesis investigates the application of perturbation methods of analysis from nonlinear dynamic systems theory to the study of small-signal stability of electric power systems. The work is centered on two fundamental aspects of interest in the study of the nonlinear dynamic behavior of the system: the characterization and quantification of the degree of nonlinear interaction between the fundamental modes of oscillation of the system, and the study of the modes with the greatest influence on the response of the system to small disturbances. With these objectives, a general mathematical model, based on a power-series expansion of the nonlinear model of the power system and the theory of normal forms of vector fields, is proposed for the study of the dynamic behavior of the power system. The proposed tool generalizes the existing methods in the literature to consider higher-order effects in the dynamic model of the power system. Starting from this representation, a methodology is proposed to obtain closed-form analytical solutions, and the extension of the existing methods is investigated to identify and quantify the degree of interaction among the fundamental modes of oscillation of the system. The developed tool allows, from closed-form analytical expressions, the development of analytical measures to evaluate the degree of stress in the system, the interaction between the fundamental modes of oscillation, and the determination of stability boundaries. The conceptual development of the method proposed in this thesis also offers great flexibility to incorporate detailed models of the power system and to evaluate diverse measures of the nonlinear modal interaction. Finally, results are presented from the application of the proposed analysis method to the study of the nonlinear dynamic behavior of a single machine-infinite bus system considering different degrees of modeling detail
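
    The second-order building block that this line of work rests on can be stated compactly (generic notation, not the thesis's; the thesis extends the construction to higher order). With the system in modal coordinates,

    \[ \dot{y}_j = \lambda_j y_j + \sum_{k,l} C^{j}_{kl}\, y_k y_l , \]

    the near-identity change of variables

    \[ y_j = z_j + \sum_{k,l} h^{j}_{2,kl}\, z_k z_l , \qquad h^{j}_{2,kl} = \frac{C^{j}_{kl}}{\lambda_k + \lambda_l - \lambda_j} , \]

    removes the quadratic terms whenever no resonance \lambda_k + \lambda_l = \lambda_j occurs, leaving \dot{z}_j \approx \lambda_j z_j up to higher-order terms; the magnitude of the h_2 coefficients is one common measure of the nonlinear interaction between modes.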

  5. Advancing Normal Birth: Organizations, Goals, and Research

    OpenAIRE

    Hotelling, Barbara A.; Humenick, Sharron S.

    2005-01-01

    In this column, the support for advancing normal birth is summarized, based on a comparison of the goals of Healthy People 2010, Lamaze International, the Coalition for Improving Maternity Services, and the midwifery model of care. Research abstracts are presented to provide evidence that the midwifery model of care safely and economically advances normal birth. Rates of intervention experienced, as reported in the Listening to Mothers survey, are compared to the forms of care recommended by ...

  6. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    Science.gov (United States)

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-score and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization methods, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the normalization methods, and it reached its minimum fitting time when files were normalized to the unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
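
    The three normalization methods compared in the abstract can be reproduced schematically with scikit-learn pipelines; the synthetic data below merely stands in for the TCGA-COAD expression matrix, so the accuracies printed have no relation to the 78% reported above.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
    from sklearn.svm import SVC

    # Synthetic stand-in for a samples-by-genes expression matrix
    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    preprocessors = {
        "scaling (min-max)": MinMaxScaler(),
        "standardizing (z-score)": StandardScaler(),
        "vector normalization (unit length)": Normalizer(),
    }
    for name, prep in preprocessors.items():
        model = make_pipeline(prep, SVC(kernel="rbf"))
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:36s} accuracy = {acc:.3f}")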

  7. Normalized Index of Synergy for Evaluating the Coordination of Motor Commands

    Science.gov (United States)

    Togo, Shunta; Imamizu, Hiroshi

    2015-01-01

    Humans perform various motor tasks by coordinating the redundant motor elements in their bodies. The coordination of motor outputs is produced by motor commands, as well properties of the musculoskeletal system. The aim of this study was to dissociate the coordination of motor commands from motor outputs. First, we conducted simulation experiments where the total elbow torque was generated by a model of a simple human right and left elbow with redundant muscles. The results demonstrated that muscle tension with signal-dependent noise formed a coordinated structure of trial-to-trial variability of muscle tension. Therefore, the removal of signal-dependent noise effects was required to evaluate the coordination of motor commands. We proposed a method to evaluate the coordination of motor commands, which removed signal-dependent noise from the measured variability of muscle tension. We used uncontrolled manifold analysis to calculate a normalized index of synergy. Simulation experiments confirmed that the proposed method could appropriately represent the coordinated structure of the variability of motor commands. We also conducted experiments in which subjects performed the same task as in the simulation experiments. The normalized index of synergy revealed that the subjects coordinated their motor commands to achieve the task. Finally, the normalized index of synergy was applied to a motor learning task to determine the utility of the proposed method. We hypothesized that a large part of the change in the coordination of motor outputs through learning was because of changes in motor commands. In a motor learning task, subjects tracked a target trajectory of the total torque. The change in the coordination of muscle tension through learning was dominated by that of motor commands, which supported the hypothesis. We conclude that the normalized index of synergy can be used to evaluate the coordination of motor commands independently from the properties of the
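
    The conventional uncontrolled manifold (UCM) decomposition underlying such an index can be sketched for a two-element task (left/right elbow torques whose sum must match a target). The sketch below computes only the standard synergy index from the variance parallel and orthogonal to the UCM; the paper's specific correction for signal-dependent noise, which defines its normalized index, is not reproduced, and the data are simulated.

    import numpy as np

    rng = np.random.default_rng(2)
    target = 10.0
    t_left = 5.0 + rng.normal(0.0, 1.0, 500)
    t_right = target - t_left + rng.normal(0.0, 0.2, 500)   # compensating variability
    torques = np.column_stack([t_left, t_right])

    J = np.array([[1.0, 1.0]])                 # task variable: total torque
    _, _, vt = np.linalg.svd(J)
    ort_basis = vt[:1].T                       # task-relevant direction (1 dof)
    ucm_basis = vt[1:].T                       # null space of J (1 dof)

    dev = torques - torques.mean(axis=0)
    v_ucm = np.mean((dev @ ucm_basis) ** 2) / ucm_basis.shape[1]
    v_ort = np.mean((dev @ ort_basis) ** 2) / ort_basis.shape[1]
    v_tot = np.mean(np.sum(dev ** 2, axis=1)) / dev.shape[1]
    delta_v = (v_ucm - v_ort) / v_tot          # > 0: torque-stabilizing coordination
    print(f"V_ucm = {v_ucm:.3f}, V_ort = {v_ort:.3f}, synergy index = {delta_v:.2f}")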

  8. Communication between hearing impaired and normal hearing students: a facilitative proposal of learning in higher education

    Directory of Open Access Journals (Sweden)

    Krysne Kelly de França Oliveira

    2014-09-01

    Full Text Available Introduction: There has been an increase in the number of hearing impaired people with access to higher education. Most of them are young people from a different culture who present difficulties in communication, inter-relationship, and learning in a culture of normal hearing people, because they use a different language, the Brazilian Sign Language - LIBRAS. Objective: The present study aimed to identify the forms of communication used between hearing impaired and normal hearing students, verifying how they can interfere with the learning process of the former. Methods: A qualitative study that used the space of a private university in the city of Fortaleza, Ceará state, Brazil, from February to April 2009. We carried out semi-structured interviews with three hearing impaired students, three teachers, three interpreters, and three normal hearing students. The content of the speeches was categorized and organized by the method of thematic analysis. Results: We verified that the forms of communication used ranged from mime and gestures to writing and drawing, but the most accepted by the hearing impaired students was LIBRAS. As a method of communication, it supports the learning of hearing impaired students, and with the mediation of interpreters, it gives them conditions to settle in their zones of development, according to the precepts of Vygotsky. Conclusion: Thus, we recognize the importance of LIBRAS as the predominant language, essential to the full academic achievement of hearing impaired students; however, their efforts and dedication, as well as the interest of institutions and teachers in deaf culture, are also important for preparing future professionals.

  9. Development and Validation of a UV Spectrophotometric and a RP-HPLC Methods for Moexipril Hydrochloride in Pure Form and Pharmaceutical Dosage Form

    International Nuclear Information System (INIS)

    Mastiholimath, V.S.; Gupte, P.P.; Mannur, V.S.

    2012-01-01

    Simple and reliable UV spectrophotometric and high-performance liquid chromatography (HPLC) methods were developed and validated for Moexipril hydrochloride in pure form and pharmaceutical dosage form. The RP-HPLC method was developed on an Agilent Eclipse C18 column (150 mm x 4.6 mm, 5 μm) with a mobile phase gradient system of 60% (methanol:acetonitrile (70:30% v/v)) : 40% 20 mM ammonium acetate buffer pH 4.5 (v/v), and the UV spectrophotometric method was developed in phosphate buffer pH 6.8. The effluent was monitored by an SPD-M20A Prominence PDA detector at 210 nm. The calibration curves were linear over the concentration ranges of 10-35 μg/ml and 1-9 μg/ml for RP-HPLC and UV, respectively, with a regression coefficient of 0.999. For the RP-HPLC method, inter-day and intra-day precision (% RSD) values were found to be 1.00078% and 1.49408%, respectively. For the UV method, inter-day precision ranged from 0.73386% to 1.44111% and intra-day precision from 0.453864% to 1.15542%. Recovery of Moexipril hydrochloride was found to be in the range of 99.8538% to 101.5614% and 100.5297586% to 100.6431587% for UV and RP-HPLC, respectively. The limits of detection (LOD) and quantification (LOQ) for HPLC were 0.98969 and 2.99907 μg/ml, respectively. The developed RP-HPLC and UV spectrophotometric methods were successfully applied for the quantitative determination of Moexipril hydrochloride in pharmaceutical dosage form. (author)

  10. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    Directory of Open Access Journals (Sweden)

    Babak Mehmandoust

    2014-03-01

    Full Text Available The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of the normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances over a wide boiling range (20.3–722 K).

  11. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

    Full Text Available A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of both Probit and Logit: a flexible form for the correlation among alternatives and a tractable closed-form expression; in addition, heterogeneity in alternative variance can also be addressed. Paths are compared in pairs, where the superiority of the binary Probit can be fully exploited. The Probit-based aggregation is also used for a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution to incorporate the normal distribution in route choice with an analytical expression.

  12. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
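
    A common ingredient of such normalization schemes is the fully normalized associated Legendre function, in which the unnormalized P_nm is multiplied by sqrt((2 - delta_m0)(2n + 1)(n - m)!/(n + m)!). The sketch below evaluates that conventional geodesy-style normalization with SciPy; it illustrates the normalization factor only, not the Pines, Lear or Gottlieb recursions discussed in the paper, and sign conventions for the Condon-Shortley phase may differ between libraries.

      import numpy as np
      from math import factorial
      from scipy.special import lpmn

      def fully_normalized_alf(n_max, x):
          """Fully normalized associated Legendre functions Pbar_nm(x) up to
          degree and order n_max, using the common geodesy convention
          N_nm = sqrt((2 - delta_m0) * (2n + 1) * (n - m)! / (n + m)!).
          Note: SciPy's lpmn includes the Condon-Shortley phase, so signs may
          differ from conventions that omit it."""
          p, _ = lpmn(n_max, n_max, x)          # p[m, n] = P_n^m(x), unnormalized
          pbar = np.zeros_like(p)
          for n in range(n_max + 1):
              for m in range(n + 1):
                  k = 1.0 if m == 0 else 2.0
                  pbar[m, n] = p[m, n] * np.sqrt(
                      k * (2 * n + 1) * factorial(n - m) / factorial(n + m))
          return pbar

      # Example: normalized ALFs evaluated at the sine of 45 degrees latitude.
      print(fully_normalized_alf(4, float(np.sin(np.radians(45.0)))))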

  13. Organizational forms and knowledge absorption

    Directory of Open Access Journals (Sweden)

    Radovanović Nikola

    2016-01-01

    Full Text Available Managing the entire portion of knowledge in an organization is a challenging task. At the organizational level, there can be enormous quantities of unknown, poorly valued or inefficiently applied knowledge. This is normally accompanied by an underdeveloped potential or inability of organizations to absorb knowledge from external sources. Facilitation of the efficient internal flow of knowledge within the established communication network may positively affect organizational capacity to absorb, or identify, share and subsequently apply, knowledge to commercial ends. Based on the evidence that the adoption of different organizational forms affects knowledge flows within an organization, this research analyzed the relationship between common organizational forms and the absorptive capacity of organizations. In this paper, we test the hypothesis stating that the organizational structure affects knowledge absorption and exploitation in the organization. The methodology included quantitative and qualitative research methods based on a questionnaire; the data were statistically analysed and the hypothesis was tested with the use of cross-tabulation and chi-square tests. The findings suggest that the type of organizational form affects knowledge absorption capacity and that having a less formalized and more flexible structure in an organization increases the opportunities for absorbing and exploiting potentially valuable knowledge.

  14. Normal mode analysis and applications in biological physics.

    Science.gov (United States)

    Dykeman, Eric C; Sankey, Otto F

    2010-10-27

    Normal mode analysis has become a popular and often used theoretical tool in the study of functional motions in enzymes, viruses, and large protein assemblies. The use of normal modes in the study of these motions is often extremely fruitful since many of the functional motions of large proteins can be described using just a few normal modes which are intimately related to the overall structure of the protein. In this review, we present a broad overview of several popular methods used in the study of normal modes in biological physics including continuum elastic theory, the elastic network model, and a new all-atom method, recently developed, which is capable of computing a subset of the low frequency vibrational modes exactly. After a review of the various methods, we present several examples of applications of normal modes in the study of functional motions, with an emphasis on viral capsids.
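
    Of the methods surveyed here, the elastic network model is the easiest to sketch: pseudo-atoms within a cutoff are connected by identical springs, a 3N x 3N Hessian is assembled from pairwise super-elements, and its low-frequency eigenvectors approximate the functional motions. The code below is a generic anisotropic-network illustration on a toy helix, not the all-atom method described in the review; the cutoff, spring constant and coordinates are arbitrary example values.

      import numpy as np

      def anisotropic_network_modes(coords, cutoff=15.0, gamma=1.0):
          """Normal modes of an elastic network (anisotropic network model):
          coords is an (N x 3) array of pseudo-atom positions, with uniform
          springs inside the cutoff.  Returns eigenvalues/eigenvectors of the
          3N x 3N Hessian; the six zero modes are rigid-body motions."""
          n = len(coords)
          hess = np.zeros((3 * n, 3 * n))
          for i in range(n):
              for j in range(i + 1, n):
                  d = coords[j] - coords[i]
                  r2 = d @ d
                  if r2 > cutoff ** 2:
                      continue
                  block = -gamma * np.outer(d, d) / r2   # off-diagonal super-element
                  hess[3*i:3*i+3, 3*j:3*j+3] = block
                  hess[3*j:3*j+3, 3*i:3*i+3] = block
                  hess[3*i:3*i+3, 3*i:3*i+3] -= block    # diagonal accumulates the sum
                  hess[3*j:3*j+3, 3*j:3*j+3] -= block
          return np.linalg.eigh(hess)

      # Toy "protein": 20 pseudo-atoms placed on a helix.
      t = np.linspace(0, 4 * np.pi, 20)
      points = np.column_stack([4 * np.cos(t), 4 * np.sin(t), 1.5 * t])
      w, v = anisotropic_network_modes(points)
      print("lowest non-trivial eigenvalues:", np.round(w[6:12], 4))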

  15. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Full Text Available Three new simple, economical spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method involved determination of nabumetone at its absorption maximum of 330 nm, the second used the area under the curve in the wavelength range of 326-334 nm, and the third used first-order derivative spectra with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for the absorption maximum, area under the curve and first-order derivative methods, respectively. Results of analysis were validated statistically and by performing recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.

  16. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles...... with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...
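
    The first-order reliability method itself is compact enough to sketch: in standard normal space, the Hasofer-Lind/Rackwitz-Fiessler iteration searches for the most probable failure point, and the failure probability is approximated as Phi(-beta), with beta the distance of that point from the origin. The code below applies the generic iteration to a simple linear limit state; it is not the pultrusion model of the abstract, and the limit state and starting point are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def form_hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
          """First-order reliability method via the Hasofer-Lind/Rackwitz-Fiessler
          iteration in standard normal space u.  Returns the reliability index
          beta and the FORM failure probability Phi(-beta)."""
          u = np.asarray(u0, dtype=float)
          for _ in range(max_iter):
              gu, grad = g(u), grad_g(u)
              u_new = (grad @ u - gu) / (grad @ grad) * grad
              if np.linalg.norm(u_new - u) < tol:
                  u = u_new
                  break
              u = u_new
          beta = np.linalg.norm(u)
          return beta, norm.cdf(-beta)

      # Illustrative limit state g(u) = 3 - u1 - u2 (failure when g < 0).
      g = lambda u: 3.0 - u[0] - u[1]
      grad_g = lambda u: np.array([-1.0, -1.0])
      beta, pf = form_hlrf(g, grad_g, u0=[0.0, 0.0])
      print(f"beta = {beta:.3f}, Pf = {pf:.3e}")   # beta = 3/sqrt(2), about 2.121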

  17. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    Science.gov (United States)

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
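
    Maximum pseudolikelihood estimation for a fully visible Boltzmann machine replaces the intractable likelihood with the product of the one-node conditionals, each of which is a logistic function of the local field. The sketch below directly minimizes the negative log-pseudolikelihood with SciPy on synthetic ±1 data; it is a generic illustration of the estimator, not the asymptotic analysis of the brief, and the data, dimensions and optimizer settings are arbitrary.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_pseudolikelihood(theta, X):
          """Negative log-pseudolikelihood of a fully visible Boltzmann machine
          with spins x in {-1,+1}^d, symmetric weights W (zero diagonal), bias b."""
          n, d = X.shape
          W = theta[: d * d].reshape(d, d)
          W = (W + W.T) / 2.0
          np.fill_diagonal(W, 0.0)
          b = theta[d * d:]
          field = X @ W + b                      # local field at each node
          # log P(x_i | x_-i) = -log(1 + exp(-2 * x_i * field_i))
          return np.sum(np.logaddexp(0.0, -2.0 * X * field)) / n

      # Synthetic +-1 data (illustration only; no true couplings are recovered here).
      rng = np.random.default_rng(2)
      d, n = 4, 5000
      X = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)

      theta0 = np.zeros(d * d + d)
      res = minimize(neg_log_pseudolikelihood, theta0, args=(X,), method="L-BFGS-B")
      print("estimated biases:", np.round(res.x[d * d:], 3))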

  18. Derivative spectrophotometric method for simultaneous determination of clindamycin phosphate and tretinoin in pharmaceutical dosage forms.

    Science.gov (United States)

    Barazandeh Tehrani, Maliheh; Namadchian, Melika; Fadaye Vatan, Sedigheh; Souri, Effat

    2013-04-10

    A derivative spectrophotometric method was proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first and second derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and of tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both first and second derivative orders in the ranges of 60-1200 and 1.25-25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were within the acceptable range, and the method was successfully applied to the determination of both drugs in their pharmaceutical dosage form.

  19. An improved method for sacro-iliac joint imaging: a study of normal subjects, patients with sacro-iliitis and patients with low back pain

    International Nuclear Information System (INIS)

    Ayres, J.; Hilson, A.J.W.; Maisey, M.N.; Laurent, R.; Panayi, G.S.; Saunders, A.J.

    1981-01-01

    A new method is described for quantitative measurement of the uptake of 99mTc-methylene diphosphonate (MDP) by the sacro-iliac joints. The method uses 'regions of interest', providing advantages over the previously described 'slice' method; the two methods are compared in normal subjects, patients with known sacro-iliitis and patients with low back pain. Sacro-iliac activity, as calculated by the sacro-iliac index (SII) in normal patients, was shown to decrease with age in females but not in males. The SII was compared with radiographs of the sacro-iliac joints in the patients with known sacro-iliac joint disease and in those with low back pain. The method is useful for the exclusion of sacro-iliitis as a specific cause of back pain. (author)

  20. Medically-enhanced normality

    DEFF Research Database (Denmark)

    Møldrup, Claus; Traulsen, Janine Morgall; Almarsdóttir, Anna Birna

    2003-01-01

    Objective: To consider public perspectives on the use of medicines for non-medical purposes, a usage called medically-enhanced normality (MEN). Method: Examples from the literature were combined with empirical data derived from two Danish research projects: a Delphi internet study and a Telebus...

  1. Influences of rolling method on deformation force in cold roll-beating forming process

    Science.gov (United States)

    Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan

    2018-03-01

    The gear rack was selected as the research object to study the influence of the rolling method on the deformation force in the cold roll-beating forming process. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation forces was analysed under different rolling methods, both for the complete forming of the rack and for a single roll during the steady state. The results show that with up-beating and down-beating the radial single-point average forces are similar, whereas the gap between the tangential single-point average forces is relatively large. Additionally, the tangential force in direct beating is large, and its direction is opposite to that in down-beating. With direct beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads quickly.

  2. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as this algorithm can describe the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
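
    In the linear version, the MAD variates are the differences of the canonical variates of the two images, and pixels whose standardized MADs are small (under a chi-square criterion) are treated as unchanged and used to fit the normalization. The sketch below shows that linear CCA-based step on synthetic data; the kernelized (KCCA) extension studied in the paper would replace the CCA by a kernel variant, which scikit-learn does not provide out of the box, and all data and parameters here are toy assumptions.

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from scipy.stats import chi2

      def mad_change_weights(X, Y, n_components):
          """Linear (CCA-based) MAD sketch: X and Y are (pixels x bands) arrays from
          the reference and subject image.  Returns per-pixel no-change probabilities
          that can weight the regression used for radiometric normalization."""
          U, V = CCA(n_components=n_components).fit_transform(X, Y)
          U = (U - U.mean(0)) / U.std(0)             # unit-variance canonical variates
          V = (V - V.mean(0)) / V.std(0)
          mads = U - V                               # MAD variates
          rho = np.array([np.corrcoef(U[:, i], V[:, i])[0, 1]
                          for i in range(n_components)])
          sigma = np.sqrt(2.0 * (1.0 - rho))         # no-change std of each MAD variate
          chi_sq = np.sum((mads / sigma) ** 2, axis=1)
          return 1.0 - chi2.cdf(chi_sq, df=n_components)

      # Toy example: two 4-band "images" related by a gain/offset; the last 200
      # pixels get extra "change", so their no-change weight should drop.
      rng = np.random.default_rng(3)
      X = rng.normal(size=(2000, 4))
      Y = 1.3 * X + 0.2 + rng.normal(scale=0.1, size=X.shape)
      Y[-200:] += rng.normal(scale=1.0, size=(200, 4))
      w = mad_change_weights(X, Y, n_components=4)
      print("mean weight unchanged:", w[:-200].mean().round(2),
            " changed:", w[-200:].mean().round(2))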

  3. Method of forming capsules containing a precise amount of material

    Science.gov (United States)

    Grossman, M.W.; George, W.A.; Maya, J.

    1986-06-24

    A method of forming a sealed capsule containing a submilligram quantity of mercury or the like, the capsule being constructed from a hollow glass tube, by placing a globule or droplet of the mercury in the tube. The tube is then evacuated and sealed and is subsequently heated so as to vaporize the mercury and fill the tube therewith. The tube is then separated into separate sealed capsules by heating spaced locations along the tube with a coiled heating wire means, causing collapse at those spaced locations and thus enabling separation of the tube into said capsules. 7 figs.

  4. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A [Duke University Medical Center, Durham, NC, Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States)

    2015-06-15

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data was collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample’s spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times whilst sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded

  5. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. The cardiovascular system, in the current study, is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system. In the current study we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including a resistor, capacitor, and inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. Firstly, the normal operation of the system is shown, and then stenosis of the cerebral arteries is induced in the circuit and the effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with the clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of its pathologies. PMID:25505747
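
    In this electrical analogy each arterial segment becomes an RLC stage: resistance for viscous loss, inductance for blood inertia and capacitance for vessel compliance, with pressures playing the role of voltages and flows of currents. The sketch below integrates a single such segment in Python rather than Simulink; the parameter values and the inlet waveform are illustrative assumptions, not values from the 63-compartment model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # One arterial segment in the electrical analogy: resistance R (viscous loss),
      # inductance L (blood inertia), capacitance C (vessel compliance), feeding a
      # distal resistance R_d.  Pressures ~ voltages, flows ~ currents.
      R, L, C, R_d = 0.05, 0.005, 0.4, 1.0     # illustrative units (mmHg, s, mL)

      def inlet_pressure(t):
          """Hypothetical pulsatile inlet pressure (crude ventricular waveform)."""
          return 80.0 + 40.0 * np.maximum(np.sin(2 * np.pi * t), 0.0)

      def segment(t, y):
          q, p = y                              # q: segment flow, p: distal pressure
          dq = (inlet_pressure(t) - p - R * q) / L
          dp = (q - p / R_d) / C
          return [dq, dp]

      sol = solve_ivp(segment, (0.0, 5.0), y0=[0.0, 80.0], max_step=1e-3)
      late = sol.t > 4.0
      print("distal pressure range over the last beat:",
            round(sol.y[1][late].min(), 1), "-", round(sol.y[1][late].max(), 1), "mmHg")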

  6. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses. Review

    Energy Technology Data Exchange (ETDEWEB)

    Skrzypek, G. [West Australian Biogeochemistry Centre, John de Laeter Centre of Mass Spectrometry, School of Plant Biology, University of Western Australia, Crawley (Australia); Sadler, R. [School of Agricultural and Resource Economics, University of Western Australia, Crawley (Australia); Paul, D. [Department of Civil Engineering (Geosciences), Indian Institute of Technology Kanpur, Kanpur (India); Forizs, I. [Institute for Geochemical Research, Hungarian Academy of Sciences, Budapest (Hungary)

    2013-07-15

    Stable isotope ratio mass spectrometers are highly precise, but not accurate instruments. Therefore, results have to be normalized to one of the isotope scales (e.g., VSMOW, VPDB) based on well calibrated reference materials. The selection of reference materials, numbers of replicates, δ-values of these reference materials and normalization technique have been identified as crucial in determining the uncertainty associated with the final results. The most common normalization techniques and reference materials have been tested using both Monte Carlo simulations and laboratory experiments to investigate aspects of error propagation during the normalization of isotope data. The range of observed differences justifies the need to employ the same sets of standards worldwide for each element and each stable isotope analytical technique. (author)
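
    The most common of the normalization techniques compared in such studies is the two-point (shift-and-stretch) normalization, in which raw δ-values are mapped onto the scale by a straight line anchored at two reference materials. The sketch below shows that calculation; the accepted δ18O values of VSMOW2 and SLAP2 are the standard ones, but the raw measurements and sample values are invented for the example.

      import numpy as np

      def two_point_normalization(delta_measured, ref_measured, ref_true):
          """Shift-and-stretch (two-point) normalization of raw delta values to an
          isotope scale (e.g. VSMOW), using two reference materials with accepted
          values ref_true and measured values ref_measured."""
          slope = (ref_true[1] - ref_true[0]) / (ref_measured[1] - ref_measured[0])
          intercept = ref_true[0] - slope * ref_measured[0]
          return slope * np.asarray(delta_measured) + intercept

      # Example with the accepted delta-18O values of VSMOW2 (0.0) and SLAP2 (-55.5)
      # and hypothetical raw measurements of those standards.
      ref_true = (0.0, -55.5)
      ref_measured = (0.3, -54.1)
      samples_raw = [-10.2, -25.7, -3.4]
      print(np.round(two_point_normalization(samples_raw, ref_measured, ref_true), 2))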

  7. Intrahepatic and hilar mass-forming cholangiocarcinoma: Qualitative and quantitative evaluation with diffusion-weighted MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fattach, Hassan El, E-mail: hassangreenmed@gmail.com [Department of Abdominal Imaging, Hôpital Lariboisière, Assistance Publique-Hôpitaux de Paris, 2 rue Ambroise Paré, 75010 Paris (France); Dohan, Anthony, E-mail: anthony.dohan@lrb.aphp.fr [Department of Abdominal Imaging, Hôpital Lariboisière, Assistance Publique-Hôpitaux de Paris, 2 rue Ambroise Paré, 75010 Paris (France); Université Paris-Diderot, Sorbonne Paris Cité, 10 Avenue de Verdun, 75010 Paris (France); UMR INSERM 965-Paris 7 “Angiogenèse et recherche translationnelle”, 2 rue Amboise Paré, 75010 Paris (France); Guerrache, Youcef, E-mail: docyoucef05@yahoo.fr [Department of Abdominal Imaging, Hôpital Lariboisière, Assistance Publique-Hôpitaux de Paris, 2 rue Ambroise Paré, 75010 Paris (France); Dautry, Raphael, E-mail: raphael.dautry@lrb.aphp.fr [Department of Abdominal Imaging, Hôpital Lariboisière, Assistance Publique-Hôpitaux de Paris, 2 rue Ambroise Paré, 75010 Paris (France); Université Paris-Diderot, Sorbonne Paris Cité, 10 Avenue de Verdun, 75010 Paris (France); and others

    2015-08-15

    Highlights: • DW-MR imaging helps depict all intrahepatic or hilar mass-forming cholangiocarcinomas. • DW-MRI provides better conspicuity of intrahepatic or hilar mass-forming cholangiocarcinomas than the other MRI sequences (P < 0.001). • The use of normalized ADC using the liver as reference organ results in the most restricted distribution of ADC values of intrahepatic or hilar mass-forming cholangiocarcinomas (variation coefficient = 16.6%). - Abstract: Objective: To qualitatively and quantitatively analyze the presentation of intrahepatic and hilar mass-forming cholangiocarcinoma with diffusion-weighted magnetic resonance imaging (DW-MRI). Materials and methods: Twenty-eight patients with histopathologically proven mass-forming cholangiocarcinoma (hilar, n = 17; intrahepatic, n = 11) underwent hepatic DW-MRI at 1.5-T using free-breathing acquisition and three b-values (0, 400, 800 s/mm²). Cholangiocarcinomas were evaluated qualitatively using visual analysis of DW-MR images and quantitatively with conventional ADC and normalized ADC measurements using liver and spleen as reference organs. Results: All cholangiocarcinomas (28/28; 100%) were visible on DW-MR images. DW-MRI yielded better conspicuity of cholangiocarcinomas than the other MRI sequences (P < 0.001). Seven cholangiocarcinomas (7/11; 64%) showed a hypointense central area on DW-MR images. The conventional ADC value of cholangiocarcinomas (1.042 × 10⁻³ mm²/s ± 0.221 × 10⁻³ mm²/s; range: 0.616 × 10⁻³ mm²/s to 2.050 × 10⁻³ mm²/s) was significantly lower than that of apparently normal hepatic parenchyma (1.362 × 10⁻³ mm²/s ± 0.187 × 10⁻³ mm²/s) (P < 0.0001), although substantial overlap was found. No significant differences in ADC and normalized ADC values were found between intrahepatic and hilar cholangiocarcinomas. The use of normalized ADC using the liver as reference organ resulted in the most restricted

  8. Intrahepatic and hilar mass-forming cholangiocarcinoma: Qualitative and quantitative evaluation with diffusion-weighted MR imaging

    International Nuclear Information System (INIS)

    Fattach, Hassan El; Dohan, Anthony; Guerrache, Youcef; Dautry, Raphael

    2015-01-01

    Highlights: • DW-MR imaging helps depict all intrahepatic or hilar mass-forming cholangiocarcinomas. • DW-MRI provides better conspicuity of intrahepatic or hilar mass-forming cholangiocarcinomas than the other MRI sequences (P < 0.001). • The use of normalized ADC using the liver as reference organ results in the most restricted distribution of ADC values of intrahepatic or hilar mass-forming cholangiocarcinomas (variation coefficient = 16.6%). - Abstract: Objective: To qualitatively and quantitatively analyze the presentation of intrahepatic and hilar mass-forming cholangiocarcinoma with diffusion-weighted magnetic resonance imaging (DW-MRI). Materials and methods: Twenty-eight patients with histopathologically proven mass-forming cholangiocarcinoma (hilar, n = 17; intrahepatic, n = 11) underwent hepatic DW-MRI at 1.5-T using free-breathing acquisition and three b-values (0, 400, 800 s/mm²). Cholangiocarcinomas were evaluated qualitatively using visual analysis of DW-MR images and quantitatively with conventional ADC and normalized ADC measurements using liver and spleen as reference organs. Results: All cholangiocarcinomas (28/28; 100%) were visible on DW-MR images. DW-MRI yielded better conspicuity of cholangiocarcinomas than the other MRI sequences (P < 0.001). Seven cholangiocarcinomas (7/11; 64%) showed a hypointense central area on DW-MR images. The conventional ADC value of cholangiocarcinomas (1.042 × 10⁻³ mm²/s ± 0.221 × 10⁻³ mm²/s; range: 0.616 × 10⁻³ mm²/s to 2.050 × 10⁻³ mm²/s) was significantly lower than that of apparently normal hepatic parenchyma (1.362 × 10⁻³ mm²/s ± 0.187 × 10⁻³ mm²/s) (P < 0.0001), although substantial overlap was found. No significant differences in ADC and normalized ADC values were found between intrahepatic and hilar cholangiocarcinomas. The use of normalized ADC using the liver as reference organ resulted in the most restricted distribution of ADC values of cholangiocarcinomas (variation

  9. STATISTICAL STUDY OF THE NUMBER OF RESULTING RULES WHEN TRANSFORMING A CONTEXT-FREE GRAMMAR TO CHOMSKY NORMAL FORM

    Directory of Open Access Journals (Sweden)

    Fredy Ángel Miguel Amaya Robayo

    2010-08-01

    Full Text Available It is well known that any context-free grammar can be transformed to Chomsky normal form so that the languages generated by the two grammars are equivalent. A grammar in Chomsky normal form (CNF) has some advantages: its derivation trees are binary, its rules are simpler, and so on. It is therefore always desirable to work with a grammar in CNF in applications that require it. There is an algorithm that transforms a context-free grammar into a CNF grammar; however, the number of rules generated by the transformation depends on the number of rules in the initial grammar as well as on other characteristics. In this work we analyse, from an experimental and statistical point of view, the relationship between the number of initial rules and the number of rules that result from transforming a context-free grammar to CNF. This allows planning of the amount of computational resources needed when dealing with grammars of some complexity.
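
    The rule growth being studied comes mainly from two steps of the classical conversion: TERM (introducing a nonterminal for every terminal appearing in a long right-hand side) and BIN (splitting right-hand sides longer than two symbols). The sketch below implements only those two steps and counts rules before and after; it assumes the input grammar has no epsilon- or unit-productions, uses a simplified string representation, and is not the specific algorithm analysed in the article.

      from itertools import count

      def to_cnf(rules):
          """Partial conversion of a context-free grammar to Chomsky normal form
          (TERM and BIN steps only; assumes no epsilon- or unit-productions).
          rules: {lhs: [tuple of rhs symbols, ...]}; lowercase strings are terminals."""
          fresh = count(1)
          cnf = {}

          def add(lhs, *rhs):
              cnf.setdefault(lhs, []).append(tuple(rhs))

          term_map = {}
          def lift(sym):                      # TERM: replace terminal a by T_a -> a
              if sym.islower():
                  if sym not in term_map:
                      term_map[sym] = "T_" + sym
                      add(term_map[sym], sym)
                  return term_map[sym]
              return sym

          for lhs, bodies in rules.items():
              for body in bodies:
                  if len(body) == 1:          # A -> a is already in CNF
                      add(lhs, body[0])
                      continue
                  syms = [lift(s) for s in body]
                  head = lhs
                  while len(syms) > 2:        # BIN: A -> B C rest  =>  A -> B X, X -> C rest
                      new = f"X{next(fresh)}"
                      add(head, syms[0], new)
                      head, syms = new, syms[1:]
                  add(head, *syms)
          return cnf

      g = {"S": [("a", "S", "b"), ("a", "b")]}    # toy grammar for a^n b^n, n >= 1
      cnf = to_cnf(g)
      print("rules before:", sum(len(v) for v in g.values()),
            " after:", sum(len(v) for v in cnf.values()))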

  10. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
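
    In such a factored scheme the normalization coefficient of each line of response is a product of component terms (crystal efficiencies, geometric response, block effects, deadtime), and only the effects not already modelled in the projector are kept in it. The sketch below composes such a product for a toy detector ring; the factor models and numbers are invented for illustration and are not the maximum-likelihood component estimates described in the abstract.

      import numpy as np

      rng = np.random.default_rng(4)
      n_det = 64                                   # toy ring of 64 detectors

      # Component factors (illustrative values, not from a real scanner):
      eps = rng.normal(1.0, 0.05, n_det)           # individual crystal efficiencies
      def geometric_factor(i, j):                  # depends only on detector separation
          return 1.0 / (1.0 + 0.002 * min((i - j) % n_det, (j - i) % n_det))
      block_deadtime = rng.uniform(0.95, 1.0, n_det // 8)   # one factor per block

      def norm_factor(i, j):
          """Factored normalization coefficient for the line of response (i, j):
          crystal efficiencies x geometric response x block deadtime."""
          return (eps[i] * eps[j] * geometric_factor(i, j)
                  * block_deadtime[i // 8] * block_deadtime[j // 8])

      # In a model-based reconstruction the forward model becomes
      #   expected counts = norm_factor * (geometric projection of the image),
      # so only effects *not* already in the projector go into norm_factor.
      print(round(norm_factor(3, 40), 4))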

  11. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    Science.gov (United States)

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best eye uncorrected visual acuity 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light and CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. The index of contrast sensitivity was calculated based on data from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed with a marked ceiling effect for the photopic tests. The median ICS for Optec 6500 at 85 cd/m² was -0.15 (95th percentile 0.45), compared with -0.00 (95th percentile 1.62) for Optec at 3 cd/m² and 0.30 (95th percentile 1.20) for the CSV-1000E. The mean difference between the ICS for FACT at 85 cd/m² and the ICS for the CSV-1000E was -0.43 (95% CI -0.56 to -0.30, p<0.00) with limits of agreement (LoA) within -2.10 and 1.22. The regression line of the differences on the averages was near zero (R² = 0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore possibly be useful for medical selection purposes.

  12. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    Full Text Available In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research work, in which we presented the functional dependencies and normal forms of XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  13. A novel method for spectrophotometric determination of pregabalin in pure form and in capsules

    Directory of Open Access Journals (Sweden)

    Gaur Prateek

    2011-10-01

    Full Text Available Abstract Background Pregabalin, a γ-amino-n-butyric acid derivative, is an antiepileptic drug not yet official in any pharmacopeia, and the development of analytical procedures for this drug in bulk/formulation forms is a necessity. We herein report a new, simple, extraction-free, cost-effective, sensitive and reproducible spectrophotometric method for the determination of pregabalin. Results Pregabalin, as a primary amine, was reacted with ninhydrin in phosphate buffer pH 7.4 to form a blue-violet colored chromogen which could be measured spectrophotometrically at λmax 402.6 nm. The method was validated with respect to linearity, accuracy, precision and robustness. The method showed linearity in a wide concentration range of 50-1000 μg mL-1 with a good correlation coefficient (0.992). The limit of detection was found to be 6.0 μg mL-1 and the limit of quantitation was 20.0 μg mL-1. The suggested method was applied to the determination of the drug in capsules. No interference could be observed from the additives in the capsules. The percentage recovery was found to be 100.43 ± 1.24. Conclusion The developed method was successfully validated and applied to the determination of pregabalin in bulk and pharmaceutical formulations without any interference from common excipients. Hence, this method can be potentially useful for routine laboratory analysis of pregabalin.

  14. Manufacturing technology for practical Josephson voltage normals; Fertigungstechnologie fuer praxistaugliche Josephson-Spannungsnormale

    Energy Technology Data Exchange (ETDEWEB)

    Kohlmann, Johannes; Kieler, Oliver [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany). Arbeitsgruppe 2.43 ' ' Josephson-Schaltungen' '

    2016-09-15

    In this contribution we present the manufacturing technology for the fabrication of integrated superconducting Josephson series circuits for voltage standards. First we summarize some foundations of Josephson voltage standards and sketch the concept and the setup of the circuits, before we describe the manufacturing technology for modern practical Josephson voltage standards.

  15. Modern X-ray examination methods in differential diagnostics of various forms of lung hydatid disease; Sovremennye luchevye metody issledovanij v differentsial'noj diagnostike razlichnykh form ehkhinokokkoza legkikh

    Energy Technology Data Exchange (ETDEWEB)

    Akilova, D N [1-Tashkent state med. inst., Tashkent (Uzbekistan)

    2003-02-15

    This work analyzes the possibilities of complex radiation diagnostics using traditional X-ray, computed and magnetic resonance tomography and ultrasonography, based on the examination and treatment of 223 patients with lung hydatid disease. The diagnosis of 187 out of 223 patients was confirmed during operations. Original methods of ultrasound examination (USI) of the lungs have been developed. The role and place of needle aspiration biopsy controlled by computed tomography in the differential diagnostics of complicated forms of lung hydatid disease versus various forms of tumors, tubercular caverns etc. have been identified. The informativeness, sensitivity and general accuracy of these examination methods have been studied in patients with non-complicated and complicated forms of lung hydatid disease. The informativeness of X-ray for non-complicated forms was 104%, USI - 85%, CT - 100%; for complicated forms the informativeness of X-ray was 92% and of CT - 97%. Ultrasound examination of the chest allowed visualizing and localizing hydatid cysts when they were peripheral. The research enabled the development of an algorithm for diagnosing non-complicated and complicated forms of lung hydatid disease. Needle aspiration biopsy was applied in complicated cases. In non-complicated cases transcutaneous manipulations were not performed, to avoid dissemination of the process. (author)

  16. Normalization of Deviation: Quotation Error in Human Factors.

    Science.gov (United States)

    Lock, Jordan; Bearman, Chris

    2018-05-01

    Objective The objective of this paper is to examine quotation error in human factors. Background Science progresses through building on the work of previous research. This requires accurate quotation. Quotation error has a number of adverse consequences: loss of credibility, loss of confidence in the journal, and a flawed basis for academic debate and scientific progress. Quotation error has been observed in a number of domains, including marine biology and medicine, but there has been little or no previous study of this form of error in human factors, a domain that specializes in the causes and management of error. Methods A study was conducted examining quotation accuracy of 187 extracts from 118 published articles that cited a control article (Vaughan's 1996 book: The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA). Results Of extracts studied, 12.8% ( n = 24) were classed as inaccurate, with 87.2% ( n = 163) being classed as accurate. A second dimension of agreement was examined with 96.3% ( n = 180) agreeing with the control article and only 3.7% ( n = 7) disagreeing. The categories of accuracy and agreement form a two by two matrix. Conclusion Rather than simply blaming individuals for quotation error, systemic factors should also be considered. Vaughan's theory, normalization of deviance, is one systemic theory that can account for quotation error. Application Quotation error is occurring in human factors and should receive more attention. According to Vaughan's theory, the normal everyday systems that promote scholarship may also allow mistakes, mishaps, and quotation error to occur.

  17. Estimating the carbohydrate content of various forms of tobacco by phenol-sulfuric acid method.

    Science.gov (United States)

    Jain, Vardhaman Mulchand; Karibasappa, Gundabaktha Nagappa; Dodamani, Arun Suresh; Mali, Gaurao Vasant

    2017-01-01

    Due to the consumption of various forms of tobacco in large amounts by the Indian population, tobacco has become a cause of concern for major oral diseases. In 2008, the WHO named tobacco as the world's single greatest cause of preventable death. It is also known that certain amounts of carbohydrates are incorporated into processed tobacco to make it acceptable for consumption. Thus, its role in oral diseases becomes an important question at this point of time. Through this study, it is attempted to find out the carbohydrate content of various forms of tobacco by the phenol-sulfuric acid method. Tobacco products selected for the study were Nandi hookah tambakhu (A), photo brand budhaa Punjabi snuff (B), Miraj (C), Gai-chhap tambakhu (D), Hanuman-chhap Pandharpuri tambakhu (E), and Hathi-chhap Bidi (F). The samples were decoded and transported to the laboratory and tested at various concentrations by the phenol-sulfuric acid method followed by ultraviolet spectrophotometry to determine their absorbance. The present study showed that Hathi-chhap bidi (sample F), a smoking form of tobacco, had the maximum absorbance (1.995) at 10 μg/ml, followed by all the smokeless forms of tobacco, i.e. sample C (0.452), sample B (0.253), sample D (0.077), sample E (-0.018), and sample A (-0.127), respectively. As the concentration of a tobacco sample increases, its absorbance increases, which in turn suggests an increase in its carbohydrate concentration. Carbohydrates in the form of sugars, either inherently present or added during manufacturing, can serve as a risk factor for a higher incidence of dental caries.

  18. Method for Forming Pulp Fibre Yarns Developed by a Design-driven Process

    Directory of Open Access Journals (Sweden)

    Tiia-Maria Tenhunen

    2016-01-01

    Full Text Available A simple and inexpensive method for producing water-stable pulp fibre yarns using a deep eutectic mixture composed of choline chloride and urea (ChCl/urea) was developed in this work. Deep eutectic solvents (DESs) are eutectic mixtures consisting of two or more components that together have a lower melting point than the individual components. DESs have previously been studied with respect to cellulose dissolution, functionalisation, and pre-treatment. The new method uses a mixture of choline chloride and urea as a swelling and dispersing agent for the pulp fibres in the yarn-forming process. Although the pulp seemed to form a gel when dispersed in ChCl/urea, the ultrastructure of the pulp was not affected. To enable water stability, the pulp fibres were crosslinked by esterification using polyacrylic acid. ChCl/urea could be easily recycled and reused by distillation. The novel process described in this study enables the utilisation of pulp fibres in textile production without modification or dissolution and shortens the textile value chain. An interdisciplinary approach was used, where potential applications were explored simultaneously with material development, from process development to early-phase prototyping.

  19. Sandstone-filled normal faults: A case study from central California

    Science.gov (United States)

    Palladino, Giuseppe; Alsop, G. Ian; Grippa, Antonio; Zvirtes, Gustavo; Phillip, Ruy Paulo; Hurst, Andrew

    2018-05-01

    Despite the potential of sandstone-filled normal faults to significantly influence fluid transmissivity within reservoirs and the shallow crust, they have to date been largely overlooked. Fluidized sand, forcefully intruded along normal fault zones, markedly enhances the transmissivity of faults and, in general, the connectivity between otherwise unconnected reservoirs. Here, we provide a detailed outcrop description and interpretation of sandstone-filled normal faults from different stratigraphic units in central California. Such faults commonly show limited fault throw, cm- to dm-wide apertures, poorly developed fault zones and full or partial sand infill. Based on these features and inferences regarding their origin, we propose a general classification that defines two main types of sandstone-filled normal faults. Type 1 faults form as a consequence of the hydraulic failure of the host strata above a poorly consolidated sandstone following a significant, rapid increase of pore-fluid overpressure. Type 2 sandstone-filled normal faults form as a result of regional tectonic deformation. These structures may play a significant role in the connectivity of siliciclastic reservoirs, and may therefore be crucial not just for the investigation of basin evolution but also in hydrocarbon exploration.

  20. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies of the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results and shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables could generate -31% errors

  1. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turning components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present a SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
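
    The extended-dynamic-range step can be sketched independently of the instrument: frames taken at several CCD integration times are merged so that each pixel keeps its longest unsaturated exposure (scaled to counts per unit time), the merged map is normalized, and spot centroids are computed as intensity-weighted means. The code below is a toy single-spot illustration of that idea, not the authors' processing pipeline; the saturation level, exposures and spot model are invented for the example.

      import numpy as np

      def extended_range_spot_map(frames, exposures, saturation=4095):
          """Combine Shack-Hartmann frames taken at several integration times into a
          high-dynamic-range spot map: each pixel uses the longest non-saturated
          exposure, scaled to a common counts-per-unit-time level."""
          hdr = np.zeros_like(frames[0], dtype=float)
          filled = np.zeros(frames[0].shape, dtype=bool)
          # Walk from the longest to the shortest exposure, keeping unsaturated pixels.
          for frame, t in sorted(zip(frames, exposures), key=lambda p: -p[1]):
              ok = (frame < saturation) & ~filled
              hdr[ok] = frame[ok] / t
              filled |= ok
          return hdr / hdr.max()                      # normalized spot map

      def centroid(window):
          """Intensity-weighted centroid of one subaperture window."""
          y, x = np.mgrid[0:window.shape[0], 0:window.shape[1]]
          total = window.sum()
          return (x * window).sum() / total, (y * window).sum() / total

      # Toy example: a single bright spot imaged at two integration times.
      true = (12.3, 7.8)
      yy, xx = np.mgrid[0:32, 0:32]
      spot = 6000 * np.exp(-((xx - true[0]) ** 2 + (yy - true[1]) ** 2) / 8.0)
      frames = [np.clip(spot * t, 0, 4095) for t in (1.0, 0.25)]
      hdr = extended_range_spot_map(frames, (1.0, 0.25))
      print(np.round(centroid(hdr), 2), "vs true", true)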

  2. Relation between Protein Intrinsic Normal Mode Weights and Pre-Existing Conformer Populations.

    Science.gov (United States)

    Ozgur, Beytullah; Ozdemir, E Sila; Gursoy, Attila; Keskin, Ozlem

    2017-04-20

    Intrinsic fluctuations of a protein enable it to sample a large repertoire of conformers, including the open and closed forms. These distinct forms of the protein, called conformational substates, pre-exist together in equilibrium as an ensemble independent of its ligands. The role of the ligand might simply be to alter the equilibrium toward the form most appropriate for binding. Normal mode analysis has proved to be useful in identifying the directions of conformational changes between substates. In this study, we demonstrate that the ratios of normalized weights of a few normal modes driving the protein between its substates can give insights into the ratios of kinetic conversion rates of the substates, although a direct relation between the eigenvalues and the kinetic conversion rates or populations of each substate could not be observed. The correlation between the normalized mode weight ratios and the kinetic rate ratios is around 83% on a set of 11 non-enzyme proteins and around 59% on a set of 17 enzymes. The results suggest that mode motions carry intrinsic relations with the thermodynamics and kinetics of the proteins.

  3. The influence of form release agent application to the quality of concrete surfaces

    International Nuclear Information System (INIS)

    Klovas, A; Daukšys, M

    2013-01-01

    The main aim of this article was to determine how the quality of concrete surfaces changes with different applications of form release agent, and secondly to identify blemishes of concrete surfaces and classify them according to a combined method based on two documents, CIB Report No. 24 'Tolerances on blemishes of concrete' and GOST 13015.0-83, using the computer program ImageJ. Two different concrete compositions were made: BA1 (low fluidity, vibration is needed) and BA8 (high fluidity, vibration is not needed). Three castings with each formwork were conducted. A water-emulsion-based form release agent was used. Different applications (normal and excessive) of form release agent were used on the formwork

  4. Selective attention in normal and impaired hearing.

    Science.gov (United States)

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  5. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    OpenAIRE

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results s...

  6. Aqueous sulfomethylated melamine gel-forming compositions and methods of use

    Energy Technology Data Exchange (ETDEWEB)

    Meltz, C.N.; Guetzmacher, G.D.; Chang, P.W.

    1989-04-18

    A method is described for the selective modification of the permeability of the strata of a subterranean hydrocarbon-containing reservoir, consisting of introducing into a well in communication with the reservoir an aqueous gel-forming composition comprising a 1.0-60.0 weight percent sulfomethylated melamine polymer solution. The solution is prepared with 1.0 molar equivalent of a melamine reacted with 3.0-6.7 molar equivalents of formaldehyde or a dialdehyde containing 2-6 carbon atoms; 0.25-1.25 molar equivalents of an alkali metal or ammonium salt of sulfurous acid; and 0.01-1.5 molar equivalents of a gel-modifying agent.

  7. On The Extensive Form Of N-Person Cooperative Games | Udeh ...

    African Journals Online (AJOL)

    On The Extensive Form Of N-Person Cooperative Games. ... games. Keywords: Extensive form game, Normal form game, characteristic function, Coalition, Imputation, Player, Payoff, Strategy and Core

  8. Strength of Gamma Rhythm Depends on Normalization

    Science.gov (United States)

    Ray, Supratim; Ni, Amy M.; Maunsell, John H. R.

    2013-01-01

    Neuronal assemblies often exhibit stimulus-induced rhythmic activity in the gamma range (30–80 Hz), whose magnitude depends on the attentional load. This has led to the suggestion that gamma rhythms form dynamic communication channels across cortical areas processing the features of behaviorally relevant stimuli. Recently, attention has been linked to a normalization mechanism, in which the response of a neuron is suppressed (normalized) by the overall activity of a large pool of neighboring neurons. In this model, attention increases the excitatory drive received by the neuron, which in turn also increases the strength of normalization, thereby changing the balance of excitation and inhibition. Recent studies have shown that gamma power also depends on such excitatory–inhibitory interactions. Could modulation in gamma power during an attention task be a reflection of the changes in the underlying excitation–inhibition interactions? By manipulating the normalization strength independent of attentional load in macaque monkeys, we show that gamma power increases with increasing normalization, even when the attentional load is fixed. Further, manipulations of attention that increase normalization increase gamma power, even when they decrease the firing rate. Thus, gamma rhythms could be a reflection of changes in the relative strengths of excitation and normalization rather than playing a functional role in communication or control. PMID:23393427

  9. Self-Esteem of Gifted, Normal, and Mild Mentally Handicapped Children.

    Science.gov (United States)

    Chiu, Lian-Hwang

    1990-01-01

    Administered Coopersmith Self-Esteem Inventory (SEI) Form B to elementary school students (N=450) identified as gifted, normal, and mild mentally handicapped (MiMH). Results indicated that both the gifted and normal children had significantly higher self-esteem than did the MiMH children, but there were no differences between gifted and normal…

  10. ArrayMining: a modular web-application for microarray analysis combining ensemble and consensus methods with cross-study normalization

    Directory of Open Access Journals (Sweden)

    Krasnogor Natalio

    2009-10-01

    Full Text Available Abstract Background Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process and linking different analysis modules together under a single interface would simplify many microarray analysis tasks. Results We present ArrayMining.net, a web-application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web-tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion ArrayMining.net is a free web-application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.

  11. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy with a wavelength range of 30-650 Å has been improved by installing a high frame rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in EUV emission intensities among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high-resolution 2-D intensity distributions have been obtained for CIV (384.176 Å), CV (2x40.27 Å), CVI (2x33.73 Å) and HeII (303.78 Å). (author)
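
    A minimal sketch of the intensity-normalization step described above: each recorded 2-D frame is divided by the line intensity measured simultaneously on the monitor spectrometer, so that shot-to-shot variations cancel before frames are combined. Array shapes and names are assumptions for illustration, not the LHD analysis code.

        import numpy as np

        def normalize_frames(frames, monitor_intensity):
            """frames: (n_shots, ny, nx) raw counts; monitor_intensity: (n_shots,) monitor signal."""
            monitor = np.asarray(monitor_intensity, dtype=float)
            return np.asarray(frames, dtype=float) / monitor[:, None, None]

        # Example: five simulated frames normalized by their monitor readings, then averaged.
        frames = np.random.poisson(100, size=(5, 64, 128))
        combined = normalize_frames(frames, [95.0, 102.0, 99.0, 110.0, 97.0]).mean(axis=0)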

  12. A method for autoradiographic studies of single clones of plaque forming cells

    International Nuclear Information System (INIS)

    Andersen, V.; Lefkovits, I.; Rigshospitalet, Copenhagen

    1977-01-01

    By limiting dilution of B lymphocytes from spleens of immunized mice, microcultures were obtained that contained only one clone of plaque forming cells (PFC). The cultured cells were labelled with [14C]thymidine for varying periods of time. Plaques were obtained in monolayers of sheep erythrocytes in plastic dishes. After fixation with glutaraldehyde, the bottoms of the dishes were stripped off and autoradiograms prepared. By this method, it is possible to determine the proportion of labelled PFC within a given clone and to quantitate the incorporation of label. The method described can be applied to study the incorporation of other labelled molecules and for cytochemical investigations.

  13. A numerical method for the design of free-form reflectors for lighting applications

    NARCIS (Netherlands)

    Prins, C.R.; Thije Boonkkamp, ten J.H.M.; Roosmalen, van J.; IJzerman, W.L.; Tukker, T.W.

    2013-01-01

    In this article we present a method for the design of fully free-form reflectors for illumination systems. We derive an elliptic partial differential equation of the Monge-Ampère type for the surface of a reflector that converts an arbitrary parallel beam of light into a desired intensity output

  14. A novel normalization method based on principal component analysis to reduce the effect of peak overlaps in two-dimensional correlation spectroscopy

    Science.gov (United States)

    Wang, Yanwei; Gao, Wenying; Wang, Xiaogong; Yu, Zhiwu

    2008-07-01

    Two-dimensional correlation spectroscopy (2D-COS) has been widely used to separate overlapped spectroscopic bands. However, band overlap may sometimes cause misleading results in the 2D-COS spectra, especially if one peak is embedded within another peak by the overlap. In this work, we propose a new normalization method, based on principal component analysis (PCA). For each spectrum under discussion, the first principal component of PCA is simply taken as the normalization factor of the spectrum. It is demonstrated that the method works well with simulated dynamic spectra. Successful result has also been obtained from the analysis of an overlapped band in the wavenumber range 1440-1486 cm -1 for the evaporation process of a solution containing behenic acid, methanol, and chloroform.
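
    A minimal sketch of one plausible reading of the normalization above: each dynamic spectrum is divided by its score on the leading principal component of the spectral data matrix. To keep the sketch free of sign and centering ambiguities, the leading singular component of the uncentered matrix is used as a stand-in for the first principal component; this is an assumption about the authors' exact procedure.

        import numpy as np

        def pca_normalize(spectra):
            """spectra: (n_spectra, n_wavenumbers) array, one dynamic spectrum per row."""
            arr = np.asarray(spectra, dtype=float)
            u, s, vt = np.linalg.svd(arr, full_matrices=False)
            scores = u[:, 0] * s[0]          # projection of each spectrum on the leading component
            if scores.sum() < 0:             # fix the arbitrary sign for all-positive spectra
                scores = -scores
            return arr / scores[:, None]     # each spectrum divided by its own factor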

  15. Experimental studies of braking of an elastic tired wheel under variable normal load

    Science.gov (United States)

    Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.

    2017-10-01

    The paper analyzes the braking of a vehicle wheel subjected to disturbances in the form of normal load variations. Experimental tests were carried out, with test modes developed as sinusoidal force disturbances of the normal wheel load, and measuring methods for digital and analogue signals were used. Stabilization of vehicle wheel braking under disturbances of normal load variations is a topical issue. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations. A method to control wheel braking processes subjected to disturbances of normal load variations was also developed.

  16. Multivariate statistical methods a primer

    CERN Document Server

    Manly, Bryan FJ

    2004-01-01

    THE MATERIAL OF MULTIVARIATE ANALYSIS: Examples of Multivariate Data; Preview of Multivariate Methods; The Multivariate Normal Distribution; Computer Programs; Graphical Methods; Chapter Summary; References. MATRIX ALGEBRA: The Need for Matrix Algebra; Matrices and Vectors; Operations on Matrices; Matrix Inversion; Quadratic Forms; Eigenvalues and Eigenvectors; Vectors of Means and Covariance Matrices; Further Reading; Chapter Summary; References. DISPLAYING MULTIVARIATE DATA: The Problem of Displaying Many Variables in Two Dimensions; Plotting Index Variables; The Draftsman's Plot; The Representation of Individual Data Points; Profiles o...

  17. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    Science.gov (United States)

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates, were developed. Materials and Methods: These methods are indirect and involve the addition of a known excess of oxidant in acid medium to finasteride (potassium permanganate for method A, ceric sulfate [Ce(SO4)2] for method B, and N-bromosuccinimide (NBS) for method C), followed by determination of the unreacted oxidant from the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at suitable maximum wavelengths, λmax: 663, 528, and 520 nm, for the three methods, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excess of its oxidative degradation products, with recoveries between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478

  18. Application of normalized spectra in resolving a challenging Orphenadrine and Paracetamol binary mixture

    Science.gov (United States)

    Yehia, Ali M.; Abd El-Rahman, Mohamed K.

    2015-03-01

    Normalized spectra have great power in resolving the spectral overlap of the challenging Orphenadrine (ORP) and Paracetamol (PAR) binary mixture. Four smart techniques utilizing the normalized spectra were used in this work, namely, amplitude modulation (AM), simultaneous area ratio subtraction (SARS), simultaneous derivative spectrophotometry (S1DD) and the ratio H-point standard addition method (RHPSAM). In AM, the peak amplitude at 221.6 nm of the division spectra was measured for both ORP and PAR determination. In SARS, the concentration of ORP was determined using the area under the curve from 215 nm to 222 nm of the regenerated ORP zero-order absorption spectra. In S1DD, the concentration of ORP was determined using the peak amplitude at 224 nm of the first derivative ratio spectra. The PAR concentration was determined directly at 288 nm in the division spectra obtained during the manipulation steps of the previous three methods. The last technique, RHPSAM, is a dual wavelength method in which two calibrations were plotted at 216 nm and 226 nm. The RH point is the intersection of the two calibration lines, and the ORP and PAR concentrations were directly determined from the coordinates of the RH point. The proposed methods were applied successfully for the determination of ORP and PAR in their dosage form.

  19. Single-Phase Full-Wave Rectifier as an Effective Example to Teach Normalization, Conduction Modes, and Circuit Analysis Methods

    Directory of Open Access Journals (Sweden)

    Predrag Pejovic

    2013-12-01

    Full Text Available Application of a single-phase rectifier as an example in teaching circuit modeling, normalization, operating modes of nonlinear circuits, and circuit analysis methods is proposed. The rectifier, supplied from a voltage source through an inductive impedance, is analyzed in the discontinuous as well as in the continuous conduction mode. A completely analytical solution for the continuous conduction mode is derived. Appropriate numerical methods are proposed to obtain the circuit waveforms in both of the operating modes and to compute the performance parameters. Source code of the program that performs this computation is provided.

  20. Normal central retinal function and structure preserved in retinitis pigmentosa.

    Science.gov (United States)

    Jacobson, Samuel G; Roman, Alejandro J; Aleman, Tomas S; Sumaroka, Alexander; Herrera, Waldo; Windsor, Elizabeth A M; Atkinson, Lori A; Schwartz, Sharon B; Steinberg, Janet D; Cideciyan, Artur V

    2010-02-01

    To determine whether normal function and structure, as recently found in forms of Usher syndrome, also occur in a population of patients with nonsyndromic retinitis pigmentosa (RP). Patients with simplex, multiplex, or autosomal recessive RP (n = 238; ages 9-82 years) were studied with static chromatic perimetry. A subset was evaluated with optical coherence tomography (OCT). Co-localized visual sensitivity and photoreceptor nuclear layer thickness were measured across the central retina to establish the relationship of function and structure. Comparisons were made to patients with Usher syndrome (n = 83, ages 10-69 years). Cross-sectional psychophysical data identified patients with RP who had normal rod- and cone-mediated function in the central retina. There were two other patterns with greater dysfunction, and longitudinal data confirmed that progression can occur from normal rod and cone function to cone-only central islands. The retinal extent of normal laminar architecture by OCT corresponded to the extent of normal visual function in patients with RP. Central retinal preservation of normal function and structure did not show a relationship with age or retained peripheral function. Usher syndrome results were like those in nonsyndromic RP. Regional disease variation is a well-known finding in RP. Unexpected was the observation that patients with presumed recessive RP can have regions with functionally and structurally normal retina. Such patients will require special consideration in future clinical trials of either focal or systemic treatment. Whether there is a common molecular mechanism shared by forms of RP with normal regions of retina warrants further study.

  1. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    Science.gov (United States)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS- PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with
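
    A minimal sketch of the two normalizations reported to perform best above, using the standard definitions from the label-free quantification literature (variable names and the choice of target total are illustrative).

        import numpy as np

        def tspc_normalize(spc, target_total):
            """Total spectral count normalization: rescale one run's spectral counts (spc)
            so that their sum equals a common target, e.g. the mean total over all runs."""
            spc = np.asarray(spc, dtype=float)
            return spc * (target_total / spc.sum())

        def nsaf(spc, protein_lengths):
            """Normalized spectral abundance factor: (SpC / length) for each protein,
            divided by the sum of SpC / length over all proteins in the run."""
            saf = np.asarray(spc, dtype=float) / np.asarray(protein_lengths, dtype=float)
            return saf / saf.sum()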

  2. General form of the Euler-Poisson-Darboux equation and application of the transmutation method

    Directory of Open Access Journals (Sweden)

    Elina L. Shishkina

    2017-07-01

    Full Text Available In this article, we find solution representations in compact integral form for the Cauchy problem for a general form of the Euler-Poisson-Darboux equation with Bessel operators via generalized translation and spherical mean operators, for all values of the parameter k, including the exceptional odd negative values not studied before. We use a Hankel transform method to prove the results in a unified way. Under additional conditions we prove that a distributional solution is also a classical one. A transmutation property for the connected generalized spherical mean is proved, and the importance of applying transmutation methods to differential equations with Bessel operators is emphasized. The paper also contains a short historical introduction on differential equations with Bessel operators and a rather detailed reference list of monographs and papers on the mathematical theory and applications of this class of differential equations.

  3. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    Science.gov (United States)

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) in scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, and comparison of JIFs across different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established by multiplying the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005, 2006, 2007, 2008 and 2009. Also, rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that use of the NIF enhances equality in assessing the quality of research works produced by researchers working in different disciplines.
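
    A minimal sketch of a normalized impact factor of the multiplicative form described above. The abstract does not give the exact definition of the discipline constants; as an assumption for illustration, the constant is chosen here so that the mean JIF of each discipline maps to a common reference value.

        def normalized_impact_factors(jif_by_discipline, reference=1.0):
            """jif_by_discipline: dict mapping a discipline name to a list of journal JIFs."""
            nif = {}
            for discipline, jifs in jif_by_discipline.items():
                constant = reference / (sum(jifs) / len(jifs))   # discipline-specific factor
                nif[discipline] = [constant * jif for jif in jifs]
            return nif

        # Example: two hypothetical disciplines with very different typical JIFs.
        print(normalized_impact_factors({"microbiology": [2.0, 4.0, 6.0], "surgery": [0.5, 1.0, 1.5]}))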

  4. Automated quantification of optic nerve axons in primate glaucomatous and normal eyes--method and comparison to semi-automated manual quantification.

    Science.gov (United States)

    Reynaud, Juan; Cull, Grant; Wang, Lin; Fortune, Brad; Gardiner, Stuart; Burgoyne, Claude F; Cioffi, George A

    2012-05-01

    To describe an algorithm and software application (APP) for 100% optic nerve axon counting and to compare its performance with a semi-automated manual (SAM) method in optic nerve cross-section images (images) from normal and experimental glaucoma (EG) nonhuman primate (NHP) eyes. ON cross sections from eight EG eyes from eight NHPs, five EG and five normal eyes from five NHPs, and 12 normal eyes from 12 NHPs were imaged at 100×. Calibration (n = 500) and validation (n = 50) image sets ranging from normal to end-stage damage were assembled. Correlation between APP and SAM axon counts was assessed by Deming regression within the calibration set and a compensation formula was generated to account for the subtle, systematic differences. Then, compensated APP counts for each validation image were compared with the mean and 95% confidence interval of five SAM counts of the validation set performed by a single observer. Calibration set APP counts linearly correlated to SAM counts (APP = 10.77 + 1.03 [SAM]; R(2) = 0.94, P < 0.0001) in normal to end-stage damage images. In the validation set, compensated APP counts fell within the 95% confidence interval of the SAM counts in 42 of the 50 images and were within 12 axons of the confidence intervals in six of the eight remaining images. Uncompensated axon density maps for the normal and EG eyes of a representative NHP were generated. An APP for 100% ON axon counts has been calibrated and validated relative to SAM counts in normal and EG NHP eyes.
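
    A minimal sketch of the compensation step implied by the calibration fit quoted above (APP = 10.77 + 1.03 x SAM). Inverting the fitted line so that automated counts are reported on the semi-automated manual scale is an assumption about how the compensation formula was applied, not the authors' published code.

        def compensate_app_count(app_count, intercept=10.77, slope=1.03):
            """Map an automated (APP) axon count onto the semi-automated manual (SAM) scale."""
            return (app_count - intercept) / slope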

  5. Disjoint sum forms in reliability theory

    Directory of Open Access Journals (Sweden)

    B. Anrig

    2014-01-01

    Full Text Available The structure function f of a binary monotone system is assumed to be known and given in a disjunctive normal form, i.e. as the logical union of products of the indicator variables of the states of its subsystems. Based on this representation of f, an improved Abraham algorithm is proposed for generating the disjoint sum form of f. This form is the base for subsequent numerical reliability calculations. The approach is generalized to multivalued systems. Examples are discussed.
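
    For a small system, the reliability that the disjoint sum form ultimately yields can be checked by brute-force enumeration of the structure function given in disjunctive normal form. The sketch below does exactly that; it is not the improved Abraham algorithm discussed above, only a reference computation for toy examples.

        from itertools import product

        def reliability_from_dnf(paths, p):
            """paths: list of tuples of component indices (the product terms of the DNF);
            p: list of independent component working probabilities."""
            n = len(p)
            rel = 0.0
            for states in product((0, 1), repeat=n):
                if any(all(states[i] for i in path) for path in paths):
                    prob = 1.0
                    for i, s in enumerate(states):
                        prob *= p[i] if s else (1.0 - p[i])
                    rel += prob
            return rel

        # Two-out-of-three system with minimal paths {1,2}, {1,3}, {2,3}:
        print(reliability_from_dnf([(0, 1), (0, 2), (1, 2)], [0.9, 0.9, 0.9]))  # 0.972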

  6. Computing Instantaneous Frequency by normalizing Hilbert Transform

    Science.gov (United States)

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that is still being made to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
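
    A minimal sketch of computing an instantaneous frequency after amplitude normalization. The patented NAHT/NHT procedure builds an empirical envelope from spline fits through the extrema of the data; here, as a simplification, the envelope of the analytic signal is used instead, which illustrates the idea but is not the patented algorithm itself.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_frequency(x, fs):
            analytic = hilbert(x)
            normalized = analytic / np.abs(analytic)        # amplitude-normalized (FM) part
            phase = np.unwrap(np.angle(normalized))
            return np.diff(phase) * fs / (2.0 * np.pi)      # Hz, one sample shorter than x

        # Amplitude-modulated 50 Hz tone: the estimated frequency stays near 50 Hz.
        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)
        print(instantaneous_frequency(x, fs).mean())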

  7. Simple and Inexpensive Methods Development for Determination of Venlafaxine Hydrochloride from Its Solid Dosage Forms by Visible Spectrophotometry

    Directory of Open Access Journals (Sweden)

    K. Raghubabu

    2012-01-01

    Full Text Available Two simple, sensitive and cost-effective visible spectrophotometric methods (M1 and M2) have been developed for the determination of venlafaxine hydrochloride in bulk and tablet dosage forms. Method M1 is based on the formation of a green colored coordination complex of the drug with cobalt thiocyanate, which is quantitatively extractable into nitrobenzene with an absorption maximum at 626.4 nm. Method M2 involves internal salt formation of aconitic anhydride, the dehydration product of citric acid [CIA], with acetic anhydride [Ac2O] to form a colored chromogen with an absorption maximum at 561.2 nm. The calibration graph is linear over the concentration range of 10-50 µg/mL for method M1 and 8-24 µg/mL for method M2. The proposed methods were applied to commercially available tablets and the results were statistically compared with those obtained by the reference method and validated by recovery studies. The results were found satisfactory and reproducible. These methods can be applied successfully for the estimation of venlafaxine hydrochloride in the presence of the other ingredients usually present in dosage forms.

  8. Method of deuterium isotope separation and enrichment

    International Nuclear Information System (INIS)

    Benson, S.W.

    1980-01-01

    A method of deuterium isotope separation and enrichment using infrared laser technology in combination with chemical processes for treating and recycling the unreacted and deuterium-depleted starting materials is described. Organic molecules of the formula RX (where R is an ethyl, isopropyl, t-butyl, or cyclopentenyl group and X is F, Cl, Br or OH) containing a normal abundance of hydrogen and deuterium are exposed to intense laser infrared radiation. An olefin containing deuterium (olefin D) will be formed, along with HX. The enriched olefin D can be stripped from the depleted stream of RX and HX, and can be burned to form enriched water or pyrolyzed to produce hydrogen gas with elevated deuterium content. The depleted RX is decomposed to olefins and RX, catalytically exchanged with normal water to restore the deuterium content to natural levels, and recombined to form RX which can be recycled. (LL)

  9. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  10. Probing the effect of human normal sperm morphology rate on cycle outcomes and assisted reproductive methods selection.

    Directory of Open Access Journals (Sweden)

    Bo Li

    Full Text Available Sperm morphology is the best predictor of fertilization potential and provides critical predictive information for supporting the selection of assisted reproductive methods. Given its important predictive value and the decline in semen quality in recent years, the threshold of normal sperm morphology rate (NSMR) is being constantly corrected and remains controversial, from the 4th edition (14%) to the 5th edition (4%). We retrospectively analyzed 4756 cases of infertility patients treated with conventional IVF (c-IVF) or ICSI, which were divided into three groups according to NSMR: ≥14%, 4%-14% and <4%. Here, we demonstrate that, with decreasing NSMR (≥14%, 4%-14%, <4%), in the c-IVF group the rates of fertilization, normal fertilization, high-quality embryos and multi-pregnancy and the birth weight of twins gradually and significantly decreased (P<0.05), while the miscarriage rate was significantly increased (p<0.01), and the implantation rate, clinical pregnancy rate, ectopic pregnancy rate, preterm birth rate, live birth rate, sex ratio, and birth weight (singleton) showed no significant change. In the ICSI group, with decreasing NSMR (≥14%, 4%-14%, <4%), the high-quality embryo rate, multi-pregnancy rate and birth weight of twins gradually and significantly decreased (p<0.05), while the other parameters showed no significant difference. Considering clinical assisted method selection, in the NSMR ≥14% group the normal fertilization rate of c-IVF was significantly higher than in the ICSI group (P<0.05); in the 4%-14% group the birth weight (twins) of c-IVF was significantly higher than in the ICSI group; and in the <4% group the miscarriage rate of IVF was significantly higher than in the ICSI group. Therefore, we conclude that NSMR is positively related to embryo reproductive potential, and when NSMR<4% (5th edition), ICSI should be considered first, while when NSMR≥4%, c-IVF assisted reproduction might be preferred.

  11. Five-point form of the nodal diffusion method and comparison with finite-difference

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
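
    For reference, the sketch below assembles the traditional edge-centered five-point finite-difference operator to which the nodal scheme is compared (uniform mesh, one energy group, constant coefficients, boundary values simply taken as zero). It is a textbook discretization for illustration, not the nodal five-point form derived in the paper.

        import numpy as np

        def fd_five_point_matrix(n, h, D, sigma_a):
            """Assemble the 2-D five-point diffusion operator -D*Laplacian + sigma_a
            on an n x n interior mesh with spacing h."""
            N = n * n
            A = np.zeros((N, N))
            for j in range(n):
                for i in range(n):
                    k = j * n + i
                    A[k, k] = 4.0 * D / h**2 + sigma_a
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < n and 0 <= jj < n:
                            A[k, jj * n + ii] = -D / h**2
            return A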

  12. Evaluating new methods for direct measurement of the moderator temperature coefficient in nuclear power plants during normal operation

    International Nuclear Information System (INIS)

    Makai, M.; Kalya, Z.; Nemes, I.; Pos, I.; Por, G.

    2007-01-01

    The moderator temperature coefficient of reactivity is not monitored during fuel cycles in WWER reactors, because it is difficult or impossible to measure without disturbing normal operation. Two new methods were tested in our WWER-type nuclear power plant to assess methodologies that enable this safety-relevant parameter to be measured during the fuel cycle. One is based on small perturbations and requires only small changes in operation; the other is based on noise methods and therefore does not interfere with reactor operation at all. Both methods are new in that they use plant computer (VERONA) data and signals calculated by the C-PORCA diffusion code. (Authors)

  13. A non-Hertzian method for solving wheel-rail normal contact problem taking into account the effect of yaw

    Science.gov (United States)

    Liu, Binbin; Bruni, Stefano; Vollebregt, Edwin

    2016-09-01

    A novel approach is proposed in this paper to deal with non-Hertzian normal contact in wheel-rail interface, extending the widely used Kik-Piotrowski method. The new approach is able to consider the effect of the yaw angle of the wheelset against the rail on the shape of the contact patch and on pressure distribution. Furthermore, the method considers the variation of profile curvature across the contact patch, enhancing the correspondence to CONTACT for highly non-Hertzian contact conditions. The simulation results show that the proposed method can provide more accurate estimation than the original algorithm compared to Kalker's CONTACT, and that the influence of yaw on the contact results is significant under certain circumstances.

  14. The preparation method of solid boron solution in silicon carbide in the form of micro powder

    International Nuclear Information System (INIS)

    Pampuch, R.; Stobierski, L.; Lis, J.; Bialoskorski, J.; Ermer, E.

    1993-01-01

    A preparation method for a solid solution of boron in silicon carbide in the form of a micro powder has been worked out. The method consists of introducing a mixture of boron, carbon and silicon and heating it in an inert gas atmosphere to 1573 K.

  15. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    Energy Technology Data Exchange (ETDEWEB)

    Adrich, Przemysław, E-mail: Przemyslaw.Adrich@ncbj.gov.pl

    2016-05-01

    In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.

  16. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    International Nuclear Information System (INIS)

    Adrich, Przemysław

    2016-01-01

    In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.

  17. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.; Bates, S.; Peters, R.

    1994-08-01

    This document describes proposed waste form performance criteria and testing methods that could be used as guidance in judging the viability of a waste form as a physico-chemical barrier to releases of radionuclides and RCRA-regulated hazardous components. It is assumed that release of contaminants by leaching is the single most important property by which the effectiveness of a waste form is judged. A two-tier regimen is proposed. The first tier includes a leach test required by the Environmental Protection Agency and a leach test designed to determine the net forward leach rate for a variety of materials. The second tier of tests is to determine whether a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. It is recommended that the first-tier tests be performed first to determine acceptability. Only on passing the given specifications for the leach tests should the other tests be performed. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leach rates.

  18. Perineal Ultrasound Findings of Stress Urinary Incontinence : Differentiation from Normal Findings

    International Nuclear Information System (INIS)

    Baek, Seung Yon; Chung, Eun Chul; Rhee, Chung Sik; Suh, Jeong Soo

    1995-01-01

    Perineal ultrasonography is a noninvasive method that is easier than chain cystourethrography in the diagnosis of stress urinary incontinence (SUI). We report the findings of stress urinary incontinence at perineal ultrasound and its differential points from normal controls. Twenty-two patients with SUI and 16 normal controls were included in our study. An Aloka SSD 650 with a 3.5 MHz convex transducer was used, and a sagittal image through the bladder, bladder base, urethrovesical junction and pubis was obtained from the vulvar area. We measured the posterior urethrovesical angle (PUVA) at rest and during stress, and calculated the difference between the two angles. We also measured the distance of bladder neck descent during stress and the diameter of the proximal urethra at rest. The data were analyzed with Student's t-test. At rest, the PUVA was 135.3° in the SUI group and 134.5° in the normal control group (P=0.8376). During stress, the PUVA was 149.5° in the SUI group and 142.1° in the normal group (P=0.0135). The difference between the PUVAs at rest and during stress was 14.2° in the SUI group and 7.6° in the normal group (P=0.0173). The distance of bladder neck descent during stress was 14.5 mm in the SUI group and 9.8 mm in the normal group (P=0.0029). The diameter of the proximal urethra at rest was 4.4 mm in the SUI group and 3.6 mm in the normal group (P=0.0385). In conclusion, ultrasound parameters that include the PUVA during stress, the difference between the PUVAs at rest and during stress, the distance of bladder neck descent during stress and the diameter of the proximal urethra at rest are useful in the diagnosis of stress urinary incontinence.

  19. Perineal Ultrasound Findings of Stress Urinary Incontinence : Differentiation from Normal Findings

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Seung Yon; Chung, Eun Chul; Rhee, Chung Sik; Suh, Jeong Soo [Ewha Womans University Hospital, Seoul (Korea, Republic of)

    1995-06-15

    Perineal ultrasonography is a noninvasive method that is easier than chain cystourethrography in the diagnosis of stress urinary incontinence (SUI). We report the findings of stress urinary incontinence at perineal ultrasound and its differential points from normal controls. Twenty-two patients with SUI and 16 normal controls were included in our study. An Aloka SSD 650 with a 3.5 MHz convex transducer was used, and a sagittal image through the bladder, bladder base, urethrovesical junction and pubis was obtained from the vulvar area. We measured the posterior urethrovesical angle (PUVA) at rest and during stress, and calculated the difference between the two angles. We also measured the distance of bladder neck descent during stress and the diameter of the proximal urethra at rest. The data were analyzed with Student's t-test. At rest, the PUVA was 135.3° in the SUI group and 134.5° in the normal control group (P=0.8376). During stress, the PUVA was 149.5° in the SUI group and 142.1° in the normal group (P=0.0135). The difference between the PUVAs at rest and during stress was 14.2° in the SUI group and 7.6° in the normal group (P=0.0173). The distance of bladder neck descent during stress was 14.5 mm in the SUI group and 9.8 mm in the normal group (P=0.0029). The diameter of the proximal urethra at rest was 4.4 mm in the SUI group and 3.6 mm in the normal group (P=0.0385). In conclusion, ultrasound parameters that include the PUVA during stress, the difference between the PUVAs at rest and during stress, the distance of bladder neck descent during stress and the diameter of the proximal urethra at rest are useful in the diagnosis of stress urinary incontinence.

  20. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    Science.gov (United States)

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
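
    A minimal sketch of the band-wise energy normalization described above, assuming a difference-of-Gaussians band decomposition and a Gaussian-smoothed absolute band value as the "localized energy". The sigma values and the reference level are illustrative assumptions; the published method additionally applies the procedure iteratively and restricts it to the lung fields.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def energy_normalize(img, sigmas=(1, 2, 4, 8), energy_sigma=16, reference=1.0):
            img = np.asarray(img, dtype=float)
            smoothed = [img] + [gaussian_filter(img, s) for s in sigmas]
            out = smoothed[-1].copy()                       # low-pass residual is kept as is
            for k in range(len(sigmas)):
                band = smoothed[k] - smoothed[k + 1]        # band-pass component
                local_energy = gaussian_filter(np.abs(band), energy_sigma) + 1e-6
                out += band * (reference / local_energy)    # scale each band to the reference energy
            return out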

  1. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.

    1995-01-01

    Proposed waste form performance criteria and testing methods were developed as guidance in judging the suitability of solidified waste as a physico-chemical barrier to releases of radionuclides and RCRA regulated hazardous components. The criteria follow from the assumption that release of contaminants by leaching is the single most important property for judging the effectiveness of a waste form. A two-tier regimen is proposed. The first tier consists of a leach test designed to determine the net, forward leach rate of the solidified waste and a leach test required by the Environmental Protection Agency (EPA). The second tier of tests is to determine if a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leachates

  2. A Simple Method for Forming Hybrid Core-Shell Nanoparticles Suspended in Water

    Directory of Open Access Journals (Sweden)

    Jean-Christophe Daigle

    2008-01-01

    addition fragmentation chain transfer (RAFT) polymerization as dispersant. Then, the resulting dispersion is engaged in a radical emulsion polymerization process whereby a hydrophobic organic monomer (styrene and butyl acrylate) is polymerized to form the shell of the hybrid nanoparticle. This method is extremely versatile, allowing the preparation of a variety of nanocomposites with metal oxides (alumina, rutile, anatase, barium titanate, zirconia, copper oxide), metals (Mo, Zn), and even inorganic nitrides (Si3N4).

  3. DIAGNOSTIC CHARACTERISTICS OF THE COMPUTER TESTS FORMED BY METHOD OF RESTORED FRAGMENTS

    OpenAIRE

    Oleksandr O. Petkov

    2013-01-01

    The article considers the determination of the validity and reliability of tests formed by the method of restored fragments. The structure of the controlled theoretical material of the limited field of knowledge, the language expressions that describe the subject of control, and the reliability of the test are analyzed. A technique is given for determining the most important components of the reliability of the considered tests: the reliability of the quantitative determination of the coefficient of assimilation and te...

  4. Normalization in EDIP97 and EDIP2003: updated European inventory for 2004 and guidance towards a consistent use in practice

    DEFF Research Database (Denmark)

    Laurent, Alexis; Olsen, Stig Irving; Hauschild, Michael Zwicky

    2011-01-01

    Purpose: When performing a life cycle assessment (LCA), the LCA practitioner faces the need to express the characterized results in a form suitable for the final interpretation. This can be done using normalization against some common reference impact—the normalization references—which require regular updates. The study presents updated sets of normalization inventories, normalization references for the EDIP97/EDIP2003 methodology and guidance on their consistent use in practice. Materials and methods: The base year of the inventory is 2004; the geographical scope for the non-global impacts is limited to Europe. The emission inventory was collected from different publicly available databases and monitoring bodies. Where necessary, gaps were filled using extrapolations. A new approach for inventorizing specific groups of substances—non-methane volatile organic compounds and pesticides—was also...

  5. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNV) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime compared to the other algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  6. Review of clinically accessible methods to determine lean body mass for normalization of standardized uptake values

    International Nuclear Information System (INIS)

    DEVRIESE, Joke; POTTEL, Hans; BEELS, Laurence; MAES, Alex; VAN DE WIELE, Christophe; GHEYSENS, Olivier

    2016-01-01

    With the routine use of 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans, the metabolic activity of tumors can be quantitatively assessed through calculation of SUVs. One possible normalization parameter for the standardized uptake value (SUV) is lean body mass (LBM), which is generally calculated through predictive equations based on height and body weight. (Semi-)direct measurements of LBM could provide more accurate results in cancer populations than predictive equations based on healthy populations. In this context, four methods to determine LBM are reviewed: bioelectrical impedance analysis, dual-energy X-ray absorptiometry, CT, and magnetic resonance imaging. These methods were selected based on clinical accessibility and are compared in terms of methodology, precision and accuracy. By assessing each method's specific advantages and limitations, a well-considered choice of method can hopefully lead to more accurate SUV_LBM values, and hence more accurate quantitative assessment of 18F-FDG PET images.
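
    Once LBM has been obtained by one of the reviewed methods, the LBM-normalized SUV follows from the usual definition, sketched below with the common assumption of a tissue density of 1 g/mL; variable names and units are illustrative.

        def suv_lbm(tissue_conc_kbq_per_ml, injected_dose_mbq, lbm_kg):
            """Standardized uptake value normalized to lean body mass."""
            injected_dose_kbq = injected_dose_mbq * 1000.0
            lbm_g = lbm_kg * 1000.0
            return tissue_conc_kbq_per_ml / (injected_dose_kbq / lbm_g)

        # Example: 5 kBq/mL uptake, 300 MBq injected, 55 kg lean body mass -> SUV_LBM of about 0.92.
        print(suv_lbm(5.0, 300.0, 55.0))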

  7. Investigation of normal organ development with fetal MRI

    International Nuclear Information System (INIS)

    Prayer, Daniela; Brugger, Peter C.

    2007-01-01

    The understanding of the presentation of normal organ development on fetal MRI forms the basis for recognition of pathological states. During the second and third trimesters, maturational processes include changes in size, shape and signal intensities of organs. Visualization of these developmental processes requires tailored MR protocols. Further prerequisites for recognition of normal maturational states are unequivocal intrauterine orientation with respect to left and right body halves, fetal proportions, and knowledge about the MR presentation of extrafetal/intrauterine organs. Emphasis is laid on the demonstration of normal MR appearance of organs that are frequently involved in malformation syndromes. In addition, examples of time-dependent contrast enhancement of intrauterine structures are given. (orig.)

  8. Investigation of normal organ development with fetal MRI

    Energy Technology Data Exchange (ETDEWEB)

    Prayer, Daniela [Medical University of Vienna, Department of Radiology, Vienna (Austria); Brugger, Peter C. [Medical University of Vienna, Center of Anatomy and Cell Biology, Integrative Morphology Group, Vienna (Austria)

    2007-10-15

    The understanding of the presentation of normal organ development on fetal MRI forms the basis for recognition of pathological states. During the second and third trimesters, maturational processes include changes in size, shape and signal intensities of organs. Visualization of these developmental processes requires tailored MR protocols. Further prerequisites for recognition of normal maturational states are unequivocal intrauterine orientation with respect to left and right body halves, fetal proportions, and knowledge about the MR presentation of extrafetal/intrauterine organs. Emphasis is laid on the demonstration of normal MR appearance of organs that are frequently involved in malformation syndromes. In addition, examples of time-dependent contrast enhancement of intrauterine structures are given. (orig.)

  9. Kinetic spectrophotometric method for the determination of perindopril erbumine in pure and commercial dosage forms

    Directory of Open Access Journals (Sweden)

    Nafisur Rahman

    2017-02-01

    Full Text Available A kinetic spectrophotometric method has been developed for the determination of perindopril erbumine in pure and commercial dosage forms. The method is based on the reaction of the drug with potassium permanganate in alkaline medium at room temperature (30 ± 1 °C). The reaction was followed spectrophotometrically by measuring the increase in absorbance with time at 603 nm, and the initial rate, fixed time (at 8.0 min) and equilibrium time (at 90.0 min) methods were adopted for constructing the calibration graphs. All the calibration graphs are linear in the concentration range of 5.0–50.0 μg/ml. The limits of detection for the initial rate, fixed time and equilibrium time methods were 0.752, 0.882 and 1.091 μg/ml, respectively. The activation parameters Ea, ΔH‡, ΔS‡ and ΔG‡ were also determined for the reaction and found to be 60.93 kJ/mol, 56.45 kJ/mol, 74.16 J/K mol and −6.53 kJ/mol, respectively. The variables were optimized and the proposed methods were validated as per ICH guidelines. The method has been further applied to the determination of perindopril erbumine in commercial dosage forms. The analytical results of the proposed methods, when compared with those of the reference method, show no significant difference in accuracy and precision and have acceptable bias.

  10. Monitoring the normal body

    DEFF Research Database (Denmark)

    Nissen, Nina Konstantin; Holm, Lotte; Baarts, Charlotte

    2015-01-01

    ... provides us with knowledge about how to prevent future overweight or obesity. This paper investigates body size ideals and monitoring practices among normal-weight and moderately overweight people. Methods: The study is based on in-depth interviews combined with observations. 24 participants were recruited by strategic sampling based on self-reported BMI 18.5-29.9 kg/m2 and socio-demographic factors. Inductive analysis was conducted. Results: Normal-weight and moderately overweight people have clear ideals for their body size. Despite being normal weight or close to this, they construct a variety of practices for monitoring their bodies based on different kinds of calculations of weight and body size, observations of body shape, and measurements of bodily firmness. Biometric measurements are familiar to them as are health authorities' recommendations. Despite not belonging to an extreme BMI category...

  11. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we first suggest a method of predicting normal and overweight females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21–40 group (females aged 21–40 years), and an AUC value of 0.76 and a kappa value of 0.401 in the Female: 41–60 group (females aged 41–60 years). In both groups, we found many features showing statistical differences between normal and overweight subjects by using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.

  12. Method of forming composite fiber blends and molding same

    Science.gov (United States)

    McMahon, Paul E. (Inventor); Chung, Tai-Shung (Inventor)

    1989-01-01

    The instant invention involves a process used in preparing fibrous tows which may be formed into polymeric plastic composites. The process involves the steps of (a) forming a tow of strong filamentary materials; (b) forming a thermoplastic polymeric fiber; (c) intermixing the two tows; and (d) withdrawing the intermixed tow for further use.

  13. Comparison of Oral Stereognosis in 6 and 7 Old Normal Children

    Directory of Open Access Journals (Sweden)

    Amir Shiani

    2004-06-01

    Full Text Available Objective: This research determined oral stereognosis (form recognition and spent time to recognize in normal children in north and south of Tehran city to use it in assessment and therapy of oral senses and speech in children with articulation disorders. Materials & Methods: This research was done in 200 children who were 6 & 7 years old and normal in Tehran city. 20 items with different shapes were used and children were wanted to recognize the shapes which were put in their mouth and they should choice one of three shapes located in front of them. Responses and the spent time were calculated. Results: The mean scores of form recognition in children of 6 years old is 17/34 and in children of 7 years old is 17/59. There was no significant difference between them in their scores (P=0.31. In addition, the time of formation diagnosis in 6 years old children is 2/67s (seconds and in 7 years old children is 2/82s, there was no significant difference between them (P=0.11.The northern city children responded slower than the other group (P=0.000. The only statistically significant score between two sexes was the time of formation recognition which was shorter in girls relative to the boys (P=0.043. Conclusion: Based on this study, a significant correlation could not be found in ability of oral stereognosis in 6 and 7 years old children. But in south of city children can recognize faster than children in northern city. Based on importance of this sense in speech, we suggest normalization of it in different ages.

  14. Signal Normalization Reduces Image Appearance Disparity Among Multiple Optical Coherence Tomography Devices.

    Science.gov (United States)

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S

    2017-02-01

    To assess the effect of the previously reported optical coherence tomography (OCT) signal normalization method on reducing the discrepancies in image appearance among spectral-domain OCT (SD-OCT) devices. Healthy eyes and eyes with various retinal pathologies were scanned at the macular region using similar volumetric scan patterns with at least two out of three SD-OCT devices at the same visit (Cirrus HD-OCT, Zeiss, Dublin, CA; RTVue, Optovue, Fremont, CA; and Spectralis, Heidelberg Engineering, Heidelberg, Germany). All the images were processed with the signal normalization. A set of images formed a questionnaire with 24 pairs of cross-sectional images from each eye with any combination of the three SD-OCT devices, either both pre- or both post-signal normalization. Observers were asked to evaluate the similarity of the two displayed images based on the image appearance. The effects on reducing the differences in image appearance before and after processing were analyzed. Twenty-nine researchers familiar with OCT images participated in the survey. Image similarity was significantly improved after signal normalization for all three combinations (P ≤ 0.009), with the Cirrus and RTVue combination being the most similar pair, followed by Cirrus and Spectralis, and RTVue and Spectralis. The signal normalization successfully minimized the disparities in image appearance among multiple SD-OCT devices, allowing clinical interpretation and comparison of OCT images regardless of device differences. The signal normalization would enable direct comparison of OCT images without concern about device differences and broaden OCT usage by enabling long-term follow-up and data sharing.

  15. Normal mode-guided transition pathway generation in proteins.

    Directory of Open Access Journals (Sweden)

    Byung Ho Lee

    Full Text Available The biological function of proteins is closely related to their structural motion. For instance, structurally misfolded proteins do not function properly. Although we are able to experimentally obtain structural information on proteins, it is still challenging to capture their dynamics, such as transition processes. Therefore, we need a simulation method to predict the transition pathways of a protein in order to understand and study large functional deformations. Here, we present a new simulation method called normal mode-guided elastic network interpolation (NGENI) that performs normal mode analysis iteratively to predict transition pathways of proteins. To be more specific, NGENI obtains displacement vectors that determine intermediate structures by interpolating the distance between two end-point conformations, similar to a morphing method called elastic network interpolation. However, the displacement vector is regarded as a linear combination of the normal mode vectors of each intermediate structure, in order to enhance the physical sense of the proposed pathways. As a result, we can generate more reasonable transition pathways geometrically and thermodynamically. Using not only all normal modes but also, in part, only the lowest normal modes, NGENI can still generate reasonable pathways for large deformations in proteins. This study shows that global protein transitions are dominated by collective motion, which means that a few lowest normal modes play an important role in this process. NGENI has considerable merit in terms of computational cost because it can generate transition pathways using only a subset of the degrees of freedom, which conventional methods cannot do.
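
    The abstract states the key idea - interpolate between two end conformations while expressing each displacement as a combination of the current structure's normal modes - but gives no implementation. The toy sketch below is not the NGENI code: it uses a simplified anisotropic-network Hessian and plain linear interpolation, projecting each step onto the lowest non-rigid-body modes, purely to illustrate the mechanism.

      # Toy illustration of normal mode-guided interpolation (not the NGENI code).
      import numpy as np

      def anm_hessian(coords, cutoff=8.0, gamma=1.0):
          """Simplified anisotropic-network Hessian (3N x 3N) for C-alpha coordinates."""
          n = len(coords)
          h = np.zeros((3 * n, 3 * n))
          for i in range(n):
              for j in range(i + 1, n):
                  d = coords[j] - coords[i]
                  r2 = d @ d
                  if r2 > cutoff ** 2:
                      continue
                  k = -gamma * np.outer(d, d) / r2
                  h[3*i:3*i+3, 3*j:3*j+3] = k
                  h[3*j:3*j+3, 3*i:3*i+3] = k
                  h[3*i:3*i+3, 3*i:3*i+3] -= k
                  h[3*j:3*j+3, 3*j:3*j+3] -= k
          return h

      def guided_path(start, end, n_steps=10, n_modes=20):
          """Interpolate start -> end, re-expressing each step in the lowest modes."""
          path, current = [start.copy()], start.copy()
          for k in range(n_steps):
              step = (end - current) / (n_steps - k)     # plain interpolation step
              w, v = np.linalg.eigh(anm_hessian(current))
              modes = v[:, 6:6 + n_modes]                # skip six rigid-body modes
              coeffs = modes.T @ step.ravel()            # step expressed in mode space
              current = current + (modes @ coeffs).reshape(current.shape)
              path.append(current.copy())
          return path

      # Random stand-in "structures"; real use would load C-alpha coordinates from PDB files.
      rng = np.random.default_rng(1)
      a = rng.normal(scale=5.0, size=(30, 3))
      b = a + rng.normal(scale=1.0, size=(30, 3))
      print(len(guided_path(a, b)), "frames generated")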

  16. The anti-tumor efficacy of nanoparticulate form of ICD-85 versus free form

    Directory of Open Access Journals (Sweden)

    Zare Mirakabadi, A.

    2015-04-01

    Full Text Available Biodegradable polymeric nanoparticles (NPs) have been intensively studied as a possible way to enhance anti-tumor efficacy while reducing side effects. ICD-85, derived from the venom of two separate species of venomous animals, has been shown to exhibit anti-cancer activity. In this report, polymer-based sodium alginate nanoparticles of ICD-85 were used to enhance its therapeutic effects and reduce its side effects. The inhibitory effect was evaluated by MTT assay. The necrotic effect was assessed using the LDH assay. The induction of apoptosis was analyzed by a caspase-8 colorimetric assay kit. Cytotoxicity assays in HeLa cells demonstrated enhanced efficacy of ICD-85-loaded NPs compared to free ICD-85. The IC50 values obtained in HeLa cells after 48 h for free ICD-85 and ICD-85-loaded NPs were 26±2.9 μg ml-1 and 18±2.5 μg ml-1, respectively. While free ICD-85 exhibited mild cytotoxicity towards normal MRC-5 cells (IC50>60 μg ml-1), ICD-85-loaded NPs were found to have higher anti-proliferative efficacy against HeLa cells in vitro without any significant cytotoxic effect on normal MRC-5 cells. The apoptosis-induction mechanism of both forms of ICD-85 in HeLa cells was found to be through activation of caspase-8, with approximately 2-fold greater activation by ICD-85-loaded NPs compared to free ICD-85. Our work reveals that although ICD-85 in free form is relatively selective in inhibiting the growth of cancer cells via apoptosis compared to normal cells, the nanoparticulate form further increases its selectivity towards cancer cells.

  17. New Spectrophotometric and Conductometric Methods for Macrolide Antibiotics Determination in Pure and Pharmaceutical Dosage Forms Using Rose Bengal

    Directory of Open Access Journals (Sweden)

    Rania A. Sayed

    2013-01-01

    Full Text Available Two simple, accurate, precise, and rapid spectrophotometric and conductometric methods were developed for the estimation of erythromycin thiocyanate (I), clarithromycin (II), and azithromycin dihydrate (III) in both pure and pharmaceutical dosage forms. The spectrophotometric procedure depends on the reaction of rose bengal and copper with the cited drugs to form stable ternary complexes which are extractable with methylene chloride, and the absorbances were measured at 558, 557, and 560 nm for (I), (II), and (III), respectively. The conductometric method depends on the formation of an ion-pair complex between the studied drug and rose bengal. For the spectrophotometric method, Beer's law was obeyed. The correlation coefficient (r) for the studied drugs was found to be 0.9999. The molar absorptivity (ε), Sandell's sensitivity, limit of detection (LOD), and limit of quantification (LOQ) were also calculated. The proposed methods were successfully applied for the determination of certain pharmaceutical dosage forms containing the studied drugs.
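
    The abstract cites Beer's law, the limit of detection and the limit of quantification without giving formulas. For reference, the standard relations (not quoted from the paper, whose exact conventions may differ) are

      A = \varepsilon\, b\, c, \qquad \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S},

    where A is the absorbance, \varepsilon the molar absorptivity, b the path length, c the molar concentration, \sigma the standard deviation of the blank or intercept response, and S the slope of the calibration curve.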

  18. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  19. AN ELECTROPLATING METHOD OF FORMING PLATINGS OF NICKEL, COBALT, NICKEL ALLOYS OR COBALT ALLOYS

    DEFF Research Database (Denmark)

    1997-01-01

    An electroplating method of forming platings of nickel, cobalt, nickel alloys or cobalt alloys with reduced stresses in an electrodepositing bath of the type: Watt's bath, chloride bath or a combination thereof, by employing pulse plating with periodic reverse pulse and a sulfonated naphthalene...

  20. Efficacy of hyaluronic acid binding assay in selecting motile spermatozoa with normal morphology at high magnification

    Directory of Open Access Journals (Sweden)

    Mauri Ana L

    2010-12-01

    Full Text Available Abstract Background The present study aimed to evaluate the efficacy of the hyaluronic acid (HA) binding assay in the selection of motile spermatozoa with normal morphology at high magnification (8400x). Methods A total of 16592 prepared spermatozoa were selected and classified into two groups: Group I, spermatozoa which presented their head attached to an HA substance (HA-bound sperm), and Group II, those spermatozoa that did not attach to the HA substance (HA-unbound sperm). HA-bound and HA-unbound spermatozoa were evaluated according to the following sperm forms: 1-Normal morphology: normal nucleus (smooth, symmetric and oval configuration, length: 4.75+/-2.8 μm and width: 3.28+/-0.20 μm, no extrusion or invagination, and no vacuoles occupying more than 4% of the nuclear area), as well as normal acrosome, post-acrosomal lamina, neck and tail, and no cytoplasmic droplet or cytoplasm around the head; 2-Abnormalities of nuclear form (a-Large/small; b-Wide/narrow; c-Regional disorder); 3-Abnormalities of nuclear chromatin content (a-Vacuoles: occupy >4% to 50% of the nuclear area and b-Large vacuoles: occupy >50% of the nuclear area), using a high magnification (8400x) microscopy system. Results No significant differences were obtained with respect to sperm morphological forms between the HA-bound and HA-unbound groups. 1-Normal morphology: HA-bound 2.7% and HA-unbound 2.5% (P = 0.56). 2-Abnormalities of nuclear form: a-Large/small: HA-bound 1.6% vs. HA-unbound 1.6% (P = 0.63); b-Wide/narrow: HA-bound 3.1% vs. HA-unbound 2.7% (P = 0.13); c-Regional disorders: HA-bound 4.7% vs. HA-unbound 4.4% (P = 0.34). 3-Abnormalities of nuclear chromatin content: a-Vacuoles >4% to 50%: HA-bound 72.2% vs. HA-unbound 72.5% (P = 0.74); b-Large vacuoles: HA-bound 15.7% vs. HA-unbound 16.3% (P = 0.36). Conclusions The findings suggest that the HA binding assay has limited efficacy in selecting motile spermatozoa with normal morphology at high magnification.

  1. USING THE METHOD KINESIOTAPING IN REHABILITATION OF CHILDREN WITH HEMIPARETIC FORM OF CEREBRAL PALSY

    Directory of Open Access Journals (Sweden)

    Vladimir Evgenevich Tuchkov

    2016-08-01

    Full Text Available The study examines the impact of a new kind of intervention in the rehabilitation of the hemiparetic form of cerebral palsy – the kinesiotaping method «Concept 4 tapes». Within this framework, the patient's receptor apparatus is gradually activated, resulting in a restructuring of the abnormal movement program and creating conditions for other methods to act with greater efficiency and depth. The advantage of the kinesiotaping technique is its standardized approach, which allows the taping scheme to be applied to all patients without loss of therapeutic efficacy.

  2. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Full Text Available Abstract Background Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification was never carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS-amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to different kinds of biases, which may result in an overestimation of putative M/S hybrids, as follows: (i) incorrect matching of M- and S-specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M- and S-specific IGS-arrays in single individuals in areas of secondary contact between the two forms. Conclusions The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  3. Measurement of the angle formed between the thalamostriate vein and internal cerebral vein in anteroposterior projection: A method of estimating the size of the lateral ventricle

    International Nuclear Information System (INIS)

    Choi, Il Soon; Yoo, Ho Joon; Kim, Myung Sung; Park, Kwang Joo

    1974-01-01

    The size and shape of the lateral ventricle are frequently altered by intracranial lesions, and this may be reflected on the cerebral angiogram. The size and dilatation of the lateral ventricle may be estimated from the course of the thalamostriate vein (TSV) and the distance between the midline and the TSV in frontal projection, and from the course of the pericallosal artery and the distance between the venous angle and subependymal veins in lateral projection. However, little description can be found in the literature about methods of expressing the size and degree of dilatation of the lateral ventricle on the cerebral angiogram. The authors have attempted to find an easy way of precisely estimating the size of the lateral ventricle and to observe how it can be applied in patients with various expanding intracranial lesions. We measured the angle formed between the internal cerebral vein (ICV) and the TSV in the anteroposterior roentgenograms of the venous phase in a normal group composed of 61 patients in whom no significant abnormality could be detected neurologically or by other methods, and in 18 patients with expanding intracranial lesions. The results obtained are as follows: 1. In the normal group, the average angle formed between the ICV and TSV on the anteroposterior angiogram, obtained with the central beam projected at an angle of 10 to 15 degrees with the orbitomeatal line, was 25.7 ± 3.9 degrees, ranging from 19 to 34 degrees. The angle measured from 20 to 30 degrees in 85% of the normal group. There was no significant difference between males and females or between children and adults. 2. The measurement of the angle was found to reflect faithfully the size of the lateral ventricle on the side examined, increasing as the lateral ventricle dilated. When the angle measures more than 33 degrees, the lateral ventricle would certainly be dilated. The lateral ventricle can be taken as moderately dilated when the measurement exceeds 40 degrees and as severely dilated when

  4. Spatial normalization of array-CGH data

    Directory of Open Access Journals (Sweden)

    Brennetot Caroline

    2006-05-01

    Full Text Available Abstract Background Array-based comparative genomic hybridization (array-CGH) is a recently developed technique for analyzing changes in DNA copy number. As in all microarray analyses, normalization is required to correct for experimental artifacts while preserving the true biological signal. We investigated various sources of systematic variation in array-CGH data and identified two distinct types of spatial effect of no biological relevance as the predominant experimental artifacts: continuous spatial gradients and local spatial bias. Local spatial bias affects a large proportion of arrays, and has not previously been considered in array-CGH experiments. Results We show that existing normalization techniques do not correct these spatial effects properly. We therefore developed an automatic method for the spatial normalization of array-CGH data. This method makes it possible to delineate and to eliminate and/or correct areas affected by spatial bias. It is based on the combination of a spatial segmentation algorithm called NEM (Neighborhood Expectation Maximization) and spatial trend estimation. We defined quality criteria for array-CGH data, demonstrating significant improvements in data quality with our method for three data sets coming from two different platforms (198, 175 and 26 BAC-arrays). Conclusion We have designed an automatic algorithm for the spatial normalization of BAC CGH-array data, preventing the misinterpretation of experimental artifacts as biologically relevant outliers in the genomic profile. This algorithm is implemented in the R package MANOR (Micro-Array NORmalization), which is described at http://bioinfo.curie.fr/projects/manor and available from the Bioconductor site http://www.bioconductor.org. It can also be tested on the CAPweb bioinformatics platform at http://bioinfo.curie.fr/CAPweb.

  5. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
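
    The report's subregion scheme is not reproduced in the abstract, so the sketch below only illustrates the underlying acceptance-rejection idea with a textbook example - sampling the standard normal by rejection from an Exp(1) envelope on the half-line and then attaching a random sign - rather than the CYBER/CDC FORTRAN procedures the report describes.

      # Acceptance-rejection illustration (not the report's subregion scheme):
      # sample N(0,1) by rejection from an exponential envelope for the half-normal.
      import math
      import random

      def standard_normal_ar(rng=random):
          while True:
              x = rng.expovariate(1.0)                    # proposal from Exp(1)
              # acceptance probability f(x) / (M g(x)) = exp(-(x - 1)^2 / 2)
              if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
                  return x if rng.random() < 0.5 else -x  # attach a random sign

      samples = [standard_normal_ar() for _ in range(100000)]
      mean = sum(samples) / len(samples)
      var = sum(s * s for s in samples) / len(samples) - mean ** 2
      print("mean ~ %.3f, variance ~ %.3f" % (mean, var))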

  6. Normal mode analysis of macromolecular systems with the mobile block Hessian method

    International Nuclear Information System (INIS)

    Ghysels, An; Van Speybroeck, Veronique; Van Neck, Dimitri; Waroquier, Michel; Brooks, Bernard R.

    2015-01-01

    Until recently, normal mode analysis (NMA) was limited to small proteins, not only because the required energy minimization is a computationally exhausting task, but also because NMA requires the expensive diagonalization of a 3Na×3Na matrix, with Na the number of atoms. A series of simplified models has been proposed, in particular the Rotation-Translation Blocks (RTB) method by Tama et al. for the simulation of proteins. It makes use of the concept that a peptide chain or protein can be seen as a consecutive set of rigid components, i.e. the peptide units. A peptide chain is thus divided into rigid blocks with six degrees of freedom each. Recently we developed the Mobile Block Hessian (MBH) method, which in a sense has similar features as the RTB method. The main difference is that MBH was developed to deal with partially optimized systems. The position/orientation of each block is optimized while the internal geometry is kept fixed at a plausible - but not necessarily optimized - geometry. This reduces the computational cost of the energy minimization. Applying the standard NMA on a partially optimized structure however results in spurious imaginary frequencies and unwanted coordinate dependence. The MBH avoids these unphysical effects by taking into account energy gradient corrections. Moreover the number of variables is reduced, which facilitates the diagonalization of the Hessian. In the original implementation of MBH, atoms could only be part of one rigid block. The MBH is now extended to the case where atoms can be part of two or more blocks. Two basic linkages can be realized: (1) blocks connected by one link atom, or (2) by two link atoms, where the latter is referred to as the hinge type connection. In this work we present the MBH concept and illustrate its performance with the crambin protein as an example
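
    Neither the RTB nor the MBH equations appear in the abstract. The sketch below is only a toy illustration of the shared rigid-block idea - build a projection matrix from the six rigid-body degrees of freedom of each block and diagonalize the much smaller projected Hessian - and omits the MBH gradient corrections and the linked-block extensions described in the record.

      # Toy rigid-block (RTB-style) reduction of a Hessian; not the MBH implementation.
      import numpy as np

      def block_projection(coords, blocks):
          """Columns = rigid-body translations/rotations of each block, orthonormalized."""
          n = len(coords)
          cols = []
          for block in blocks:
              center = coords[block].mean(axis=0)
              for ax in range(3):                         # three translations
                  t = np.zeros((n, 3)); t[block, ax] = 1.0
                  cols.append(t.ravel())
              for ax in range(3):                         # three rotations about the center
                  e = np.zeros(3); e[ax] = 1.0
                  r = np.zeros((n, 3))
                  r[block] = np.cross(e, coords[block] - center)
                  cols.append(r.ravel())
          q, _ = np.linalg.qr(np.array(cols).T)
          return q

      def block_modes(hessian, coords, blocks):
          p = block_projection(coords, blocks)
          w, v = np.linalg.eigh(p.T @ hessian @ p)        # small 6*n_blocks eigenproblem
          return w, p @ v                                 # eigenvalues and full-space modes

      # Twelve "atoms" in two rigid blocks, with a random positive semidefinite Hessian.
      rng = np.random.default_rng(0)
      coords = rng.normal(scale=3.0, size=(12, 3))
      a = rng.normal(size=(36, 36))
      w, modes = block_modes(a @ a.T, coords, [np.arange(0, 6), np.arange(6, 12)])
      print(w.shape, modes.shape)                         # (12,) and (36, 12)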

  7. Spectrophotometric method for simultaneous estimation of atazanavir sulfate and ritonavir in tablet dosage form

    Directory of Open Access Journals (Sweden)

    Disha A Patel

    2015-01-01

    Full Text Available Background: Ritonavir (RTV) and atazanavir sulfate (ATV) are protease inhibitors, and RTV is mostly used as a booster for increasing the bioavailability of other protease inhibitors like ATV. Aims: Quality assessment of the new dosage form of RTV and ATV, i.e., tablets, is very essential, and hence this work aimed to develop a sensitive, simple and precise method for simultaneous estimation of ATV and RTV in tablet dosage form by the absorbance correction method. Materials and Methods: The present work was carried out on a Shimadzu UV-1700 double-beam ultraviolet spectrophotometer with 1 cm path length (Shimadzu, model 1700, Japan); UV-Probe software, version 2.31, was used for spectral measurements with 10 mm matched quartz cells. Standard ATV and RTV were supplied by Cipla Pharmaceutical Ltd. Methanol was purchased from Finar Chemicals Pvt. Ltd. Results and Conclusion: The λmax, or absorption maxima, for ATV and RTV were found to be 279 and 240 nm, respectively, in methanol as solvent. The drugs follow Beer-Lambert's law in the concentration ranges of 30-90 and 10-30 μg/mL for ATV and RTV, respectively. The percentage recovery was found to be 100-100.33% and 100-101.5% for ATV and RTV, respectively. The method was validated for different parameters as per the International Conference on Harmonisation guidelines.
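
    The absorbance-correction calculation itself is not spelled out in the abstract. In generic notation (the assignment of wavelengths to the two drugs is the paper's; the symbols here are only illustrative), such a method typically proceeds as

      c_Y = \frac{A_{\lambda_2}}{\varepsilon_{Y,\lambda_2}\, l}, \qquad A^{\mathrm{corr}}_{\lambda_1} = A_{\lambda_1} - \varepsilon_{Y,\lambda_1}\, l\, c_Y, \qquad c_X = \frac{A^{\mathrm{corr}}_{\lambda_1}}{\varepsilon_{X,\lambda_1}\, l},

    where \lambda_2 is a wavelength at which only drug Y absorbs, \lambda_1 a wavelength at which both drugs absorb, l the path length, and \varepsilon the corresponding absorptivities.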

  8. Combustible structural composites and methods of forming combustible structural composites

    Science.gov (United States)

    Daniels, Michael A.; Heaps, Ronald J.; Steffler, Eric D.; Swank, W. David

    2013-04-02

    Combustible structural composites and methods of forming same are disclosed. In an embodiment, a combustible structural composite includes combustible material comprising a fuel metal and a metal oxide. The fuel metal is present in the combustible material at a weight ratio from 1:9 to 1:1 of the fuel metal to the metal oxide. The fuel metal and the metal oxide are capable of exothermically reacting upon application of energy at or above a threshold value to support self-sustaining combustion of the combustible material within the combustible structural composite. Structural-reinforcing fibers are present in the composite at a weight ratio from 1:20 to 10:1 of the structural-reinforcing fibers to the combustible material. Other embodiments and aspects are disclosed.

  9. Lie algebra of conformal Killing–Yano forms

    International Nuclear Information System (INIS)

    Ertem, Ümit

    2016-01-01

    We provide a generalization of the Lie algebra of conformal Killing vector fields to conformal Killing–Yano forms. A new Lie bracket for conformal Killing–Yano forms that corresponds to slightly modified Schouten–Nijenhuis bracket of differential forms is proposed. We show that conformal Killing–Yano forms satisfy a graded Lie algebra in constant curvature manifolds. It is also proven that normal conformal Killing–Yano forms in Einstein manifolds also satisfy a graded Lie algebra. The constructed graded Lie algebras reduce to the graded Lie algebra of Killing–Yano forms and the Lie algebras of conformal Killing and Killing vector fields in special cases. (paper)
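
    For orientation, the standard defining equation of a conformal Killing-Yano p-form \omega on an n-dimensional manifold (a textbook relation, not quoted from the paper) is

      \nabla_X \omega = \frac{1}{p+1}\, i_X\, d\omega - \frac{1}{n-p+1}\, \tilde{X} \wedge \delta\omega

    for every vector field X, where \tilde{X} is the metric-dual 1-form, i_X the interior product and \delta the codifferential; Killing-Yano forms are the co-closed special case \delta\omega = 0.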

  10. Analytical methods for study of transmission line lightning protection

    International Nuclear Information System (INIS)

    Pettersson, Per.

    1993-04-01

    Transmission line lightning performance is studied by analytical methods. The elements of shielding failure flashovers and back-flashovers are analysed as functions of incidence, response and insulation. Closed-form approximate expressions are sought to enhance understanding of the phenomena. Probabilistic and wave propagation aspects are particularly studied. The electrogeometric model of lightning attraction to structures is used in combination with the log-normal probability distribution of lightning to ground currents. The log-normality is found to be retained for the currents collected by mast-type as well as line-type structures, but with a change of scale. For both types, exceedingly simple formulas for the number of hits are derived. Simple closed-form expressions for the line outage rates from back-flashovers and shielding failure flashovers are derived in a uniform way as functions of the critical currents. The expressions involve the standardized normal distribution function. System response is analysed by use of Laplace transforms in combination with text-book transmission-line theory. Inversion into time domain is accomplished by an approximate asymptotic method producing closed-form results. The back-flashover problem is analysed in particular. Approximate, image type expressions are derived for shunt admittance of wires above, on and under ground for analyses of fast transients. The derivation parallels that for series impedance, now well-known. 3 refs, 5 figs
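
    The report relies on the log-normal distribution of lightning peak currents and on the standardized normal distribution function. For reference, the standard forms (not quoted from the report) are

      f(i) = \frac{1}{\sqrt{2\pi}\,\beta\, i} \exp\!\left[-\frac{(\ln i - \ln i_m)^2}{2\beta^2}\right], \qquad P(I > i) = \Phi\!\left(\frac{\ln i_m - \ln i}{\beta}\right),

    where i_m is the median peak current, \beta the standard deviation of \ln I, and \Phi the standardized normal distribution function.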

  11. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    International Nuclear Information System (INIS)

    Hofschen, S.; Wolff, I.

    1996-01-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the desired computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement

  12. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hofschen, S.; Wolff, I. [Gerhard Mercator Univ. of Duisburg (Germany). Dept. of Electrical Engineering

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the desired computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.

  13. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data, and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that histone modifications H3K27me3 and H3K4me3 act as respectively a repressor and an activator of genes. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.

  14. Normal gravity field in relativistic geodesy

    Science.gov (United States)

    Kopeikin, Sergei; Vlasov, Igor; Han, Wen-Biao

    2018-02-01

    Modern geodesy is subject to a dramatic change from the Newtonian paradigm to Einstein's theory of general relativity. This is motivated by the ongoing advance in development of quantum sensors for applications in geodesy including quantum gravimeters and gradientometers, atomic clocks and fiber optics for making ultra-precise measurements of the geoid and multipolar structure of the Earth's gravitational field. At the same time, very long baseline interferometry, satellite laser ranging, and global navigation satellite systems have achieved an unprecedented level of accuracy in measuring 3-d coordinates of the reference points of the International Terrestrial Reference Frame and the world height system. The main geodetic reference standard to which gravimetric measurements of the Earth's gravitational field are referred is a normal gravity field represented in the Newtonian gravity by the field of a uniformly rotating, homogeneous Maclaurin ellipsoid whose mass and quadrupole moment are equal to the total mass and (tide-free) quadrupole moment of Earth's gravitational field. The present paper extends the concept of the normal gravity field from the Newtonian theory to the realm of general relativity. We focus our attention on the calculation of the post-Newtonian approximation of the normal field that is sufficient for current and near-future practical applications. We show that in general relativity the level surface of a homogeneous and uniformly rotating fluid is no longer described by the Maclaurin ellipsoid in the most general case but represents an axisymmetric spheroid of the fourth order with respect to the geodetic Cartesian coordinates. At the same time, admitting a post-Newtonian inhomogeneity of the mass density in the form of concentric elliptical shells allows one to preserve the level surface of the fluid as an exact ellipsoid of rotation. We parametrize the mass density distribution and the level surface with two parameters which are

  15. METHOD OF GROUP OBJECTS FORMING FOR SPACE-BASED REMOTE SENSING OF THE EARTH

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    Full Text Available Subject of Research. Research findings on the specific application of space-based optical-electronic and radar means for Earth remote sensing are considered. The subject matter of the study is the current planning of surveys of objects on the underlying surface in order to increase the effectiveness of the sensing system through rational use of its resources. Method. New concepts of a group object, stochastic swath and stochastic length of the route are introduced. An overview of models for single and group objects and their parameters is given. The criterion for the existence of a group object based on two single objects is formulated. A method for forming group objects during current survey planning has been developed and its description is presented. The method comprises several stages of processing data about objects, with the calculation of new parameters and the stochastic characteristics of the space means, and checks the spatial size of an object against the values of the stochastic swath and the stochastic length of the route. A strict mathematical description of techniques for creating a model of a group object from data about single objects and onboard special complex facilities under difficult conditions of spatial data registration is given. Main Results. The developed method is implemented on the basis of a modern geographic information system in the form of a software tool layout with advanced tools for processing and analysis of spatial data in vector format. Experimental studies of the method for forming group objects were carried out on different real object environments using the parameters of the modern national detailed-observation Earth remote sensing systems Canopus-B and Resurs-P. Practical Relevance. The proposed models and method are focused on practical implementation using vector spatial data models and modern geoinformation technologies. Practical value lies in the reduction in the amount of consumable resources by means of

  16. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction (p < ...); 201Tl-uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was conceived as an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account in the assessment of attenuation-corrected images. (author)

  17. Normal Weight Dyslipidemia

    DEFF Research Database (Denmark)

    Ipsen, David Hojland; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-01-01

    Objective: The liver coordinates lipid metabolism and may play a vital role in the development of dyslipidemia, even in the absence of obesity. Normal weight dyslipidemia (NWD) and patients with nonalcoholic fatty liver disease (NAFLD) who do not have obesity constitute a unique subset...... of individuals characterized by dyslipidemia and metabolic deterioration. This review examined the available literature on the role of the liver in dyslipidemia and the metabolic characteristics of patients with NAFLD who do not have obesity. Methods: PubMed was searched using the following keywords: nonobese......, dyslipidemia, NAFLD, NWD, liver, and metabolically obese/unhealthy normal weight. Additionally, article bibliographies were screened, and relevant citations were retrieved. Studies were excluded if they had not measured relevant biomarkers of dyslipidemia. Results: NWD and NAFLD without obesity share a similar...

  18. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly people on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT-specific algorithm method used had high inter-rater reliability (p<0.01), and mean scores ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  19. Method for making a low density polyethylene waste form for safe disposal of low level radioactive material

    Science.gov (United States)

    Colombo, P.; Kalb, P.D.

    1984-06-05

    In the method of the invention low density polyethylene pellets are mixed in a predetermined ratio with radioactive particulate material, then the mixture is fed through a screw-type extruder that melts the low density polyethylene under a predetermined pressure and temperature to form a homogeneous matrix that is extruded and separated into solid monolithic waste forms. The solid waste forms are adapted to be safely handled, stored for a short time, and safely disposed of in approved depositories.

  20. Color normalization of histology slides using graph regularized sparse NMF

    Science.gov (United States)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space. The
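
    The graph-regularized sparse NMF (GSNMF) itself is not detailed in the abstract. The sketch below only illustrates the underlying stain-separation step with plain NMF from scikit-learn applied to the optical-density transform of a synthetic image; the sparsity and graph-Laplacian terms that are the paper's contribution are omitted.

      # Stain separation by plain NMF on optical density (not the paper's GSNMF).
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      rgb = rng.integers(1, 255, size=(64, 64, 3)).astype(float)   # stand-in slide image

      # Beer-Lambert optical density: OD = -log10(I / I0) with I0 = 255, so OD >= 0.
      od = -np.log10(np.clip(rgb, 1, 255) / 255.0)
      v = od.reshape(-1, 3).T                      # 3 x n_pixels, nonnegative

      model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
      w = model.fit_transform(v)                   # 3 x 2: stain color basis
      h = model.components_                        # 2 x n_pixels: stain concentrations
      print(w.shape, h.shape)

      # A normalization step would then rescale h to reference stain statistics and
      # reconstruct the image through a reference color basis.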

  1. A simple method for normalization of DNA extraction to improve the quantitative detection of soil-borne plant pathogenic oomycetes by real-time PCR.

    Science.gov (United States)

    Li, M; Ishiguro, Y; Kageyama, K; Zhu, Z

    2015-08-01

    Most of the current research into the quantification of soil-borne pathogenic oomycetes lacks determination of DNA extraction efficiency, probably leading to an incorrect estimation of DNA quantity. In this study, we developed a convenient method by using a 100 bp artificially synthesized DNA sequence derived from the mitochondrion NADH dehydrogenase subunit 2 gene of Thunnus thynnus as a control to determine the DNA extraction efficiency. The control DNA was added to soils and then co-extracted along with soil genomic DNA. DNA extraction efficiency was determined by the control DNA. Two different DNA extraction methods were compared and evaluated using different types of soils, and the commercial kit was proved to give more consistent results. We used the control DNA combined with real-time PCR to quantify the oomycete DNAs from 12 naturally infested soils. Detectable target DNA concentrations were three to five times higher after normalization. Our tests also showed that extraction efficiencies varied from sample to sample; the method was simple and useful for the accurate quantification of soil-borne pathogenic oomycetes. Oomycetes include many important plant pathogens. Accurate quantification of these pathogens is essential in the management of diseases. This study reports an easy method utilizing an external DNA control for the normalization of DNA extraction by real-time PCR. By combining two different efficient soil DNA extraction methods, the developed quantification method dramatically improved the results. This study also proves that the developed normalization method is necessary and useful for the accurate quantification of soil-borne plant pathogenic oomycetes. © 2015 The Society for Applied Microbiology.
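
    The normalization arithmetic implied by the abstract is simple; with made-up numbers (not the paper's data) it amounts to the following.

      # Spike-in normalization of real-time PCR quantities (illustrative numbers only).
      added_control_copies = 1.0e6       # synthetic control DNA spiked into the soil sample
      recovered_control_copies = 2.5e5   # control copies measured after extraction

      efficiency = recovered_control_copies / added_control_copies   # 0.25

      measured_target = 1.2e3            # oomycete target DNA measured in the extract
      normalized_target = measured_target / efficiency               # 4.8e3

      print(f"extraction efficiency: {efficiency:.2f}")
      print(f"normalized target DNA: {normalized_target:.1f} copies")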

  2. GC-Content Normalization for RNA-Seq Data

    Science.gov (United States)

    2011-01-01

    Background Transcriptome sequencing (RNA-Seq) has become the assay of choice for high-throughput studies of gene expression. However, as is the case with microarrays, major technology-related artifacts and biases affect the resulting expression measures. Normalization is therefore essential to ensure accurate inference of expression levels and subsequent analyses thereof. Results We focus on biases related to GC-content and demonstrate the existence of strong sample-specific GC-content effects on RNA-Seq read counts, which can substantially bias differential expression analysis. We propose three simple within-lane gene-level GC-content normalization approaches and assess their performance on two different RNA-Seq datasets, involving different species and experimental designs. Our methods are compared to state-of-the-art normalization procedures in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq. Conclusions Our within-lane normalization procedures, followed by between-lane normalization, reduce GC-content bias and lead to more accurate estimates of expression fold-changes and tests of differential expression. Such results are crucial for the biological interpretation of RNA-Seq experiments, where downstream analyses can be sensitive to the supplied lists of genes. PMID:22177264
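
    The paper's normalization procedures are implemented in the Bioconductor package EDASeq; the sketch below is not that code but only the simplest form of the within-lane idea - bin genes by GC content and rescale counts so that every bin shares a common median - applied to simulated data.

      # Simplest within-lane GC-content normalization (bin-and-rescale), simulated data.
      import numpy as np

      rng = np.random.default_rng(0)
      gc = rng.uniform(0.3, 0.7, size=5000)                 # per-gene GC content
      counts = rng.poisson(lam=100 * (1 + 2 * (gc - 0.5)))  # GC-biased read counts

      edges = np.quantile(gc, np.linspace(0, 1, 11))        # 10 equal-occupancy GC bins
      which = np.clip(np.digitize(gc, edges) - 1, 0, 9)

      overall_median = np.median(counts)
      normalized = counts.astype(float)
      for b in range(10):
          sel = which == b
          normalized[sel] *= overall_median / np.median(counts[sel])

      print("GC-count correlation before: %.3f  after: %.3f"
            % (np.corrcoef(gc, counts)[0, 1], np.corrcoef(gc, normalized)[0, 1]))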

  3. Ultrasonic off-normal imaging techniques for under sodium viewing

    International Nuclear Information System (INIS)

    Michaels, T.E.; Horn, J.E.

    1979-01-01

    Advanced imaging methods have been evaluated for the purpose of constructing images of objects from ultrasonic data. Feasibility of imaging surfaces which are off-normal to the sound beam has been established. Laboratory results are presented which show a complete image of a typical core component. Using the previous system developed for under sodium viewing (USV), only normal surfaces of this object could be imaged. Using advanced methods, surfaces up to 60 degrees off-normal have been imaged. Details of equipment and procedures used for this image construction are described. Additional work on high temperature transducers, electronics, and signal analysis is required in order to adapt the off-normal viewing process described here to an eventual USV application

  4. Robust glint detection through homography normalization

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Roholm, Lars; García Ferreiros, Iván

    2014-01-01

    A novel normalization principle for robust glint detection is presented. The method is based on geometric properties of corneal reflections and allows for simple and effective detection of glints even in the presence of several spurious and identically appearing reflections. The method is tested...

  5. Heart failure: when form fails to follow function.

    Science.gov (United States)

    Katz, Arnold M; Rolett, Ellis L

    2016-02-01

    Cardiac performance is normally determined by architectural, cellular, and molecular structures that determine the heart's form, and by physiological and biochemical mechanisms that regulate the function of these structures. Impaired adaptation of form to function in failing hearts contributes to two syndromes initially called systolic heart failure (SHF) and diastolic heart failure (DHF). In SHF, characterized by high end-diastolic volume (EDV), the left ventricle (LV) cannot eject a normal stroke volume (SV); in DHF, with normal or low EDV, the LV cannot accept a normal venous return. These syndromes are now generally defined in terms of ejection fraction (EF): SHF became 'heart failure with reduced ejection fraction' (HFrEF) while DHF became 'heart failure with normal or preserved ejection fraction' (HFnEF or HFpEF). However, EF is a chimeric index because it is the ratio between SV--which measures function, and EDV--which measures form. In SHF the LV dilates when sarcomere addition in series increases cardiac myocyte length, whereas sarcomere addition in parallel can cause concentric hypertrophy in DHF by increasing myocyte thickness. Although dilatation in SHF allows the LV to accept a greater venous return, it increases the energy cost of ejection and initiates a vicious cycle that contributes to progressive dilatation. In contrast, concentric hypertrophy in DHF facilitates ejection but impairs filling and can cause heart muscle to deteriorate. Differences in the molecular signals that initiate dilatation and concentric hypertrophy can explain why many drugs that improve prognosis in SHF have little if any benefit in DHF. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
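
    The authors' point that EF mixes a functional quantity (stroke volume) with a structural one (end-diastolic volume) can be made concrete with a small numerical example; the volumes below are illustrative only, not taken from the paper.

      # EF = SV / EDV = (EDV - ESV) / EDV; illustrative volumes in mL.
      def ejection_fraction(edv_ml, esv_ml):
          return (edv_ml - esv_ml) / edv_ml

      # Dilated ventricle (HFrEF-like): near-normal stroke volume, large EDV, low EF.
      print("dilated:       EF = %.2f" % ejection_fraction(edv_ml=240, esv_ml=170))  # SV = 70 mL

      # Concentrically hypertrophied ventricle (HFpEF-like): small EDV and SV, "preserved" EF.
      print("hypertrophied: EF = %.2f" % ejection_fraction(edv_ml=90, esv_ml=35))    # SV = 55 mL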

  6. On matrix superpotential and three-component normal modes

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, R. de Lima [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Lima, A.F. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Dept. de Fisica; Mello, E.R. Bezerra de; Bezerra, V.B. [Universidade Federal da Paraiba (UFPB), Joao Pessoa, PB (Brazil). Dept. de Fisica]. E-mails: rafael@df.ufcg.edu.br; aerlima@df.ufcg.edu.br; emello@fisica.ufpb.br; valdir@fisica.ufpb.br

    2007-07-01

    We consider the supersymmetric quantum mechanics(SUSY QM) with three-component normal modes for the Bogomol'nyi-Prasad-Sommerfield (BPS) states. An explicit form of the SUSY QM matrix superpotential is presented and the corresponding three-component bosonic zero-mode eigenfunction is investigated. (author)

  7. ASSESSMENT OF SELECTED PROPERTIES OF NORMAL CONCRETES WITH THE GRINDED RUBBER FROM WORN OUT VEHICLE TYRES

    Directory of Open Access Journals (Sweden)

    Ewa Ołdakowska

    2015-07-01

    Full Text Available Rubber from worn tyres is regarded as a useless material, burdensome for the environment, whose most popular recovery method until recently was storage (currently forbidden by law). The adoption and dissemination of new ecological standards, created not only by European and national legislation but also developing as a result of expanding ecological consciousness, makes it necessary to seek efficient methods of utilizing vehicle tyres. The exemplary solution for the problem of tyres withdrawn from service, presented in the article, is using them in ground form as a substitute for natural aggregate in the production of normal concrete. The article presents the results of tests of selected properties of the modified normal concrete, on the basis of which it was found that the rubber decreases compressive strength and concrete density, limits water absorbability, and does not significantly influence the physical and chemical phenomena accompanying formation of the composite structure.

  8. Normal-Gamma-Bernoulli Peak Detection for Analysis of Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Data.

    Science.gov (United States)

    Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang

    2017-01-01

    Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for analysis of complex samples and thus is increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide applications of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized by gamma distribution and a new peak detection algorithm using the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches, which are fast Fourier transform (FFT), the first-order and the second-order delta methods (D1 and D2), are introduced. The applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs the best in terms of both computational expense and peak detection performance.

  9. MR guided spatial normalization of SPECT scans

    International Nuclear Information System (INIS)

    Crouch, B.; Barnden, L.R.; Kwiatek, R.

    2010-01-01

    Full text: In SPECT population studies where magnetic resonance (MR) scans are also available, the higher resolution of the MR scans allows for an improved spatial normalization of the SPECT scans. In this approach, the SPECT images are first coregistered to their corresponding MR images by a linear (affine) transformation which is calculated using SPM's mutual information maximization algorithm. Non-linear spatial normalization maps are then computed either directly from the MR scans using SPM's built-in spatial normalization algorithm, or from segmented T1 MR images using DARTEL, an advanced diffeomorphism-based spatial normalization algorithm. We compare these MR based methods to standard SPECT based spatial normalization for a population of 27 fibromyalgia patients and 25 healthy controls with spin echo T1 scans. We identify significant perfusion deficits in prefrontal white matter in FM patients, with the DARTEL based spatial normalization procedure yielding stronger statistics than the standard SPECT based spatial normalization. (author)

  10. Cross Correlation versus Normalized Mutual Information on Image Registration

    Science.gov (United States)

    Tan, Bin; Tilton, James C.; Lin, Guoqing

    2016-01-01

    This is the first study to quantitatively assess and compare cross correlation and normalized mutual information methods used to register images at subpixel scale. The study shows that the normalized mutual information method is less sensitive to unaligned edges due to spectral response differences than is cross correlation. This characteristic makes normalized mutual information a better candidate for band-to-band registration. Improved band-to-band registration in the data from satellite-borne instruments will result in improved retrievals of key science measurements such as cloud properties, vegetation, snow and fire.
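
    The study's implementations are not given in the abstract; the sketch below merely shows, on a synthetic image pair, how the two similarity measures are commonly computed - zero-normalized cross-correlation and a histogram-based normalized mutual information in the Studholme form - which is an assumption about the formulation rather than the study's exact code.

      # Normalized cross-correlation vs. normalized mutual information (illustrative only).
      import numpy as np

      def normalized_cross_correlation(a, b):
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float((a * b).mean())

      def normalized_mutual_information(a, b, bins=64):
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
          hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
          hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
          return (hx + hy) / hxy                        # Studholme et al. normalization

      rng = np.random.default_rng(0)
      img = rng.normal(size=(128, 128))
      shifted = np.roll(img, shift=1, axis=1) + 0.1 * rng.normal(size=img.shape)

      print("NCC:", round(normalized_cross_correlation(img, shifted), 3))
      print("NMI:", round(normalized_mutual_information(img, shifted), 3))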

  11. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure.

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-17

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water without using suspending agents through using the electric spark discharge method at room temperature and under normal atmospheric pressure. This method was exceptional because it did not create nanoparticle dispersion and the produced colloids contained no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity, 99.99%). The EDM pulse on time (T on) and pulse off time (T off) were respectively set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in T on increased the suspension stability, but there were no significant correlations between T on and particle size. Colloids produced from the four production configurations showed a minimum particle size between 29.39 and 52.85 nm and a zeta-potential between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  13. Is this the right normalization? A diagnostic tool for ChIP-seq normalization.

    Science.gov (United States)

    Angelini, Claudia; Heller, Ruth; Volkinshtein, Rita; Yekutieli, Daniel

    2015-05-09

    ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks, and RNA Polymerase II occupancy. However, when comparing a ChIP sample with a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of normalization method can have on the results of a ChIP-seq data analysis, the assessment of such methods is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS, and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard peak-calling tools such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated into many peak-calling algorithms to improve the accuracy of peak identification. The
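
    As a minimal sketch of the quantities described in the abstract (the naive scale factor r as the ratio of total read counts, and bin-level log relative risks), assuming hypothetical function names and a simple pseudocount rather than the authors' diagnostic procedure:

      # Hedged sketch: naive linear normalization constant for ChIP vs. Input and
      # per-bin log relative risks. Illustrative only; not the published tool.
      import numpy as np

      def naive_scale_factor(chip_counts, input_counts):
          """r = total ChIP reads / total Input reads (sequencing-depth adjustment)."""
          return chip_counts.sum() / input_counts.sum()

      def log_relative_risk(chip_counts, input_counts, r):
          """Per-bin log relative risk of ChIP vs. scaled Input, with a pseudocount."""
          return np.log((chip_counts + 1.0) / (r * (input_counts + 1.0)))

      # Usage: counts are reads falling in equal-width genomic bins.
      chip_counts = np.array([12, 40, 7, 95, 3])
      input_counts = np.array([10, 20, 8, 30, 5])
      r = naive_scale_factor(chip_counts, input_counts)
      llr = log_relative_risk(chip_counts, input_counts, r)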

  14. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
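
    The abstract does not spell out how the simultaneous intervals are built, but the idea can be sketched by Monte Carlo calibration under normality. The following is a minimal illustration, assuming standardized order statistics and a crude narrow-to-wide search for the band width; function and parameter names are hypothetical and this is not the authors' exact construction:

      # Hedged sketch: a simulated envelope for normal order statistics whose
      # simultaneous (all-points-inside) coverage is roughly 1 - alpha.
      import numpy as np

      def simultaneous_envelope(n, alpha=0.05, n_sim=2000, seed=0):
          rng = np.random.default_rng(seed)
          sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
          # Narrow-to-wide search: return the narrowest pointwise band whose
          # simultaneous coverage over the simulated samples reaches 1 - alpha.
          for q in np.linspace(0.25, 0.0005, 200):
              lo = np.quantile(sims, q, axis=0)
              hi = np.quantile(sims, 1.0 - q, axis=0)
              coverage = np.mean(np.all((sims >= lo) & (sims <= hi), axis=1))
              if coverage >= 1.0 - alpha:
                  return lo, hi
          return lo, hi

      # Usage: standardize the sorted sample and check whether every point lies
      # inside the envelope; if so, the plot is consistent with normality.
      x = np.sort(np.random.default_rng(1).normal(loc=10.0, scale=2.0, size=50))
      z = (x - x.mean()) / x.std(ddof=1)
      lo, hi = simultaneous_envelope(len(x))
      consistent_with_normality = bool(np.all((z >= lo) & (z <= hi)))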

  15. of the stomach (ID 345), neutralisation of gastric acid (ID 345), contribution to normal formation of collagen and connective tissue (ID 287, 288, 333, 334, 335, 1405, 1652, 1718, 1719, 1945), maintenance of normal bone (ID 287, 335, 1652, 1718, 1945), maintenance of normal joints (ID 1405, 1652, 1945

    DEFF Research Database (Denmark)

    Tetens, Inge

    claims in relation to silicon and protection against aluminium accumulation in the brain, cardiovascular health, forming a protective coat on the mucous membrane of the stomach, neutralisation of gastric acid, contribution to normal formation of collagen and connective tissue, maintenance of normal bone...

  16. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

    Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study...... was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal....... The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient...

  17. Chandra-SDSS Normal and Star-Forming Galaxies. I. X-Ray Source Properties of Galaxies Detected by the Chandra X-Ray Observatory in SDSS DR2

    Science.gov (United States)

    Hornschemeier, A. E.; Heckman, T. M.; Ptak, A. F.; Tremonti, C. A.; Colbert, E. J. M.

    2005-01-01

    We have cross-correlated X-ray catalogs derived from archival Chandra X-Ray Observatory ACIS observations with a Sloan Digital Sky Survey Data Release 2 (DR2) galaxy catalog to form a sample of 42 serendipitously X-ray-detected galaxies over a redshift interval beginning at z = 0.03, which is intermediate between previous studies of nearby normal galaxies and those in the deepest X-ray surveys. Our chief purpose is to compare optical spectroscopic diagnostics of activity (both star formation and accretion) with the X-ray properties of galaxies. Our work supports a normalization value of the X-ray-star formation rate correlation consistent with the lower values published in the literature. The difference is in the allocation of X-ray emission to high-mass X-ray binaries relative to other components, such as hot gas, low-mass X-ray binaries, and/or active galactic nuclei (AGNs). We are able to quantify a few pitfalls in the use of lower resolution, lower signal-to-noise ratio optical spectroscopy to identify X-ray sources (as has necessarily been employed for many X-ray surveys). Notably, we find a few AGNs that would likely have been misidentified as non-AGN sources in higher redshift studies. However, we do not find any X-ray-hard, highly X-ray-luminous galaxies lacking optical spectroscopic diagnostics of AGN activity. Such sources are members of the "X-ray-bright, optically normal galaxy" (XBONG) class of AGNs.

  18. Doppler ultrasound scan during normal gestation: umbilical circulation; Ecografia Doppler en la gestacion normal: circulacion umbilical

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, T.; Sabate, J.; Martinez-Benavides, M. M.; Sanchez-Ramos, J. [Hospital Virgen Macarena. Sevilla (Spain)

    2002-07-01

    To determine normal umbilical circulation patterns by means of Doppler ultrasound scanning in a healthy gestating population without risk factors and with normal perinatal results, and to evaluate any modifications that occur relative to gestational age by obtaining records kept during pregnancy. One hundred and sixteen pregnant women carrying a single fetus were studied. These women had no risk factors, and their clinical and analytical controls, as well as their ultrasound scans, were all normal. A total of 193 Doppler ultrasound scans were performed between weeks 15 and 41 of gestation, with blood-flow analysis in the arteries and vein of the umbilical cord. The information obtained was correlated with parameters that evaluate fetal well-being (fetal monitoring and/or oxytocin test) and perinatal result (delivery type, birth weight, Apgar score). Statistical analysis was performed with the programs SPSS 6.0.1 for Windows and EPIINFO 6.0.4. With pulsed Doppler, the umbilical artery in all cases demonstrated a biphasic morphology with systolic and diastolic components and without retrograde blood flow. As gestation progressed, a progressive decrease in resistance was observed, along with an increase in blood-flow velocity during the diastolic phase. The Doppler ultrasound scan is a non-invasive method that permits the hemodynamic study of umbilical blood circulation. Knowledge of normal blood-flow signal morphology, as well as of the normal values for Doppler indices in relation to gestational age, would permit the use of this method in high-risk pregnancies. (Author) 30 refs.
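
    The abstract refers to Doppler indices without naming them; the resistance index (RI) and pulsatility index (PI) below are the standard umbilical-artery measures and are shown only as a hedged illustration of how such indices follow from the systolic and diastolic velocities (values and names are illustrative, not from the study):

      # Hedged sketch: standard Doppler indices computed from velocity waveform values.
      def resistance_index(psv, edv):
          """RI = (peak systolic velocity - end diastolic velocity) / peak systolic velocity."""
          return (psv - edv) / psv

      def pulsatility_index(psv, edv, tamv):
          """PI = (peak systolic velocity - end diastolic velocity) / time-averaged mean velocity."""
          return (psv - edv) / tamv

      # Usage: rising diastolic velocities with advancing gestation lower both indices,
      # matching the reported progressive decrease in resistance.
      ri = resistance_index(psv=60.0, edv=24.0)            # illustrative velocities in cm/s
      pi = pulsatility_index(psv=60.0, edv=24.0, tamv=36.0)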

  19. Identity Work at a Normal University in Shanghai

    Science.gov (United States)

    Cockain, Alex

    2016-01-01

    Based upon ethnographic research, this article explores undergraduate students' experiences at a normal university in Shanghai focusing on the types of identities and forms of sociality emerging therein. Although students' symptoms of disappointment seem to indicate the power of university experiences to extinguish purposeful action, this article…

  20. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    Science.gov (United States)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either the normalized or the factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ) and formulated as tablets, is the mixture with partially overlapping spectra, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA) and formulated as eye drops, is the one with completely overlapping spectra. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D 0) spectrum of each drug; absorbance was then recorded at the respective maxima of 238, 233.5, 273, and 242.5 nm for SM, EZ, CPL, and PA. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision, and repeatability were found to be within acceptable limits. The proposed method is suitable for the assay of the drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, and no significant difference was found with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.
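
    The abstract does not give the algebra of a-Centering. As a rough, hedged illustration of the general idea of recovering a zero-order (D 0) spectrum of one drug from a binary mixture using a normalized spectrum of the overlapping component and then reading the absorbance at its maximum, the following sketch uses synthetic Gaussian bands and hypothetical names; it is not the published procedure:

      # Hedged sketch: generic D0 recovery by subtracting a scaled normalized
      # spectrum of the interfering component. Illustrative synthetic data only.
      import numpy as np

      def gaussian_band(wl, center, width, height):
          return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

      def recover_d0(mixture, normalized_other, region):
          """Estimate the interferent's scale in a window where only it absorbs,
          subtract its contribution, and return the recovered D0 spectrum."""
          scale = np.mean(mixture[region] / normalized_other[region])
          return mixture - scale * normalized_other

      wl = np.arange(220.0, 320.0, 0.5)
      drug_a = gaussian_band(wl, 238.0, 8.0, 0.60)    # analyte of interest
      drug_b = gaussian_band(wl, 273.0, 12.0, 0.45)   # overlapping component
      mixture = drug_a + drug_b
      normalized_b = drug_b / drug_b.max()            # unit-normalized spectrum of B
      region = wl > 300.0                             # window where only B absorbs
      d0_a = recover_d0(mixture, normalized_b, region)
      a_at_max = d0_a[np.argmin(np.abs(wl - 238.0))]  # absorbance read at 238 nm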