WorldWideScience

Sample records for normal form method

  1. Application of normal form methods to the analysis of resonances in particle accelerators

    International Nuclear Information System (INIS)

    Davies, W.G.

    1992-01-01

    The transformation to normal form in a Lie-algebraic framework provides a very powerful method for identifying and analysing non-linear behaviour and resonances in particle accelerators. The basic ideas are presented and illustrated. (author). 4 refs

  2. SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    The rapid development of methods of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic determines the need for a more detailed study of the forms of representation of functions of many-valued logic. In particular, the algebraic normal form of Boolean functions, also known as the Zhegalkin polynomial, is widely used because it captures many of the cryptographic properties of Boolean functions. In this article, we formalize the notion of algebraic normal form for many-valued logic functions. We develop a fast method for synthesizing the algebraic normal form of 3-functions and 5-functions that works similarly to the Reed-Muller transform for Boolean functions, on the basis of recurrently synthesized transform matrices. We propose a hypothesis specifying the rules for synthesizing these matrices for the transformation from the truth table to the coefficients of the algebraic normal form, and for the inverse transform, for any given number of variables of 3-functions or 5-functions. The article also introduces the definition of the algebraic degree of nonlinearity for functions of many-valued logic and for S-boxes based on the principles of many-valued logic. The methods for synthesizing the algebraic normal form of 3-functions are then applied to the known construction for the recurrent synthesis of S-boxes of length N = 3^k, and their algebraic degrees of nonlinearity are computed. The results could form the basis for further theoretical research and for practical applications such as the development of new cryptographic primitives, error-correcting codes, data compression algorithms, signal structures, and algorithms of block and stream encryption, all based on the promising principles of many-valued logic. In addition, the fast method for synthesizing the algebraic normal form of many-valued logic functions is the basis for their software and hardware implementation.
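
    The Boolean case of this transform is well known: the ANF coefficients are obtained from the truth table by a butterfly pass with a small kernel matrix, applied once per variable. As a hedged illustration of the many-valued analogue, the sketch below applies a 3x3 kernel over GF(3), derived here by ordinary polynomial interpolation; the authors' recurrently synthesized matrices may differ in ordering or sign conventions.

        # Sketch: ANF coefficients of an n-variable 3-valued function over GF(3).
        # The 3x3 kernel (rows for the constant, linear and quadratic terms of
        # one variable) comes from solving the Vandermonde system over GF(3).
        def anf3(f):
            """f: truth table of length 3**n, values in {0,1,2}, index read as
            the base-3 digits of the input. Returns ANF coefficients mod 3."""
            f = [v % 3 for v in f]
            step = 1
            while step < len(f):
                for i in range(0, len(f), 3 * step):
                    for j in range(i, i + step):
                        a, b, c = f[j], f[j + step], f[j + 2 * step]
                        f[j] = a                               # constant term
                        f[j + step] = (2 * b + c) % 3          # = f(2) - f(1) mod 3
                        f[j + 2 * step] = 2 * (a + b + c) % 3  # = 2f(1) - f(0) - f(2) mod 3
                step *= 3
            return f

        # One variable: f(x) = x^2 has values (0, 1, 1) -> coefficients (0, 0, 1)
        print(anf3([0, 1, 1]))  # [0, 0, 1]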

  3. THE METHOD OF CONSTRUCTING A BOOLEAN FORMULA OF A POLYGON IN THE DISJUNCTIVE NORMAL FORM

    Directory of Open Access Journals (Sweden)

    A. A. Butov

    2014-01-01

    The paper focuses on finalizing the method of finding a Boolean formula of a polygon in disjunctive normal form, described in the previous article [1]. The improved method eliminates the drawback associated with the existence of a class of problems for which the solution is only approximate. The proposed method always finds an exact solution. The method can be used, in particular, in systems for computer-aided design of integrated-circuit topology.

  4. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations

    Science.gov (United States)

    DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.

    2008-06-01

    For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ²), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
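
    As a textbook-scale illustration of the kind of amplitude equation the RG method produces (a standard example, not one taken from this paper), consider the weakly nonlinear oscillator ẍ + x + ɛx³ = 0 with x(t) = A(t)e^{it} + c.c. + O(ɛ). Renormalizing the secular term t·e^{it} of the naive expansion yields the RG equation

        \frac{dA}{dt} = \frac{3i\varepsilon}{2}\,|A|^{2}A + O(\varepsilon^{2}),

    and writing A = (a/2)e^{iθ} gives ȧ = 0 and θ̇ = (3ɛ/8)a², i.e. the familiar Poincaré-Lindstedt frequency shift ω ≈ 1 + (3ɛ/8)a², free of secular growth.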

  5. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    Energy Technology Data Exchange (ETDEWEB)

    Snopok, Pavel [Michigan State Univ., East Lansing, MI (United States)]

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of systematic skew quadrupole errors in dipoles; (b) calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (the general restrictions for this are not much stronger than the typical restrictions imposed on the behavior of particles in the accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. The algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems, as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented

  6. First-order systems of linear partial differential equations: normal forms, canonical systems, transform methods

    Directory of Open Access Journals (Sweden)

    Heinz Toparkus

    2014-04-01

    In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in itself and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate basic regions, and solutions of these problems can be sought by means of transform methods.

  7. Mandibular dental arch form differences between level four polynomial method and pentamorphic pattern for normal occlusion sample

    Directory of Open Access Journals (Sweden)

    Y. Yuliana

    2011-07-01

    The aim of an orthodontic treatment is to achieve aesthetics, dental health, healthy surrounding tissues, a functional occlusal relationship, and stability. The success of an orthodontic treatment is influenced by many factors, such as the diagnosis and the treatment plan. In order to make a diagnosis and a treatment plan, medical records, clinical examination, radiographic examination, extraoral and intraoral photos, as well as study model analysis are needed. The purpose of this study was to evaluate the differences in dental arch form between the level four polynomial method and the pentamorphic arch pattern, and to determine which one best suits a normal occlusion sample. This comparative analytic study was conducted at the Faculty of Dentistry, Universitas Padjadjaran, on 13 models by comparing the dental arch form obtained using the level four polynomial method, based on mathematical calculations, with the pentamorphic arch pattern, using mandibular normal occlusion as a control. The results were tested using Student's t-test. The results indicate a significant difference both for the level four polynomial method and for the pentamorphic arch form when compared with the mandibular normal occlusion dental arch form. The level four polynomial fits better than the pentamorphic arch form.

  8. Application of Power Geometry and Normal Form Methods to the Study of Nonlinear ODEs

    Science.gov (United States)

    Edneral, Victor

    2018-02-01

    This paper describes power transformations of degenerate autonomous polynomial systems of ordinary differential equations which reduce such systems to a non-degenerate form. An example of constructing exact first integrals of motion of a planar degenerate system in closed form is given.

  9. Application of Power Geometry and Normal Form Methods to the Study of Nonlinear ODEs

    Directory of Open Access Journals (Sweden)

    Edneral Victor

    2018-01-01

    This paper describes power transformations of degenerate autonomous polynomial systems of ordinary differential equations which reduce such systems to a non-degenerate form. An example of constructing exact first integrals of motion of a planar degenerate system in closed form is given.

  10. Normal forms in Poisson geometry

    NARCIS (Netherlands)

    Marcut, I.T.

    2013-01-01

    The structure of Poisson manifolds is highly nontrivial even locally. The first important result in this direction is Conn's linearization theorem around fixed points. One of the main results of this thesis (Theorem 2) is a normal form theorem in Poisson geometry, which is the Poisson-geometric

  11. The method of normal forms for singularly perturbed systems of Fredholm integro-differential equations with rapidly varying kernels

    Energy Technology Data Exchange (ETDEWEB)

    Bobodzhanov, A A; Safonov, V F [National Research University "Moscow Power Engineering Institute", Moscow (Russian Federation)]

    2013-07-31

    The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems which have not so far been studied. In these systems the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties. The principal one is to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed, applied to the above type of systems, and substantiated for them. Bibliography: 10 titles.

  12. Nonlinear dynamics exploration through normal forms

    CERN Document Server

    Kahn, Peter B

    2014-01-01

    Geared toward advanced undergraduates and graduate students, this exposition covers the method of normal forms and its application to ordinary differential equations through perturbation analysis. In addition to its emphasis on the freedom inherent in the normal form expansion, the text features numerous examples of equations of the kind encountered in many areas of science and engineering. The treatment begins with an introduction to the basic concepts underlying the normal forms. Coverage then shifts to an investigation of systems with one degree of freedom that model oscillations

  13. TRASYS form factor matrix normalization

    Science.gov (United States)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries; in fact, it is primarily intended for use with open geometries. The purpose of the approach is to prevent optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and then a process is employed to distribute the difference among the nodes. A specific example was analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7°C and 3°C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0°C to 5°C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
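
    The report does not spell out the redistribution process in detail; proportional rescaling of each row is one simple scheme consistent with the description, sketched below (the function name and tolerance handling are illustrative assumptions).

        import numpy as np

        def normalize_form_factors(F, tol=0.05):
            """Scale each row of an enclosure form factor matrix so it sums
            to unity. F[i, j] is the form factor from node i to node j; for
            open geometries the enclosure should already include a 'space'
            node, otherwise rescaling would hide real leakage to space."""
            F = np.asarray(F, dtype=float)
            sums = F.sum(axis=1)
            if np.any(np.abs(sums - 1.0) > tol):
                raise ValueError("row-sum error exceeds tolerance; refine model")
            return F / sums[:, None]   # distribute the residual proportionally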

  14. Normal forms of Hopf-zero singularity

    International Nuclear Information System (INIS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties of each subalgebra are described; one is the set of all volume-preserving conservative systems, while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative–nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov–Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov–Takens singularities. Despite this, the normal form computations of Bogdanov–Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative–nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied to the Rössler and Kuramoto–Sivashinsky equations to demonstrate the applicability of our results. (paper)

  15. Normal forms of Hopf-zero singularity

    Science.gov (United States)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties of each subalgebra are described; one is the set of all volume-preserving conservative systems, while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  16. Normal form for mirror machine Hamiltonians

    International Nuclear Information System (INIS)

    Dragt, A.J.; Finn, J.M.

    1979-01-01

    A systematic algorithm is developed for performing canonical transformations on Hamiltonians which govern particle motion in magnetic mirror machines. These transformations are performed in such a way that the new Hamiltonian has a particularly simple normal form. From this form it is possible to compute analytic expressions for gyro and bounce frequencies. In addition, it is possible to obtain arbitrarily high order terms in the adiabatic magnetic moment expansion. The algorithm makes use of Lie series, is an extension of Birkhoff's normal form method, and has been explicitly implemented by a digital computer programmed to perform the required algebraic manipulations. Application is made to particle motion in a magnetic dipole field and to a simple mirror system. Bounce frequencies and locations of periodic orbits are obtained and compared with numerical computations. Both mirror systems are shown to be insoluble, i.e., trajectories are not confined to analytic hypersurfaces, there is no analytic third integral of motion, and the adiabatic magnetic moment expansion is divergent. It is also expected that the normal form procedure will prove useful in the study of island structure and separatrices associated with periodic orbits, and should facilitate studies of breakdown of adiabaticity and the onset of "stochastic" behavior.

  17. Normal form theory and spectral sequences

    OpenAIRE

    Sanders, Jan A.

    2003-01-01

    The concept of unique normal form is formulated in terms of a spectral sequence. As an illustration of this technique some results of Baider and Churchill concerning the normal form of the anharmonic oscillator are reproduced. The aim of this paper is to show that spectral sequences give us a natural framework in which to formulate normal form theory.

  18. A Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    Hsu, L.; Min, L. J.; Favretto, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  19. Normal equivariant forms of vector fields

    International Nuclear Information System (INIS)

    Sanchez Bringas, F.

    1992-07-01

    We prove a linearization theorem of Siegel type and a normal form theorem of Poincaré-Dulac type for germs of holomorphic vector fields at the origin of C², equivariant under Γ, where Γ is a finite subgroup of GL(2,C). (author). 5 refs

  20. Method for forming ammonia

    Science.gov (United States)

    Kong, Peter C.; Pink, Robert J.; Zuck, Larry D.

    2008-08-19

    A method for forming ammonia is disclosed which includes the steps of forming a plasma; providing a source of metal particles, and supplying the metal particles to the plasma to form metal nitride particles; and providing a substance, and reacting the metal nitride particles with the substance to produce ammonia and an oxide byproduct.

  1. Normal form and synchronization of strict-feedback chaotic systems

    International Nuclear Information System (INIS)

    Wang, Feng; Chen, Shihua; Yu Minghai; Wang Changping

    2004-01-01

    This study concerns the normal form and synchronization of strict-feedback chaotic systems. We prove that any strict-feedback chaotic system can be rendered into a normal form with an invertible transform, and a design procedure to synchronize the normal form of a non-autonomous strict-feedback chaotic system is then presented. This approach needs only a scalar driving signal to realize synchronization, no matter how many dimensions the chaotic system contains. Furthermore, the Rössler chaotic system is taken as a concrete example to illustrate the design procedure without transforming a strict-feedback chaotic system into its normal form. Numerical simulations are also provided to show the effectiveness and feasibility of the developed methods.

  2. Volume-preserving normal forms of Hopf-zero singularity

    International Nuclear Information System (INIS)

    Gazor, Majid; Mokhtari, Fahimeh

    2013-01-01

    A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is why our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where it can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto–Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples, with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple. (paper)

  3. Volume-preserving normal forms of Hopf-zero singularity

    Science.gov (United States)

    Gazor, Majid; Mokhtari, Fahimeh

    2013-10-01

    A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with first integral; this is why our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite level volume-preserving parametric normal forms of any nondegenerate perturbation within the Lie algebra of any such system are computed, where it can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto-Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first level normal forms of these examples, with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple.

  4. Method for forming materials

    Science.gov (United States)

    Tolle, Charles R [Idaho Falls, ID; Clark, Denis E [Idaho Falls, ID; Smartt, Herschel B [Idaho Falls, ID; Miller, Karen S [Idaho Falls, ID

    2009-10-06

    A material-forming tool and a method for forming a material are described, including a shank portion; a shoulder portion that releasably engages the shank portion; a pin that releasably engages the shoulder portion, wherein the pin defines a passageway; and a source of a material coupled in material-flowing relation relative to the pin. The material-forming tool is utilized in a methodology that includes providing a first material; providing a second material and placing the second material into contact with the first material; and locally plastically deforming the first material with the material-forming tool so as to mix the first material and second material together to form a resulting material having characteristics different from the respective first and second materials.

  5. Densified waste form and method for forming

    Science.gov (United States)

    Garino, Terry J.; Nenoff, Tina M.; Sava Gallis, Dorina Florentina

    2015-08-25

    Materials and methods are described for making densified waste forms for temperature-sensitive waste material, such as nuclear waste, formed with low-temperature processing using a metallic powder that forms the matrix that encapsulates the temperature-sensitive waste material. The densified waste form includes a temperature-sensitive waste material in a physically densified matrix, where the matrix is a compacted metallic powder. The method for forming the densified waste form includes mixing a metallic powder and a temperature-sensitive waste material to form a waste form precursor. The waste form precursor is compacted with sufficient pressure to densify the waste precursor and encapsulate the temperature-sensitive waste material in a physically densified matrix.

  6. Methods for forming particles

    Science.gov (United States)

    Fox, Robert V.; Zhang, Fengyan; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin

    2016-06-21

    Single source precursors or pre-copolymers of single source precursors are subjected to microwave radiation to form particles of a I-III-VI₂ material. Such particles may be formed in a wurtzite phase and may be converted to a chalcopyrite phase by, for example, exposure to heat. The particles in the wurtzite phase may have a substantially hexagonal shape that enables stacking into ordered layers. The particles in the wurtzite phase may be mixed with particles in the chalcopyrite phase (i.e., chalcopyrite nanoparticles) that may fill voids within the ordered layers of the particles in the wurtzite phase, thus producing films with good coverage. In some embodiments, the methods are used to form layers of semiconductor materials comprising a I-III-VI₂ material. Devices such as, for example, thin-film solar cells may be fabricated using such methods.

  7. Automatic identification and normalization of dosage forms in drug monographs

    Science.gov (United States)

    2012-01-01

    Background: Each day, millions of health consumers seek drug-related information on the Web. Despite some efforts in linking related resources, drug information is largely scattered across a wide variety of websites of differing quality and credibility.

    Methods: As a step toward providing users with integrated access to multiple trustworthy drug resources, we aim to develop a method capable of identifying a drug's dosage form information in addition to drug name recognition. We developed rules and patterns for identifying dosage forms from different sections of full-text drug monographs, and subsequently normalized them to standardized RxNorm dosage forms.

    Results: Our method represents a significant improvement compared with a baseline lookup approach, achieving overall macro-averaged Precision of 80%, Recall of 98%, and F-Measure of 85%.

    Conclusions: We successfully developed an automatic approach for drug dosage form identification, which is critical for building links between different drug-related resources. PMID:22336431
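
    A minimal sketch of the rule-and-pattern idea (the regular expressions and the selected dose-form names below are illustrative, not the authors' actual rule set):

        import re

        # Map free-text dosage-form mentions onto standardized dose-form names.
        RULES = [
            (re.compile(r"\btab(let)?s?\b", re.I), "Oral Tablet"),
            (re.compile(r"\bcap(sule)?s?\b", re.I), "Oral Capsule"),
            (re.compile(r"\b(solution|soln)\b", re.I), "Oral Solution"),
            (re.compile(r"\bointment\b", re.I), "Topical Ointment"),
        ]

        def normalize_dosage_forms(text):
            """Return every standardized dose form matched in a monograph span."""
            return [name for pattern, name in RULES if pattern.search(text)] or ["UNKNOWN"]

        print(normalize_dosage_forms("Take one tablet by mouth daily"))  # ['Oral Tablet']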

  8. A New Normal Form for Multidimensional Mode Conversion

    International Nuclear Information System (INIS)

    Tracy, E. R.; Richardson, A. S.; Kaufman, A. N.; Zobin, N.

    2007-01-01

    Linear conversion occurs when two wave types, with distinct polarization and dispersion characteristics, are locally resonant in a nonuniform plasma [1]. In recent work, we have shown how to incorporate a ray-based (WKB) approach to mode conversion in numerical algorithms [2,3]. The method uses the ray geometry in the conversion region to guide the reduction of the full N×N system of wave equations to a 2×2 coupled pair which can be solved and matched to the incoming and outgoing WKB solutions. The algorithm in [2] assumes the ray geometry is hyperbolic and that, in ray phase space, there is an 'avoided crossing', which is the most common type of conversion. Here, we present a new formulation that can deal with more general types of conversion [4]. This formalism is based upon the fact (first proved in [5]) that it is always possible to put the 2×2 wave equation into a 'normal' form, such that the diagonal elements of the dispersion matrix Poisson-commute with the off-diagonals (at leading order). Therefore, if we use the diagonals (rather than the eigenvalues or the determinant) of the dispersion matrix as ray Hamiltonians, the off-diagonals will be conserved quantities. When cast into normal form, the 2×2 dispersion matrix has a very natural physical interpretation: the diagonals are the uncoupled ray Hamiltonians and the off-diagonals are the coupling. We discuss how to incorporate the normal form into ray tracing algorithms.

  9. AFP Algorithm and a Canonical Normal Form for Horn Formulas

    OpenAIRE

    Majdoddin, Ruhollah

    2014-01-01

    The AFP algorithm is a learning algorithm for Horn formulas. We show that the complexity of the AFP algorithm is not improved if, after each negative counterexample, more than just one refinement is performed. Moreover, a canonical normal form for Horn formulas is presented, and it is proved that the output formula of the AFP algorithm is in this normal form.

  10. An Algorithm for Higher Order Hopf Normal Forms

    Directory of Open Access Journals (Sweden)

    A.Y.T. Leung

    1995-01-01

    Normal form theory is important for studying the qualitative behavior of nonlinear oscillators. In some cases, higher order normal forms are required to understand the dynamic behavior near an equilibrium or a periodic orbit. However, the computation of high-order normal forms is usually quite complicated. This article provides an explicit formula for the normalization of nonlinear differential equations. The higher order normal form is given explicitly. Illustrative examples include a cubic system, a quadratic system and a Duffing–Van der Pol system. We use exact arithmetic and find that the undamped Duffing equation can be represented by an exact polynomial differential amplitude equation in a finite number of terms.
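
    For orientation, the target of such an algorithm in the Hopf case is the classical resonant form: in a complex coordinate z on the center manifold, only the resonant monomials z|z|^{2j} survive,

        \dot{z} = (\mu + i\omega)\,z + c_{1}\,z|z|^{2} + c_{2}\,z|z|^{4} + \cdots + c_{k}\,z|z|^{2k} + O(|z|^{2k+2}),

    and a higher order normalization amounts to computing the coefficients c_j from the original vector field.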

  11. Normal form of linear systems depending on parameters

    International Nuclear Information System (INIS)

    Nguyen Huynh Phan.

    1995-12-01

    In this paper we completely resolve the problem of finding normal forms of linear systems depending on parameters under the feedback action, which we previously studied for the special case of controllable linear systems. (author). 24 refs

  12. Utilizing Nested Normal Form to Design Redundancy Free JSON Schemas

    Directory of Open Access Journals (Sweden)

    Wai Yin Mok

    2016-12-01

    JSON (JavaScript Object Notation) is a lightweight data-interchange format for the Internet. JSON is built on two structures: (1) a collection of name/value pairs and (2) an ordered list of values (http://www.json.org/). Because of this simple approach, JSON is easy to use and has the potential to be the data interchange format of choice for the Internet. Similar to XML, JSON schemas allow nested structures to model hierarchical data. As data interchange over the Internet increases exponentially due to cloud computing or otherwise, redundancy-free JSON data are an attractive form of communication because they improve the quality of data communication by eliminating update anomalies. Nested Normal Form, a normal form for hierarchical data, is a precise characterization of redundancy. A nested table, or a hierarchical schema, is in Nested Normal Form if and only if it is free of redundancy caused by multivalued and functional dependencies. Using Nested Normal Form as a guide, this paper introduces a JSON schema design methodology that begins with UML use case diagrams, communication diagrams and class diagrams that model a system under study. Based on the use cases' execution frequencies and the data passed between involved parties in the communication diagrams, the proposed methodology selects classes from the class diagrams to be the roots of JSON scheme trees and repeatedly adds classes from the class diagram to the scheme trees as long as the schemas satisfy Nested Normal Form. This process continues until all of the classes in the class diagram have been added to some JSON scheme trees.
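
    As a hypothetical illustration of the kind of schema the methodology produces, nesting the many-side of a one-to-many relationship under the one-side avoids re-stating parent facts in every child record (the names and fields below are invented for the example):

        import json

        # A flat, redundant alternative would repeat the customer name on every
        # line item; nesting the items under the order states each fact once.
        order = {
            "order_id": 1001,
            "customer": {"id": 7, "name": "Acme Corp."},   # name/value pairs
            "items": [                                     # ordered list of values
                {"sku": "A-12", "qty": 3},
                {"sku": "B-07", "qty": 1},
            ],
        }
        print(json.dumps(order, indent=2))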

  13. Normal Forms for Fuzzy Logics: A Proof-Theoretic Approach

    Czech Academy of Sciences Publication Activity Database

    Cintula, Petr; Metcalfe, G.

    2007-01-01

    Vol. 46, No. 5-6 (2007), pp. 347-363. ISSN 1432-0665. R&D Projects: GA MŠk(CZ) 1M0545. Institutional research plan: CEZ:AV0Z10300504. Keywords: fuzzy logic * normal form * proof theory * hypersequents. Subject RIV: BA - General Mathematics. Impact factor: 0.620, year: 2007

  14. A New One-Pass Transformation into Monadic Normal Form

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2003-01-01

    We present a translation from the call-by-value λ-calculus to monadic normal forms that includes short-cut Boolean evaluation. The translation is higher-order, operates in one pass, duplicates no code, generates no chains of thunks, and is properly tail recursive. It makes crucial use of symbolic

  15. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  16. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-01-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance

  17. Fast Bitwise Implementation of the Algebraic Normal Form Transform

    OpenAIRE

    Bakoev, Valentin

    2017-01-01

    The representation of Boolean functions by their algebraic normal forms (ANFs) is very important for cryptography, coding theory and other scientific areas. The ANFs are used in computing the algebraic degree of S-boxes, some other cryptographic criteria and parameters of error-correcting codes. Their applications require these criteria and parameters to be computed by fast algorithms. Hence the corresponding ANFs should also be obtained by fast algorithms. Here we continue o
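
    The classical bitwise trick packs the whole 2^n-entry truth table into a single integer and performs the fast Möbius (ANF) transform with masked shifts and XORs; a minimal sketch of that standard technique:

        def anf_packed(tt, n):
            """Fast Moebius transform on a packed truth table: bit m of tt is
            f(m). Returns the packed ANF, where bit m is the coefficient of
            the monomial whose variable set is encoded by m."""
            for i in range(n):
                s = 1 << i
                block = (1 << s) - 1
                mask = 0                     # positions whose i-th index bit is 0
                for j in range(0, 1 << n, 2 * s):
                    mask |= block << j
                tt ^= (tt & mask) << s       # f[m | s] ^= f[m]
            return tt

        # f(x1, x2) = x1 XOR x2: truth table 0110 -> ANF monomials x1 and x2
        print(bin(anf_packed(0b0110, 2)))  # 0b110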

  18. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, including slope correction, text padding, skew correction, and straightening of the writing line. In this regard, text normalization plays an important role in many procedures such as text segmentation, feature extraction and character recognition. In the present article, a new method for baseline detection, straightening, and slant correction of Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is done, followed by component thinning; then, the direction features of the skeletons are extracted and the candidate baseline regions are determined. After that, the correct baseline region is selected, and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.

  19. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis, including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and that sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts.
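
    One widely used normalization of this kind (an illustration, not the review's specific recommendation) is probabilistic quotient normalization, which estimates a per-sample dilution factor against a reference profile:

        import numpy as np

        def pqn(X):
            """Probabilistic quotient normalization. X: samples x metabolites
            matrix of positive intensities. Each sample is divided by the
            median quotient of its features against a median reference."""
            ref = np.median(X, axis=0)              # reference profile
            dilution = np.median(X / ref, axis=1)   # per-sample scale factor
            return X / dilution[:, None]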

  20. Quantifying Normal Craniofacial Form and Baseline Craniofacial Asymmetry in the Pediatric Population.

    Science.gov (United States)

    Cho, Min-Jeong; Hallac, Rami R; Ramesh, Jananie; Seaward, James R; Hermann, Nuno V; Darvann, Tron A; Lipira, Angelo; Kane, Alex A

    2018-03-01

    Restoring craniofacial symmetry is an important objective in the treatment of many craniofacial conditions. Normal form has been measured using anthropometry, cephalometry, and photography, yet all of these modalities have drawbacks. In this study, the authors define normal pediatric craniofacial form and craniofacial asymmetry using stereophotogrammetric images, which capture a densely sampled set of points on the form. After institutional review board approval, normal, healthy children (n = 533) with no known craniofacial abnormalities were recruited at well-child visits to undergo full head stereophotogrammetric imaging. The children's ages ranged from 0 to 18 years. A symmetric three-dimensional template was registered and scaled to each individual scan using 25 manually placed landmarks. The template was deformed to each subject's three-dimensional scan using a thin-plate spline algorithm and closest point matching. Age-based normal facial models were derived. Mean facial asymmetry and statistical characteristics of the population were calculated. The mean head asymmetry across all pediatric subjects was 1.5 ± 0.5 mm (range, 0.46 to 4.78 mm), and the mean facial asymmetry was 1.2 ± 0.6 mm (range, 0.4 to 5.4 mm). There were no significant differences in the mean head or facial asymmetry with age, sex, or race. Understanding the "normal" form and baseline distribution of asymmetry is an important anthropomorphic foundation. The authors present a method to quantify normal craniofacial form and baseline asymmetry in a large pediatric sample. The authors found that the normal pediatric craniofacial form is asymmetric, and does not change in magnitude with age, sex, or race.
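
    As a simplified stand-in for the registration-based pipeline (the study deforms a symmetric template with thin-plate splines; the nearest-neighbour mirror distance below is only a rough sketch), asymmetry can be scored by mirroring the point set across the midsagittal plane:

        import numpy as np

        def mean_asymmetry(points):
            """points: N x 3 landmark/surface coordinates, already aligned so
            the midsagittal plane is x = 0. Returns the mean distance (same
            units as the input, e.g. mm) to the nearest mirrored point."""
            mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect in x = 0
            diff = points[:, None, :] - mirrored[None, :, :]
            d = np.sqrt((diff ** 2).sum(axis=-1))            # pairwise distances
            return d.min(axis=1).mean()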

  1. Method for construction of normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  2. Normalization Of Thermal-Radiation Form-Factor Matrix

    Science.gov (United States)

    Tsuyuki, Glenn T.

    1994-01-01

    Report describes algorithm that adjusts form-factor matrix in TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as sun.

  3. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
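
    A sketch of the recipe for a sum-type function such as the normal percentile mu + z_p*sigma, recovering variance estimates from the separate limits in the MOVER fashion (the helper follows the standard MOVER combination formula; details such as the interval choices are assumptions, not the paper's exact prescription):

        import numpy as np
        from scipy import stats

        def percentile_ci(x, p=0.975, alpha=0.05):
            """Closed-form CI for mu + z_p*sigma from a normal sample x."""
            x = np.asarray(x, dtype=float)
            n, m, s = x.size, x.mean(), x.std(ddof=1)
            zp = stats.norm.ppf(p)                    # assumes p >= 0.5
            half = stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)
            l1, u1 = m - half, m + half               # CI for the mean
            l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
            u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
            theta = m + zp * s
            L = theta - np.sqrt((m - l1) ** 2 + (zp * (s - l2)) ** 2)
            U = theta + np.sqrt((u1 - m) ** 2 + (zp * (u2 - s)) ** 2)
            return L, U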

  4. Article and method of forming an article

    Science.gov (United States)

    Lacy, Benjamin Paul; Kottilingam, Srikanth Chandrudu; Dutta, Sandip; Schick, David Edward

    2017-12-26

    Provided are an article and a method of forming an article. The method includes providing a metallic powder, heating the metallic powder to a temperature sufficient to join at least a portion of the metallic powder to form an initial layer, sequentially forming additional layers in a build direction by providing a distributed layer of the metallic powder over the initial layer and heating the distributed layer of the metallic powder, repeating the steps of sequentially forming the additional layers in the build direction to form a portion of the article having a hollow space formed in the build direction, and forming an overhang feature extending into the hollow space. The article includes an article formed by the method described herein.

  5. Thermoelectric generator and method of forming same

    International Nuclear Information System (INIS)

    Wilson, K.T.

    1981-01-01

    A thermoelectric device is disclosed which comprises a multiplicity of thermocouples formed on a substrate in narrow strip form, the thermocouples being formed by printing with first and second inks formed of suitable, different powdered metals with a proper binder or flux. The thermocouples are formed in series, and the opposed coupled areas are melted to form an intermingling of the two metals; the strips may be formed in substantial lengths and rolled onto a reel, or in relatively short strip form and disposed in a side-by-side abutting relationship in substantial numbers to define a generally rectangular panel form with opposed ends in electrical connection. The method of forming the panels includes the steps of feeding a suitable substrate, either in a continuous roll or in sheet form, through first and second printers to form the series-connected multiplicity of thermocouples thereon. From the printers the sheet or strip passes through a melter, such as an induction furnace, and from the furnace it passes through a sheeter if the strip is in roll form. The sheets are then slit into narrow strips relative to the thermocouples printed thereon, and the strips are then formed into bundles. A predetermined number of bundles are assembled into a panel form.

  6. Diagonalization and Jordan Normal Form--Motivation through "Maple"[R]

    Science.gov (United States)

    Glaister, P.

    2009-01-01

    Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package…

  7. On the relationship between LTL normal forms and Büchi automata

    DEFF Research Database (Denmark)

    Li, Jianwen; Pu, Geguang; Zhang, Lijun

    2013-01-01

    In this paper, we revisit the problem of translating LTL formulas to Büchi automata. We first translate the given LTL formula into a special disjunctive normal form (DNF). The formula will be part of the state, and its DNF normal form specifies the atomic properties that should hold immediately

  8. Normal forms of invariant vector fields under a finite group action

    International Nuclear Information System (INIS)

    Sanchez Bringas, F.

    1992-07-01

    Let Γ be a finite subgroup of GL(n,C). This subgroup acts on the space of germs of holomorphic vector fields vanishing at the origin in Cⁿ. We prove a theorem of invariant conjugation to a normal form and linearization for the subspace of invariant elements, and we give a description of these normal forms in dimension n=2. (author)

  9. Photovoltaic cell module and method of forming

    Science.gov (United States)

    Howell, Malinda; Juen, Donnie; Ketola, Barry; Tomalia, Mary Kay

    2017-12-12

    A photovoltaic cell module, a photovoltaic array including at least two modules, and a method of forming the module are provided. The module includes a first outermost layer and a photovoltaic cell disposed on the first outermost layer. The module also includes a second outermost layer disposed on the photovoltaic cell and sandwiching the photovoltaic cell between the second outermost layer and the first outermost layer. The method of forming the module includes the steps of disposing the photovoltaic cell on the first outermost layer, disposing a silicone composition on the photovoltaic cell, and compressing the first outermost layer, the photovoltaic cell, and the second layer to form the photovoltaic cell module.

  10. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    Science.gov (United States)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  11. Nanofiber electrode and method of forming same

    Energy Technology Data Exchange (ETDEWEB)

    Pintauro, Peter N.; Zhang, Wenjing

    2018-02-27

    In one aspect, a method of forming an electrode for an electrochemical device is disclosed. In one embodiment, the method includes the steps of mixing at least a first amount of a catalyst and a second amount of an ionomer or uncharged polymer to form a solution and delivering the solution into a metallic needle having a needle tip. The method further includes the steps of applying a voltage between the needle tip and a collector substrate positioned at a distance from the needle tip, and extruding the solution from the needle tip at a flow rate such as to generate electrospun fibers and deposit the generated fibers on the collector substrate to form a mat with a porous network of fibers. Each fiber in the porous network of the mat has distributed particles of the catalyst. The method also includes the step of pressing the mat onto a membrane.

  12. Slab edge insulating form system and methods

    Science.gov (United States)

    Lee, Brian E [Corral de Tierra, CA]; Barsun, Stephan K [Davis, CA]; Bourne, Richard C [Davis, CA]; Hoeschele, Marc A [Davis, CA]; Springer, David A [Winters, CA]

    2009-10-06

    A method of forming an insulated concrete foundation is provided, comprising constructing a foundation frame, the frame comprising an insulating form having an opening; inserting a pocket former into the opening; placing concrete inside the foundation frame; and removing the pocket former after the placed concrete has set, wherein removal of the pocket former leaves a pocket in the placed concrete that is accessible through the opening. The method may further comprise sealing the opening by placing a sealing plug or sealing material in the opening. A system for forming an insulated concrete foundation is provided comprising a plurality of interconnected insulating forms, the insulating forms having a rigid outer member protecting and encasing an insulating material, and at least one gripping lip extending outwardly from the outer member to provide a pest barrier. At least one insulating form has an opening into which a removable pocket former is inserted. The system may also provide a tension anchor positioned in the pocket former and a tendon connected to the tension anchor.

  13. Methods of forming semiconductor devices and devices formed using such methods

    Science.gov (United States)

    Fox, Robert V; Rodriguez, Rene G; Pak, Joshua

    2013-05-21

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  14. Die singulation method and package formed thereby

    Science.gov (United States)

    Anderson, Robert C [Tucson, AZ]; Shul, Randy J [Albuquerque, NM]; Clews, Peggy J [Tijeras, NM]; Baker, Michael S [Albuquerque, NM]; De Boer, Maarten P [Albuquerque, NM]

    2012-08-07

    A method is disclosed for singulating die from a substrate having a sacrificial layer and one or more device layers, with a retainer being formed in the device layer(s) and anchored to the substrate. Deep reactive ion etching (DRIE) of a trench through the substrate from the bottom side defines a shape for each die. A handle wafer is then attached to the bottom side of the substrate, and the sacrificial layer is etched to singulate the die and to form a frame from the retainer and the substrate. The frame and handle wafer, which retain the singulated die in place, can be attached together with a clamp or a clip to form a package for the singulated die. One or more stops can be formed from the device layer(s) to limit a sliding motion of the singulated die.

  15. Method of forming an HTS article

    Science.gov (United States)

    Bhattacharya, Raghu N.; Zhang, Xun; Selvamanickam, Venkat

    2014-08-19

    A method of forming a superconducting article includes providing a substrate tape, forming a superconducting layer overlying the substrate tape, and depositing a capping layer overlying the superconducting layer. The capping layer includes a noble metal and has a thickness not greater than about 1.0 micron. The method further includes electrodepositing a stabilizer layer overlying the capping layer using a solution that is non-reactive to the superconducting layer. The superconducting layer has an as-formed critical current I_C(AF) and a post-stabilized critical current I_C(PS). The I_C(PS) is at least about 95% of the I_C(AF).

  16. On some hypersurfaces with timelike normal bundle in pseudo-Riemannian space forms

    International Nuclear Information System (INIS)

    Kashani, S.M.B.

    1995-12-01

    In this work we classify immersed hypersurfaces with constant sectional curvature in pseudo-Riemannian space forms when the normal bundle is timelike and the mean curvature is constant. (author). 9 refs

  17. Method of forming aluminum oxynitride material and bodies formed by such methods

    Science.gov (United States)

    Bakas, Michael P [Ammon, ID]; Lillo, Thomas M [Idaho Falls, ID]; Chu, Henry S [Idaho Falls, ID]

    2010-11-16

    Methods of forming aluminum oxynitride (AlON) materials include sintering green bodies comprising aluminum orthophosphate or another sacrificial material therein. Such green bodies may comprise aluminum, oxygen, and nitrogen in addition to the aluminum orthophosphate. For example, the green bodies may include a mixture of aluminum oxide, aluminum nitride, and aluminum orthophosphate or another sacrificial material. Additional methods of forming aluminum oxynitride (AlON) materials include sintering a green body including a sacrificial material therein, using the sacrificial material to form pores in the green body during sintering, and infiltrating the pores formed in the green body with a liquid infiltrant during sintering. Bodies are formed using such methods.

  18. Pre-form ceramic matrix composite cavity and method of forming and method of forming a ceramic matrix composite component

    Science.gov (United States)

    Monaghan, Philip Harold; Delvaux, John McConnell; Taxacher, Glenn Curtis

    2015-06-09

    A pre-form CMC cavity and method of forming pre-form CMC cavity for a ceramic matrix component includes providing a mandrel, applying a base ply to the mandrel, laying-up at least one CMC ply on the base ply, removing the mandrel, and densifying the base ply and the at least one CMC ply. The remaining densified base ply and at least one CMC ply form a ceramic matrix component having a desired geometry and a cavity formed therein. Also provided is a method of forming a CMC component.

  19. Bioactive form of resveratrol in glioblastoma cells and its safety for normal brain cells

    Directory of Open Access Journals (Sweden)

    Xiao-Hong Shu

    2013-05-01

    Full Text Available ABSTRACT Background: Resveratrol, a plant polyphenol existing in grapes and many other natural foods, possesses a wide range of biological activities including cancer prevention. It has been recognized that resveratrol is intracellularly biotransformed to different metabolites, but no direct evidence has been available to ascertain its bioactive form because of the difficulty of keeping resveratrol unmetabolized in vivo or in vitro. It would therefore be worthwhile to elucidate the potential therapeutic implications of resveratrol metabolism using a reliable resveratrol-sensitive cancer cell line. Objective: To identify the real biological form of trans-resveratrol and to evaluate the safety of the effective anticancer dose of resveratrol for normal brain cells. Methods: The samples were prepared from the conditioned media and cell lysates of human glioblastoma U251 cells, and were purified by solid phase extraction (SPE). The samples were subjected to high performance liquid chromatography (HPLC) and liquid chromatography/tandem mass spectrometry (LC/MS) analysis. According to the metabolite(s) identified, trans-resveratrol was biotransformed in vitro by the method described elsewhere, and the resulting solution was used to treat U251 cells. Meanwhile, the responses of U251 cells and primarily cultured normal rat brain cells (glial cells and neurons) to 100 μM trans-resveratrol were evaluated by multiple experimental methods. Results: The results revealed that resveratrol monosulfate was the major metabolite in U251 cells. A solution in which about half of the trans-resveratrol had been converted to resveratrol monosulfate was prepared in vitro, and this trans-resveratrol and resveratrol monosulfate mixture showed little inhibitory effect on U251 cells. It was also found that rat primary brain cells (PBCs) not only resist 100 μM but also tolerate as high as 200 μM resveratrol treatment. Conclusions: Our study thus demonstrated that trans-resveratrol was the bioactive form in glioblastoma cells and, therefore, the biotransforming

  20. Normal Forms for Retarded Functional Differential Equations and Applications to Bogdanov-Takens Singularity

    Science.gov (United States)

    Faria, T.; Magalhaes, L. T.

    The paper addresses, for retarded functional differential equations (FDEs), the computation of normal forms associated with the flow on a finite-dimensional invariant manifold tangent to invariant spaces for the infinitesimal generator of the linearized equation at a singularity. A phase space appropriate to the computation of these normal forms is introduced, and adequate nonresonance conditions for the computation of the normal forms are derived. As an application, the general situation of the Bogdanov-Takens singularity and its versal unfolding for scalar retarded FDEs with nondegeneracy at second order is considered, both in the general case and in the case of differential-delay equations of the form ẋ(t) = f(x(t), x(t-1)).

  1. A normal form approach to the theory of nonlinear betatronic motion

    International Nuclear Information System (INIS)

    Bazzani, A.; Todesco, E.; Turchetti, G.; Servizi, G.

    1994-01-01

    The betatronic motion of a particle in a circular accelerator is analysed using the transfer map description of the magnetic lattice. In the linear case the transfer matrix approach is shown to be equivalent to the Courant-Snyder theory: in the normal-coordinate representation the transfer matrix is a pure rotation. When the nonlinear effects due to the multipolar components of the magnetic field are taken into account, a similar procedure is used: a nonlinear change of coordinates provides a normal form representation of the map, which exhibits explicit symmetry properties depending on the absence or presence of resonance relations among the linear tunes. The use of normal forms is illustrated in the simplest but significant model of a cell with a sextupolar nonlinearity, which is described by the quadratic Hénon map. After recalling the basic theoretical results in Hamiltonian dynamics, we show how the normal forms describe the different topological structures of phase space such as KAM tori, chains of islands and chaotic regions; a critical comparison with the usual perturbation theory for Hamilton equations is given. The normal form theory is applied to compute the tune shift and deformation of the orbits for the lattices of the SPS and LHC accelerators, and scaling laws are obtained. Finally, the correction procedure for the multipolar errors of the LHC, based on the analytic minimization of the tune shift computed via the normal forms, is described and the results for a model of the LHC are presented. This application, relevant for the lattice design, focuses on the advantages of normal forms with respect to tracking when parametric dependences have to be explored. (orig.)
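    As a concrete illustration of the model named in the abstract, the sketch below iterates the quadratic Hénon map (a sextupole-like kick composed with a linear rotation) and estimates the tune at several amplitudes; the amplitude-dependent tune shift it exposes numerically is the quantity a normal form analysis predicts analytically. The linear tune and the amplitudes are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def henon_map(x, p, mu):
    """One turn of the quadratic Henon map: a sextupole-like kick
    followed by a linear rotation through the phase advance mu."""
    p = p + x**2                      # quadratic (sextupole) kick
    return (x * np.cos(mu) + p * np.sin(mu),
            -x * np.sin(mu) + p * np.cos(mu))

mu = 2 * np.pi * 0.205                # linear tune 0.205 (illustrative)
for amp in (0.01, 0.05, 0.10):
    x, p = amp, 0.0
    advances = []
    for _ in range(2048):
        xn, pn = henon_map(x, p, mu)
        # per-turn phase advance of the complex coordinate z = x - i p
        advances.append(np.angle((xn - 1j * pn) * np.conj(x - 1j * p)))
        x, p = xn, pn
    print(f"amplitude {amp:.2f}: tune ~ {np.mean(advances) / (2 * np.pi):.5f}")
```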

  2. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce development time and improve forming results. But to exploit the full potential of such simulations, it has to be ensured that the predictions of material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. Various methods are available to measure these features. Most relevant, and also most difficult to measure, are the emerging fibre orientations; for that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and to select the most promising systems for a comparison survey. Selected were an optical system, an eddy current system and a computer-assisted tomography system, with the focus on measuring fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use but are limited to the surface plies. With an eddy current system lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  3. Method of forming a dianhydrosugar alcohol

    Science.gov (United States)

    Holladay, Johnathan E [Kennewick, WA]; Hu, Jianli [Kennewick, WA]; Wang, Yong [Richland, WA]; Werpy, Todd A [West Richland, WA]; Zhang, Xinjie [Burlington, MA]

    2010-01-19

    The invention includes methods of producing dianhydrosugars. A polyol is reacted in the presence of a first catalyst to form a monocyclic sugar. The monocyclic sugar is transferred to a second reactor where it is converted to a dianhydrosugar alcohol in the presence of a second catalyst. The invention includes a process of forming isosorbide. An initial reaction is conducted at a first temperature in the presence of a solid acid catalyst. The initial reaction involves reacting sorbitol to produce 1,4-sorbitan, 3,6-sorbitan, 2,5-mannitan and 2,5-iditan. Utilizing a second temperature, the 1,4-sorbitan and 3,6-sorbitan are converted to isosorbide. The invention includes a method of purifying isosorbide from a mixture containing isosorbide and at least one additional component. A first distillation removes a first portion of the isosorbide from the mixture. A second distillation is then conducted at a higher temperature to remove a second portion of isosorbide from the mixture.

  4. METHODS OF FORMING THE STRUCTURE OF KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    Tatyana A. Snegiryova

    2015-01-01

    Full Text Available The aim of the study is to describe a method of forming the structure of knowledge of students on the basis of an integrated approach (expert, taxonomy and thesaurus) and to present the results of its use in the study of medical and biological physics at the Izhevsk State Medical Academy. Methods. The methods used in the work involve an integrated approach that includes the group expert method developed by V. S. Cherepanov, together with taxonomy and thesaurus approaches for creating a model of the taxonomic structure of knowledge, as well as models of the formation of the knowledge structure. Results. The algorithm, stages and procedures of forming the knowledge structure of trainees are considered in detail; a model of the given process is created; a technology is shown for selecting the content of teaching material within the fixed time allotted to studying a concrete discipline. Scientific novelty and practical significance. The advantage of the proposed method and model of forming students' knowledge structure consists in their flexibility: with some adaptation they can be used for teaching any discipline, regardless of its specifics and the educational institution. Observance of all stages of the presented technology for selecting the content of teaching material on the basis of expert estimation will substantially increase the quality of training and make it possible to develop a unified method uniting the various points of view of teachers on the formation of trainees' knowledge.

  5. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
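    The abstract does not name the three normalization methods compared, so the sketch below implements two generic stand-ins commonly applied to microarray data, quantile normalization and median centering, to make the idea of post hoc data adjustment concrete. The simulated per-sample shift mimics a handling effect.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a samples-by-features matrix so every sample
    shares the same empirical distribution (the mean of the order
    statistics across samples).  A common microarray normalization,
    used here as a stand-in for the unnamed methods in the study."""
    ranks = np.argsort(np.argsort(X, axis=1), axis=1)
    reference = np.sort(X, axis=1).mean(axis=0)   # mean order statistics
    return reference[ranks]

def median_center(X):
    """Shift each sample so its median is zero (location-only correction)."""
    return X - np.median(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 100)) + rng.normal(size=(6, 1))  # per-sample shift
print(np.median(quantile_normalize(X), axis=1))  # identical medians
print(np.median(median_center(X), axis=1))       # all ~0
```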

  6. Capacitor assembly and related method of forming

    Science.gov (United States)

    Zhang, Lili; Tan, Daniel Qi; Sullivan, Jeffrey S.

    2017-12-19

    A capacitor assembly is disclosed. The capacitor assembly includes a housing. The capacitor assembly further includes a plurality of capacitors disposed within the housing. Furthermore, the capacitor assembly includes a thermally conductive article disposed about at least a portion of a capacitor body of the capacitors, and in thermal contact with the capacitor body. Moreover, the capacitor assembly also includes a heat sink disposed within the housing and in thermal contact with at least a portion of the housing and the thermally conductive article such that the heat sink is configured to remove heat from the capacitor in a radial direction of the capacitor assembly. Further, a method of forming the capacitor assembly is also presented.

  7. Methods for detecting the environmental coccoid form of Helicobacter pylori

    Directory of Open Access Journals (Sweden)

    Mahnaz eMazaheri Assadi

    2015-05-01

    Full Text Available Helicobacter pylori is recognized as the most common pathogen to cause gastritis, peptic and duodenal ulcers, and gastric cancer. The organism is found in two forms: (1) a spiral-shaped bacillus and (2) a coccoid form. The H. pylori coccoid form, generally found in the environment, is the transformed form of the normal spiral-shaped bacillus after exposure to water or adverse environmental conditions such as sub-inhibitory concentrations of antimicrobial agents. The putative infectious capability and the viability of H. pylori under environmental conditions are controversial. This disagreement is partially due to the lack of methods for detecting the coccoid form of H. pylori in the environment. Accurate and effective detection methods for H. pylori will lead to rapid treatment and disinfection, less damage to human health, and reduced health care costs. In this review, we provide a brief introduction to H. pylori environmental coccoid forms, their transmission and detection methods. We further discuss the use of these detection methods, including their accuracy and efficiency.

  8. Reconstruction of normal forms by learning informed observation geometries from data.

    Science.gov (United States)

    Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G

    2017-09-19

    The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.

  9. Method of forming composite fiber blends

    Science.gov (United States)

    McMahon, Paul E. (Inventor); Chung, Tai-Shung (Inventor); Ying, Lincoln (Inventor)

    1989-01-01

    The instant invention involves a process used in preparing fibrous tows which may be formed into polymeric plastic composites. The process involves the steps of (a) forming a tow of strong filamentary materials; (b) forming a thermoplastic polymeric fiber; (c) intermixing the two tows; and (d) withdrawing the intermixed tow for further use.

  10. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the approach best suited for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlations (as calculated by the Spearman correlation test) when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
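    A minimal sketch of the four normalizations named in the abstract, applied per assay across all time points; the assay values below are hypothetical, not data from the study.

```python
import numpy as np

def z_transform(x):          # zero mean, unit variance
    return (x - x.mean()) / x.std(ddof=1)

def range_transform(x):      # rescale to [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def proportion_transform(x): # each value as a fraction of the total
    return x / x.sum()

def iqr_transform(x):        # robust: center on median, scale by IQR
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)

# Normalize per assay (test) across all its time points, as in the article.
series = np.array([52.0, 61.0, 58.0, 70.0, 66.0, 73.0])  # hypothetical assay
for f in (z_transform, range_transform, proportion_transform, iqr_transform):
    print(f.__name__, np.round(f(series), 3))
```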

  11. Combining Illumination Normalization Methods for Better Face Recognition

    NARCIS (Netherlands)

    Boom, B.J.; Tao, Q.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs local preprocessing, correcting a pixel value based on a local neighborhood in the image. The second
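    The sketch below is a generic example of the first category, local preprocessing: each pixel is corrected using the mean and standard deviation of its local neighborhood. It is not one of the specific methods combined in the cited paper, and the window size is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(img, size=15, eps=1e-6):
    """Subtract the local mean and divide by the local standard
    deviation, both computed over a size x size neighborhood.
    A generic local illumination-normalization step."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img**2, size)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return (img - mean) / (std + eps)

face = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(local_normalize(face).std())   # roughly unit local contrast
```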

  12. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of four monitors and a program were developed to eliminate, in the process of normalization of cross sections, the dependence on equipment alignment and on the centering of the beam. A series of experiments was carried out with the systems 27 Al + 70, 72, 74, 76 Ge, 35 Cl + 58 Ni, 37 Cl + 58, 60, 62, 64 Ni and ( 81 Br, 109 Rh) + 60 Ni. In these experiments a typical precision of 1% was obtained in the normalization. The advantage of this method over those using one or two monitors is demonstrated theoretically and experimentally. (Author)
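    A sketch of the underlying idea, under stated assumptions: combining several monitors placed symmetrically about the beam axis (here via a geometric mean) cancels the first-order effect of beam misalignment on the normalization. The counts and the exact combination rule are illustrative, not the cited experiment's actual procedure.

```python
import numpy as np

def normalized_yield(detector_counts, monitor_counts):
    """Normalize a detector yield to the beam exposure using the
    geometric mean of symmetric monitor counts.  If a misaligned beam
    raises one monitor's rate by (1 + a*d) and lowers its mirror
    partner's by (1 - a*d), their product changes only at second
    order in d, so the combined normalization is first-order safe."""
    monitor_counts = np.asarray(monitor_counts, dtype=float)
    combined = monitor_counts.prod() ** (1.0 / len(monitor_counts))
    return detector_counts / combined

print(normalized_yield(12500, [8010, 7950, 8102, 7988]))  # invented counts
```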

  13. On the construction of the Kolmogorov normal form for the Trojan asteroids

    CERN Document Server

    Gabern, F; Locatelli, U

    2004-01-01

    In this paper we focus on the stability of the Trojan asteroids for the planar Restricted Three-Body Problem (RTBP), by extending the usual techniques for the neighbourhood of an elliptic point to derive results in a larger vicinity. Our approach is based on the numerical determination of the frequencies of the asteroid and the effective computation of the Kolmogorov normal form for the corresponding torus. This procedure has been applied to the first 34 Trojan asteroids of the IAU Asteroid Catalog, and it has worked successfully for 23 of them. The construction of this normal form allows for computer-assisted proofs of stability. To show it, we have implemented a proof of existence of families of invariant tori close to a given asteroid, for a high order expansion of the Hamiltonian. This proof has been successfully applied to three Trojan asteroids.

  14. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.; Spoto, F.; Scollo, Giuseppe; Nijholt, Antinus

    2003-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n \geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n \geq 1}$, satisfying $L(G_n) = L_n$ for $n \geq 1$, with
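    To make the object of study concrete, the sketch below realizes the naive CNF-style grammar for $L_n$: one nonterminal $A_S$ per nonempty subset $S$ of symbols, terminal rules $A_{\{a\}} \to a$, and binary rules $A_S \to A_T A_{S-T}$ for every nonempty proper subset $T$ of $S$; it then enumerates the derived language for $n = 3$. This grammar has exponentially many nonterminals; the cited papers study how much smaller such grammars can be made, so this is only a baseline construction, not theirs.

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def derive(subset):
    """Strings derived by nonterminal A_S in the naive CNF grammar for
    permutations: A_{a} -> a, and A_S -> A_T A_{S-T} for every nonempty
    proper subset T of S.  By induction, A_S derives exactly the
    permutations of the symbols in S."""
    if len(subset) == 1:
        return frozenset(subset)          # terminal rule A_{a} -> a
    return frozenset(u + v
                     for size in range(1, len(subset))
                     for t in map(frozenset, combinations(subset, size))
                     for u in derive(t)
                     for v in derive(subset - t))

print(sorted(derive(frozenset("abc"))))
# ['abc', 'acb', 'bac', 'bca', 'cab', 'cba'] -- all 3! permutations
```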

  15. Generating all permutations by context-free grammars in Chomsky normal form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2006-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n \geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n \geq 1}$, satisfying $L(G_n) = L_n$ for $n \geq 1$, with

  16. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2004-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n \geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n \geq 1}$, satisfying $L(G_n) = L_n$ for $n \geq 1$, with

  17. Elevated temperature forming method and preheater apparatus

    Science.gov (United States)

    Krajewski, Paul E; Hammar, Richard Harry; Singh, Jugraj; Cedar, Dennis; Friedman, Peter A; Luo, Yingbing

    2013-06-11

    An elevated temperature forming system in which a sheet metal workpiece is provided in a first stage position of a multi-stage pre-heater, is heated to a first stage temperature lower than a desired pre-heat temperature, is moved to a final stage position where it is heated to a desired final stage temperature, is transferred to a forming press, and is formed by the forming press. The preheater includes upper and lower platens that transfer heat into workpieces disposed between the platens. A shim spaces the upper platen from the lower platen by a distance greater than a thickness of the workpieces to be heated by the platens and less than a distance at which the upper platen would require an undesirably high input of energy to effectively heat the workpiece without being pressed into contact with the workpiece.

  18. Lubricant Test Methods for Sheet Metal Forming

    DEFF Research Database (Denmark)

    Bay, Niels; Olsson, David Dam; Andreasen, Jan Lasson

    2008-01-01

    Sheet metal forming of tribologically difficult materials such as stainless steel, Al-alloys and Ti-alloys, or forming in tribologically difficult operations like ironing, punching or deep drawing of thick plate, often requires the use of environmentally hazardous lubricants such as chlorinated paraffin oils in order to avoid galling. The present paper describes systematic research in the development of new, environmentally harmless lubricants, focusing on the lubricant testing aspects. A system of laboratory tests has been developed to study lubricant performance under the very varied conditions appearing in different sheet forming operations such as stretch forming, deep drawing, ironing and punching. The laboratory tests have been especially designed to model the conditions in industrial production. Application of the tests for evaluating new lubricants before introducing them in production has...

  19. Center manifolds, normal forms and bifurcations of vector fields with application to coupling between periodic and steady motions

    Science.gov (United States)

    Holmes, Philip J.

    1981-06-01

    We study the instabilities known to aeronautical engineers as flutter and divergence. Mathematically, these states correspond to bifurcations to limit cycles and multiple equilibrium points in a differential equation. Making use of the center manifold and normal form theorems, we concentrate on the situation in which flutter and divergence become coupled, and show that there are essentially two ways in which this is likely to occur. In the first case the system can be reduced to an essential model which takes the form of a single degree of freedom nonlinear oscillator. This system, which may be analyzed by conventional phase-plane techniques, captures all the qualitative features of the full system. We discuss the reduction and show how the nonlinear terms may be simplified and put into normal form. Invariant manifold theory and the normal form theorem play a major role in this work and this paper serves as an introduction to their application in mechanics. Repeating the approach in the second case, we show that the essential model is now three dimensional and that far more complex behavior is possible, including nonperiodic and ‘chaotic’ motions. Throughout, we take a two degree of freedom system as an example, but the general methods are applicable to multi- and even infinite degree of freedom problems.

  20. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann, Alexandre; Grudinin, Sergei

    2017-01-01

    We present a new conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  1. New method for computing ideal MHD normal modes in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Wysocki, F.; Grimm, R.C.

    1984-11-01

    Analytic elimination of the two magnetic surface components of the displacement vector permits the normal mode ideal MHD equations to be reduced to a scalar form. A Galerkin procedure, similar to that used in the PEST codes, is implemented to determine the normal modes computationally. The method retains the efficient stability capabilities of the PEST 2 energy principle code, while allowing computation of the normal mode frequencies and eigenfunctions, if desired. The procedure is illustrated by comparison with earlier versions of PEST and by application to tilting modes in spheromaks, and to stable discrete Alfven waves in tokamak geometry

  2. Method and Apparatus for Forming Nanodroplets

    Science.gov (United States)

    Ackley, Donald; Forster, Anita

    2011-01-01

    This innovation uses partially miscible fluids to form nano- and microdroplets in a microfluidic droplet generator system. Droplet generators fabricated in PDMS (polydimethylsiloxane) are currently being used to fabricate engineered nanoparticles and microparticles. These droplet generators were first demonstrated in a T-junction configuration, followed by a cross-flow configuration. All of these generating devices have used immiscible fluids, such as oil and water. This immiscible fluid system can produce mono-dispersed distributions of droplets and particles with sizes ranging from a few hundred nanometers to a few hundred microns. For applications such as drug delivery, the ability to encapsulate aqueous solutions of drugs within particles formed from the droplets is desirable. Of particular interest are non-polar solvents that can dissolve lipids for the formation of liposomes in the droplet generators. Such fluids include ether, cyclohexane, butanol, and ethyl acetate. Ethyl acetate is of particular interest for two reasons: it is relatively nontoxic, and it is formed from ethanol and acetic acid and may be broken down into its constituents at relatively low concentrations.

  3. High molecular gas fractions in normal massive star-forming galaxies in the young Universe.

    Science.gov (United States)

    Tacconi, L J; Genzel, R; Neri, R; Cox, P; Cooper, M C; Shapiro, K; Bolatto, A; Bouché, N; Bournaud, F; Burkert, A; Combes, F; Comerford, J; Davis, M; Schreiber, N M Förster; Garcia-Burillo, S; Gracia-Carpio, J; Lutz, D; Naab, T; Omont, A; Shapley, A; Sternberg, A; Weiner, B

    2010-02-11

    Stars form from cold molecular interstellar gas. As this is relatively rare in the local Universe, galaxies like the Milky Way form only a few new stars per year. Typical massive galaxies in the distant Universe formed stars an order of magnitude more rapidly. Unless star formation was significantly more efficient, this difference suggests that young galaxies were much more molecular-gas rich. Molecular gas observations in the distant Universe have so far largely been restricted to very luminous, rare objects, including mergers and quasars, and accordingly we do not yet have a clear idea about the gas content of more normal (albeit massive) galaxies. Here we report the results of a survey of molecular gas in samples of typical massive-star-forming galaxies at mean redshifts of about 1.2 and 2.3, when the Universe was respectively 40% and 24% of its current age. Our measurements reveal that distant star forming galaxies were indeed gas rich, and that the star formation efficiency is not strongly dependent on cosmic epoch. The average fraction of cold gas relative to total galaxy baryonic mass at z = 2.3 and z = 1.2 is respectively about 44% and 34%, three to ten times higher than in today's massive spiral galaxies. The slow decrease between z approximately 2 and z approximately 1 probably requires a mechanism of semi-continuous replenishment of fresh gas to the young galaxies.

  4. Generating All Circular Shifts by Context-Free Grammars in Greibach Normal Form

    NARCIS (Netherlands)

    Asveld, Peter R.J.

    2007-01-01

    For each alphabet $\Sigma_n = \{a_1, a_2, \ldots, a_n\}$, linearly ordered by $a_1 < a_2 < \cdots < a_n$, let $C_n$ be the language of circular or cyclic shifts over $\Sigma_n$, i.e., $C_n = \{a_1 a_2 \cdots a_{n-1} a_n, a_2 a_3 \cdots a_n a_1, \ldots, a_n a_1 \cdots a_{n-2} a_{n-1}\}$. We study a few families of context-free grammars $G_n$ ($n \geq 1$) in Greibach normal form such that $G_n$ generates

  5. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  6. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James A.; Heinemann, Klaus [New Mexico Univ., Albuquerque, NM (United States). Dept. of Mathematics and Statistics]; Vogt, Mathias [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)]; Gooden, Matthew [North Carolina State Univ., Raleigh, NC (United States). Dept. of Physics]

    2013-03-15

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wave length λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in

  7. Planar undulator motion excited by a fixed traveling wave. Quasiperiodic averaging normal forms and the FEL pendulum

    International Nuclear Information System (INIS)

    Ellison, James A.; Heinemann, Klaus; Gooden, Matthew

    2013-03-01

    We present a mathematical analysis of planar motion of energetic electrons moving through a planar dipole undulator, excited by a fixed planar polarized plane wave Maxwell field in the X-Ray FEL regime. Our starting point is the 6D Lorentz system, which allows planar motions, and we examine this dynamical system as the wave length λ of the traveling wave varies. By scalings and transformations the 6D system is reduced, without approximation, to a 2D system in a form for a rigorous asymptotic analysis using the Method of Averaging (MoA), a long time perturbation theory. The two dependent variables are a scaled energy deviation and a generalization of the so-called ponderomotive phase. As λ varies the system passes through resonant and nonresonant (NR) zones and we develop NR and near-to-resonant (NtoR) MoA normal form approximations. The NtoR normal forms contain a parameter which measures the distance from a resonance. For a special initial condition, for the planar motion and on resonance, the NtoR normal form reduces to the well known FEL pendulum system. We then state and prove NR and NtoR first-order averaging theorems which give explicit error bounds for the normal form approximations. We prove the theorems in great detail, giving the interested reader a tutorial on mathematically rigorous perturbation theory in a context where the proofs are easily understood. The proofs are novel in that they do not use a near identity transformation and they use a system of differential inequalities. The NR case is an example of quasiperiodic averaging where the small divisor problem enters in the simplest possible way. To our knowledge the planar problem has not been analyzed with the generality we aspire to here nor has the standard FEL pendulum system been derived with associated error bounds as we do here. We briefly discuss the low gain theory in light of our NtoR normal form. Our mathematical treatment of the noncollective FEL beam dynamics problem in the
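    For the special on-resonance case mentioned in both records, the normal form reduces to the standard FEL pendulum; the sketch below integrates that pendulum with SciPy to contrast a trapped (bucket) and a circulating trajectory. The frequency and initial conditions are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fel_pendulum(t, y, omega2):
    """Standard FEL pendulum: theta' = eta, eta' = -omega2 * sin(theta),
    with theta the ponderomotive phase and eta the scaled energy
    deviation.  The on-resonance limit of the paper's NtoR normal form."""
    theta, eta = y
    return [eta, -omega2 * np.sin(theta)]

# separatrix at |eta| = 2 for omega2 = 1: eta0 = 0.5 is trapped,
# eta0 = 2.5 circulates over the top of the potential
for eta0 in (0.5, 2.5):
    sol = solve_ivp(fel_pendulum, (0.0, 20.0), [0.0, eta0],
                    args=(1.0,), rtol=1e-8)
    theta_end, eta_end = sol.y[:, -1]
    print(f"eta0={eta0}: theta(20)={theta_end:+.3f}, eta(20)={eta_end:+.3f}")
```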

  8. Advanced method for making vitreous waste forms

    International Nuclear Information System (INIS)

    Pope, J.M.; Harrison, D.E.

    1980-01-01

    A process is described for making waste glass that circumvents the problems of dissolving nuclear waste in molten glass at high temperatures. Because the reactive mixing process is independent of the inherent viscosity of the melt, any glass composition can be prepared with equal facility. Separation of the mixing and melting operations permits novel glass fabrication methods to be employed

  9. Normal form of particle motion under the influence of an ac dipole

    Directory of Open Access Journals (Sweden)

    R. Tomás

    2002-05-01

    Full Text Available ac dipoles in accelerators are used to excite coherent betatron oscillations at a drive frequency close to the tune. These beam oscillations may last arbitrarily long and, in principle, there is no significant emittance growth if the ac dipole is adiabatically turned on and off. Therefore the ac dipole seems to be an adequate tool for nonlinear diagnostics provided the particle motion is well described in the presence of the ac dipole and nonlinearities. Normal forms and Lie algebra are powerful tools to study the nonlinear content of an accelerator lattice. In this article a way to obtain the normal form of the Hamiltonian of an accelerator with an ac dipole is described. The particle motion to first order in the nonlinearities is derived using Lie algebra techniques. The dependence of the Hamiltonian terms on the longitudinal coordinate is studied showing that they vary differently depending on the ac dipole parameters. The relation is given between the lines of the Fourier spectrum of the turn-by-turn motion and the Hamiltonian terms.

  10. Principal Typings in a Restricted Intersection Type System for Beta Normal Forms with De Bruijn Indices

    Directory of Open Access Journals (Sweden)

    Daniel Ventura

    2010-01-01

    Full Text Available The lambda-calculus with de Bruijn indices assembles each alpha-class of lambda-terms in a unique term, using indices instead of variable names. Intersection types provide finitary type polymorphism and can characterise normalisable lambda-terms through the property that a term is normalisable if and only if it is typeable. To be closer to computations and to simplify the formalisation of the atomic operations involved in beta-contractions, several calculi of explicit substitution were developed mostly with de Bruijn indices. Versions of explicit substitutions calculi without types and with simple type systems are well investigated in contrast to versions with more elaborate type systems such as intersection types. In previous work, we introduced a de Bruijn version of the lambda-calculus with an intersection type system and proved that it preserves subject reduction, a basic property of type systems. In this paper a version with de Bruijn indices of an intersection type system originally introduced to characterise principal typings for beta-normal forms is presented. We present the characterisation in this new system and the corresponding versions for the type inference and the reconstruction of normal forms from principal typings algorithms. We briefly discuss the failure of the subject reduction property and some possible solutions for it.

  11. Theory and praxis of map analysis in CHEF part 1: Linear normal form

    Energy Technology Data Exchange (ETDEWEB)

    Michelotti, Leo; /Fermilab

    2008-10-01

    This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires its inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
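    A minimal numerical sketch of the linear step the memo describes: given a stable 2x2 symplectic one-turn matrix M, compute the tune and a Courant-Snyder normalizing matrix A such that inv(A) M A is a pure rotation. This is an independent illustration, not CHEF's code, and the example matrix is invented.

```python
import numpy as np

def linear_normal_form(M):
    """Tune and normalizing matrix A for a stable 2x2 symplectic
    one-turn matrix M, so that inv(A) @ M @ A is a pure rotation
    (Courant-Snyder parameterization)."""
    assert abs(np.linalg.det(M) - 1.0) < 1e-12, "M must be symplectic"
    cos_mu = 0.5 * np.trace(M)
    assert abs(cos_mu) < 1.0, "unstable: |trace M| >= 2"
    sin_mu = np.sign(M[0, 1]) * np.sqrt(1.0 - cos_mu**2)  # beta > 0
    beta = M[0, 1] / sin_mu
    alpha = (M[0, 0] - M[1, 1]) / (2.0 * sin_mu)
    A = np.array([[np.sqrt(beta), 0.0],
                  [-alpha / np.sqrt(beta), 1.0 / np.sqrt(beta)]])
    tune = np.arctan2(sin_mu, cos_mu) / (2.0 * np.pi)
    return tune, A

M = np.array([[0.8, 1.2], [-0.3, 0.8]])   # invented one-turn matrix
tune, A = linear_normal_form(M)
print(f"tune = {tune:.4f}")
print(np.round(np.linalg.inv(A) @ M @ A, 6))  # rotation by 2*pi*tune
```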

  12. Method for forming polymerized microfluidic devices

    Science.gov (United States)

    Sommer, Gregory J.; Hatch, Anson V.; Wang, Ying-Chih; Singh, Anup K.; Renzi, Ronald F.; Claudnic, Mark R.

    2013-03-12

    Methods for making a microfluidic device according to embodiments of the present invention include defining a cavity. Polymer precursor solution is positioned in the cavity and exposed to light to begin the polymerization process and define a microchannel. In some embodiments, after the polymerization process is partially complete, a solvent rinse is performed, or fresh polymer precursor is introduced into the microchannel. This may promote removal of unpolymerized material from the microchannel and enable smaller feature sizes. The polymer precursor solution may contain an iniferter. Polymerized features therefore may be capped with the iniferter, which is photoactive. The iniferter may aid later binding of a polyacrylamide gel to the microchannel surface.

  13. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Full Text Available Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
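    The MSP test itself is not reproduced here; as a sketch of the shared ingredients (an empirical-CDF statistic and size-correct handling of the composite null with estimated parameters), the code below computes a Monte Carlo p-value for a Lilliefors-type Kolmogorov-Smirnov statistic. Simulating the same statistic under the null with estimated mean and variance is what makes the test size-correct.

```python
import numpy as np
from scipy import stats

def composite_normality_pvalue(x, n_sim=2000, seed=0):
    """Monte Carlo p-value for a KS statistic computed on the
    standardized sample (composite null: mean and variance unknown).
    The null distribution is parameter-free, so simulation gives a
    size-correct test."""
    rng = np.random.default_rng(seed)
    def ks_stat(sample):
        z = (sample - sample.mean()) / sample.std(ddof=1)
        return stats.kstest(z, "norm").statistic
    d_obs = ks_stat(np.asarray(x, dtype=float))
    d_null = np.array([ks_stat(rng.standard_normal(len(x)))
                       for _ in range(n_sim)])
    return np.mean(d_null >= d_obs)

rng = np.random.default_rng(1)
print(composite_normality_pvalue(rng.standard_normal(100)))   # large p
print(composite_normality_pvalue(rng.exponential(size=100)))  # tiny p
```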

  14. Theory and praxis of map analysis in CHEF part 2: Nonlinear normal form

    International Nuclear Information System (INIS)

    Michelotti, Leo

    2009-01-01

    This is the second of three memos describing how normal form map analysis is implemented in CHEF. The first (1) explained the manipulations required to assure that initial, linear transformations preserved Poincare invariants, thereby confirming correct normalization of action-angle coordinates. In this one, the transformation will be extended to nonlinear terms. The third, describing how the algorithms were implemented within the software of CHEF's libraries, most likely will never be written. The first section, Section 2, quickly lays out preliminary concepts and relationships. In Section 3, we shall review the perturbation theory - an iterative sequence of transformations that converts a nonlinear mapping into its normal form - and examine the equation which moves calculations from one step to the next. Following that is a section titled 'Interpretation', which identifies connections between the normalized mappings and idealized, integrable, fictitious Hamiltonian models. A final section contains closing comments, some of which may - but probably will not - preview work to be done later. My reasons for writing this memo and its predecessor have already been expressed. (1) To them can be added this: 'black box code' encourages users to proceed with little or no understanding of what it does or how it operates. So far, CHEF has avoided this trap admirably by failing to attract potential users. However, we reached a watershed last year: even I now have difficulty following the software through its maze of operations. Extensions to CHEF's physics functionalities, software upgrades, and even simple maintenance are becoming more difficult than they should be. I hope these memos will mark parts of the maze for easier navigation in the future. Despite appearances to the contrary, I tried to include no (or very little) more than the minimum needed to understand what CHEF's nonlinear analysis modules do. As with the first memo, material has been lifted - and modified - from

  15. Theory and praxis of map analysis in CHEF part 2: Nonlinear normal form

    Energy Technology Data Exchange (ETDEWEB)

    Michelotti, Leo; /FERMILAB

    2009-04-01

    This is the second of three memos describing how normal form map analysis is implemented in CHEF. The first [1] explained the manipulations required to assure that initial, linear transformations preserved Poincare invariants, thereby confirming correct normalization of action-angle coordinates. In this one, the transformation will be extended to nonlinear terms. The third, describing how the algorithms were implemented within the software of CHEF's libraries, most likely will never be written. The first section, Section 2, quickly lays out preliminary concepts and relationships. In Section 3, we shall review the perturbation theory - an iterative sequence of transformations that converts a nonlinear mapping into its normal form - and examine the equation which moves calculations from one step to the next. Following that is a section titled 'Interpretation', which identifies connections between the normalized mappings and idealized, integrable, fictitious Hamiltonian models. A final section contains closing comments, some of which may - but probably will not - preview work to be done later. My reasons for writing this memo and its predecessor have already been expressed. [1] To them can be added this: 'black box code' encourages users to proceed with little or no understanding of what it does or how it operates. So far, CHEF has avoided this trap admirably by failing to attract potential users. However, we reached a watershed last year: even I now have difficulty following the software through its maze of operations. Extensions to CHEF's physics functionalities, software upgrades, and even simple maintenance are becoming more difficult than they should be. I hope these memos will mark parts of the maze for easier navigation in the future. Despite appearances to the contrary, I tried to include no (or very little) more than the minimum needed to understand what CHEF's nonlinear analysis modules do. As with the first memo, material

  16. A structure-preserving approach to normal form analysis of power systems; Una propuesta de preservacion de estructura al analisis de su forma normal en sistemas de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Carrillo, Irma

    2008-01-15

    Power system dynamic behavior is inherently nonlinear and is driven by different processes at different time scales. The size and complexity of these mechanisms have stimulated the search for methods that reduce the original dimension but retain a certain degree of accuracy. In this dissertation, a novel nonlinear dynamical analysis method for the analysis of large amplitude oscillations that embraces ideas from normal form theory and singular perturbation techniques is proposed. This approach allows the full potential of the normal form method to be reached, and is suitably general for application to a wide variety of nonlinear systems. Drawing on the formal theory of dynamical systems, a structure-preserving model of the system is developed that preserves network and load characteristics. By exploiting the separation of fast and slow time scales of the model, an efficient approach based on singular perturbation techniques is then derived for constructing a nonlinear power system representation that accurately preserves network structure. The method requires no reduction of the constraint equations and therefore gives information about the effect of network and load characteristics on system behavior. Analytical expressions are then developed that provide approximate solutions to system performance near a singularity, and techniques for interpreting these solutions in terms of modal functions are given. New insights into the nature of nonlinear oscillations are also offered and criteria for characterizing network effects on nonlinear system behavior are proposed. Theoretical insight into the behavior of dynamic coupling of differential-algebraic equations and the origin of nonlinearity is given, and implications for the design and placement of power system controllers in complex nonlinear systems are discussed. The extent of applicability of the proposed procedure is demonstrated by analyzing nonlinear behavior in two realistic test power systems

  17. Shear Stress-Normal Stress (Pressure) Ratio Decides Forming Callus in Patients with Diabetic Neuropathy

    Science.gov (United States)

    Noguchi, Hiroshi; Takehara, Kimie; Ohashi, Yumiko; Suzuki, Ryo; Yamauchi, Toshimasa; Kadowaki, Takashi; Sanada, Hiromi

    2016-01-01

    Aim. Callus is a risk factor leading to severe diabetic foot ulcer; thus, prevention of callus formation is important. However, the normal stress (pressure) and shear stress associated with callus have not been clarified. Additionally, as a new variable, a shear stress-normal stress (pressure) ratio (SPR) was examined. The purpose was to clarify the external force associated with callus formation in patients with diabetic neuropathy. Methods. The external force at the 1st, 2nd, and 5th metatarsal heads (MTH), as callus predilection regions, was measured. The SPR was calculated by dividing shear stress by normal stress (pressure), concretely as peak values (SPR-p) and time integral values (SPR-i). The optimal cut-off point was determined. Results. Callus formation regions of the 1st and 2nd MTH had higher SPR-i than non-callus formation regions. The cut-off value for the 1st MTH was 0.60 and for the 2nd MTH was 0.50. For the 5th MTH, variables pertaining to the external forces could not be determined to be indicators of callus formation because of low accuracy. Conclusions. The callus formation cut-off values of the 1st and 2nd MTH were clarified. In the future, it will be necessary to confirm the effect of using appropriate footwear and gait training on lowering SPR-i. PMID:28050567

  18. Shear Stress-Normal Stress (Pressure Ratio Decides Forming Callus in Patients with Diabetic Neuropathy

    Directory of Open Access Journals (Sweden)

    Ayumi Amemiya

    2016-01-01

    Full Text Available Aim. Callus is a risk factor leading to severe diabetic foot ulcer; thus, prevention of callus formation is important. However, the normal stress (pressure) and shear stress associated with callus have not been clarified. Additionally, as a new variable, a shear stress-normal stress (pressure) ratio (SPR) was examined. The purpose was to clarify the external force associated with callus formation in patients with diabetic neuropathy. Methods. The external force at the 1st, 2nd, and 5th metatarsal heads (MTH), as callus predilection regions, was measured. The SPR was calculated by dividing shear stress by normal stress (pressure), concretely as peak values (SPR-p) and time integral values (SPR-i). The optimal cut-off point was determined. Results. Callus formation regions of the 1st and 2nd MTH had higher SPR-i than non-callus formation regions. The cut-off value for the 1st MTH was 0.60 and for the 2nd MTH was 0.50. For the 5th MTH, variables pertaining to the external forces could not be determined to be indicators of callus formation because of low accuracy. Conclusions. The callus formation cut-off values of the 1st and 2nd MTH were clarified. In the future, it will be necessary to confirm the effect of using appropriate footwear and gait training on lowering SPR-i.
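    A small sketch of how SPR-p and SPR-i might be computed from sampled stance-phase signals, reading the abstract's definitions as peak shear divided by peak pressure and time-integrated shear divided by time-integrated pressure; that reading, and the signals, units and sampling below, are hypothetical.

```python
import numpy as np

def spr_values(shear, pressure, t):
    """Peak (SPR-p) and time-integral (SPR-i) shear/pressure ratios
    from sampled signals over one stance phase."""
    shear, pressure = np.asarray(shear), np.asarray(pressure)
    spr_p = shear.max() / pressure.max()
    spr_i = np.trapz(shear, t) / np.trapz(pressure, t)
    return spr_p, spr_i

t = np.linspace(0.0, 0.6, 7)                                 # seconds
shear = np.array([5.0, 20.0, 35.0, 40.0, 30.0, 15.0, 5.0])   # kPa
pressure = np.array([40.0, 90.0, 120.0, 110.0, 80.0, 50.0, 30.0])  # kPa
spr_p, spr_i = spr_values(shear, pressure, t)
print(f"SPR-p = {spr_p:.2f}, SPR-i = {spr_i:.2f}")  # vs 0.50/0.60 cut-offs
```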

  19. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1972-01-01

    1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this stan...

  20. Different methods of measuring ADC values in normal human brain

    International Nuclear Information System (INIS)

    Wei Youping; Sheng Junkang; Zhang Caiyuan

    2009-01-01

    Objective: To investigate a better method of measuring ADC values of the normal brain and to provide a reference for further research. Methods: MR imaging of twenty healthy people was reviewed. All of them underwent routine MRI scans and echo-planar diffusion-weighted imaging (DWI), and ADC maps were reconstructed on a workstation. Six regions of interest (ROI) were selected for each subject, and the mean ADC values were obtained for each position on the DWI and ADC maps respectively. Results: In the hypothalamus, the ADCM, ADCP, and ADCS values calculated on the anisotropic DWI maps showed no significant difference (P>0.05); in the frontal white matter and the hindlimb of the internal capsule there was a significant difference (P<0.05). The ADCave values differed significantly from direct measurement on the anisotropic (isotropic) ADC map (P<0.001). Conclusion: Diffusion of water in the frontal white matter and internal capsule is anisotropic, but it is isotropic in the hypothalamus; the four quantitative methods of measuring ADC values differ significantly, but ADC values calculated from the DWI map are more accurate, and quantitative diffusion studies of brain tissue should also take the diffusion measurement method into account. (authors)
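
    For readers unfamiliar with how an ADC value is derived, a sketch of the standard mono-exponential calculation from a b=0 image and one diffusion-weighted image, with an ROI mean; the b-value and image variables are assumptions, and the study's ADC maps were produced on the scanner workstation rather than by this formula.

        import numpy as np

        def adc_map(s0, sb, b=1000.0):
            # pixel-wise ADC = ln(S0/Sb) / b, with b in s/mm^2
            s0 = np.maximum(np.asarray(s0, dtype=float), 1e-6)
            sb = np.maximum(np.asarray(sb, dtype=float), 1e-6)
            return np.log(s0 / sb) / b

        def roi_mean_adc(adc, mask):
            # mean ADC over a boolean region-of-interest mask
            return float(adc[mask].mean())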

  1. Normal form analysis of linear beam dynamics in a coupled storage ring

    International Nuclear Information System (INIS)

    Wolski, Andrzej; Woodley, Mark D.

    2004-01-01

    The techniques of normal form analysis, well known in the literature, can be used to provide a straightforward characterization of linear betatron dynamics in a coupled lattice. Here, we consider both the beam distribution and the betatron oscillations in a storage ring. We find that the beta functions for uncoupled motion generalize in a simple way to the coupled case. Defined in the way that we propose, the beta functions remain well behaved (positive and finite) under all circumstances, and have essentially the same physical significance for the beam size and betatron oscillation amplitude as in the uncoupled case. Application of this analysis to the online modeling of the PEP-II rings is also discussed

  2. A Mathematical Framework for Critical Transitions: Normal Forms, Variance and Applications

    Science.gov (United States)

    Kuehn, Christian

    2013-06-01

    Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise.
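
    The variance-based early-warning signs discussed above are typically computed on a detrended sliding window; a minimal sketch, with the moving-average detrend and window length as assumptions (the paper compares several detrending techniques for the Stommel-Cessi model).

        import numpy as np

        def rolling_variance(x, window=100):
            # variance of the detrended series in a sliding window; an upward
            # trend in this statistic is the classical early-warning sign
            x = np.asarray(x, dtype=float)
            trend = np.convolve(x, np.ones(window) / window, mode="same")
            resid = x - trend
            return np.array([resid[i - window:i].var()
                             for i in range(window, len(x) + 1)])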

  3. Methods for forming particles from single source precursors

    Science.gov (United States)

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  4. Child in a Form: The Definition of Normality and Production of Expertise in Teacher Statement Forms--The Case of Northern Finland, 1951-1990

    Science.gov (United States)

    Koskela, Anne; Vehkalahti, Kaisa

    2017-01-01

    This article shows the importance of paying attention to the role of professional devices, such as standardised forms, as producers of normality and deviance in the history of education. Our case study focused on the standardised forms used by teachers during child guidance clinic referrals and transfers to special education in northern Finland,…

  5. Metacognition and Reading: Comparing Three Forms of Metacognition in Normally Developing Readers and Readers with Dyslexia.

    Science.gov (United States)

    Furnes, Bjarte; Norman, Elisabeth

    2015-08-01

    Metacognition refers to 'cognition about cognition' and includes metacognitive knowledge, strategies and experiences (Efklides, 2008; Flavell, 1979). Research on reading has shown that better readers demonstrate more metacognitive knowledge than poor readers (Baker & Beall, 2009), and that reading ability improves through strategy instruction (Gersten, Fuchs, Williams, & Baker, 2001). The current study is the first to specifically compare the three forms of metacognition in dyslexic (N = 22) versus normally developing readers (N = 22). Participants read two factual texts, with learning outcome measured by a memory task. Metacognitive knowledge and skills were assessed by self-report. Metacognitive experiences were measured by predictions of performance and judgments of learning. Individuals with dyslexia showed insight into their reading problems, but less general knowledge of how to approach text reading. They more often reported lack of available reading strategies, but groups did not differ in the use of deep and surface strategies. Learning outcome and mean ratings of predictions of performance and judgments of learning were lower in dyslexic readers, but not the accuracy with which metacognitive experiences predicted learning. Overall, the results indicate that dyslexic reading and spelling problems are not generally associated with lower levels of metacognitive knowledge, metacognitive strategies or sensitivity to metacognitive experiences in reading situations. © 2015 The Authors. Dyslexia published by John Wiley & Sons Ltd.

  6. Development of standard testing methods for nuclear-waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Nelson, R.D.

    1981-11-01

    Standard test methods for waste package component development and design, safety analyses, and licensing are being developed for the Nuclear Waste Materials Handbook. This paper describes mainly the testing methods for obtaining waste form materials data

  7. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    Directory of Open Access Journals (Sweden)

    Hisham Qasrawi

    2016-01-01

    Full Text Available The workability-dispersion-cohesion method is a new proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within standard specifications, different water to cement ratios, and various degrees of workability. Simple linear relationships were developed for variables encountered in the mix design and were presented in graphical forms. The method can be used in countries where the grading or fineness of the available materials is different from the common international specifications (such as ASTM or BS). Results were compared to the ACI and British methods of mix design. The method can be extended to cover all types of concrete.

  8. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors and intermediate products formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2012-12-04

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M′X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M′(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M′ is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M′(ER)₂ or L₂N(μ-X)₂M′(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  9. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M′X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M′(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M′ is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M′(ER)₂ or L₂N(μ-X)₂M′(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  10. Methods of forming thermal management systems and thermal management methods

    Science.gov (United States)

    Gering, Kevin L.; Haefner, Daryl R.

    2012-06-05

    A thermal management system for a vehicle includes a heat exchanger having a thermal energy storage material provided therein, a first coolant loop thermally coupled to an electrochemical storage device located within the first coolant loop and to the heat exchanger, and a second coolant loop thermally coupled to the heat exchanger. The first and second coolant loops are configured to carry distinct thermal energy transfer media. The thermal management system also includes an interface configured to facilitate transfer of heat generated by an internal combustion engine to the heat exchanger via the second coolant loop in order to selectively deliver the heat to the electrochemical storage device. Thermal management methods are also provided.

  11. Investigation of reliability, validity and normality of the Persian version of the California Critical Thinking Skills Test, Form B (CCTST)

    Directory of Open Access Journals (Sweden)

    Khallli H

    2003-04-01

    Full Text Available Background: To evaluate the effectiveness of present educational programs in terms of students' achieving problem solving, decision making, and critical thinking skills, reliable, valid, and standard instruments are needed. Purposes: To investigate the reliability, validity, and norms of the CCTST Form B. The California Critical Thinking Skills Test contains 34 multiple-choice questions, each with one correct answer, in the five critical thinking (CT) cognitive skills domains. Methods: The translated CCTST Form B was given to 405 BSN nursing students of nursing faculties located in Tehran (Tehran, Iran and Shahid Beheshti Universities), selected through random sampling. In order to determine face and content validity, the test was translated and edited by Persian and English language professors and researchers. It was also confirmed by the judgments of a panel of medical education experts and psychology professors. CCTST reliability was determined from internal consistency using KR-20. The construct validity of the test was investigated with factor analysis, internal consistency, and group differences. Results: The test reliability coefficient was 0.62. Factor analysis indicated that the CCTST is formed of 5 factors (elements), namely: Analysis, Evaluation, Inference, Inductive and Deductive Reasoning. The internal consistency method showed that all subscales had high and positive correlations with the total test score. The group difference method between nursing and philosophy students (n=50) indicated a meaningful difference between nursing and philosophy students' scores (t=-4.95, p=0.0001). Percentile norms of the scores showed that the 50th percentile corresponds to a raw score of 11, and the 95th and 5th percentiles correspond to raw scores of 17 and 6, respectively. Conclusions: The results revealed that the test is sufficiently reliable as a research tool, that all subscales measure a single construct (critical thinking), and that it is able to distinguish…
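
    The reliability statistic reported above (KR-20) has a short closed form; a sketch for dichotomously scored items, assuming a rows-by-columns 0/1 score matrix.

        import numpy as np

        def kr20(item_scores):
            # Kuder-Richardson formula 20: r = k/(k-1) * (1 - sum(p*q) / var(total))
            X = np.asarray(item_scores, dtype=float)   # rows = examinees, cols = items
            k = X.shape[1]                              # 34 items for the CCTST
            p = X.mean(axis=0)                          # proportion correct per item
            q = 1.0 - p
            total_var = X.sum(axis=1).var(ddof=1)       # variance of total test scores
            return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)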

  12. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
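
    Quantile normalization, one of the two recommended methods, is simple to state: force every sample onto a common reference distribution. A sketch for a features-by-samples count matrix; tie handling is simplified.

        import numpy as np

        def quantile_normalize(counts):
            X = np.asarray(counts, dtype=float)      # rows = miRNAs, cols = samples
            order = np.argsort(X, axis=0)            # per-sample ordering
            ref = np.sort(X, axis=0).mean(axis=1)    # reference distribution
            out = np.empty_like(X)
            for j in range(X.shape[1]):
                out[order[:, j], j] = ref            # assign reference values by rank
            return out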

  13. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner, Jérémie; Celisse, Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
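
    A sketch of the principle behind such a test: compare the sample against the Gaussian fitted to it via a kernel MMD. Here the Gaussian expectation is approximated by Monte Carlo, and the kernel bandwidth is an assumption; the paper's actual statistic and its calibration in the RKHS differ in detail.

        import numpy as np

        def gaussian_kernel(x, y, sigma=1.0):
            d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def mmd2_to_gaussian(X, n_mc=2000, sigma=1.0, seed=0):
            # biased (V-statistic) estimate of MMD^2 between X (n-by-d) and the
            # moment-matched Gaussian, the latter sampled by Monte Carlo
            rng = np.random.default_rng(seed)
            X = np.asarray(X, dtype=float)
            mu, cov = X.mean(axis=0), np.atleast_2d(np.cov(X.T))
            Y = rng.multivariate_normal(mu, cov, size=n_mc)
            kxx = gaussian_kernel(X, X, sigma).mean()
            kyy = gaussian_kernel(Y, Y, sigma).mean()
            kxy = gaussian_kernel(X, Y, sigma).mean()
            return kxx + kyy - 2.0 * kxy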

  14. Possibilities of Particle Finite Element Methods in Industrial Forming Processes

    Science.gov (United States)

    Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.

    2007-04-01

    The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and new boundaries generation. The description of the most distinguishing aspects of the PFEM, and its application to simulation of representative forming processes, illustrate the proposed methodology.

  15. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
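
    Linear regression normalization, one of the systematically well-performing methods above, can be sketched in a few lines for log-scale intensities: regress each sample on a row-median reference and invert the fitted bias. Missing-value handling and the choice of reference are assumptions.

        import numpy as np

        def linear_regression_normalize(logX):
            X = np.asarray(logX, dtype=float)        # rows = peptides, cols = samples
            ref = np.nanmedian(X, axis=1)            # reference profile
            out = np.array(X, copy=True)
            for j in range(X.shape[1]):
                ok = ~np.isnan(X[:, j]) & ~np.isnan(ref)
                a, b = np.polyfit(ref[ok], X[ok, j], 1)   # sample ~ a*ref + b
                out[:, j] = (X[:, j] - b) / a             # undo the fitted bias
            return out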

  16. A new method locating good glass-forming compositions

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dechuan [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Geng, Yan [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Li, Zhengkun [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Liu, Dingming [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Fu, Huameng; Zhu, Zhengwang [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China); Qi, Yang, E-mail: qiyang@imp.neu.edu.cn [Department of Materials Physics and Chemistry, Northeastern University, No.3-11, Wenhua Road, Shenyang, 110819 (China); Zhang, Haifeng, E-mail: hfzhang@imr.ac.cn [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang, 110016 (China)

    2015-10-15

    A new method was proposed to pinpoint the compositions with good glass forming ability (GFA) by combining atomic clusters and mixing entropy. The clusters were confirmed by analyzing competing crystalline phases. The method was applied to the Zr–Al–Ni–Cu–Ag alloy system. A series of glass formers with diameter up to 20 mm were quickly detected in this system. The good glass formers were located only after trying 5 compositions around the calculated composition. The method was also effective in other multi-component systems. This method might provide a new way to understand glass formation and to quickly pinpoint compositions with high GFA. - Highlights: • A new method was proposed to quickly design glass formers with high glass forming ability. • The method of designing pentabasic Zr–Al–Ni–Cu–Ag alloys was applied. • A series of new Zr-based bulk metallic glasses with critical diameter of 20 mm were discovered.

  17. A new method locating good glass-forming compositions

    International Nuclear Information System (INIS)

    Yu, Dechuan; Geng, Yan; Li, Zhengkun; Liu, Dingming; Fu, Huameng; Zhu, Zhengwang; Qi, Yang; Zhang, Haifeng

    2015-01-01

    A new method was proposed to pinpoint the compositions with good glass forming ability (GFA) by combining atomic clusters and mixing entropy. The clusters were confirmed by analyzing competing crystalline phases. The method was applied to the Zr–Al–Ni–Cu–Ag alloy system. A series of glass formers with diameter up to 20 mm were quickly detected in this system. The good glass formers were located only after trying 5 compositions around the calculated composition. The method was also effective in other multi-component systems. This method might provide a new way to understand glass formation and to quickly pinpoint compositions with high GFA. - Highlights: • A new method was proposed to quickly design glass formers with high glass forming ability. • The method of designing pentabasic Zr–Al–Ni–Cu–Ag alloys was applied. • A series of new Zr-based bulk metallic glasses with critical diameter of 20 mm were discovered

  18. Cognitive Factors in the Choice of Syntactic Form by Aphasic and Normal Speakers of English and Japanese: The Speaker's Impulse.

    Science.gov (United States)

    Menn, Lise; And Others

    This study examined the role of empathy in the choice of syntactic form and the degree of independence of pragmatic and syntactic abilities in a range of aphasic patients. Study 1 involved 9 English-speaking and 9 Japanese-speaking aphasic subjects with 10 English-speaking and 4 Japanese normal controls. Study 2 involved 14 English- and 6…

  19. A simple global representation for second-order normal forms of Hamiltonian systems relative to periodic flows

    International Nuclear Information System (INIS)

    Avendaño-Camacho, M; Vallejo, J A; Vorobjev, Yu

    2013-01-01

    We study the determination of the second-order normal form for perturbed Hamiltonians relative to the periodic flow of the unperturbed Hamiltonian H0. The formalism presented here is global, and can be easily implemented in any computer algebra system. We illustrate it by means of two examples: the Hénon-Heiles and the elastic pendulum Hamiltonians. (paper)
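
    For orientation, the averaging formulas that such a second-order normal form typically takes; a sketch in standard Lie-transform notation (sign and ordering conventions vary between references, and the paper's global formalism is more general).

        % H = H_0 + \varepsilon H_1 + \varepsilon^2 H_2, averaging along the
        % T-periodic flow of H_0:
        %   \langle F \rangle = \frac{1}{T} \int_0^T F \circ \mathrm{Fl}^{H_0}_t \, dt
        \bar{H}_1 = \langle H_1 \rangle, \qquad
        \{H_0, S_1\} = \bar{H}_1 - H_1, \qquad
        \bar{H}_2 = \left\langle H_2 + \tfrac{1}{2}\{H_1 + \bar{H}_1,\, S_1\} \right\rangle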

  20. Algorithms for finding Chomsky and Greibach normal forms for a fuzzy context-free grammar using an algebraic approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, E.T.

    1983-01-01

    Algorithms for the construction of the Chomsky and Greibach normal forms for a fuzzy context-free grammar using the algebraic approach are presented and illustrated by examples. The results obtained in this paper may have useful applications in fuzzy languages, pattern recognition, information storage and retrieval, artificial intelligence, database and pictorial information systems. 16 references.
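
    The core of a Chomsky-normal-form construction fits in a short routine; a sketch for a crisp grammar (epsilon- and unit-free input assumed), with the understanding that in the fuzzy case each production also carries a membership grade propagated through the rewritings. The dictionary encoding is an assumption.

        from itertools import count

        def to_cnf(rules):
            # rules: {lhs: [tuple of rhs symbols]}; nonterminals are uppercase
            # strings, terminals lowercase strings
            fresh, new_rules, term_names = count(), {}, {}

            def add(lhs, rhs):
                new_rules.setdefault(lhs, []).append(tuple(rhs))

            def wrap(sym):
                # TERM step: terminal a inside a long rule becomes T_a with T_a -> a
                if sym.islower():
                    if sym not in term_names:
                        term_names[sym] = "T_" + sym
                        add(term_names[sym], (sym,))
                    return term_names[sym]
                return sym

            for lhs, rhss in rules.items():
                for rhs in rhss:
                    if len(rhs) == 1:
                        add(lhs, rhs)              # A -> a is already in CNF
                        continue
                    syms, head = [wrap(s) for s in rhs], lhs
                    while len(syms) > 2:           # BIN step: chain of binary rules
                        x = "X%d" % next(fresh)
                        add(head, (syms[0], x))
                        head, syms = x, syms[1:]
                    add(head, tuple(syms))
            return new_rules

        # Example: S -> aSB | b, B -> bB | b
        # to_cnf({"S": [("a", "S", "B"), ("b",)], "B": [("b", "B"), ("b",)]})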

  1. Machine learning methods for clinical forms analysis in mental health.

    Science.gov (United States)

    Strauss, John; Peguero, Arturo Martinez; Hirst, Graeme

    2013-01-01

    In preparation for a clinical information system implementation, the Centre for Addiction and Mental Health (CAMH) Clinical Information Transformation project completed multiple preparation steps. An automated process was desired to supplement the onerous task of manual analysis of clinical forms. We used natural language processing (NLP) and machine learning (ML) methods for a series of 266 separate clinical forms. For the investigation, documents were represented by feature vectors. We used four ML algorithms for our examination of the forms: cluster analysis, k-nearest neighbours (kNN), decision trees and support vector machines (SVM). Parameters for each algorithm were optimized. SVM had the best performance with a precision of 64.6%. Though we did not find any method sufficiently accurate for practical use, to our knowledge this approach to forms has not been used previously in mental health.
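
    A sketch of the best-performing configuration described above (document feature vectors plus a linear SVM), using scikit-learn; the feature extraction settings, labels, and scoring choice are assumptions, not the study's exact pipeline.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        def evaluate_form_classifier(texts, labels):
            # texts: extracted text of each clinical form; labels: manual categories
            model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                                  LinearSVC(C=1.0))
            return cross_val_score(model, texts, labels, cv=5,
                                   scoring="precision_macro")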

  2. Carbon nanotubes and methods of forming same at low temperature

    Science.gov (United States)

    Biris, Alexandru S.; Dervishi, Enkeleda

    2017-05-02

    In one aspect of the invention, a method for growth of carbon nanotubes includes providing a graphitic composite, decorating the graphitic composite with metal nanostructures to form graphene-contained powders, and heating the graphene-contained powders at a target temperature to form the carbon nanotubes in an argon/hydrogen environment that is devoid of a hydrocarbon source. In one embodiment, the target temperature can be as low as about 150 °C (±5 °C).

  3. Hollow fiber membranes and methods for forming same

    Science.gov (United States)

    Bhandari, Dhaval Ajit; McCloskey, Patrick Joseph; Howson, Paul Edward; Narang, Kristi Jean; Koros, William

    2016-03-22

    The invention provides improved hollow fiber membranes having at least two layers, and methods for forming the same. The methods include co-extruding a first composition, a second composition, and a third composition to form a dual layer hollow fiber membrane. The first composition includes a glassy polymer; the second composition includes a polysiloxane; and the third composition includes a bore fluid. The dual layer hollow fiber membranes include a first layer and a second layer, the first layer being a porous layer which includes the glassy polymer of the first composition, and the second layer being a polysiloxane layer which includes the polysiloxane of the second composition.

  4. Flux form Semi-Lagrangian methods for parabolic problems

    Directory of Open Access Journals (Sweden)

    Bonaventura Luca

    2016-09-01

    Full Text Available A semi-Lagrangian method for parabolic problems is proposed, that extends previous work by the authors to achieve a fully conservative, flux-form discretization of linear and nonlinear diffusion equations. A basic consistency and stability analysis is proposed. Numerical examples validate the proposed method and display its potential for consistent semi-Lagrangian discretization of advection diffusion and nonlinear parabolic problems.

  5. FORMED: Bringing Formal Methods to the Engineering Desktop

    Science.gov (United States)

    2016-02-01

    FORMED: Bringing Formal Methods to the Engineering Desktop. BAE Systems, February 2016, final technical report, approved for public release. This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government's endorsement. [Recoverable report-form fields: contract number FA8750-14-C-0024, grant number N/A, program element number 63781D.]

  6. Forms and Methods of Agricultural Sector Innovative Activity Improvement

    Directory of Open Access Journals (Sweden)

    Aisha S. Ablyaeva

    2013-01-01

    Full Text Available The article is focused on basic forms and methods to improve the efficiency of innovative activity in the agricultural sector of Ukraine. It was determined that the development of agriculture in Ukraine is affected by a number of factors that must be considered to design innovative models of entrepreneurship development and ways to improve the efficiency of innovative entrepreneurship activity.

  7. The Case Method as a Form of Communication.

    Science.gov (United States)

    Kingsley, Lawrence

    1982-01-01

    Questions the wisdom of obscurantism as a basis for case writing. Contends that in its present state the case method, for most students, is an inefficient way of learning. Calls for a consensus that cases should be as well-written as other forms of scholarship. (PD)

  8. Electrodynamics, Differential Forms and the Method of Images

    Science.gov (United States)

    Low, Robert J.

    2011-01-01

    This paper gives a brief description of how Maxwell's equations are expressed in the language of differential forms and use this to provide an elegant demonstration of how the method of images (well known in electrostatics) also works for electrodynamics in the presence of an infinite plane conducting boundary. The paper should be accessible to an…

  9. Delivery Device and Method for Forming the Same

    Science.gov (United States)

    Ma, Peter X. (Inventor); Liu, Xiaohua (Inventor); McCauley, Laurie (Inventor)

    2014-01-01

    A delivery device includes a hollow container, and a plurality of biodegradable and/or erodible polymeric layers established in the container. A layer including a predetermined substance is established between each of the plurality of polymeric layers, whereby degradation of the polymeric layer and release of the predetermined substance occur intermittently. Methods for forming the device are also disclosed herein.

  10. Generation of Strategies for Environmental Deception in Two-Player Normal-Form Games

    Science.gov (United States)

    2015-06-18

    found in the literature is presented by Kohlberg and Mertens [23]. A stable equilibrium by their definition is an equilibrium in an extensive-form...the equilibrium in this state provides them with an increased payoff. While interesting, Kohlberg and Mertens' definition of equilibrium...stability used by Kohlberg and Mertens. Arsham's work focuses on determining the amount by which a mixed-strategy Nash equilibrium's payoff values can

  11. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  12. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    OpenAIRE

    Alexander I. Zhdanov; Ekaterina Yu. Bogdanova

    2016-01-01

    This article focuses on a modification of the block variant of the Gauss-Seidel method for normal systems of equations, which is a sufficiently effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate o...
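
    A plain block Gauss-Seidel sweep on the normal equations, to fix ideas; this is the unmodified baseline (and inherits the squared condition number the article warns about), with the block partition supplied by the caller.

        import numpy as np

        def block_gauss_seidel_normal(A, b, blocks, iters=100):
            # solve A^T A x = A^T b for an overdetermined system A x ~ b;
            # blocks: list of integer index arrays partitioning the unknowns
            AtA, Atb = A.T @ A, A.T @ b
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                for idx in blocks:
                    Aii = AtA[np.ix_(idx, idx)]
                    r = Atb[idx] - AtA[idx] @ x + Aii @ x[idx]   # drop own block from residual
                    x[idx] = np.linalg.solve(Aii, r)
            return x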

  13. Method of predicting surface deformation in the form of sinkholes

    Energy Technology Data Exchange (ETDEWEB)

    Chudek, M.; Arkuszewski, J.

    1980-06-01

    Proposes a method for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences, and the size of individual funnels. The following factors which influence sudden subsidence of the surface in the form of funnels are analyzed: geologic structure of the strata between mining workings and the surface, mining depth, time factor, and geologic dislocations. Sudden surface subsidence is observed only in the case of workings situated up to a few dozen meters from the surface. Use of the proposed method is explained with some examples. It is suggested that the method produces correct results which can be used in coal mining and in ore mining. (1 ref.) (In Polish)

  14. Study on electric parameters of wild and cultivated cotton forms being in normal state and irradiated

    International Nuclear Information System (INIS)

    Nazirov, N.N.; Kamalov, N.; Norbaev, N.

    1978-01-01

    The radiation effect on the electrical conductivity of tissues under alternating current, the electrical capacitance, and the cell impedance has been studied. Gamma irradiation of seedlings results in definite changes in the electrical parameters of cells (electrical conductivity, capacitance, impedance). It is shown that especially strong changes were revealed during gamma irradiation of the radiosensitive wild form of cotton plants. The deviation of cell electrical parameters from the norm depends on the disruption of the evolutionarily established ionic heterogeneity and of the state of the cell colloid system, which results in changes in their structure and metabolism.

  15. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line base normalization. In order to demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical methods, the statistical correlation factor R2, and the average deviation. The results show that the CCSNM was the best among the normalization methods examined for estimating the effect of the trainer.
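
    For reference, the baseline normalizations the CCSNM was compared against, and the two error metrics, are one-liners; the CCSNM itself (built on inter-feature correlation and covariance) is not reproduced here.

        import numpy as np

        def min_max(x):   return (x - x.min()) / (x.max() - x.min())
        def z_score(x):   return (x - x.mean()) / x.std(ddof=1)
        def decimal_scaling(x):
            j = np.ceil(np.log10(np.abs(x).max()))   # smallest j with max|x| / 10^j < 1
            return x / 10.0 ** j

        def mse(y, yhat): return float(np.mean((y - yhat) ** 2))
        def mae(y, yhat): return float(np.mean(np.abs(y - yhat)))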

  16. Bilinear nodal transport method in weighted diamond difference form

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Nodal methods have been developed and implemented for the numerical solution of the discrete ordinates neutron transport equation. Numerical testing of these methods and comparison of their results to those obtained by conventional methods have established the high accuracy of nodal methods. Furthermore, it has been suggested that the linear-linear approximation is the most computationally efficient, practical nodal approximation. Indeed, this claim has been substantiated by comparing the accuracy in the solution, and the CPU time required to achieve convergence to that solution by several nodal approximations, as well as the diamond difference scheme. Two types of linear-linear nodal methods have been developed in the literature: analytic linear-linear (NLL) methods, in which the transverse-leakage terms are derived analytically, and approximate linear-linear (PLL) methods, in which these terms are approximated. In spite of their higher accuracy, NLL methods result in very complicated discrete-variable equations that exhibit a high degree of coupling, thus requiring special solution algorithms. On the other hand, the sacrificed accuracy in PLL methods is compensated for by the simple discrete-variable equations and diamond-difference-like solution algorithm. In this paper the authors outline the development of an NLL nodal method, the bilinear method, which can be written in a weighted diamond difference form with one spatial weight per dimension that is analytically derived rather than preassigned in an ad hoc fashion
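
    A one-dimensional analogue makes the weighted-diamond-difference closure concrete; here the weight w is a free parameter, whereas the bilinear method derives its spatial weights analytically (and works in more dimensions, with the transverse-leakage terms omitted here).

        import numpy as np

        def wdd_sweep_1d(sigma_t, q, mu, w, dx, psi_in=0.0):
            # one transport sweep for direction cosine mu > 0 with the closure
            # psi_bar = w*psi_out + (1-w)*psi_in; w = 1/2 is classical diamond difference
            n = len(q)
            psi_bar = np.empty(n)
            for i in range(n):
                a = mu / dx[i]
                psi_out = (q[i] + (a - sigma_t[i] * (1.0 - w)) * psi_in) \
                          / (a + sigma_t[i] * w)
                psi_bar[i] = w * psi_out + (1.0 - w) * psi_in
                psi_in = psi_out                     # outflow feeds the next cell
            return psi_bar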

  17. Method for forming H2-permselective oxide membranes

    Science.gov (United States)

    Gavalas, G.R.; Nam, S.W.; Tsapatsis, M.; Kim, S.

    1995-09-26

    Methods are disclosed for forming permselective oxide membranes that are highly selective to permeation of hydrogen by chemical deposition of reactants in the pores of porous tubes, such as Vycor™ glass or Al₂O₃ tubes. The porous tubes have pores extending through the tube wall. The process involves forming a stream containing a first reactant of the formula RXₙ, wherein R is silicon, titanium, boron or aluminum, X is chlorine, bromine or iodine, and n is a number equal to the valence of R; and forming another stream containing water vapor as the second reactant. Both reactant streams are passed along either the outside or the inside surface of a porous tube, and the streams react in the pores of the porous tube to form a nonporous layer of R-oxide in the pores. The membranes are formed by the hydrolysis of the respective halides. In another embodiment, the first reactant stream contains a first reactant having the formula SiHₙCl₄₋ₙ where n is 1, 2 or 3; and the second reactant stream contains water vapor and oxygen. In still another embodiment, the first reactant stream contains a first reactant selected from the group consisting of Cl₃SiOSiCl₃, Cl₃SiOSiCl₂OSiCl₃, and mixtures thereof, and the second reactant stream contains water vapor. In still another embodiment, membrane formation is carried out by an alternating flow deposition method. This involves a sequence of cycles, each cycle comprising introduction of the halide-containing stream and allowance of a specific time for reaction, followed by purge and flow of the water-vapor-containing stream for a specific length of time. In all embodiments the nonporous layers formed are selectively permeable to hydrogen. 11 figs.

  18. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)

  19. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than Clements' method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness ≤ 1.5).
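
    The conventional indices and their percentile-based surrogate differ only in how the spread is estimated; a sketch, using empirical percentiles in place of the Pearson-curve percentiles of Clements' original method.

        import numpy as np

        def cp_cpk(x, lsl, usl):
            # conventional indices; meaningful only for (approximately) normal output
            s, m = x.std(ddof=1), x.mean()
            return (usl - lsl) / (6 * s), min(usl - m, m - lsl) / (3 * s)

        def cp_cpk_percentile(x, lsl, usl):
            # percentile method: replace the 6-sigma spread with the
            # 0.135th-99.865th percentile range of the observed data
            p_lo, p_med, p_hi = np.percentile(x, [0.135, 50.0, 99.865])
            cp = (usl - lsl) / (p_hi - p_lo)
            cpk = min((usl - p_med) / (p_hi - p_med), (p_med - lsl) / (p_med - p_lo))
            return cp, cpk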

  20. Analysis of Voltage Forming Methods for Multiphase Inverters

    Directory of Open Access Journals (Sweden)

    Tadas Lipinskis

    2013-05-01

    Full Text Available The article discusses advantages of the multiphase AC induction motor over motors with three or fewer phases. It presents possible stator winding configurations for a multiphase induction motor. Various fault control strategies for the phases feeding the motor were reviewed. The authors propose a method for quality evaluation of the voltage-forming algorithm in the inverter. Simulation of a six-phase voltage source inverter, in which voltage is formed using a simple SPWM control algorithm, was performed in Matlab Simulink. Simulation results were evaluated using the proposed method. The inverter's power stage was powered by a 400 V DC source. The spectrum of the output currents was analysed, and the magnitude of the main frequency component was at least 12 times greater than the next biggest-magnitude component. The value of the rectified inverter voltage was 373 V. Article in Lithuanian.
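
    The quality figure used above (the fundamental at least 12 times the next-largest spectral component) can be checked directly from a simulated phase current; the sampling rate, fundamental frequency, and windowing here are assumptions.

        import numpy as np

        def fundamental_dominance(current, fs, f1=50.0):
            # ratio of the fundamental's magnitude to the largest other component
            spec = np.abs(np.fft.rfft(current * np.hanning(len(current))))
            freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)
            k1 = int(np.argmin(np.abs(freqs - f1)))   # bin of the fundamental
            others = np.delete(spec[1:], k1 - 1)      # drop DC and the fundamental
            return spec[k1] / others.max()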

  1. Article, component, and method of forming an article

    Science.gov (United States)

    Lacy, Benjamin Paul; Itzel, Gary Michael; Kottilingam, Srikanth Chandrudu; Dutta, Sandip; Schick, David Edward

    2018-05-22

    An article and method of forming an article are provided. The article includes a body portion separating an inner region and an outer region, an aperture in the body portion, the aperture fluidly connecting the inner region to the outer region, and a conduit extending from an outer surface of the body portion at the aperture and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The method includes providing a body portion separating an inner region and an outer region, providing an aperture in the body portion, and forming a conduit over the aperture, the conduit extending from an outer surface of the body portion and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The article is arranged and disposed for insertion within a hot gas path component.

  2. Numerical Methods for Plate Forming by Line Heating

    DEFF Research Database (Denmark)

    Clausen, Henrik Bisgaard

    2000-01-01

    Line heating is the process of forming originally flat plates into a desired shape by means of heat treatment. Parameter studies are carried out on a finite element model to provide knowledge of how the process behaves with varying heating conditions. For verification purposes, experiments are carried out; one set of experiments investigates the actual heat flux distribution from a gas torch and another verifies the validity of the FE calculations. Finally, a method to predict the heating pattern is described.

  3. Methods of forming aluminum oxynitride-comprising bodies, including methods of forming a sheet of transparent armor

    Science.gov (United States)

    Chu, Henry Shiu-Hung [Idaho Falls, ID; Lillo, Thomas Martin [Idaho Falls, ID

    2008-12-02

    The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al₂O₃, "B" is AlN, and "C" is a total of one or more of B₂O₃, SiO₂, Si-Al-O-N, and TiO₂. The mixture is sintered at a temperature of at least 1,600 °C at a pressure of no greater than 500 psia, effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% of maximum theoretical density.

  4. Post-UV colony-forming ability of normal fibroblast strains and of the xeroderma pigmentosum group G strain

    International Nuclear Information System (INIS)

    Barrett, S.F.; Tarone, R.E.; Moshell, A.N.; Ganges, M.B.; Robbins, J.H.

    1981-01-01

    In xeroderma pigmentosum, an inherited disorder of defective DNA repair, post-uv colony-forming ability of fibroblasts from patients in complementation groups A through F correlates with the patients' neurological status. The first xeroderma pigmentosum patient assigned to the recently discovered group G had the neurological abnormalities of XP. Researchers have determined the post-uv colony-forming ability of cultured fibroblasts from this patient and from 5 more control donors. Log-phase fibroblasts were irradiated with 254 nm uv light from a germicidal lamp, trypsinized, and replated at known densities. After 2 to 4 weeks' incubation the cells were fixed, stained and scored for colony formation. The strains' post-uv colony-forming ability curves were obtained by plotting the log of the percent remaining post-uv colony-forming ability as a function of the uv dose. The post-uv colony-forming ability of 2 of the 5 new normal strains was in the previously defined control donor zone, but that of the other 3 extended down to the level of the most resistant xeroderma pigmentosum strain. The post-uv colony-forming ability curve of the group G fibroblasts was not significantly different from the curves of the group D fibroblast strains from patients with clinical histories similar to that of the group G patient

  5. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  6. Method for forming thermally stable nanoparticles on supports

    Science.gov (United States)

    Roldan Cuenya, Beatriz; Naitabdi, Ahmed R.; Behafarid, Farzad

    2013-08-20

    An inverse micelle-based method for forming nanoparticles on supports includes dissolving a polymeric material in a solvent to provide a micelle solution. A nanoparticle source is dissolved in the micelle solution. A plurality of micelles having a nanoparticle in their core and an outer polymeric coating layer are formed in the micelle solution. The micelles are applied to a support. The polymeric coating layer is then removed from the micelles to expose the nanoparticles. A supported catalyst includes a nanocrystalline powder, thin film, or single crystal support. Metal nanoparticles having a median size from 0.5 nm to 25 nm and a size distribution with a standard deviation ≤ 0.1 of their median size are on or embedded in the support. The plurality of metal nanoparticles are dispersed and in a periodic arrangement. The metal nanoparticles maintain their periodic arrangement and size distribution following heat treatments of at least 1,000 °C.

  7. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

    Recent studies have suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low-magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM) and white matter (WM) reference regions and Yakushev normalization. RESULTS: Compared to GM normalization, the WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group, with extensive involvement of frontal and parieto-temporal-occipital cortices and several subcortical structures. Furthermore...

  8. NNWSI waste form test method for unsaturated disposal conditions

    International Nuclear Information System (INIS)

    Bates, J.K.; Gerding, T.J.

    1985-03-01

    A test method has been developed to measure the release of radionuclides from the waste package under simulated NNWSI repository conditions, and to provide information concerning materials interactions that may occur in the repository. Data are presented from Unsaturated testing of simulated Savannah River Laboratory 165 glass completed through 26 weeks. The relationship between these results and those from parametric and analog testing are described. The data indicate that the waste form test is capable of producing consistent, reproducible results that will be useful in evaluating the role of the waste package in the long-term performance of the repository. 6 refs., 7 figs., 5 tabs

  9. Method of forming capsules containing a precise amount of material

    Science.gov (United States)

    Grossman, M.W.; George, W.A.; Maya, J.

    1986-06-24

    A method of forming a sealed capsule containing a submilligram quantity of mercury or the like, the capsule being constructed from a hollow glass tube, by placing a globule or droplet of the mercury in the tube. The tube is then evacuated and sealed and is subsequently heated so as to vaporize the mercury and fill the tube therewith. The tube is then separated into separate sealed capsules by heating spaced locations along the tube with a coiled heating wire means to cause collapse at those spaced locations and thus enable separation of the tube into said capsules. 7 figs.

  10. Method of forming a ceramic to ceramic joint

    Science.gov (United States)

    Cutler, Raymond Ashton; Hutchings, Kent Neal; Kleinlein, Brian Paul; Carolan, Michael Francis

    2010-04-13

    A method of joining at least two sintered bodies to form a composite structure includes: providing a joint material between joining surfaces of first and second sintered bodies; applying pressure from 1 kPa to less than 5 MPa to provide an assembly; heating the assembly to a conforming temperature sufficient to allow the joint material to conform to the joining surfaces; and further heating the assembly to a joining temperature below a minimum sintering temperature of the first and second sintered bodies. The joint material includes organic component(s) and ceramic particles. The ceramic particles constitute 40-75 vol. % of the joint material, and include at least one element of the first and/or second sintered bodies. Composite structures produced by the method are also disclosed.

  11. Worthwhile optical method for free-form mirrors qualification

    Science.gov (United States)

    Sironi, G.; Canestrari, R.; Toso, G.; Pareschi, G.

    2013-09-01

    We present an optical method for free-form mirror qualification developed by the Italian National Institute for Astrophysics (INAF) in the context of the ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) Project, which includes, among its items, the design, development and installation of a dual-mirror telescope prototype for the Cherenkov Telescope Array (CTA) observatory. The primary mirror panels of the telescope prototype are free-form concave mirrors, with an accuracy of a few microns required on the shape error. The developed technique is based on the synergy between a Ronchi-like optical test performed on the reflecting surface and the image, obtained by means of the proprietary TraceIT ray-tracing code, that a perfect optic would generate in the same configuration. This deflectometry test allows the reconstruction of the slope error map, which the TraceIT code can process to evaluate the measured mirror's optical performance at the telescope focus. The advantage of the proposed method is that it replaces the use of a 3D coordinate measuring machine, reducing production time and costs and offering the possibility to evaluate the mirror image quality at the focus on-site. In this paper we report the measuring concept and compare the obtained results to similar ones obtained by processing the shape error acquired with a 3D coordinate measuring machine.

  12. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game

    Directory of Open Access Journals (Sweden)

    Adam Karbowski

    2017-09-01

    Full Text Available The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants’ attributions of susceptibility to errors or non-self-interested motivation to the opponents.

  13. Imagine-Self Perspective-Taking and Rational Self-Interested Behavior in a Simple Experimental Normal-Form Game.

    Science.gov (United States)

    Karbowski, Adam; Ramsza, Michał

    2017-01-01

    The purpose of this study is to explore the link between imagine-self perspective-taking and rational self-interested behavior in experimental normal-form games. Drawing on the concept of sympathy developed by Adam Smith and further literature on perspective-taking in games, we hypothesize that introduction of imagine-self perspective-taking by decision-makers promotes rational self-interested behavior in a simple experimental normal-form game. In our study, we examined behavior of 404 undergraduate students in the two-person game, in which the participant can suffer a monetary loss only if she plays her Nash equilibrium strategy and the opponent plays her dominated strategy. Results suggest that the threat of suffering monetary losses effectively discourages the participants from choosing Nash equilibrium strategy. In general, players may take into account that opponents choose dominated strategies due to specific not self-interested motivations or errors. However, adopting imagine-self perspective by the participants leads to more Nash equilibrium choices, perhaps by alleviating participants' attributions of susceptibility to errors or non-self-interested motivation to the opponents.

  14. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses - Review

    International Nuclear Information System (INIS)

    Skrzypek, G.; Sadler, R.; Paul, D.; Forizs, I.

    2011-01-01

    A stable isotope analyst has to make a number of important decisions regarding how to best determine the 'true' stable isotope composition of analysed samples in reference to an international scale. It has to be decided which reference materials should be used, how many reference materials and how many repetitions of each standard are most appropriate for a desired level of precision, and what normalization procedure should be selected. In this paper we summarise what is known about the propagation of uncertainties associated with normalization procedures and with the reference materials used as anchors for the determination of 'true' values for δ13C and δ18O. Normalization methods: Several normalization methods transforming the 'raw' value obtained from mass spectrometers to one of the internationally recognized scales have been developed. However, as summarised by Paul et al., different normalization transforms alone may lead to inconsistencies between laboratories. The most common normalization procedures are: single-point anchoring (versus working gas and certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multipoint linear normalization methods. The accuracy of these normalization methods has been compared by Paul et al. using analytical laboratory data; single-point anchoring and normalization versus tank calibrations resulted in the largest normalization errors, which also exceed the analytical uncertainty recommended for δ13C. The normalization error depends greatly on the relative difference between the stable isotope composition of the reference material and the sample. On the other hand, normalization methods using two or more certified reference standards produce a smaller normalization error, if the reference materials bracket the whole range of isotope compositions of the analysed samples.
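
    The two-point linear normalization mentioned above reduces to a straight-line mapping between the measured and certified values of two reference materials. A minimal sketch in Python (the function and the numbers are illustrative, not taken from the review):

    ```python
    def two_point_normalize(delta_raw, raw1, true1, raw2, true2):
        """Map raw delta values onto the international scale using two
        certified reference materials; (raw1, true1) and (raw2, true2)
        are their measured and certified isotope compositions."""
        slope = (true2 - true1) / (raw2 - raw1)
        return true1 + slope * (delta_raw - raw1)

    # Illustrative delta-13C anchoring; the normalization error grows when
    # samples fall outside the range bracketed by the two standards.
    print(two_point_normalize(-25.3, raw1=-46.1, true1=-46.6,
                              raw2=-11.8, true2=-11.4))
    ```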

  15. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    Directory of Open Access Journals (Sweden)

    Alexander I. Zhdanov

    2016-12-01

    Full Text Available This article focuses on a modification of the block Gauss–Seidel method for normal systems of equations, an effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This has a negative impact on the rate of convergence of iterative methods based on normal systems of equations. To increase the speed of convergence of such methods on ill-conditioned problems, various preconditioners are currently used to reduce the condition number of the original system of equations. However, no universal preconditioner exists for all applications. One effective approach to improving the convergence speed of the iterative Gauss–Seidel method for normal systems of equations is to use its block version. The disadvantage of the block Gauss–Seidel method is that a pseudoinverse matrix must be calculated at each iteration, and finding the pseudoinverse is a computationally demanding procedure. In this paper, we propose a procedure that replaces the pseudo-solution subproblems with normal systems of equations solved by the Cholesky method. The normal equations arising at each iteration of the Gauss–Seidel method have a relatively low dimension compared to the original system. The results of numerical experiments demonstrating the effectiveness of the proposed approach are given.
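
    A minimal numpy sketch of the idea, assuming a column-block splitting of an overdetermined least-squares system (the function and block choice are illustrative, not the authors' code): each block correction solves a small normal system by Cholesky factorization instead of forming a pseudoinverse.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def block_gauss_seidel_normal(A, b, blocks, n_iter=50):
        """Block Gauss-Seidel sweeps on the normal equations A^T A x = A^T b.
        Each block subproblem is itself a low-dimensional normal system,
        solved here by Cholesky factorization."""
        x = np.zeros(A.shape[1])
        r = b - A @ x                       # residual of the least-squares system
        for _ in range(n_iter):
            for idx in blocks:              # idx: column indices of one block
                Ai = A[:, idx]
                c, low = cho_factor(Ai.T @ Ai)      # small SPD normal matrix
                dx = cho_solve((c, low), Ai.T @ r)  # block correction
                x[idx] += dx
                r -= Ai @ dx                # keep the residual consistent
        return x

    # Illustrative overdetermined system split into two column blocks
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 6))
    b = rng.standard_normal(100)
    x = block_gauss_seidel_normal(A, b, [np.arange(0, 3), np.arange(3, 6)])
    print(np.linalg.norm(A.T @ (b - A @ x)))  # near zero at convergence
    ```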

  17. Combustible structural composites and methods of forming combustible structural composites

    Science.gov (United States)

    Daniels, Michael A.; Heaps, Ronald J.; Steffler, Eric D.; Swank, W. David

    2013-04-02

    Combustible structural composites and methods of forming same are disclosed. In an embodiment, a combustible structural composite includes combustible material comprising a fuel metal and a metal oxide. The fuel metal is present in the combustible material at a weight ratio from 1:9 to 1:1 of the fuel metal to the metal oxide. The fuel metal and the metal oxide are capable of exothermically reacting upon application of energy at or above a threshold value to support self-sustaining combustion of the combustible material within the combustible structural composite. Structural-reinforcing fibers are present in the composite at a weight ratio from 1:20 to 10:1 of the structural-reinforcing fibers to the combustible material. Other embodiments and aspects are disclosed.

  18. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim to remove unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by the l2 norm and by retention-time-region-specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.

  19. Ophthalmic Drug Dosage Forms: Characterisation and Research Methods

    OpenAIRE

    Baranowski, Przemysław; Karolewicz, Bożena; Gajda, Maciej; Pluta, Janusz

    2014-01-01

    This paper describes hitherto developed drug forms for topical ocular administration, that is, eye drops, ointments, in situ gels, inserts, multicompartment drug delivery systems, and ophthalmic drug forms with bioadhesive properties. Heretofore, many studies have demonstrated that new and more complex ophthalmic drug forms exhibit advantage over traditional ones and are able to increase the bioavailability of the active substance by, among others, reducing the susceptibility of drug forms to...

  20. Method of forming composite fiber blends and molding same

    Science.gov (United States)

    McMahon, Paul E. (Inventor); Chung, Tai-Shung (Inventor)

    1989-01-01

    The instant invention involves a process used in preparing fibrous tows which may be formed into polymeric plastic composites. The process involves the steps of (a) forming a tow of strong filamentary materials; (b) forming a thermoplastic polymeric fiber; (c) intermixing the two tows; and (d) withdrawing the intermixed tow for further use.

  1. Method of forming buried oxide layers in silicon

    Science.gov (United States)

    Sadana, Devendra Kumar; Holland, Orin Wayne

    2000-01-01

    A process for forming silicon-on-insulator is described, incorporating the steps of ion implantation of oxygen into a silicon substrate at elevated temperature, ion implantation of oxygen at a temperature below 200 °C at a lower dose to form an amorphous silicon layer, and annealing steps to form a mixture of defective single crystal silicon and polycrystalline silicon, or polycrystalline silicon alone, and then silicon oxide from the amorphous silicon layer, producing a continuous silicon oxide layer below the surface of the silicon substrate and an isolated superficial layer of silicon. The invention overcomes the problem of buried isolated islands of silicon oxide forming a discontinuous buried oxide layer.

  2. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    Science.gov (United States)

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect biological conclusions.
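
    For illustration, the recommended RLE method computes per-sample size factors as median ratios to a geometric-mean pseudo-reference. A minimal sketch of this median-of-ratios idea (not the implementation evaluated in the paper):

    ```python
    import numpy as np

    def rle_size_factors(counts):
        """RLE-style size factors: for each sample, the median ratio of its
        counts to a pseudo-reference built from per-gene geometric means.
        counts: genes x samples array of raw counts."""
        positive = np.all(counts > 0, axis=1)          # genes seen in every sample
        logc = np.log(counts[positive])
        log_ref = logc.mean(axis=1, keepdims=True)     # geometric mean, log scale
        return np.exp(np.median(logc - log_ref, axis=0))

    counts = np.array([[10, 22, 9], [200, 410, 190], [35, 70, 30], [0, 5, 1]])
    factors = rle_size_factors(counts)
    normalized = counts / factors                      # depth-adjusted abundances
    print(factors)
    ```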

  3. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    OpenAIRE

    Qasrawi, Hisham

    2016-01-01

    The workability-dispersion-cohesion method is a new proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within sta...

  4. Forming MOFs into spheres by use of molecular gastronomy methods.

    Science.gov (United States)

    Spjelkavik, Aud I; Aarti; Divekar, Swapnil; Didriksen, Terje; Blom, Richard

    2014-07-14

    A novel method utilizing hydrocolloids to prepare nicely shaped spheres of metal-organic frameworks (MOFs) has been developed. Microcrystalline CPO-27-Ni particles are dispersed in either alginate or chitosan solutions, which are added dropwise to solutions containing, respectively, either divalent group 2 cations or base that act as gelling agents. Well-shaped spheres are immediately formed, which can be dried into spheres containing mainly MOF (>95 wt %). The spheronizing procedures have been optimized with respect to maximum specific surface area, shape, and particle density of the final sphere. At optimal conditions, well-shaped 2.5-3.5 mm diameter CPO-27-Ni spheres with weight-specific surface areas <10 % lower than the nonformulated CPO-27-Ni precursor, and having sphere densities in the range 0.8 to 0.9 g cm(-3) and particle crushing strengths above 20 N, can be obtained. The spheres are well suited for use in fixed-bed catalytic or adsorption processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. The image reconstruction for ECT is an inverse problem to find the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model is commonly used during the normalization process, which assumes the distribution of materials in parallel. Thus, the normalized capacitance is a linear function of measured capacitance. A recently used model is a series normalization model which results in the normalized capacitance as a nonlinear function of measured capacitance. The newest presented model is based on electrical field centre lines (EFCL), and is a mixture of two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model
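
    For reference, the parallel and series normalization models are commonly written as follows, where $C_m$ is the measured inter-electrode capacitance and $C_l$, $C_h$ are the calibration values for the low- and high-permittivity fillings (notation assumed here, not taken from the paper):

    \[
    \lambda_{\mathrm{parallel}} = \frac{C_m - C_l}{C_h - C_l},
    \qquad
    \lambda_{\mathrm{series}} = \frac{C_m^{-1} - C_l^{-1}}{C_h^{-1} - C_l^{-1}},
    \]

    so the parallel model is linear in $C_m$ while the series model is linear in $1/C_m$; the EFCL-based model described above mixes the two according to the electrical field centre lines.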

  6. A statistical analysis of count normalization methods used in positron-emission tomography

    International Nuclear Information System (INIS)

    Holmes, T.J.; Ficke, D.C.; Snyder, D.L.

    1984-01-01

    As part of the Positron-Emission Tomography (PET) reconstruction process, annihilation counts are normalized for photon absorption, detector efficiency and detector-pair duty-cycle. Several normalization methods of time-of-flight and conventional systems are analyzed mathematically for count bias and variance. The results of the study have some implications on hardware and software complexity and on image noise and distortion

  7. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article, a decoupling algorithm to reduce computing time in finite element analyses of incremental forming processes is investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a finite element model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone with its elastic boundary and the supporting structure is needed after several increments. The presented paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  8. Ophthalmic Drug Dosage Forms: Characterisation and Research Methods

    Directory of Open Access Journals (Sweden)

    Przemysław Baranowski

    2014-01-01

    Full Text Available This paper describes hitherto developed drug forms for topical ocular administration, that is, eye drops, ointments, in situ gels, inserts, multicompartment drug delivery systems, and ophthalmic drug forms with bioadhesive properties. Heretofore, many studies have demonstrated that new and more complex ophthalmic drug forms exhibit advantage over traditional ones and are able to increase the bioavailability of the active substance by, among others, reducing the susceptibility of drug forms to defense mechanisms of the human eye, extending contact time of drug with the cornea, increasing the penetration through the complex anatomical structure of the eye, and providing controlled release of drugs into the eye tissues, which allows reducing the drug application frequency. The rest of the paper describes recommended in vitro and in vivo studies to be performed for various ophthalmic drugs forms in order to assess whether the form is acceptable from the perspective of desired properties and patient’s compliance.

  9. A method and machine for forming pleated and bellow tubes

    International Nuclear Information System (INIS)

    Banks, J.W.

    1975-01-01

    In a machine, the rollers outside the rough tube are rigidly supported for assuring the accurate forming of each turn of the pleated tube, the latter being position-indexed independently of the already formed turns. An inner roller is supported by a device for adjusting and indexing the position thereof on a carriage. The thus obtained tubes are suitable, in particular, for forming expansion sealing joints for power generators or nuclear reactors [fr

  10. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  11. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  12. Trade unions and social movements: methods and forms of interaction

    Directory of Open Access Journals (Sweden)

    O. L. Tupytsia

    2017-07-01

    New typological features of social movements are formed on the basis of new types of social and communication links. The social hierarchy and class antagonisms that provided the basis for the existence and development of trade unions in the past have faded. Therefore, the organizational forms of trade unions as social movements have become more accessible and acceptable to contemporary citizens.

  13. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method in the application of three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking point cloud normal vectors, each determined from the normal of a locally fitted plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum feature.
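
    The normal-vector step can be sketched as a kd-tree neighbourhood search followed by a local plane fit, with the normal taken as the smallest-eigenvalue direction of the neighbourhood covariance. A generic PCA sketch of that step (not the authors' code; parameters are illustrative):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=20):
        """Estimate a unit normal for each point from a local plane fit:
        the normal is the eigenvector of the k-neighbourhood covariance
        with the smallest eigenvalue (classic PCA plane fitting)."""
        tree = cKDTree(points)                 # kd-tree topological relation
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points)
        for i, nb in enumerate(idx):
            q = points[nb] - points[nb].mean(axis=0)
            _, vecs = np.linalg.eigh(q.T @ q)  # eigenvalues in ascending order
            normals[i] = vecs[:, 0]
        return normals

    pts = np.random.rand(500, 3) * [10.0, 10.0, 0.01]  # noisy, nearly flat patch
    print(estimate_normals(pts)[0])            # close to (0, 0, +/-1)
    ```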

  14. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
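
    In practice the comparison comes down to swapping the test statistic. A toy example with distinctly non-normal (exponential) samples, using scipy (data are illustrative only):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Three skewed samples, one with a location shift
    a = rng.exponential(1.0, 40)
    b = rng.exponential(1.0, 40) + 0.5
    c = rng.exponential(1.0, 40)

    print(stats.f_oneway(a, b, c))   # classical one-way ANOVA
    print(stats.kruskal(a, b, c))    # rank-based Kruskal-Wallis test
    ```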

  15. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
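
    A simplified sketch of the two ICHE stages. Here the histogram centroid is approximated by the mean intensity and scikit-image's CLAHE stands in for the authors' modified equalization; both simplifications are assumptions, not the published implementation:

    ```python
    import numpy as np
    from skimage import exposure

    def iche_like(images, target_centroid=0.5):
        """Sketch of intensity centering + histogram equalization: shift
        each image's mean intensity to a common point, then apply
        contrast-limited adaptive histogram equalization (CLAHE).
        'images' are float arrays scaled to [0, 1]."""
        out = []
        for img in images:
            shifted = np.clip(img + (target_centroid - img.mean()), 0.0, 1.0)
            out.append(exposure.equalize_adapthist(shifted, clip_limit=0.01))
        return out

    img = np.random.rand(64, 64)
    print(iche_like([img, img * 0.5])[0].shape)
    ```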

  16. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

  17. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the process of quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
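
    In its simplest form, the model averaging step combines the competing models' predictions with weights derived from their likelihoods given the experimental data (a generic formulation for orientation; the paper's estimator details may differ):

    \[
    \hat{y} \;=\; \sum_{k=1}^{K} w_k\, \hat{y}_k,
    \qquad
    w_k \;=\; \frac{L_k}{\sum_{j=1}^{K} L_j},
    \]

    where $\hat{y}_k$ is the prediction of model $M_k$ and $L_k$ its (maximum) likelihood; the adjustment factor approach instead corrects the best model's prediction by a factor whose spread reflects the disagreement among the competing models.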

  19. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present a non-normal linear stability analysis of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed when α satisfies scheme-dependent bounds. Supported under Award No. URF/1/1394-01.
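
    The pseudospectra themselves can be computed directly from the smallest singular value of zI − A over a grid of complex shifts; the ε-pseudospectrum is the region where that value falls below ε. A generic numpy sketch (matrix and grid are illustrative, not the paper's operators):

    ```python
    import numpy as np

    def pseudospectrum(A, grid):
        """For each complex grid point z, return sigma_min(zI - A); the
        epsilon-pseudospectrum is the sublevel set {z : sigma_min <= eps}."""
        n = A.shape[0]
        return np.array([[np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
                          for z in row] for row in grid])

    # A non-normal (Jordan-like) matrix whose pseudospectra balloon far
    # beyond its eigenvalues, the hallmark of non-normal behaviour.
    A = np.diag(np.full(4, -1.0)) + np.diag(np.full(3, 5.0), k=1)
    x = np.linspace(-3.0, 1.0, 40)
    y = np.linspace(-2.0, 2.0, 40)
    smin = pseudospectrum(A, x[None, :] + 1j * y[:, None])
    print(smin.min())
    ```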

  20. Static and Vibrational Analysis of Partially Composite Beams Using the Weak-Form Quadrature Element Method

    Directory of Open Access Journals (Sweden)

    Zhiqiang Shen

    2012-01-01

    Full Text Available Deformation of partially composite beams under distributed loading and free vibrations of partially composite beams under various boundary conditions are examined in this paper. The weak-form quadrature element method, which is characterized by direct evaluation of the integrals involved in the variational description of a problem, is used. One quadrature element is normally sufficient for a partially composite beam regardless of the magnitude of the shear connection stiffness. The number of integration points in a quadrature element is adjustable in accordance with convergence requirements. Results are compared with those of various finite element formulations. It is shown that the weak-form quadrature element solution for partially composite beams is free of slip locking, and high computational accuracy is achieved with a smaller number of degrees of freedom. In addition, it is found that the longitudinal inertia of motion cannot simply be neglected in the assessment of the dynamic behavior of partially composite beams.

  1. Solitary-wave families of the Ostrovsky equation: An approach via reversible systems theory and normal forms

    International Nuclear Information System (INIS)

    Roy Choudhury, S.

    2007-01-01

    The Ostrovsky equation is an important canonical model for the unidirectional propagation of weakly nonlinear long surface and internal waves in a rotating, inviscid and incompressible fluid. Limited functional analytic results exist for the occurrence of one family of solitary-wave solutions of this equation, as well as their approach to the well-known solitons of the famous Korteweg-de Vries equation in the limit as the rotation becomes vanishingly small. Since solitary-wave solutions often play a central role in the long-time evolution of an initial disturbance, we consider such solutions here (via the normal form approach) within the framework of reversible systems theory. Besides confirming the existence of the known family of solitary waves and its reduction to the KdV limit, we find a second family of multihumped (or N-pulse) solutions, as well as a continuum of delocalized solitary waves (or homoclinics to small-amplitude periodic orbits). On isolated curves in the relevant parameter region, the delocalized waves reduce to genuine embedded solitons. The second and third families of solutions occur in regions of parameter space distinct from the known solitary-wave solutions and are thus entirely new. Directions for future work are also mentioned
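
    For reference, the Ostrovsky equation in one standard form reads

    \[
    \left( u_t + c\,u_x + \alpha\, u u_x + \beta\, u_{xxx} \right)_x = \gamma\, u,
    \]

    where γ measures the background rotation; in the limit γ → 0 the bracketed operator reduces to the Korteweg-de Vries equation, consistent with the KdV limit discussed above. (Symbol conventions vary between authors.)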

  2. An approach to normal forms of Kuramoto model with distributed delays and the effect of minimal delay

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Ben, E-mail: niubenhit@163.com [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Guo, Yuxiao [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Jiang, Weihua [Department of Mathematics, Harbin Institute of Technology, Harbin 150001 (China)

    2015-09-25

    Heterogeneous delays with a positive lower bound (gap) are taken into consideration in the Kuramoto model. On the Ott–Antonsen manifold, the dynamical transition from incoherence to coherence is mediated by Hopf bifurcation. We establish a perturbation technique on the complex domain, by which universal normal forms, stability and criticality of the Hopf bifurcation are obtained. Theoretically, a hysteresis loop is found near the subcritically bifurcated coherent state. With respect to a Gamma-distributed delay with fixed mean and variance, we find that a large gap decreases the Hopf bifurcation value, induces supercritical bifurcations, avoids the hysteresis loop and significantly increases the number of coexisting coherent states. The effect of the gap is finally interpreted from the viewpoint of excess kurtosis of the Gamma distribution. - Highlights: • Heterogeneously delay-coupled Kuramoto model with minimal delay is considered. • Perturbation technique on complex domain is established for bifurcation analysis. • Hysteresis phenomenon is investigated in a theoretical way. • The effect of excess kurtosis of distributed delays is discussed.
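
    For orientation, a delay-distributed Kuramoto model of the kind studied here can be written as (a generic form; the paper's scalings may differ):

    \[
    \dot{\theta}_i(t) = \omega_i + \frac{K}{N} \sum_{j=1}^{N}
    \int_{\tau_0}^{\infty} g(\tau)\,
    \sin\!\big(\theta_j(t-\tau) - \theta_i(t)\big)\, \mathrm{d}\tau,
    \]

    with g(τ) a delay density (here Gamma) supported above the minimal delay (gap) τ₀ > 0, whose size drives the sub- to supercritical transition described in the highlights.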

  3. The Impact of Normalization Methods on RNA-Seq Data Analysis

    Science.gov (United States)

    Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.

    2015-01-01

    High-throughput sequencing technologies, such as the Illumina HiSeq, are powerful new tools for investigating a wide range of biological and medical problems. The massive and complex data sets produced by the sequencers create a need for the development of statistical and computational methods that can tackle the analysis and management of data. Data normalization is one of the most crucial steps of data processing and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors, as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014

  4. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    Science.gov (United States)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflectance distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consists of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.

  5. A System of Test Methods for Sheet Metal Forming Tribology

    DEFF Research Database (Denmark)

    Bay, Niels; Olsson, David Dam; Andreasen, Jan Lasson

    2007-01-01

    Sheet metal forming of tribologically difficult materials such as stainless steel, Al-alloys and Ti-alloys, or forming in tribologically difficult operations like ironing, punching or deep drawing of thick plate, often requires the use of environmentally hazardous lubricants such as chlorinated paraffin oils in order to avoid galling. The present paper describes systematic research in the development of new, environmentally harmless lubricants, focusing on the lubricant testing aspects. A system of laboratory tests has been developed to study lubricant performance under the very varied conditions appearing in different sheet forming operations such as stamping, deep drawing, ironing and punching. The laboratory tests have been especially designed to model the conditions in industrial production.

  6. Algebraic method for analysis of nonlinear systems with a normal matrix

    International Nuclear Information System (INIS)

    Konyaev, Yu.A.; Salimova, A.F.

    2014-01-01

    A promising method has been proposed for analyzing a class of quasilinear nonautonomous systems of differential equations whose matrix can be represented as a sum of nonlinear normal matrices, which makes it possible to analyze stability without using the Lyapunov functions [ru

  7. Method of normal coordinates in the formulation of a system with dissipation: The harmonic oscillator

    International Nuclear Information System (INIS)

    Mshelia, E.D.

    1994-07-01

    The method of normal coordinates of the theory of vibrations is used in decoupling the motion of n oscillators (1 ≤ n ≤4) representing intrinsic degrees of freedom coupled to collective motion in a quantum mechanical model that allows the determination of the probability for energy transfer from collective to intrinsic excitations in a dissipative system. (author). 21 refs

  8. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution.
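
    The up-and-down rule itself is simple to state: lower the stress level after a breakdown, raise it after a withstand. A small simulation of the test sequence (not the maximum-likelihood estimation studied in the paper; the response curve and parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def up_and_down(n_trials, start, step, breakdown_prob):
        """Simulate an up-and-down (staircase) sequence: step down after a
        breakdown, step up after a withstand. breakdown_prob(level) is the
        true response curve whose median the method estimates."""
        level, levels, responses = start, [], []
        for _ in range(n_trials):
            hit = rng.random() < breakdown_prob(level)
            levels.append(level)
            responses.append(hit)
            level += -step if hit else step
        return np.array(levels), np.array(responses)

    # Logistic response curve with median 10.0 (illustrative)
    curve = lambda v: 1.0 / (1.0 + np.exp(-(v - 10.0) / 0.8))
    levels, hits = up_and_down(200, start=8.0, step=0.5, breakdown_prob=curve)
    print(levels.mean())   # crude estimate of the 50% breakdown level
    ```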

  9. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  10. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either the spatial-dependent dye bias (referred to below as the spatial effect) or the intensity-dependent dye bias (the intensity effect) moderately reduce LOOCV classification errors, whereas double-bias-removal techniques which remove both the spatial and intensity effects reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using the LOOCV error of k-NNs as the evaluation criterion, these three double-bias-removal strategies emerged as the most effective.
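
    The evaluation end-point is straightforward to reproduce with scikit-learn; here the iris data stands in for a normalized expression matrix (illustrative only):

    ```python
    from sklearn.datasets import load_iris
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # LOOCV error of a k-NN classifier: the quantitative end-point used to
    # rank normalization strategies (lower error = better normalization).
    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y,
                             cv=LeaveOneOut())
    print("LOOCV error rate:", 1.0 - scores.mean())
    ```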

  11. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    Science.gov (United States)

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted with timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization. There is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalizing method during gait compared with the conventional MVIC method. Lower limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7±6.2 years, BMI 22.7±3.3 kg m⁻²), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with an EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method with no special equipment needed and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
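
    Whichever reference task is used, the normalization itself is a division by the reference amplitude; a minimal sketch (array names and data are illustrative):

    ```python
    import numpy as np

    def normalize_emg(gait_envelope, reference_trials):
        """Express a gait EMG envelope as a percentage of the mean amplitude
        of repeated reference contractions (MVIC or the isoMMT3 hold)."""
        reference = np.mean([np.mean(np.abs(t)) for t in reference_trials])
        return 100.0 * np.asarray(gait_envelope) / reference

    # Three repeated isoMMT3 recordings and one gait envelope (synthetic)
    rng = np.random.default_rng(3)
    trials = [np.abs(rng.normal(0.4, 0.05, 2000)) for _ in range(3)]
    gait = np.abs(rng.normal(0.2, 0.05, 1000))
    print(normalize_emg(gait, trials).mean())  # mean gait activity, % of reference
    ```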

  12. Emission computer tomographic orthopan display of the jaws - method and normal values

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.; Biersack, H.J.; Wahl, G.

    1990-01-01

    A tomoscintigraphic method is described to create orthopan-like projections of the jaws from SPECT bone scans using cylinder projection. On the basis of this projection a numerical analysis of the dental regions is performed in the same computer code. For each dental region the activity relative to the contralateral region and relative to the average activity of the corresponding jaw is calculated. Using this method, a set of normal activity relations has been established by investigation of 24 patients. (orig.) [de

  13. Friction stir method for forming structures and materials

    Science.gov (United States)

    Feng, Zhili; David, Stan A.; Frederick, David Alan

    2011-11-22

    Processes for forming an enhanced material or structure are disclosed. The structure typically includes a preform that has a first common surface and a recess below the first common surface. A filler is added to the recess and seams are friction stir welded, and materials may be stir mixed.

  14. Methods of Forming Professional Competence of Students as Future Teachers

    Science.gov (United States)

    Omarov, Yessen B.; Toktarbayev, Darkhan Gabdyl-Samatovich; Rybin, Igor Vyacheslavovich; Saliyevaa, Aigul Zhanayevna; Zhumabekova, Fatima Niyazbekovna; Hamzina, Sholpan; Baitlessova, Nursulu; Sakenov, Janat

    2016-01-01

    The article presents an analysis of the problem of professional competence; a methodological basis of forming professional competence of college students as future teachers is established. The essence of professional competence is defined. The structure has been experimentally proved and developed; the contents, criteria and levels of professional…

  15. Method for forming microspheres for encapsulation of nuclear waste

    Science.gov (United States)

    Angelini, Peter; Caputo, Anthony J.; Hutchens, Richard E.; Lackey, Walter J.; Stinton, David P.

    1984-01-01

    Microspheres for nuclear waste storage are formed by gelling droplets containing the waste in a gelation fluid, transferring the gelled droplets to a furnace without the washing step previously used, and heating the unwashed gelled droplets in the furnace under temperature or humidity conditions that result in a substantially linear rate of removal of volatile components therefrom.

  16. Method of inactivating reproducible forms of mycoplasma in biological preparations

    International Nuclear Information System (INIS)

    Veber, P.; Jurmanova, K.; Lesko, J.; Hana, L.; Veber, V.

    1978-01-01

    Inactivation of mycoplasmas in biological materials was achieved using gamma radiation with a dose rate of 1×10⁴ to 5×10⁶ rad/h for 1 to 250 hours. The technique is advantageous in allowing the inactivation of the final form of products (tablets, vaccines, etc.). (J.P.)

  17. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Full Text Available Abstract Background Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and the combination necessary for their optimal use. Results We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures, such as those used on microarray data, for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method. Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a
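
    As a concrete example of the first family, a phylogenetic-profile score can be as simple as the fraction of reference genomes in which two genes' homologs co-occur (a toy sketch; the paper's scoring and normalization are more elaborate):

    ```python
    import numpy as np

    def phylogenetic_profile_scores(profiles):
        """profiles: genes x genomes binary matrix (1 = homolog present).
        Returns pairwise similarity of presence/absence patterns; gene
        pairs with matching profiles across reference genomes score near 1."""
        n = profiles.shape[0]
        scores = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                scores[i, j] = scores[j, i] = np.mean(profiles[i] == profiles[j])
        return scores

    profiles = np.array([[1, 1, 0, 1, 0],
                         [1, 1, 0, 1, 0],   # same pattern as gene 0
                         [0, 0, 1, 0, 1]])
    print(phylogenetic_profile_scores(profiles))
    ```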

  18. Methods, forms and means of forming the religious literacy among the students

    Directory of Open Access Journals (Sweden)

    Efimov Vladimir Fedorovich

    2016-12-01

    Full Text Available The article discusses a range of problems related to the formation of religious literacy among students. The author specifically notes that this aspect is formed in an ideological key and includes an understanding of the nature and typology of religions, their historical origins and current status, tolerance towards persons with different beliefs, and the ability to coexist and interact socially in a fruitful way, without violence to anyone's conscience. To achieve this purpose, the article presents and applies the corresponding methodological tools.

  19. Experimental Method for Characterizing Electrical Steel Sheets in the Normal Direction

    Directory of Open Access Journals (Sweden)

    Thierry Belgrand

    2010-10-01

    Full Text Available This paper proposes an experimental method to characterise magnetic laminations in the direction normal to the sheet plane. The principle, which is based on a static excitation to avoid planar eddy currents, is explained and specific test benches are proposed. Measurements of the flux density are made with a sensor moving in and out of an air-gap. A simple analytical model is derived in order to determine the permeability in the normal direction. The experimental results for grain oriented steel sheets are presented and a comparison is provided with values obtained from literature.

  20. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    Computing the probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive, especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
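
    As a point of reference for the computational cost the abstract describes, here is a minimal crude Monte Carlo estimator of the CCDF of a Log-normal sum, the baseline the IS scheme is designed to beat. The parameter values are arbitrary illustrations:

        import numpy as np

        rng = np.random.default_rng(0)

        def ccdf_crude_mc(mu, sigma, gamma, n=10**6):
            # Estimate P(X1 + ... + Xd > gamma) for independent,
            # not identically distributed Log-normal RVs.
            x = rng.lognormal(mean=mu, sigma=sigma, size=(n, len(mu))).sum(axis=1)
            hits = x > gamma
            p = hits.mean()
            # the relative error grows without bound as p -> 0, which is
            # exactly why importance sampling is needed for rare events
            rel_err = hits.std(ddof=1) / (np.sqrt(n) * max(p, 1e-300))
            return p, rel_err

        mu = np.array([0.0, 0.5, 1.0])
        sigma = np.array([1.0, 0.8, 1.2])
        print(ccdf_crude_mc(mu, sigma, gamma=50.0))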

  2. Solar Cell Roof Tile And Method Of Forming Same

    Science.gov (United States)

    Hanoka, Jack I.; Real, Markus

    1999-11-16

    A solar cell roof tile includes a front support layer, a transparent encapsulant layer, a plurality of interconnected solar cells, and a backskin layer. The front support layer is formed of light-transmitting material and has first and second surfaces. The transparent encapsulant layer is disposed adjacent the second surface of the front support layer. The interconnected solar cells have a first surface disposed adjacent the transparent encapsulant layer. The backskin layer has a first surface disposed adjacent a second surface of the interconnected solar cells, wherein a portion of the backskin layer wraps around and contacts the first surface of the front support layer to form a border region. A portion of the border region has an extended width. The solar cell roof tile may have stand-offs disposed on the extended-width border region for providing vertical spacing with respect to an adjacent solar cell roof tile.

  3. Numerical Methods for Plate Forming by Line Heating

    DEFF Research Database (Denmark)

    Clausen, Henrik Bisgaard

    2000-01-01

    So far, few researchers have addressed the topic of line heating in the search for better control of the process. Various methods have been used to help understand the mechanics, including beam analysis approximation, equivalent force calculation and three-dimensional finite element analysis. I consider here finite element methods to model the behaviour and to predict the heating paths.

  4. The research on AP1000 nuclear main pumps’ complete characteristics and the normalization method

    International Nuclear Information System (INIS)

    Zhu, Rongsheng; Liu, Yong; Wang, Xiuli; Fu, Qiang; Yang, Ailing; Long, Yun

    2017-01-01

    Highlights: • Complete characteristics of the main pump are investigated in depth. • Head and torque show quadratic characteristics under some operating conditions. • The characteristics tend to be the same under certain conditions. • The normalization method gives proper estimations of external characteristics. • The normalization method can efficiently improve security computing. Abstract: The paper summarizes the complete characteristics of nuclear main pumps based on experimental results, makes a detailed study, and draws a series of important conclusions: with regard to the overall flow area, the runaway and zero-revolving-speed operating conditions of nuclear main pumps both have quadratic characteristics; with regard to infinite flow, the braking and zero-revolving-speed operating conditions show consistent external characteristics. To remedy the shortcoming of the traditional complete-characteristic expression, which describes only limited flow sections at specific revolving speeds, the paper proposes a normalization method. As an important boundary condition for the security computing of the unstable transient process of the primary reactor coolant pump and of the nuclear island primary and secondary circuits, the precision of the complete-characteristic data and curves impacts the precision of the security computing. A normalization curve obtained by applying the normalization method to the complete-characteristic data can correctly, completely and precisely express the complete characteristics of the primary reactor coolant pump at any rotational speed and full flow, and is capable of giving proper estimations of the external characteristics of flows outside the test range and even of infinite flow. These advantages are of great significance for improving the security computing of transient processes of the primary reactor coolant pump and the circuit system.

  5. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

    Full Text Available. Purpose: Numerous companies expect their knowledge management (KM) to be performed effectively in order to leverage and transform knowledge into competitive advantages. However, this raises the critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for ranking and selecting among a number of externally determined alternatives through distance measures. The entropy method is often used for assessing weights in the TOPSIS method. Entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To reduce the degree of employee resistance to implementing a new strategy, it seems necessary to canvass all managers' opinions. The normal distribution, the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that, considering six criteria for evaluating the alternatives, the most appropriate KM strategy to implement in our company was ''Personalization''. Research limitations/implications: In this research there are some assumptions that might affect the accuracy of the approach, such as the normal distribution of the sample and community. These assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the presented solution, the opinions of all managers are gathered and normalized by using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; normal distribution; entropy
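
    To make the combined entropy-TOPSIS procedure concrete, the sketch below implements the two standard building blocks (entropy weighting and TOPSIS ranking) on a made-up decision matrix; the criteria, scores, and the all-benefit assumption are illustrative, not the paper's data:

        import numpy as np

        def entropy_weights(X):
            # Criteria whose values vary more across alternatives carry
            # more information and therefore receive larger weights.
            P = X / X.sum(axis=0)
            E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
            d = 1.0 - E                       # degree of diversification
            return d / d.sum()

        def topsis(X, w, benefit):
            # Rank alternatives by relative closeness to the ideal solution.
            R = X / np.sqrt((X**2).sum(axis=0))      # vector normalization
            V = R * w
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.sqrt(((V - ideal)**2).sum(axis=1))
            d_neg = np.sqrt(((V - anti)**2).sum(axis=1))
            return d_neg / (d_pos + d_neg)           # higher is better

        # toy matrix: 3 KM strategies x 4 criteria, all benefit-type
        X = np.array([[7., 8., 6., 5.],
                      [6., 9., 7., 7.],
                      [8., 6., 8., 6.]])
        print(topsis(X, entropy_weights(X), benefit=np.array([True] * 4)))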

  6. Method of forming catalyst layer by single step infiltration

    Science.gov (United States)

    Gerdes, Kirk; Lee, Shiwoo; Dowd, Regis

    2018-05-01

    Provided herein is a method for electrocatalyst infiltration of a porous substrate, of particular use for preparation of a cathode for a solid oxide fuel cell. The method generally comprises preparing an electrocatalyst infiltrate solution comprising an electrocatalyst, surfactant, chelating agent, and a solvent; pretreating a porous mixed ionic-electric conductive substrate; and applying the electrocatalyst infiltration solution to the porous mixed ionic-electric conductive substrate.

  7. Method of forming an electrically conductive cellulose composite

    Science.gov (United States)

    Evans, Barbara R [Oak Ridge, TN; O'Neill, Hugh M [Knoxville, TN; Woodward, Jonathan [Ashtead, GB

    2011-11-22

    An electrically conductive cellulose composite includes a cellulose matrix and an electrically conductive carbonaceous material incorporated into the cellulose matrix. The electrical conductivity of the cellulose composite is at least 10 μS/cm at 25 °C. The composite can be made by incorporating the electrically conductive carbonaceous material into a culture medium with a cellulose-producing organism, such as Gluconacetobacter hansenii. The composites can be used to form electrodes, such as for use in membrane electrode assemblies for fuel cells.

  8. An automatic method to discriminate malignant masses from normal tissue in digital mammograms

    International Nuclear Information System (INIS)

    Brake, Guido M. te; Karssemeijer, Nico; Hendriks, Jan H.C.L.

    2000-01-01

    Specificity levels of automatic mass-detection methods in mammography are generally rather low, because suspicious-looking normal tissue is often hard to discriminate from real malignant masses. In this work a number of features were defined that are related to image characteristics that radiologists use to discriminate real lesions from normal tissue. An artificial neural network was used to map the computed features to a measure of suspiciousness for each region that was found suspicious by a mass-detection method. Two data sets were used to test the method. The first set of 72 malignant cases (132 films) was a consecutive series taken from the Nijmegen screening programme; 208 normal films were added to improve the estimation of the specificity of the method. The second set was part of the new DDSM data set from the University of South Florida. A total of 193 cases (772 films) with 372 annotated malignancies was used. The measure of suspiciousness that was computed using the image characteristics was successful in discriminating tumours from false positive detections. Approximately 75% of all cancers were detected in at least one view at a specificity level of 0.1 false positives per image. (author)

  9. Dual phase magnetic material component and method of forming

    Science.gov (United States)

    Dial, Laura Cerully; DiDomizio, Richard; Johnson, Francis

    2017-04-25

    A magnetic component having intermixed first and second regions, and a method of preparing that magnetic component, are disclosed. The first region includes a magnetic phase and the second region includes a non-magnetic phase. The method includes mechanically masking pre-selected sections of a surface portion of the component by using a nitrogen stop-off material and heat-treating the component in a nitrogen-rich atmosphere at a temperature greater than about 900 °C. Both the first and second regions are substantially free of carbon, or contain only limited amounts of carbon; the second region includes greater than about 0.1 weight % of nitrogen.

  10. Form of silicon and method of making the same

    Science.gov (United States)

    Strobel, Timothy A.; Kim, Duck Young; Kurakevych, Oleksandr O.

    2017-07-04

    The invention relates to a new phase of silicon, Si₂₄, and a method of making the same. Si₂₄ has a quasi-direct band gap, with a direct gap value of 1.34 eV and an indirect gap value of 1.3 eV. The invention also relates to a compound of the formula Na₄Si₂₄ and a method of making the same. Na₄Si₂₄ may be used as a precursor to make Si₂₄.

  11. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    Science.gov (United States)

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for variable-speed direct-drive Marine Current Turbine (MCT) systems. The method is based on the MCT stator current under wave and turbulence conditions. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which makes the imbalance fault characteristic frequency constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, as demonstrated by comparing different fault severities and different turbulence intensities. Compared with other methods, the experimental results indicate the feasibility and efficacy of the proposed method.
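
    The core signal-processing chain (normalize the stator current, then look for the low-frequency fault signature) can be sketched as follows. For brevity a plain FFT stands in for the EMD stage of the paper, and the supply and sideband frequencies are invented test values:

        import numpy as np

        def normalize_and_spectrum(current, fs):
            # Z-normalize a stator-current record and return its
            # single-sided amplitude spectrum.
            x = (current - current.mean()) / current.std()
            spec = np.abs(np.fft.rfft(x)) / len(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return freqs, spec

        # synthetic current: 50 Hz supply plus weak imbalance sidebands
        fs = 1000.0
        t = np.arange(0, 10, 1.0 / fs)
        i_s = (np.sin(2 * np.pi * 50 * t)
               + 0.05 * np.sin(2 * np.pi * 48 * t)
               + 0.05 * np.sin(2 * np.pi * 52 * t))
        freqs, spec = normalize_and_spectrum(i_s, fs)
        print(freqs[np.argsort(spec)[-3:]])   # the strongest spectral lines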

  12. Recursive form of general limited memory variable metric methods

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2013-01-01

    Vol. 49, No. 2 (2013), pp. 224-235. ISSN 0023-5954. Institutional support: RVO:67985807. Keywords: unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms. Subject RIV: BA - General Mathematics. Impact factor: 0.563, year: 2013. http://dml.cz/handle/10338.dmlcz/143365

  13. Method of making nanostructured glass-ceramic waste forms

    Science.gov (United States)

    Gao, Huizhen; Wang, Yifeng; Rodriguez, Mark A.; Bencoe, Denise N.

    2012-12-18

    A method of rendering hazardous materials less dangerous comprising trapping the hazardous material in nanopores of a nanoporous composite material, reacting the trapped hazardous material to render it less volatile/soluble, sealing the trapped hazardous material, and vitrifying the nanoporous material containing the less volatile/soluble hazardous material.

  14. Application of specific gravity method for normalization of urinary excretion rates of radionuclides

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2015-01-01

    In vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. For occupational workers handling actinides in reprocessing facilities the possibility of internal exposure exists, and urine assay is the preferred method for monitoring such exposure. A urine sample collected over a 24 h period is the true representative bioassay sample; hence, in the case of an insufficient collection time, normalization of the urine sample by specific gravity is used. The present study reports specific gravity data generated for a control group of the Indian population by use of a densitometer, and its application in normalizing urinary sample activity. The average specific gravity value obtained for the control group was 1.008±0.005 gm/ml. (author)
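
    A common way to apply such a correction (an assumption here, since the abstract does not spell out the authors' exact formula) is to scale the measured excretion by the ratio of the deviations of specific gravity from that of water, referenced to the population value of 1.008 g/ml reported above:

        def sg_normalize(activity, sg_sample, sg_ref=1.008):
            # Scale a urinary activity measurement to the reference
            # specific gravity: dilute samples are scaled up and
            # concentrated samples are scaled down.
            return activity * (sg_ref - 1.0) / (sg_sample - 1.0)

        # a concentrated sample (SG 1.020) is corrected downwards
        print(sg_normalize(12.0, sg_sample=1.020))   # -> 4.8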

  15. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for correcting fluctuations of Doppler signals caused by various noise sources, mainly frequency-locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency-locking system is not perfect, the Doppler signal has some error due to the frequency-locking error. The re-normalization of the Doppler signals was performed to reduce this error, using an additional laser beam sent to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the reduced standard deviation was 4.838 × 10⁻³.

  16. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured…

  17. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available. The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived so as to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method, because the size of the complete system coefficient matrix is 4 × 4 and no matrix inversion steps are required during its assembly. The validity of this method is tested by comparing the results of the current method with the literature. The validity of the exact stepped analysis is then checked using experimental and FE(3D) methods. The experimental results for stepped beams with one and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM and finite element results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape, and decreases when the step location coincides with a straight portion of the mode shape.
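
    The mechanics of a transfer matrix assembly of this kind are easy to show in miniature. The sketch below uses the classical Euler-Bernoulli field matrix (via Krylov-Duncan functions) rather than the paper's Timoshenko formulation, and invented section data, but it reproduces the defining feature of the method: the system matrix stays 4 × 4 no matter how many steps the beam has. For a uniform cantilever it recovers the textbook condition cos(bL)cosh(bL) = -1:

        import numpy as np

        def field_matrix(beta, L, EI):
            # Transfer matrix of one uniform Euler-Bernoulli segment,
            # acting on the physical state [y, y', M, V].
            bl = beta * L
            S = (np.cosh(bl) + np.cos(bl)) / 2
            T = (np.sinh(bl) + np.sin(bl)) / 2
            U = (np.cosh(bl) - np.cos(bl)) / 2
            V = (np.sinh(bl) - np.sin(bl)) / 2
            C = np.array([[S, T, U, V],      # circulant in the normalized
                          [V, S, T, U],      # state [y, y'/b, y''/b^2,
                          [U, V, S, T],      #        y'''/b^3]
                          [T, U, V, S]])
            D = np.diag([1.0, beta, EI * beta**2, EI * beta**3])
            return D @ C @ np.linalg.inv(D)   # back to physical units

        def residual(omega, segments):
            # Chain the segment matrices, then impose clamped-free boundary
            # conditions; the zeros of this residual are natural frequencies.
            Tm = np.eye(4)
            for EI, rhoA, L in segments:
                beta = (rhoA * omega**2 / EI) ** 0.25
                Tm = field_matrix(beta, L, EI) @ Tm
            return np.linalg.det(Tm[2:, 2:])   # M = V = 0 at the free tip

        # one-step cantilever: (EI, rho*A, length) per segment, toy values
        segments = [(200.0, 3.0, 0.5), (50.0, 1.5, 0.5)]
        ws = np.linspace(1.0, 400.0, 4000)
        r = np.array([residual(w, segments) for w in ws])
        roots = ws[:-1][np.sign(r[:-1]) != np.sign(r[1:])]
        print(roots[:3])   # bracketed natural frequencies in rad/s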

  18. Manganite perovskite ceramics, their precursors and methods for forming

    Science.gov (United States)

    Payne, David Alan; Clothier, Brent Allen

    2015-03-10

    Disclosed are a variety of ceramics having the formula Ln₁₋ₓMₓMnO₃, where 0 ≤ x ≤ 1 and where Ln is La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu or Y; M is Ca, Sr, Ba, Cd, or Pb; manganite precursors for preparing the ceramics; a method for preparing the precursors; and a method for transforming the precursors into uniform, defect-free ceramics having magnetoresistance properties. The manganite precursors contain a sol and are derived from the metal alkoxides Ln(OR)₃, M(OR)₂ and Mn(OR)₂, where R is C₂ to C₆ alkyl, C₃ to C₉ alkoxyalkyl, or C₆ to C₉ aryl. The preferred ceramics are films prepared by a spin coating method and are particularly suited for incorporation into a device such as an integrated circuit device.

  19. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part or in material properties, or due to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case-scenario assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in a high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and to increased computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  20. Normal Values of Tissue-Muscle Perfusion Indexes of Lower Limbs Obtained with a Scintigraphic Method.

    Science.gov (United States)

    Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti

    2017-09-01

    Introduction: Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method: The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease; all had normal findings on Doppler ultrasonography and a normal pedo-brachial index (PBI) of the lower extremities. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion at rest ("rest study") and after workload ("stress study") through quantitative parameters: inter-extremity indexes (for both studies), left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC), and perfusion reserve (PR) for both thighs and calves. Results: In our investigated group we established the normal values of these quantitative perfusion indexes. LT/RT ranged from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study; LC/RC ranged from 0.93 to 1.07 at rest and from 0.93 to 1.09 under stress. Examinees older than 50 years had a nonsignificantly lower perfusion reserve than those younger than 50 (LC, p=0.98; RC, p=0.6). Conclusion: This non-invasive scintigraphic method allows the range of normal values of muscle perfusion at rest and under stress to be determined in individuals without peripheral artery disease, and these values can be clinically applied in evaluating patients with peripheral artery disease, differentiating those with normal from those with impaired lower-limb circulation.

  1. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Owing to the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms to the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm had not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
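
    A compact way to reproduce the kind of pipeline the paper evaluates is to factor the expression matrix, apply the max-norm normalization to the basis matrix, and read cluster labels from the dominant component. The scheme below is one common variant (which matrix is normalized and how labels are assigned vary across studies), written against scikit-learn, with a random toy matrix standing in for real expression data:

        import numpy as np
        from sklearn.decomposition import NMF

        def nmf_maxnorm_clusters(X, k):
            # Factor X ~ W @ H, rescale each column of W to unit maximum
            # (the max norm), compensate in H so the product is unchanged,
            # then assign each sample to its dominant component.
            model = NMF(n_components=k, init='nndsvda',
                        max_iter=500, random_state=0)
            W = model.fit_transform(X)        # samples x components
            H = model.components_             # components x genes
            scale = W.max(axis=0)
            W, H = W / scale, H * scale[:, None]
            return W.argmax(axis=1)           # cluster label per sample

        X = np.abs(np.random.default_rng(1).normal(size=(30, 200)))
        print(nmf_maxnorm_clusters(X, k=3))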

  2. Sustainable tourism as a method of forming a tolerant society

    Directory of Open Access Journals (Sweden)

    Dryga Svetlana

    2016-01-01

    Full Text Available. The article concentrates on the potential for the development of sustainable tourism, as well as on its role in the formation of tolerant social relations. The authors reveal the profound impact of sustainable and hiking tourism on the emergence of the 'new tourist' phenomenon, and offer a description of levels of tolerance and their influence on sustainable tendencies in modern tourism. There is a growing trend for tourism in the modern international community to act as a powerful regulator of socio-cultural relations and, simultaneously, as a crucial factor in counteracting xenophobia. The head-on encounter of local and foreign cultures, which is an integral part of the very notion of tourism, should not take extreme forms or assume the predominance of any one culture, let alone rest on national, racial, religious, linguistic or educational differences. The authors employ analysis and synthesis, comparison, and prognostics as their methodological tools. The outcome of this study is to reveal and emphasize the levels of tolerance characterizing the uneasy interrelationship between so-called 'new' tourists and local communities. The research findings could find practical application in the design of new tourist products and the elaboration of new networks of footpaths for walking tours.

  3. Method for forming biaxially textured articles by powder metallurgy

    Science.gov (United States)

    Goyal, Amit; Williams, Robert K.; Kroeger, Donald M.

    2002-01-01

    A method of preparing a biaxially textured alloy article comprises the steps of preparing a mixture comprising Ni powder and at least one powder selected from the group consisting of Cr, W, V, Mo, Cu, Al, Ce, YSZ, Y, rare earths (RE), MgO, CeO₂, and Y₂O₃; compacting the mixture; and heat treating and rapidly recrystallizing to produce a biaxial texture on the article. In some embodiments the alloy article further comprises electromagnetic or electro-optical devices and possesses superconducting properties.

  4. A method for named entity normalization in biomedical articles: application to diseases and plants.

    Science.gov (United States)

    Cho, Hyejin; Choi, Wonjun; Lee, Hyunju

    2017-10-13

    In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data and showed that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. The proposed approach shows robust
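
    The matching step at the heart of such a normalizer is simple once embeddings exist: represent both the mention and each dictionary entry as an averaged word vector and pick the most cosine-similar concept. Everything below (the 4-dimensional random vectors, the invented concept IDs) is a toy stand-in for embeddings trained on PubMed abstracts:

        import numpy as np

        DIM = 4
        rng = np.random.default_rng(7)
        # hypothetical pretrained vectors; a real system would load
        # embeddings trained on a large unlabeled corpus
        vectors = {w: rng.normal(size=DIM) for w in
                   "breast cancer carcinoma of the mammary tumor".split()}

        def embed(phrase):
            # average the known word vectors of a phrase
            vs = [vectors[w] for w in phrase.lower().split() if w in vectors]
            return np.mean(vs, axis=0) if vs else np.zeros(DIM)

        def normalize_mention(mention, concepts):
            # map a recognized mention to the closest dictionary concept
            m = embed(mention)
            def sim(name):
                c = embed(name)
                d = np.linalg.norm(m) * np.linalg.norm(c)
                return float(m @ c) / d if d else -1.0
            return max(concepts, key=lambda cid: sim(concepts[cid]))

        concepts = {"C01": "breast cancer", "C02": "tumor"}  # invented IDs
        print(normalize_mention("carcinoma of the breast", concepts))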

  5. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract: Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
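
    For two measurement blocks, one pass of cyclic loess is only a few lines: form the M (difference) and A (average) values, fit a loess curve of M on A, and split the fitted systematic bias evenly between the runs. The sketch assumes log-scale intensities and uses the lowess smoother from statsmodels:

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def cyclic_loess_pair(x1, x2):
            # fit the systematic M-vs-A trend, remove half from each run
            M, A = x1 - x2, (x1 + x2) / 2.0
            fit = lowess(M, A, frac=0.4, return_sorted=False)
            return x1 - fit / 2.0, x2 + fit / 2.0

        rng = np.random.default_rng(3)
        truth = rng.normal(10, 1, 500)                  # true log-abundances
        run1 = truth + 0.3 + rng.normal(0, 0.05, 500)   # block with offset
        run2 = truth + rng.normal(0, 0.05, 500)
        n1, n2 = cyclic_loess_pair(run1, run2)
        print(round(float((run1 - run2).mean()), 3),
              round(float((n1 - n2).mean()), 3))        # bias before/after

    With more than two blocks the same step is iterated over all pairs of runs, which is where the "cyclic" in the name comes from.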

  6. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    Full Text Available A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  7. Standard test method for static leaching of monolithic waste forms for disposal of radioactive waste

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method provides a measure of the chemical durability of a simulated or radioactive monolithic waste form, such as a glass, ceramic, cement (grout), or cermet, in a test solution at temperatures <100°C under low specimen surface-area-to-leachant-volume (S/V) ratio conditions. 1.2 This test method can be used to characterize the dissolution or leaching behaviors of various simulated or radioactive waste forms in various leachants under the specific conditions of the test, based on analysis of the test solution. Data from this test are used to calculate normalized elemental mass loss values from specimens exposed to aqueous solutions at temperatures <100°C. 1.3 The test is conducted under static conditions in a constant solution volume and at a constant temperature. The reactivity of the test specimen is determined from the amounts of components released and accumulated in the solution over the test duration. A wide range of test conditions can be used to study material behavior, including…
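
    The normalized elemental mass loss mentioned above reduces to a one-line computation once the solution concentration, the element's mass fraction in the waste form, and the S/V ratio are known. The function below encodes the usual definition NL_i = c_i / (f_i * (S/V)); the example numbers are invented:

        def normalized_mass_loss(c_i_g_per_L, f_i, s_over_v_per_m):
            # convert c_i to g/m^3; with S/V in 1/m the result is in g/m^2
            return (c_i_g_per_L * 1000.0) / (f_i * s_over_v_per_m)

        # 5 mg/L of boron in solution, mass fraction 0.03, S/V = 10 1/m
        print(normalized_mass_loss(0.005, 0.03, 10.0))   # ~16.7 g/m^2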

  8. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate radiation doses during the transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by atmospheric dispersion, the user needs special knowledge and experience. In this presentation, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters on which the dose depends were extracted and how the dose for a unit of transportation was estimated. (J.P.N.)

  9. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures of Nonconducting Specimens

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 This test method describes an accurate technique for measuring the normal spectral emittance of electrically nonconducting materials in the temperature range from 1000 to 1800 K, and at wavelengths from 1 to 35 μm. It is particularly suitable for measuring the normal spectral emittance of materials such as ceramic oxides, which have relatively low thermal conductivity and are translucent to appreciable depths (several millimetres) below the surface, but which become essentially opaque at thicknesses of 10 mm or less. 1.2 This test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is particularly suitable for research laboratories, where the highest precision and accuracy are desired, and is not recommended for routine production or acceptance testing. Because of its high accuracy, this test method may be used as a reference method to be applied to production and acceptance testing in case of dispute. 1.3 This test method…

  10. ¹H MR spectroscopy of the normal human brain: comparison of the automated prescan method with the manual method

    International Nuclear Information System (INIS)

    Lim, Myung Kwan; Suh, Chang Hae; Cho, Young Kook; Kim, Jin Hee

    1998-01-01

    The purpose of this paper is to evaluate regional differences in relative metabolite ratios in the normal human brain by ¹H MR spectroscopy (MRS), and to compare the spectral quality obtained by the automated prescan method (PROBE) and the manual method. A total of 61 reliable spectra were obtained by PROBE (28/34 = 82% success) and by the manual method (33/33 = 100% success). Regional differences in the spectral patterns of the five regions were clearly demonstrated by both the PROBE and manual methods. For prescanning, the manual method took slightly longer than PROBE (3-5 min and 2 min, respectively). There were no significant differences in spectral patterns and relative metabolite ratios between the two methods. However, auto-prescan by PROBE seemed to be very vulnerable to slight movement by patients, and in three cases an acceptable spectrum was thus not obtained. PROBE is a highly practical and reliable method for single-voxel ¹H MRS of the human brain; the two methods of prescanning do not result in significantly different spectral patterns or relative metabolite ratios. PROBE, however, is vulnerable to slight movement by patients, and if the success rate for obtaining quality spectra is to be increased, regardless of the patient's condition and the region of the brain, it must be used in conjunction with the manual method. (author). 23 refs., 2 tabs., 3 figs

  11. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units. Such masonry units are developed into a monolithic structure by a plastering process with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repairs, structural rehabilitation, retrofitting, pointing and plastering operations. The rheology of mortar includes flowable, passing and filling properties, which are analogous to the behaviour of self-compacting concrete. In the self-compacting (SC) mortar cubes, the cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (in increments of 5%), metakaolin (MK) from 10% to 30% (in increments of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (in increments of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self-compacting mortar mixes. Accelerated curing, namely electric oven curing at a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained from both the normal and the electric oven curing methods was higher for self-compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS gave higher strength under both curing conditions.

  12. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system; in the current study we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including a resistor, a capacitor, and an inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. First, the normal operation of the system is shown; then stenoses of the cerebral arteries are induced in the circuit and the effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA-A2, left and right MCA, left and right PCA-P2) is calculated to indicate the effect of the anatomical differences between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of pathologies. PMID:25505747
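
    The electrical analogy itself is straightforward to prototype: pressures map to node voltages, flows to currents, viscous losses to resistances, and vessel compliance to capacitances. The two-compartment sketch below (with invented parameter values, and SciPy in place of MATLAB Simulink) shows the pattern that the 63-compartment model repeats:

        import numpy as np
        from scipy.integrate import solve_ivp

        R1, C1 = 1.0, 0.8     # upstream segment: resistance, compliance
        R2, C2 = 2.0, 0.4     # downstream segment
        R3 = 4.0              # outflow resistance to the venous side (~0 mmHg)

        def p_in(t):
            # crude periodic arterial pressure waveform, period 0.8 s
            return 100.0 + 20.0 * np.sin(2 * np.pi * t / 0.8)

        def rhs(t, p):
            p1, p2 = p
            q1 = (p_in(t) - p1) / R1   # inflow to compartment 1
            q2 = (p1 - p2) / R2        # flow on to compartment 2
            q3 = p2 / R3               # outflow
            return [(q1 - q2) / C1, (q2 - q3) / C2]

        sol = solve_ivp(rhs, (0.0, 8.0), [90.0, 85.0], max_step=0.01)
        print(sol.y[:, -1])   # compartment pressures after ten cycles

    A stenosis is then simulated simply by increasing the resistance of the affected segment and observing the drop in downstream pressure.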

  13. Development and Validation of a HPLC Method for the Determination of Lacidipine in Pure Form and in Pharmaceutical Dosage Form

    International Nuclear Information System (INIS)

    Vinodh, M.; Vinayak, M.; Rahul, K.; Pankaj, P.

    2012-01-01

    A simple and reliable high-performance liquid chromatography (HPLC) method was developed and validated for lacidipine in pure form and in pharmaceutical dosage form. The method was developed on an XBridge C18 column (150 mm x 4.6 mm, 5 μm) with a gradient mobile phase system of ammonium acetate and acetonitrile. The effluent was monitored by a PDA detector at 240 nm. The calibration curve was linear over the concentration range of 50-250 μg/ml. For intra-day and inter-day precision, the %RSD values were found to be 0.83% and 0.41%, respectively. Recovery of lacidipine was found to be in the range of 99.78-101.76%. The limits of detection (LOD) and quantification (LOQ) were 1.0 and 7.3 μg/ml, respectively. The developed RP-HPLC method was successfully applied to the quantitative determination of lacidipine in pharmaceutical dosage forms. (author)

  14. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the differences in sparse and noisy dimensions account for a large proportion of the similarity, making any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or an adjacent interval are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
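
    The description above admits a direct implementation. Under one plausible reading (the interval count k and the equal treatment of all dimensions are assumptions, not details from the abstract), each dimension's range is cut into k intervals and only dimensions whose two components land in the same or an adjacent interval contribute, so the similarity lies in [0, 1] as stated:

        import numpy as np

        def lattice_similarity(x, y, lo, hi, k=10):
            # interval index of each component in each dimension
            ix = np.floor((np.asarray(x) - lo) / (hi - lo) * k).clip(0, k - 1)
            iy = np.floor((np.asarray(y) - lo) / (hi - lo) * k).clip(0, k - 1)
            close = np.abs(ix - iy) <= 1    # same or adjacent interval
            return float(close.mean())      # fraction of dimensions, in [0, 1]

        rng = np.random.default_rng(5)
        a, b = rng.random(1000), rng.random(1000)   # two high-dim points
        print(lattice_similarity(a, b, lo=0.0, hi=1.0))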

  15. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, D

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index, FI) were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation to 40 years. S/T and FI showed similar changes during the whole period, and were minimal between 1 and 4 years. The number of spermatogonia per testis (S/testis) and the number of spermatogonia per cm3 of testis tissue (S/cm3) were estimated by stereological methods in the same testes. S/T and FI, respectively, were significantly correlated both to S/testis and S/cm3. So…

  16. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  17. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra-high-definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used was demonstrated in the form of frequency distribution curves, box plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed between OSEM and UHD reconstruction for all SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that SUV and SUL and their normalized values were, on average, up to 60% higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstruction yielded a significant difference for SUV and SUL, which remained constantly high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
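
    The quantities being compared are themselves simple ratios, which is worth seeing explicitly; the numbers below are invented, and the SUL variant differs only in substituting lean body mass for body weight:

        def suv(conc_kbq_per_ml, dose_mbq, weight_kg):
            # standardized uptake value: tissue concentration divided by
            # injected dose per gram of body mass (1 g/ml tissue assumed)
            return conc_kbq_per_ml / (dose_mbq * 1000.0 / (weight_kg * 1000.0))

        def liver_normalized(lesion_suv, liver_suv):
            # normalizing to liver cancels a global scaling only if both
            # values shift by the same factor under reconstruction changes
            return lesion_suv / liver_suv

        print(liver_normalized(suv(8.0, 350.0, 80.0), suv(2.5, 350.0, 80.0)))

    The study's point is precisely that UHD and OSEM change lesion and liver uptake by different amounts, so the liver ratio does not remove the discrepancy.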

  18. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    Science.gov (United States)

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ™ System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in a near complete automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the…

  19. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both the photopeak (140 keV ± 10%) and scatter (7% on the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and for non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of a paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  20. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for

  1. BIOCHEMICAL EFFECTS IN NORMAL AND STONE FORMING RATS TREATED WITH THE RIPE KERNEL JUICE OF PLANTAIN (MUSA PARADISIACA)

    Science.gov (United States)

    Devi, V. Kalpana; Baskar, R.; Varalakshmi, P.

    1993-01-01

    The effect of Musa paradisiaca stem kernel juice was investigated in experimental urolithiatic rats. Stone-forming rats exhibited a significant elevation in the activities of two oxalate-synthesizing enzymes, glycollic acid oxidase and lactate dehydrogenase. Deposition and excretion of stone-forming constituents in kidney and urine were also increased in these rats. The enzyme activities and the levels of the crystalline components were lowered by the extract treatment. The extract also reduced the activities of urinary alkaline phosphatase, lactate dehydrogenase, γ-glutamyl transferase, inorganic pyrophosphatase and β-glucuronidase in calculogenic rats. No appreciable changes were noticed in leucine aminopeptidase activity in treated rats. PMID:22556626

  2. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-01

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water without using suspending agents through the electric spark discharge method at room temperature and under normal atmospheric pressure. This method was exceptional because it did not create nanoparticle dispersion and the produced colloids contained no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity 99.99%). The EDM pulse-on time (T_on) and pulse-off time (T_off) were respectively set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in T_on increased the suspension stability, but there were no significant correlations between T_on and particle size. Colloids produced from the four production configurations showed a minimum particle size between 29.39 and 52.85 nm and a zeta potential between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  4. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    Science.gov (United States)

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best-eye uncorrected visual acuity 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light and CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. The index of contrast sensitivity was calculated based on data from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS values obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed, with a marked ceiling effect for the photopic tests. The median ICS for Optec 6500 at 85 cd/m2 was -0.15 (95% percentile 0.45), compared with -0.00 (95% percentile 1.62) for Optec 6500 at 3 cd/m2 and 0.30 (95% percentile 1.20) for CSV-1000E. The mean difference between ICS(FACT 85) and ICS(CSV) was -0.43 (95% CI -0.56 to -0.30, p<0.00) with limits of agreement (LoA) within -2.10 and 1.22. The regression line on the difference of average was near to zero (R2=0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore be useful for medical selection purposes.

  5. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we suggest a method of predicting normal and overweight status in females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21-40 group (females aged 21-40 years), and an AUC value of 0.76 and a kappa value of 0.401 in the Female: 41-60 group (females aged 41-60 years). In both groups, we found many features showing statistically significant differences between normal and overweight subjects by using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.

  6. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    Science.gov (United States)

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) in scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, and comparison between JIFs dedicated to different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established by multiplying the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005-2009. Rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were also compared, to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances the equality in assessing the quality of research works produced by researchers who work in different disciplines.
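
    The arithmetic behind the NIF is simple enough to sketch. In the snippet below, the discipline constant is chosen so that each discipline's mean JIF maps to a common target; this choice, and all JIF values shown, are illustrative assumptions, since the abstract does not state how the constants were derived.

      # Illustrative sketch of a normalized impact factor (NIF).
      # Assumption: the discipline constant rescales the discipline's
      # mean JIF to a common target; the paper may define it differently.

      def discipline_constant(discipline_jifs, target_mean=1.0):
          mean_jif = sum(discipline_jifs) / len(discipline_jifs)
          return target_mean / mean_jif

      def nif(jif, constant):
          return jif * constant

      virology_jifs = [2.1, 3.4, 5.8, 1.2]          # hypothetical values
      c = discipline_constant(virology_jifs)
      print([round(nif(j, c), 2) for j in virology_jifs])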

  7. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    The uncertainty on the characteristics of radioactive low- and intermediate-level waste (LILW) packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. We show how the method can be applied to the scaling factor method, and explain how it can be used when estimating other, more complex characteristics such as the total uncertainty of a collection of waste packages. The method could have applications in radioactive waste management, in particular in decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
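
    One reason the log-normal distribution is convenient here: a product of independent log-normal factors is itself log-normal, so the log-scale parameters simply add. The sketch below, with invented numbers rather than the author's actual procedure, combines a measured activity with two multiplicative uncertainty factors and reports a 95% interval.

      import math

      # Each multiplicative factor is log-normal, given by a median m and
      # a geometric standard deviation g (all numbers are invented).
      factors = [(120.0, 1.5),   # measured activity of the package
                 (1.0, 2.0),     # scaling-factor uncertainty
                 (1.0, 1.3)]     # composition uncertainty

      mu = sum(math.log(m) for m, g in factors)       # log-medians add
      sigma = math.sqrt(sum(math.log(g) ** 2 for m, g in factors))

      lo = math.exp(mu - 1.96 * sigma)
      hi = math.exp(mu + 1.96 * sigma)
      print(f"median {math.exp(mu):.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")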

  8. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of the environmental doses to be calculated, a review of currently available environmental pathway models, and a set of recommended models for use when environmental pathway modeling is necessary. The models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment; they do not apply to acute releases resulting from accidents or emergency situations.

  9. The Effect of Normal Force on Tribocorrosion Behaviour of Ti-10Zr Alloy and Porous TiO2-ZrO2 Thin Film Electrochemical Formed

    Science.gov (United States)

    Dănăilă, E.; Benea, L.

    2017-06-01

    The tribocorrosion behaviour of Ti-10Zr alloy and of a porous TiO2-ZrO2 thin film electrochemically formed on Ti-10Zr alloy was evaluated in Fusayama-Mayer artificial saliva solution. Tribocorrosion experiments were performed using a unidirectional pin-on-disc set-up, mechanically and electrochemically instrumented, under various loading conditions. The effect of the applied normal force on the tribocorrosion performance of the tested materials was determined. Open circuit potential (OCP) measurements performed before, during and after the sliding tests were used to assess tribocorrosion degradation. The applied normal force was found to strongly affect the potential during the tribocorrosion experiments: an increase in normal force induced a decrease in potential, accelerating the depassivation of the studied materials. The results show a decrease in friction coefficient as the normal load was gradually increased. The porous TiO2-ZrO2 thin film electrochemically formed on the Ti-10Zr alloy led to an improvement in tribocorrosion resistance compared to the non-anodized Ti-10Zr alloy intended for biomedical applications.

  10. 48 CFR 215.404-70 - DD Form 1547, Record of Weighted Guidelines Method Application.

    Science.gov (United States)

    2010-10-01

    48 CFR 215.404-70, Contracting by Negotiation, Contract Pricing: DD Form 1547, Record of Weighted Guidelines Method Application. Follow the procedures at PGI 215.404-70 for use of DD Form 1547 whenever a structured...

  11. Basic sculpturing methods as innovatory incentives in the development of aesthetic form concepts

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2009-01-01

    Many project teams grapple for a long time with developing ideas into a form concept because they lack methods for solving the many form problems they face in sketching. They also have difficulty in translating the project requirements for product proportions or volumes into an aesthetic form...

  12. Review of clinically accessible methods to determine lean body mass for normalization of standardized uptake values

    International Nuclear Information System (INIS)

    DEVRIESE, Joke; POTTEL, Hans; BEELS, Laurence; MAES, Alex; VAN DE WIELE, Christophe; GHEYSENS, Olivier

    2016-01-01

    With the routine use of 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans, the metabolic activity of tumors can be quantitatively assessed through the calculation of SUVs. One possible normalization parameter for the standardized uptake value (SUV) is lean body mass (LBM), which is generally calculated through predictive equations based on height and body weight. (Semi-)direct measurements of LBM could provide more accurate results in cancer populations than predictive equations based on healthy populations. In this context, four methods to determine LBM are reviewed: bioelectrical impedance analysis, dual-energy X-ray absorptiometry, CT, and magnetic resonance imaging. These methods were selected based on clinical accessibility and are compared in terms of methodology, precision and accuracy. By assessing each method's specific advantages and limitations, a well-considered choice of method can hopefully lead to more accurate SUVLBM values, and hence more accurate quantitative assessment of 18F-FDG PET images.
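
    For context, the predictive equations that the review contrasts with (semi-)direct measurements are typically of the James type; the sketch below computes an LBM-normalized SUV from such a formula. The James coefficients are taken from the general PET literature, not from this review, and the input values are hypothetical.

      # Sketch: SUV normalized by lean body mass using the James predictive
      # equations (weight in kg, height in cm, concentration in kBq/mL,
      # injected dose in MBq; all example values are hypothetical).

      def lbm_james(weight, height, sex):
          if sex == "male":
              return 1.10 * weight - 128.0 * (weight / height) ** 2
          return 1.07 * weight - 148.0 * (weight / height) ** 2

      def suv_lbm(conc_kbq_ml, dose_mbq, weight, height, sex):
          return conc_kbq_ml / (dose_mbq / lbm_james(weight, height, sex))

      print(suv_lbm(conc_kbq_ml=5.2, dose_mbq=370.0,
                    weight=80.0, height=175.0, sex="male"))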

  13. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly subjects on the CDT and to evaluate the inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly subjects aged 60 years or older. The CDT ("free-drawn") and the Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), with mean scores ranging from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria that are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  14. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For the simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this technique, known as the conjugate gradient (CG) method, the original observation equations are used directly, so the explicit formation of normal equations is avoided, saving substantial computer storage in the case of triangulation systems. The method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was carried out, which also considered constraint equations together with observation equations. The method was programmed and tested on systems ranging from two unknowns and three equations up to 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
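
    The key trick, running CG on the least-squares problem min ||Ax - b|| while applying A^T A only as two successive matrix-vector products, can be sketched as the standard CGLS/CGNR iteration below (a generic reconstruction, not Saxena's original program).

      import numpy as np

      def cgls(A, b, iters=100, tol=1e-10):
          """Solve min ||A x - b|| by CG on the normal equations,
          without ever forming A.T @ A explicitly (CGLS/CGNR)."""
          x = np.zeros(A.shape[1])
          r = b - A @ x                  # residual of observation equations
          s = A.T @ r                    # residual of normal equations
          p = s.copy()
          gamma = s @ s
          for _ in range(iters):
              q = A @ p
              alpha = gamma / (q @ q)
              x += alpha * p
              r -= alpha * q
              s = A.T @ r
              gamma_new = s @ s
              if np.sqrt(gamma_new) < tol:
                  break
              p = s + (gamma_new / gamma) * p
              gamma = gamma_new
          return x

      # Tiny example: 3 observation equations, 2 unknowns.
      A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
      b = np.array([3.0, 1.0, 5.0])
      print(cgls(A, b))                  # least-squares estimate, approx [2, 1]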

  15. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can degrade the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method based on computational techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic-range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resulting extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
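
    The two core steps of such an approach, merging frames taken at several integration times into an extended-dynamic-range spot map and then computing intensity-weighted centroids, can be sketched as follows; the merge rule and saturation threshold are simplifying assumptions, not the authors' exact procedure.

      import numpy as np

      def hdr_merge(frames, times, sat=0.95 * 65535):
          """Merge frames taken at different CCD integration times into an
          extended-dynamic-range image: convert each frame to counts per
          unit time and average only over unsaturated pixels (schematic)."""
          frames = np.asarray(frames, dtype=float)
          times = np.asarray(times, dtype=float)[:, None, None]
          rates = frames / times
          ok = frames < sat              # mask out saturated pixels
          return (rates * ok).sum(axis=0) / np.maximum(ok.sum(axis=0), 1)

      def centroid(window):
          """Intensity-weighted centroid of one normalized spot window."""
          w = window - window.min()
          w = w / w.sum()
          y, x = np.indices(w.shape)
          return (x * w).sum(), (y * w).sum()

      spot = np.zeros((9, 9))
      spot[4, 5] = 1000.0
      print(centroid(spot))              # -> (5.0, 4.0)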

  16. A Hard X-Ray Study of the Normal Star-Forming Galaxy M83 with NuSTAR

    DEFF Research Database (Denmark)

    Yukita, M.; Hornschemeier, A. E.; Lehmer, B. D.

    2016-01-01

    We present the results from sensitive, multi-epoch NuSTAR observations of the late-type star-forming galaxy M83 (d = 4.6 Mpc). This is the first investigation to spatially resolve the hard (E > 10 keV) X-ray emission of this galaxy. The nuclear region and ~20 off-nuclear point sources, including a previously discovered ultraluminous X-ray source, are detected in our NuSTAR observations. The X-ray hardnesses and luminosities of the majority of the point sources are consistent with hard X-ray sources resolved in the starburst galaxy NGC 253. We infer that the hard X-ray emission is most...

  17. Method of forming a ceramic matrix composite and a ceramic matrix component

    Science.gov (United States)

    de Diego, Peter; Zhang, James

    2017-05-30

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity and filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  18. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
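
    A minimal sketch of the pairwise-ratio idea as described in the abstract follows; details such as the filtering thresholds used in the published implementation are simplified away.

      import numpy as np

      def gmpr_size_factors(counts):
          """counts: taxa x samples matrix of raw counts.
          Size factor of sample i = geometric mean, over the other
          samples j, of the median count ratio c_ki / c_kj taken over
          taxa k present (nonzero) in both samples."""
          n = counts.shape[1]
          sf = np.ones(n)
          for i in range(n):
              logs = []
              for j in range(n):
                  if i == j:
                      continue
                  both = (counts[:, i] > 0) & (counts[:, j] > 0)
                  if both.any():
                      r = np.median(counts[both, i] / counts[both, j])
                      logs.append(np.log(r))
              sf[i] = np.exp(np.mean(logs)) if logs else 1.0
          return sf

      counts = np.array([[10, 0, 4], [0, 5, 8], [20, 10, 0], [5, 5, 5]], float)
      print(gmpr_size_factors(counts))   # divide each sample by its factor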

  19. Methods of forming and realization of assortment policy of retail business enterprises

    Directory of Open Access Journals (Sweden)

    Kudenko Kiril

    2016-07-01

    Within the framework of this article, the methods of forming and implementing the assortment policy of retail business enterprises are systematized. Recommendations are developed concerning the priority given to particular methods of forming and implementing assortment policy for different purposes, taking into account their content, advantages and disadvantages.

  20. Dynamic pathways to mediate reactions buried in thermal fluctuations. I. Time-dependent normal form theory for multidimensional Langevin equation.

    Science.gov (United States)

    Kawai, Shinnosuke; Komatsuzaki, Tamiki

    2009-12-14

    We present a novel theory which enables us to explore the mechanism of reaction selectivity and robust functions in complex systems persisting under thermal fluctuation. The theory constructs a nonlinear coordinate transformation such that the equation of motion for the new reaction coordinate is independent of the other, nonreactive coordinates in the presence of thermal fluctuation. In this article we suppose that reacting systems subject to thermal noise are described by a multidimensional Langevin equation, without an a priori assumption about the form of the potential. The reaction coordinate is composed not only of all the coordinates and velocities associated with the system (solute) but also of the random force exerted by the environment (solvent), with friction constants. The sign of the reaction coordinate at any instantaneous moment in the region of a saddle determines the fate of the reaction, i.e., whether the reaction will proceed through to the products or go back to the reactants. By assuming the statistical properties of the random force, one can know a priori a well-defined boundary of the reaction which separates the full position-velocity space in the saddle region into mainly reactive and mainly nonreactive regions, even under thermal fluctuation. The analytical expression of the reaction coordinate provides a firm foundation for understanding how and why reactions proceed in thermally fluctuating environments.
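
    For orientation, a multidimensional Langevin equation of the kind assumed here can be written (in our notation, not necessarily the authors') with a friction matrix and a Gaussian random force obeying a fluctuation-dissipation relation:

      \[
        m_i \ddot{q}_i = -\frac{\partial V(\mathbf{q})}{\partial q_i}
          - \sum_j \gamma_{ij} \dot{q}_j + \xi_i(t),
        \qquad
        \langle \xi_i(t)\, \xi_j(t') \rangle
          = 2 k_B T\, \gamma_{ij}\, \delta(t - t').
      \]

    The normal form construction then seeks new coordinates in which the reactive degree of freedom decouples from the remaining, nonreactive ones.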

  1. Analytical energy gradient for the two-component normalized elimination of the small component method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)

    2015-06-07

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  2. Analytical energy gradient for the two-component normalized elimination of the small component method

    Science.gov (United States)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2015-06-01

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  3. A Gauss-Newton method for the integration of spatial normal fields in shape Space

    KAUST Repository

    Balzer, Jonathan

    2011-08-09

    We address the task of adjusting a surface to a vector field of desired surface normals in space. The described method is entirely geometric in the sense that it does not depend on a particular parametrization of the surface in question. It amounts to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly more robust and efficient, have not been attempted as they require second-order Hadamard differentials. These are difficult to compute for the problem of interest and in general fail to be positive-definite symmetric. We propose a novel approximation of the shape Hessian, which is not only rigorously justified but also leads to excellent numerical performance of the actual optimization. Moreover, a remarkable connection to Sobolev flows is exposed. Three other established algorithms from image and geometry processing turn out to be special cases of ours. Our numerical implementation is founded on a fast finite-element formulation on the minimizing sequence of triangulated shapes. A series of examples from a wide range of different applications is discussed to underline the flexibility and efficiency of the approach.
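
    Away from shape space, the Gauss-Newton idea the paper builds on replaces the full Hessian of a least-squares objective 0.5*||r(x)||^2 by J^T J, dropping second-derivative terms. A generic finite-dimensional sketch, with a hypothetical curve-fitting example, follows.

      import numpy as np

      def gauss_newton(r, J, x0, iters=20, tol=1e-12):
          """Minimize 0.5*||r(x)||^2 with the Gauss-Newton Hessian
          approximation J^T J (second-derivative terms dropped)."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              Jx, rx = J(x), r(x)
              step = np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)
              x += step
              if np.linalg.norm(step) < tol:
                  break
          return x

      # Fit y = exp(a*t) to synthetic data: residual and Jacobian in a.
      t = np.linspace(0.0, 1.0, 5)
      y = np.exp(0.7 * t)
      r = lambda x: np.exp(x[0] * t) - y
      J = lambda x: (t * np.exp(x[0] * t))[:, None]
      print(gauss_newton(r, J, x0=[0.0]))    # -> approx [0.7]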

  4. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    Science.gov (United States)

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before being formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems such as the cell cycle, the circadian clock, etc., the choice of the normalization method may substantially impact whether a gene is determined to be rhythmic; rhythmicity of a gene can thus be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate it using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated, suggesting that the proposed measure is robust to the choice of normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset. Availability: A user-friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html

  5. Evaluation of four methods for separation of lymphocytes from normal individuals and patients with cancer and tuberculosis.

    Science.gov (United States)

    Patrick, C C; Graber, C D; Loadholt, C B

    1976-01-01

    An optimal technique was sought for lymphocyte recovery from normal and chronically diseased individuals. Lymphocytes were separated by four techniques: Plasmagel, Ficoll-Hypaque, a commercial semiautomatic method, and simple centrifugation, using blood drawn from ten normal individuals, ten cancer patients, and ten tuberculosis patients. The lymphocyte mixture obtained with each method was analyzed for percent recovery, amount of contamination by erythrocytes and neutrophils, and percent viability. The results show that the semiautomatic method yielded the best percent recovery of lymphocytes for normal individuals, while the simple centrifugation method gave the highest percent recovery for cancer and tuberculosis patients. The Ficoll-Hypaque method gave the lowest erythrocyte contamination for all three types of individuals tested, while the Plasmagel method gave the lowest neutrophil contamination. The simple centrifugation method yielded all viable lymphocytes and thus gave the highest percent viability.

  6. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios (GMPR), a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.

  7. Normal mode analysis of macromolecular systems with the mobile block Hessian method

    International Nuclear Information System (INIS)

    Ghysels, An; Van Speybroeck, Veronique; Van Neck, Dimitri; Waroquier, Michel; Brooks, Bernard R.

    2015-01-01

    Until recently, normal mode analysis (NMA) was limited to small proteins, not only because the required energy minimization is a computationally exhausting task, but also because NMA requires the expensive diagonalization of a 3Na × 3Na matrix, with Na the number of atoms. A series of simplified models has been proposed, in particular the Rotation-Translation Blocks (RTB) method of Tama et al. for the simulation of proteins. It makes use of the concept that a peptide chain or protein can be seen as a consecutive set of rigid components, i.e. the peptide units. A peptide chain is thus divided into rigid blocks with six degrees of freedom each. Recently we developed the Mobile Block Hessian (MBH) method, which in a sense has features similar to the RTB method. The main difference is that MBH was developed to deal with partially optimized systems. The position/orientation of each block is optimized while the internal geometry is kept fixed at a plausible, but not necessarily optimized, geometry. This reduces the computational cost of the energy minimization. Applying standard NMA to a partially optimized structure, however, results in spurious imaginary frequencies and unwanted coordinate dependence. The MBH avoids these unphysical effects by taking into account energy gradient corrections. Moreover, the number of variables is reduced, which facilitates the diagonalization of the Hessian. In the original implementation of MBH, atoms could only be part of one rigid block. The MBH is now extended to the case where atoms can be part of two or more blocks. Two basic linkages can be realized: (1) blocks connected by one link atom, or (2) blocks connected by two link atoms, where the latter is referred to as the hinge-type connection. In this work we present the MBH concept and illustrate its performance with the crambin protein as an example.

  8. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image.

    Science.gov (United States)

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images, both by using a volume-of-interest (VOI) template and by voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with a PET/CT scanner for the spatial normalization of [18F]-N-3-fluoropropyl-2-beta-carboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization methods in healthy controls and patients with Parkinson's disease (PD). We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of the MR images was done with a conventional spatial normalization tool (cvMR) and with the DARTEL toolbox (dtMR) in the statistical parametric mapping software. The CT images were modified in two ways, by skull-stripping (ssCT) and by intensity transformation (itCT). We normalized the PET images with the cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured the striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on the original PET images were also used as a gold standard for comparison. The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI. The CT-guided spatial normalization methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable.

  9. Composite materials and bodies including silicon carbide and titanium diboride and methods of forming same

    Science.gov (United States)

    Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek

    2013-01-22

    Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.

  10. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decreases in sequencing costs have made RNA-Seq a widely used technique for quantifying gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for selecting the most appropriate approach for future experiments. In this paper, we compared eight non-abundance normalization methods (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance-estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq data of 35- and 76-nucleotide sequences produced in the MAQC project, and on simulated reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and the MAQC qRT-PCR values for 996 genes. Based on this work, we showed that of the eight non-abundance normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of the 35-nucleotide sequences, RPKM showed the highest correlation, but for RNA-Seq of the 76-nucleotide sequences it showed lower correlation than the other methods; ERPKM did not improve on RPKM. Between the two abundance-estimation methods, for RNA-Seq of the 35-nucleotide sequences, higher correlation was obtained with Sailfish than with RSEM, which in turn was better than not using abundance estimation. However, for RNA-Seq of the 76-nucleotide sequences, the results achieved by RSEM were similar to those without abundance estimation, and were much better than with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ...
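
    Several of the compared non-abundance methods are simple per-sample scalings. For instance, upper-quartile (UQ) normalization divides each sample by the 75th percentile of its nonzero counts, as in this sketch (the actual tools differ in details such as rescaling to a common library size).

      import numpy as np

      def upper_quartile_normalize(counts):
          """counts: genes x samples matrix. Scale each sample by the
          75th percentile of its nonzero counts (UQ normalization)."""
          norm = counts.astype(float)
          for j in range(counts.shape[1]):
              nz = counts[counts[:, j] > 0, j]
              norm[:, j] /= np.percentile(nz, 75)
          return norm

      counts = np.array([[0, 10], [100, 200], [300, 900], [50, 40]])
      print(upper_quartile_normalize(counts))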

  11. Application of the moving frame method to deformed Willmore surfaces in space forms

    Science.gov (United States)

    Paragoda, Thanuja

    2018-06-01

    The main goal of this paper is to use the theory of exterior differential forms in deriving variations of the deformed Willmore energy in space forms and to study the minimizers of the deformed Willmore energy in space forms. We derive both the first- and second-order variations of the deformed Willmore energy in space forms explicitly using the moving frame method. We prove that the second-order variation of the deformed Willmore energy depends on the intrinsic Laplace-Beltrami operator, the sectional curvature and certain special operators, along with the mean and Gauss curvatures of the surface embedded in the space form, while the first-order variation depends on the extrinsic Laplace-Beltrami operator.
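
    For orientation, in Euclidean space the undeformed Willmore energy and its Euler-Lagrange equation take the classical form below; the deformed energy studied here adds a spontaneous-curvature offset c_0, and the space-form setting contributes the additional curvature terms derived in the paper.

      \[
        \mathcal{W}(\Sigma) = \int_\Sigma H^2 \, dA,
        \qquad
        \Delta H + 2H\,(H^2 - K) = 0,
      \]

    with the deformed (Helfrich-type) energy given by \(\int_\Sigma (H - c_0)^2 \, dA\).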

  12. Quantitative Analysis of Differential Proteome Expression in Bladder Cancer vs. Normal Bladder Cells Using SILAC Method.

    Directory of Open Access Journals (Sweden)

    Ganglong Yang

    The best way to increase the patient survival rate is to identify patients who are likely to progress to muscle-invasive or metastatic disease upfront and treat them more aggressively. The human cell lines HCV29 (normal bladder epithelia), KK47 (low-grade nonmuscle-invasive bladder cancer, NMIBC), and YTS1 (metastatic bladder cancer) have been widely used in studies of molecular mechanisms and cell signaling during bladder cancer (BC) progression. However, little attention has been paid to global quantitative proteome analysis of these three cell lines. We labeled HCV29, KK47, and YTS1 cells by the SILAC method using three stable isotopes each of arginine and lysine. Labeled proteins were analyzed by 2D ultrahigh-resolution liquid chromatography LTQ Orbitrap mass spectrometry. Among 3721 unique identified and annotated proteins in KK47 and YTS1 cells, 36 were significantly upregulated and 74 were significantly downregulated with >95% confidence. Differential expression of these proteins was confirmed by western blotting, quantitative RT-PCR, and cell staining with specific antibodies. Gene ontology (GO) term and pathway analysis indicated that the differentially regulated proteins were involved in DNA replication and molecular transport, cell growth and proliferation, cellular movement, immune cell trafficking, and cell death and survival. These proteins and the advanced proteome techniques described here will be useful for further elucidation of molecular mechanisms in BC and other types of cancer.

  13. NDT-Bobath method in normalization of muscle tone in post-stroke patients.

    Science.gov (United States)

    Mikołajewska, Emilia

    2012-01-01

    Ischaemic stroke is responsible for 80-85% of strokes. There is great interest in finding effective methods of rehabilitation for post-stroke patients. The aim of this study was to assess the results of rehabilitation aimed at normalizing upper limb muscle tone in patients, as estimated on the Ashworth Scale for Grading Spasticity. The examined group consisted of 60 patients after ischaemic stroke. Ten sessions of NDT-Bobath therapy were provided within 2 weeks (ten days of therapy). Patients were examined using the Ashworth Scale for Grading Spasticity twice: first on admission and again after the last session of therapy, to assess the effects of rehabilitation. Among the patients involved in the study, the results measured on the Ashworth Scale (where possible) were as follows: recovery in 16 cases (26.67%), relapse in 1 case (1.67%), and no measurable changes (or change within the same grade of the scale) in 8 cases (13.33%). Statistically significant changes were observed in the health status of the patients. These changes in muscle tone were favorable, as reflected in the outcomes of the assessment using the Ashworth Scale for Grading Spasticity.

  14. A design method for two-layer beams consisting of normal and fibered high strength concrete

    International Nuclear Information System (INIS)

    Iskhakov, I.; Ribakov, Y.

    2007-01-01

    Two-layer fibered concrete beams can be analyzed using conventional methods for composite elements. The compressed zone of such a beam section is made of high strength concrete (HSC), and the tensile zone of normal strength concrete (NSC). The problems related to this type of beam are revealed and studied, and an appropriate depth for each layer is prescribed. Compatibility conditions between the HSC and NSC layers are found, based on the equality of shear deformations at the layer border in the section with the maximal depth of the compression zone. For the first time a rigorous definition of HSC is given, using a comparative analysis of the deformability and strength characteristics of different concrete classes. According to this definition, HSC has no descending branch in the stress-strain diagram, the stress-strain function has a minimum exponent, the ductility parameter is minimal, and the concrete tensile strength remains constant with an increase in concrete compression strength. The fields of application of two-layer concrete beams, based on different static schemes and load conditions, are described. It is known that the main disadvantage of HSCs is their low ductility; to overcome this problem, fibers are added to the HSC layer. The influence of different fiber volume ratios on structural ductility is discussed, and an upper limit on the required fiber volume ratio is found based on the compatibility equation between the transverse tensile deformations of the concrete and those of the fibers.

  15. Effects of Combinations of Patternmaking Methods and Dress Forms on Garment Appearance

    Directory of Open Access Journals (Sweden)

    Fujii Chinami

    2017-09-01

    We investigated the effects of combinations of patternmaking methods and dress forms on the appearance of a garment. Six upper garments were made using three patternmaking methods, as used in France, Italy, and Japan, and two dress forms, made in Japan and France. The patterns and the appearances of the garments were compared using geometrical measurements. Sensory evaluations of the differences in garment appearance and fit on each dress form were also carried out. In the patterns, the positions of the bust and waist darts differed. The waist dart length, bust dart length, and position of the bust top differed depending on the patternmaking method, even when the same dress form was used. This was a result of differences in the measurements used and in the calculation methods employed for the other dimensions, because the ideal body shape is different for each patternmaking method. Even for garments produced for the same dress form, the appearances of the shoulder, bust, and waist from the front, side, and back views differed depending on the patternmaking method. The sensory evaluation also showed that the bust and waist shapes of the garments differed depending on the combination of patternmaking method and dress form. Therefore, to obtain a garment with a better appearance, it is necessary to understand the effects of combinations of patternmaking methods and body shapes.

  16. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
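
    The three estimators compared can be sketched in a few lines of generic statistics (not the authors' code); the data below are simulated, and the tolerance factor uses a standard normal-theory approximation rather than exact tabulated values.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = rng.normal(8.5, 1.2, size=60)     # simulated blood-chemistry values

      mean, sd, n = x.mean(), x.std(ddof=1), len(x)

      # Gaussian 95% reference range: mean +/- 1.96 SD.
      gauss = (mean - 1.96 * sd, mean + 1.96 * sd)

      # Nonparametric percentile estimate: central 95% of the sample.
      perc = tuple(np.percentile(x, [2.5, 97.5]))

      # Two-sided tolerance interval (95% coverage, 95% confidence),
      # with an approximate normal-theory k factor.
      z = stats.norm.ppf(0.975)
      k = z * np.sqrt((n - 1) * (1 + 1 / n) / stats.chi2.ppf(0.05, n - 1))
      tol = (mean - k * sd, mean + k * sd)

      print(gauss, perc, tol)   # the tolerance interval is the widest, as in the study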

  17. The pathophysiology of the aqueduct stroke volume in normal pressure hydrocephalus: can co-morbidity with other forms of dementia be excluded?

    Energy Technology Data Exchange (ETDEWEB)

    Bateman, Grant A. [John Hunter Hospital, Department of Medical Imaging, Newcastle (Australia); Levi, Christopher R.; Wang, Yang; Lovett, Elizabeth C. [Hunter Medical Research Institute, Clinical Neurosciences Program, Newcastle (Australia); Schofield, Peter [James Fletcher Hospital, Neuropsychiatry Unit, Newcastle (Australia)

    2005-10-01

    Variable results are obtained from the treatment of normal pressure hydrocephalus (NPH) by shunt insertion. There is a high correlation between NPH and the pathology of Alzheimer's disease (AD) on brain biopsy. There is an overlap between AD and vascular dementia (VaD), suggesting that a correlation exists between NPH and other forms of dementia. This study seeks to (1) understand the physiological factors behind, and (2) define the ability of, the aqueduct stroke volume to exclude dementia co-morbidity. Twenty-four patients from a dementia clinic were classified as having either early AD or VaD on the basis of clinical features, Hachinski score and neuropsychological testing. They were compared with 16 subjects with classical clinical findings of NPH and 12 age-matched, non-cognitively impaired subjects. MRI flow quantification was used to measure aqueduct stroke volume and arterial pulse volume. An arterio-cerebral compliance ratio was calculated from the two volumes in each patient. The aqueduct stroke volume was elevated in all three forms of dementia, with no significant difference noted between the groups. The arterial pulse volume was elevated by 24% in VaD and reduced by 35% in NPH compared to normal (P=0.05 and P=0.002, respectively), and was normal in AD. There was a spectrum of relative compliance, with normal compliance in VaD and reduced compliance in AD and NPH. The aqueduct stroke volume depends on the arterial pulse volume and the relative compliance between the arterial tree and the brain. The aqueduct stroke volume cannot exclude significant co-morbidity in NPH. (orig.)

  18. The pathophysiology of the aqueduct stroke volume in normal pressure hydrocephalus: can co-morbidity with other forms of dementia be excluded?

    International Nuclear Information System (INIS)

    Bateman, Grant A.; Levi, Christopher R.; Wang, Yang; Lovett, Elizabeth C.; Schofield, Peter

    2005-01-01

    Variable results are obtained from the treatment of normal pressure hydrocephalus (NPH) by shunt insertion. There is a high correlation between NPH and the pathology of Alzheimer's disease (AD) on brain biopsy. There is an overlap between AD and vascular dementia (VaD), suggesting that a correlation exists between NPH and other forms of dementia. This study seeks to (1) understand the physiological factors behind, and (2) define the ability of, the aqueduct stroke volume to exclude dementia co-morbidity. Twenty-four patients from a dementia clinic were classified as having either early AD or VaD on the basis of clinical features, Hachinski score and neuropsychological testing. They were compared with 16 subjects with classical clinical findings of NPH and 12 age-matched, non-cognitively impaired subjects. MRI flow quantification was used to measure aqueduct stroke volume and arterial pulse volume. An arterio-cerebral compliance ratio was calculated from the two volumes in each patient. The aqueduct stroke volume was elevated in all three forms of dementia, with no significant difference noted between the groups. The arterial pulse volume was elevated by 24% in VaD and reduced by 35% in NPH compared to normal (P=0.05 and P=0.002, respectively), and was normal in AD. There was a spectrum of relative compliance, with normal compliance in VaD and reduced compliance in AD and NPH. The aqueduct stroke volume depends on the arterial pulse volume and the relative compliance between the arterial tree and the brain. The aqueduct stroke volume cannot exclude significant co-morbidity in NPH. (orig.)

  19. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for the heterogeneous clinico-epidemiologic characteristics of the data, and call this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of the distributions identical under population-structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods on simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In the simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, performing better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate the analysis of data integrated across DRN partners for retrospective observational studies.
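
    Our schematic reading of the method, matching each subgroup's mean and standard deviation to those of a reference population, can be sketched as a per-subgroup z-score followed by rescaling; the column names and grouping keys below are hypothetical, not taken from the paper.

      import pandas as pd

      def san_normalize(df, ref, value="result", keys=("age_group", "sex")):
          """Rescale each subgroup of df so the mean and SD of `value`
          match the corresponding subgroup of the reference dataset `ref`
          (schematic; every subgroup of df must also exist in ref)."""
          out = df.copy()
          ref_stats = ref.groupby(list(keys))[value].agg(["mean", "std"])
          for key, grp in df.groupby(list(keys)):
              m_ref, s_ref = ref_stats.loc[key]
              z = (grp[value] - grp[value].mean()) / grp[value].std()
              out.loc[grp.index, value] = z * s_ref + m_ref
          return out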

  20. Analysis of the nonlinear dynamic behavior of power systems using normal forms of superior order; Analisis del comportamiento dinamico no lineal de sistemas de potencia usando formas normales de orden superior

    Energy Technology Data Exchange (ETDEWEB)

    Marinez Carrillo, Irma

    2003-08-01

    This thesis investigates the application of perturbation methods from nonlinear dynamic systems theory to the small-signal stability analysis of electric power systems. The work centers on two fundamental aspects of the nonlinear dynamic behavior of the system: the characterization and quantification of the degree of nonlinear interaction between the fundamental modes of oscillation of the system, and the study of the modes with the greatest influence on the response of the system to small disturbances. With these objectives, a general mathematical model, based on the series expansion of the nonlinear power system model and the theory of normal forms of vector fields, is proposed for the study of the dynamic behavior of the power system. The proposed tool generalizes existing methods in the literature to account for higher-order effects in the dynamic model of the power system. Starting from this representation, a methodology is proposed for obtaining closed-form analytical solutions, and the extension of existing methods to identify and quantify the degree of interaction among the fundamental modes of oscillation of the system is investigated. The developed tool allows, from closed-form analytical solutions, the development of analytical measures to evaluate the degree of stress in the system, the interaction between the fundamental modes of oscillation, and the determination of stability boundaries. The conceptual development of the method proposed in this thesis offers, moreover, great flexibility to incorporate detailed models of the power system and to evaluate diverse measures of nonlinear modal interaction. Finally, results are presented from the application of the proposed analysis method to the study of nonlinear dynamic behavior in a single machine-infinite bus system considering different degrees of modeling detail.
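
    For orientation, in second-order normal form analysis of power systems the modal equations \(\dot{y}_j = \lambda_j y_j + \sum_{k,l} C_{jkl}\, y_k y_l\) are simplified by a near-identity transformation that is standard in this literature:

      \[
        y_j = z_j + \sum_{k,l} h2_{jkl}\, z_k z_l,
        \qquad
        h2_{jkl} = \frac{C_{jkl}}{\lambda_k + \lambda_l - \lambda_j},
      \]

    which is valid away from the resonance condition \(\lambda_k + \lambda_l = \lambda_j\); the magnitudes of the h2 coefficients quantify the degree of second-order modal interaction.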

  1. Form gene clustering method about pan-ethnic-group products based on emotional semantic

    Science.gov (United States)

    Chen, Dengkai; Ding, Jingjing; Gao, Minzhuo; Ma, Danping; Liu, Donghui

    2016-09-01

    The use of form knowledge for pan-ethnic-group products primarily depends on a designer's subjective experience, without user participation. The majority of studies focus primarily on detecting the perceptual demands of consumers for the target product category. Here, a form gene clustering method for pan-ethnic-group products based on emotional semantics is constructed. Consumers' perceptual images of pan-ethnic-group products are obtained by means of product form gene extraction and coding together with computer-aided product form clustering technology. A case study of form gene clustering for typical pan-ethnic-group products indicates that the method is feasible. This paper opens up a new direction for the future development of product form design and improves the agility of the product design process in the era of Industry 4.0.

  2. Effects of Different LiDAR Intensity Normalization Methods on Scotch Pine Forest Leaf Area Index Estimation

    Directory of Open Access Journals (Sweden)

    YOU Haotian

    2018-02-01

    The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process, and effective quantification and normalization of these effects is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR data were normalized for range, for angle of incidence, and for both range and angle of incidence, based on the radar equation. Two metrics, the canopy intensity sum and the ratio of intensity, were then extracted and used to estimate forest leaf area index (LAI), with the aim of quantifying the effects of intensity normalization on forest LAI estimation. It was found that range normalization of the intensity could improve the accuracy of forest LAI estimation, while normalization for the angle of incidence did not improve the accuracy and made the results worse. Although normalizing the intensity for both range and incidence angle improved the accuracy, the improvement was smaller than that from range normalization alone. Meanwhile, the differences between the forest LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metric but relatively small for the ratio-of-intensity metric. The results demonstrate that the effects of intensity normalization on forest LAI estimation depend on the choice of affecting factor, and that the magnitude of the effect is closely related to the characteristics of the metrics used. Therefore, the appropriate intensity normalization method should be chosen according to the characteristics of the metrics used in future research, to avoid wasted cost and reduced estimation accuracy caused by introducing inappropriate affecting factors into the intensity normalization.
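
    A common form of radar-equation-based normalization rescales each return to a reference range and to normal incidence, as in the sketch below; the squared range dependence and the cosine incidence model are assumptions of this sketch, since the paper evaluates several variants.

      import numpy as np

      def normalize_intensity(i_raw, r, theta, r_ref=1000.0):
          """Range and incidence-angle normalization of LiDAR intensity:
          i_norm = i_raw * (r / r_ref)**2 / cos(theta), theta in radians."""
          return i_raw * (r / r_ref) ** 2 / np.cos(theta)

      i = np.array([80.0, 95.0, 60.0])           # raw intensities
      r = np.array([950.0, 1020.0, 1100.0])      # ranges in meters
      theta = np.radians([5.0, 12.0, 20.0])      # incidence angles
      print(normalize_intensity(i, r, theta))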

  3. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP
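
    Two building blocks of the pipeline described above, sketched under stated simplifications: binary logistic regression via statsmodels stands in for the paper's ordinal regression, the bootstrap/genetic-algorithm loop around these pieces is omitted, and all names are ours, not the authors':

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_collinear(X: pd.DataFrame, vif_threshold: float = 5.0) -> list:
    """Iteratively drop the predictor with the largest variance inflation factor
    until all remaining VIFs fall below the threshold (multicollinearity reduction)."""
    cols = list(X.columns)
    while len(cols) > 1:
        vifs = [variance_inflation_factor(X[cols].values, i) for i in range(len(cols))]
        worst = int(np.argmax(vifs))
        if vifs[worst] <= vif_threshold:
            break
        cols.pop(worst)
    return cols

def bic_of_subset(X: pd.DataFrame, y, subset) -> float:
    """BIC of a logistic model on a candidate predictor subset; the paper's genetic
    algorithm searches for the subset minimizing this criterion."""
    fit = sm.Logit(y, sm.add_constant(X[list(subset)])).fit(disp=0)
    return fit.bic
```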

  4. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses. Review

    Energy Technology Data Exchange (ETDEWEB)

    Skrzypek, G. [West Australian Biogeochemistry Centre, John de Laeter Centre of Mass Spectrometry, School of Plant Biology, University of Western Australia, Crawley (Australia); Sadler, R. [School of Agricultural and Resource Economics, University of Western Australia, Crawley (Australia); Paul, D. [Department of Civil Engineering (Geosciences), Indian Institute of Technology Kanpur, Kanpur (India); Forizs, I. [Institute for Geochemical Research, Hungarian Academy of Sciences, Budapest (Hungary)

    2013-07-15

    Stable isotope ratio mass spectrometers are highly precise, but not accurate instruments. Therefore, results have to be normalized to one of the isotope scales (e.g., VSMOW, VPDB) based on well-calibrated reference materials. The selection of reference materials, numbers of replicates, δ-values of these reference materials and normalization technique have been identified as crucial in determining the uncertainty associated with the final results. The most common normalization techniques and reference materials have been tested using both Monte Carlo simulations and laboratory experiments to investigate aspects of error propagation during the normalization of isotope data. The range of observed differences justifies the need to employ the same sets of standards worldwide for each element and each stable isotope analytical technique. (author)
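
    A hedged sketch of the most common of these normalization techniques, two-point (stretch-and-shift) linear normalization against calibrated reference materials; the example reference values follow the accepted VSMOW2/SLAP2 δ18O assignments, while the measured values and function names are illustrative:

```python
import numpy as np

def linear_normalization(delta_measured, ref_measured, ref_true):
    """Normalize raw delta-values to an isotope scale (e.g., VSMOW) by fitting a
    line between measured and accepted delta-values of reference materials."""
    slope, intercept = np.polyfit(ref_measured, ref_true, 1)
    return slope * np.asarray(delta_measured) + intercept

# e.g., normalizing raw d18O values with VSMOW2 (0.0 permil) and SLAP2 (-55.5 permil),
# assuming illustrative measured values of 0.3 and -54.1 permil:
# normalized = linear_normalization(raw, ref_measured=[0.3, -54.1], ref_true=[0.0, -55.5])
```

    With more than two reference materials, the same least-squares fit also yields an estimate of the normalization uncertainty, which is the error-propagation aspect the review examines.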

  5. Methods to evaluate normal rainfall for short-term wetland hydrology assessment

    Science.gov (United States)

    Jaclyn Sumner; Michael J. Vepraskas; Randall K. Kolka

    2009-01-01

    Identifying sites meeting wetland hydrology requirements is simple when long-term (>10 years) records are available. Because such data are rare, we hypothesized that a single-year of hydrology data could be used to reach the same conclusion as with long-term data, if the data were obtained during a period of normal or below normal rainfall. Long-term (40-45 years)...

  6. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization ΔvapH(Tb) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of ΔvapH(Tb) is 1.16, which shows that the present method demonstrates a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.

  7. Neutron absorbers and methods of forming at least a portion of a neutron absorber

    Energy Technology Data Exchange (ETDEWEB)

    Guillen, Donna P; Porter, Douglas L; Swank, W David; Erickson, Arnold W

    2014-12-02

    Methods of forming at least a portion of a neutron absorber include combining a first material and a second material to form a compound, reducing the compound into a plurality of particles, mixing the plurality of particles with a third material, and pressing the mixture of the plurality of particles and the third material. One or more components of neutron absorbers may be formed by such methods. Neutron absorbers may include a composite material including an intermetallic compound comprising hafnium aluminide and a matrix material comprising pure aluminum.

  8. Materials interactions test methods to measure radionuclide release from waste forms under repository-relevant conditions

    International Nuclear Information System (INIS)

    Strickert, R.G.; Erikson, R.L.; Shade, J.W.

    1984-10-01

    At the request of the Basalt Waste Isolation Project, the Materials Characterization Center has collected and developed a set of procedures into a waste form compliance test method (MCC-14.4). The purpose of the test is to measure the steady-state concentrations of specified radionuclides in solutions contacting a waste form material. The test method uses a crushed waste form and basalt material suspended in a synthetic basalt groundwater and agitated for up to three months at 150 °C under anoxic conditions. Elemental and radioisotopic analyses are made on filtered and unfiltered aliquots of the solution. Replicate experiments are performed and simultaneous tests are conducted with an approved test material (ATM) to help ensure precise and reliable data for the actual waste form material. Various features of the test method, equipment, and test conditions are reviewed. Experimental testing using actinide-doped borosilicate glasses is also discussed. 9 references, 2 tables

  9. Method for forming permanent magnets with different polarities for use in microelectromechanical devices

    Science.gov (United States)

    Roesler, Alexander W [Tijeras, NM; Christenson, Todd R [Albuquerque, NM

    2007-04-24

    Methods are provided for forming a plurality of permanent magnets with two different north-south magnetic pole alignments for use in microelectromechanical (MEM) devices. These methods are based on initially magnetizing the permanent magnets all in the same direction, and then utilizing a combination of heating and a magnetic field to switch the polarity of a portion of the permanent magnets while not switching the remaining permanent magnets. The permanent magnets, in some instances, can all have the same rare-earth composition (e.g. NdFeB) or can be formed of two different rare-earth materials (e.g. NdFeB and SmCo). The methods can be used to form a plurality of permanent magnets side-by-side on or within a substrate with an alternating polarity, or to form a two-dimensional array of permanent magnets in which the polarity of every other row of the array is alternated.

  10. Platinum catalyst formed on carbon nanotube by the in-liquid plasma method for fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Show, Yoshiyuki; Hirai, Akira; Almowarai, Anas; Ueno, Yutaro

    2015-12-01

    In-liquid plasma was generated in a carbon nanotube (CNT) dispersion fluid using platinum electrodes. The generated plasma sputtered the surface of the platinum electrodes and dispersed platinum particles into the CNT dispersion. Platinum nanoparticles were thereby successfully formed on the CNT surface in the dispersion. The platinum nanoparticles were applied to a proton exchange membrane fuel cell (PEMFC) as a catalyst. An electrical power of 108 mW/cm² was observed from the fuel cell assembled with the platinum catalyst formed on the CNT by the in-liquid plasma method. - Highlights: • The platinum catalyst was successfully formed on the CNT surface in the dispersion by the in-liquid plasma method. • An electrical power of 108 mW/cm² was observed from the fuel cell assembled with the platinum catalyst formed on the CNT by the in-liquid plasma method.

  11. Accelerated in-vitro release testing methods for extended-release parenteral dosage forms.

    Science.gov (United States)

    Shen, Jie; Burgess, Diane J

    2012-07-01

    This review highlights current methods and strategies for accelerated in-vitro drug release testing of extended-release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in-situ depot-forming systems and implants. Extended-release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, 'real-time' in-vitro release tests for these dosage forms are often run over a long time period. Accelerated in-vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in-vitro release methods using United States Pharmacopeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended-release parenteral dosage forms, along with the accelerated in-vitro release testing methods currently employed are discussed. Accelerated in-vitro release testing methods with good discriminatory ability are critical for quality control of extended-release parenteral products. Methods that can be used in the development of in-vitro-in-vivo correlation (IVIVC) are desirable; however, for complex parenteral products this may not always be achievable. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.

  12. Accelerated in vitro release testing methods for extended release parenteral dosage forms

    Science.gov (United States)

    Shen, Jie; Burgess, Diane J.

    2012-01-01

    Objectives This review highlights current methods and strategies for accelerated in vitro drug release testing of extended release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in situ depot-forming systems, and implants. Key findings Extended release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, “real-time” in vitro release tests for these dosage forms are often run over a long time period. Accelerated in vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in vitro release methods using United States Pharmacopoeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended release parenteral dosage forms, along with the accelerated in vitro release testing methods currently employed are discussed. Conclusions Accelerated in vitro release testing methods with good discriminatory ability are critical for quality control of extended release parenteral products. Methods that can be used in the development of in vitro-in vivo correlation (IVIVC) are desirable, however for complex parenteral products this may not always be achievable. PMID:22686344

  13. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady state) should be estimated from a set of previous samples but, in practice, decisions based on the reference change value are often based on only two consecutive results. The original reference change value … false-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of the reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed best in approaching the theoretical false-positive percentages on normally distributed data, and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of an estimated set point) performed worst on both normally…
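
    For orientation, the classical reference change value and one common ln-normal variant can be sketched as follows; the z-value, the CV symbols, and the approximation in the ln-normal branch are our assumptions about the standard formulas, not the paper's five methods themselves:

```python
import numpy as np

def rcv_classic(cv_analytical, cv_within, z=1.96):
    """Classical symmetric reference change value (as a fraction of the baseline).
    cv_analytical, cv_within: analytical and within-subject CVs as fractions."""
    return z * np.sqrt(2.0) * np.sqrt(cv_analytical**2 + cv_within**2)

def rcv_lognormal(cv_analytical, cv_within, z=1.96):
    """Asymmetric RCV limits assuming ln-normally distributed results;
    returns (lower, upper) fractional change limits."""
    sigma = np.sqrt(np.log(cv_analytical**2 + cv_within**2 + 1.0))
    lower = np.exp(-z * np.sqrt(2.0) * sigma) - 1.0
    upper = np.exp(z * np.sqrt(2.0) * sigma) - 1.0
    return lower, upper

# e.g., rcv_classic(0.03, 0.06) ~= 0.186: a change between two consecutive
# results larger than ~19% would be flagged as significant.
```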

  14. The preparation method of solid boron solution in silicon carbide in the form of micro powder

    International Nuclear Information System (INIS)

    Pampuch, R.; Stobierski, L.; Lis, J.; Bialoskorski, J.; Ermer, E.

    1993-01-01

    A method for the preparation of a solid solution of boron in silicon carbide in the form of a micro powder has been worked out. The method consists of introducing a mixture of boron, carbon and silicon and heating it in an inert gas atmosphere to 1573 K.

  15. Method of forming a nanocluster comprising dielectric layer and device comprising such a layer

    NARCIS (Netherlands)

    2009-01-01

    A method of forming a dielectric layer (330) on a further layer (114, 320) of a semiconductor device (300) is disclosed. The method comprises depositing a dielectric precursor compound and a further precursor compound over the further layer (114, 320), the dielectric precursor compound comprising a

  16. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plate of ships. However, the processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of the automated local line rolling forming system for producing complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and this could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for the typical complex curvature plate of ships. Research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  17. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in the statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived by combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in the case of homogeneous or partial brain irradiation, has been considered. For the same total- or partial-volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction sizes of 1.5 Gy and 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and of the α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
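
    A compact sketch of the two ingredients named in the abstract, the LQ-based normalized total dose and the Lyman model with a volume-dependent tolerance dose; the parameter values in the usage comment are illustrative, and the exact Kutcher histogram-reduction step is omitted:

```python
import numpy as np
from scipy.stats import norm

def ntd(total_dose, dose_per_fraction, alpha_beta, ref_fraction=2.0):
    """Normalized total dose: the biologically iso-effective dose delivered in
    reference fractions of `ref_fraction` Gy (linear-quadratic model)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (ref_fraction + alpha_beta)

def lyman_ntcp(dose, volume_fraction, td50, m, n):
    """Lyman normal tissue complication probability with the power-law volume
    dependence TD50(v) = TD50(1) / v**n (illustrative parameterization)."""
    td50_v = td50 * volume_fraction ** (-n)
    t = (dose - td50_v) / (m * td50_v)
    return norm.cdf(t)

# e.g., whole-organ NTCP for 60 Gy in 3-Gy fractions with alpha/beta = 1.6 Gy:
# lyman_ntcp(ntd(60.0, 3.0, 1.6), 1.0, td50=50.0, m=0.15, n=0.1)
```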

  18. Investigating the Effect of Normalization Norms in Flexible Manufacturing System Selection Using Multi-Criteria Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2014-07-01

    Full Text Available The main objective of this paper is to assess the effect of different normalization norms within multi-criteria decision-making (MCDM) models. Three well-accepted MCDM tools, namely the preference ranking organization method for enrichment evaluation (PROMETHEE), grey relational analysis (GRA) and the technique for order preference by similarity to ideal solution (TOPSIS), are applied to solve a flexible manufacturing system (FMS) selection problem in a discrete manufacturing environment. Finally, by introducing different normalization norms into the decision algorithms, their effect on the FMS selection problem using these MCDM models is also studied.
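
    The normalization norms being compared can be made concrete with a small sketch; the three norms below (vector, linear-max, and min-max) are common choices in this literature, though the paper's exact set of norms is not restated here:

```python
import numpy as np

def normalize(X, norm="vector"):
    """Normalize an (alternatives x criteria) decision matrix of benefit criteria
    with one of several common MCDM normalization norms."""
    X = np.asarray(X, dtype=float)
    if norm == "vector":        # used in classical TOPSIS
        return X / np.linalg.norm(X, axis=0)
    if norm == "linear-max":    # r_ij = x_ij / max_i(x_ij)
        return X / X.max(axis=0)
    if norm == "minmax":        # r_ij = (x_ij - min) / (max - min)
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    raise ValueError(f"unknown norm: {norm}")

# Feeding each normalized matrix to the same TOPSIS/GRA/PROMETHEE ranking step
# then isolates the effect of the norm on the final FMS ranking.
```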

  19. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, Dina

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index, FI) were measured in 40 pairs of normal autopsy testes aged from 28 weeks of gestation to 40 years. S/T and FI showed similar changes during the whole…

  20. Fraud adversely affecting the budget of the European Union: the forms, methods and causes

    Directory of Open Access Journals (Sweden)

    Zlata Đurđević

    2006-09-01

    Full Text Available The paper analyses the forms, methods and causes of fraud perpetrated to the detriment of the budget of the European Union. The forms in which EU fraud appears are classified according to the kind of budgetary resource. Crime affecting the budgetary revenue of the EU tends to appear in the form of customs-duty evasion and false declarations of customs-relevant information about goods. Crime affecting the expenditure side of the EU budget appears in the form of subsidy fraud in the area of the Common Agricultural Policy and subsidy fraud in the area of the structural policies. The methods of EU fraud considered in the paper are document forgery, concealment of goods, corruption, violence, fictitious business and evasion of the law. In conclusion, an explanation is given of the main exogenous criminogenic factors that lead to the most commonly perpetrated EU frauds.

  1. Method of forming components for a high-temperature secondary electrochemical cell

    Science.gov (United States)

    Mrazek, Franklin C.; Battles, James E.

    1983-01-01

    A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum, and an electrically insulating porous separator between said electrodes. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing same. Electrodes and separators can thus be formed.

  2. An analysis of normalization methods for Drosophila RNAi genomic screens and development of a robust validation scheme

    Science.gov (United States)

    Wiles, Amy M.; Ravi, Dashnamoorthy; Bhavani, Selvaraj; Bishop, Alexander J.R.

    2010-01-01

    Genome-wide RNAi screening is a powerful, yet relatively immature technology that allows investigation into the role of individual genes in a process of choice. Most RNAi screens identify a large number of genes with a continuous gradient in the assessed phenotype. Screeners must then decide whether to examine just those genes with the most robust phenotype or to examine the full gradient of genes that cause an effect and how to identify the candidate genes to be validated. We have used RNAi in Drosophila cells to examine viability in a 384-well plate format and compare two screens, untreated control and treatment. We compare multiple normalization methods, which take advantage of different features within the data, including quantile normalization, background subtraction, scaling, cellHTS2, and interquartile range measurement. Considering the false-positive potential that arises from RNAi technology, a robust validation method was designed for the purpose of gene selection for future investigations. In a retrospective analysis, we describe the use of validation data to evaluate each normalization method. While no normalization method worked ideally, we found that a combination of two methods, background subtraction followed by quantile normalization and cellHTS2, at different thresholds, captures the most dependable and diverse candidate genes. Thresholds are suggested depending on whether a few candidate genes are desired or a more extensive systems level analysis is sought. In summary, our normalization approaches and experimental design to perform validation experiments are likely to apply to those high-throughput screening systems attempting to identify genes for systems level analysis. PMID:18753689
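
    Two of the normalization steps compared above, sketched in a plate-oriented form; the array shapes and names are our assumptions (wells in rows, plates/replicates in columns), and tie handling is deliberately naive:

```python
import numpy as np

def background_subtract(X, background):
    """Plate-wise background subtraction prior to further normalization."""
    return np.asarray(X, dtype=float) - background

def quantile_normalize(X):
    """Quantile normalization across plates: every plate is forced onto the
    same empirical distribution (the mean distribution over plates)."""
    X = np.asarray(X, dtype=float)
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank of each well within its plate
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)    # average value at each rank
    return mean_quantiles[ranks]                        # replace values by rank means
```

    The combination the authors favor would then correspond to `quantile_normalize(background_subtract(X, bg))`, with the candidate-selection thresholds applied downstream.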

  3. INNOVATIVE FORMS SUPPORTING SAFE METHODS OF WORK IN SAFETY ENGINEERING FOR THE DEVELOPMENT OF INTELLIGENT SPECIALIZATIONS

    Directory of Open Access Journals (Sweden)

    Anna GEMBALSKA-KWIECIEŃ

    2016-10-01

    Full Text Available The article discusses innovative forms of participation of employees in the work safety system. It also presents the advantages of these forms of employees’ involvement. The aim of empirical studies was the analysis of their behavior and attitude towards health and safety at work. The issues considered in the article have a significant impact on the improvement of methods of prevention related to work safety and aided the creation of a healthy society.

  4. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K.L.

    2011-01-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  5. A feasibility study in adapting Shamos Bickel and Hodges Lehmann estimator into T-Method for normalization

    Science.gov (United States)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. The user of the T-Method is required to clearly understand the population data trend, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and the entire set of classical methods breaks down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers, as well as when the data are free of them. Among them are the robust estimates of location and scale called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which are used as counterparts to the mean and standard deviation of classical statistics. Embedding these into the T-Method normalization stage can feasibly help in enhancing the accuracy of the T-Method as well as in analyzing the robustness of the T-Method itself. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with a minimal error difference compared to the T-Method. The error-percentage prediction trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers always pose a low risk, the T-Method performs much better, while for a higher sample size with extreme outliers the T-Method likewise shows better prediction compared to the others. For the case studies conducted in this research, normalization using the T-Method shows satisfactory results, and it is not feasible to adapt HL and SB, or the normal mean and standard deviation, into it, since they provide only a minimal change in percentage error. Normalization using the T-Method is still considered to carry a lower risk toward outlier effects.
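
    The two robust estimators named above are simple to state; a sketch with the usual definitions (Walsh-average median for location, scaled pairwise-difference median for scale), where the 1.048 consistency constant makes the Shamos estimate comparable to a Gaussian standard deviation:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

def hodges_lehmann(x):
    """Hodges-Lehmann location: median of all Walsh averages (x_i + x_j)/2, i <= j."""
    x = np.asarray(x, dtype=float)
    walsh = [(a + b) / 2.0 for a, b in combinations_with_replacement(x, 2)]
    return np.median(walsh)

def shamos(x):
    """Shamos scale: rescaled median of pairwise absolute differences, i < j."""
    x = np.asarray(x, dtype=float)
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return 1.0483 * np.median(diffs)

# Drop-in replacements for the classical mean and standard deviation in the
# T-Method normalization step, e.g. z = (x - hodges_lehmann(x)) / shamos(x).
```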

  6. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    Science.gov (United States)

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates were developed. Materials and Methods: These methods are indirect, involve the addition of excess oxidant potassium permanganate for method A; cerric sulfate [Ce(SO4)2] for methods B; and N-bromosuccinimide (NBS) for method C of known concentration in acid medium to finasteride, and the determination of the unreacted oxidant by measurement of the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at a suitable maximum wavelength, λmax: 663, 528, and 520 nm, for the three methods, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, and 0.12–3.28 μg mL–1 for method B and 0.14 – 3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, detection and quantification limits were evaluated. The stoichiometric ratio between the finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug with varying excess of its oxidative degradation products, with recovery between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478

  7. Standard test method for splitting tensile strength for brittle nuclear waste forms

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1989-01-01

    1.1 This test method is used to measure the static splitting tensile strength of cylindrical specimens of brittle nuclear waste forms. It provides splitting tensile-strength data that can be used to compare the strength of waste forms when tests are done on one size of specimen. 1.2 The test method is applicable to glass, ceramic, and concrete waste forms that are sufficiently homogeneous (Note 1) but not to coated-particle, metal-matrix, bituminous, or plastic waste forms, or concretes with large-scale heterogeneities. Cementitious waste forms with heterogeneities >1 to 2 mm and 5 mm can be tested using this procedure provided the specimen size is increased from the reference size of 12.7 mm diameter by 6 mm length, to 51 mm diameter by 100 mm length, as recommended in Test Method C 496 and Practice C 192. Note 1—Generally, the specimen structural or microstructural heterogeneities must be less than about one-tenth the diameter of the specimen. 1.3 This test method can be used as a quality control chec...

  8. Derivative spectrophotometric method for simultaneous determination of clindamycin phosphate and tretinoin in pharmaceutical dosage forms.

    Science.gov (United States)

    Barazandeh Tehrani, Maliheh; Namadchian, Melika; Fadaye Vatan, Sedigheh; Souri, Effat

    2013-04-10

    A derivative spectrophotometric method is proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first and second derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and of tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both first and second derivative orders in the ranges of 60-1200 and 1.25-25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were in an acceptable range (CV…), and the method was successfully applied to the pharmaceutical dosage form.

  9. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    Science.gov (United States)

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

    The present paper is aimed at studying the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaos theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for 1-2-year-old infants was 1.86-1.90 (mean = 1.8863 +/- 0.0085); the quantitative value remained invariant within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference between age groups was significant (F = 8.947, P < 0.05) … development.
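
    Fractal dimensions of this kind are typically estimated by box counting; a generic sketch on a binary 2-D image (the paper's exact fractal-spectrum procedure is not reproduced here, and the threshold is an assumption):

```python
import numpy as np

def box_counting_dimension(image, threshold=0.5):
    """Estimate the box-counting (fractal) dimension of a binary 2-D image."""
    Z = np.asarray(image) > threshold
    n = 2 ** int(np.floor(np.log2(min(Z.shape))))  # crop to a power-of-two square
    Z = Z[:n, :n]
    sizes, counts = [], []
    size = n
    while size >= 2:
        # count boxes of side `size` containing at least one foreground pixel
        boxes = Z.reshape(n // size, size, n // size, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(boxes.sum())
        size //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # dimension = -d log(count) / d log(size)
```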

  10. Indomethacin nanocrystals prepared by different laboratory scale methods: effect on crystalline form and dissolution behavior

    Energy Technology Data Exchange (ETDEWEB)

    Martena, Valentina; Censi, Roberta [University of Camerino, School of Pharmacy (Italy); Hoti, Ela; Malaj, Ledjan [University of Tirana, Department of Pharmacy (Albania); Di Martino, Piera, E-mail: piera.dimartino@unicam.it [University of Camerino, School of Pharmacy (Italy)

    2012-12-15

    The objective of this study is to select very simple and well-known laboratory-scale methods able to reduce the particle size of indomethacin to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (native crystals are in the γ form) (IDM) were obtained by three laboratory-scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and a different tendency toward crystallinity decrease was observed, with a more considerable decrease in crystalline degree for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) revealed a higher dissolution rate for Batches A and C-40-N, due to the higher IDR of the α form than the γ form for Batch A, and the lower crystallinity degree for both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications in the solid physical state that may occur using different particle-size reduction treatments have to be taken into consideration during the scale-up and industrial development of new solid dosage forms.

  11. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using the discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger's contact model. Different ways of taking into account the normal and shear material properties of asphalt mixture…

  12. Development and application of the analytical energy gradient for the normalized elimination of the small component method

    NARCIS (Netherlands)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2011-01-01

    The analytical energy gradient of the normalized elimination of the small component (NESC) method is derived for the first time and implemented for the routine calculation of NESC geometries and other first order molecular properties. Essential for the derivation is the correct calculation of the

  13. New spectrofluorimetric method for the determination of nizatidine in bulk form and in pharmaceutical preparations

    Science.gov (United States)

    Karasakal, Ayça; Ulu, Sevgi Tatar

    2013-08-01

    A simple, accurate and highly sensitive spectrofluorimetric method has been developed for the determination of nizatidine in pure form and in pharmaceutical dosage forms. The method is based on the reaction between nizatidine and 1-dimethylaminonaphthalene-5-sulphonyl chloride in carbonate buffer, pH 10.5, to yield a highly fluorescent derivative peaking at 513 nm after excitation at 367 nm. Various factors affecting the fluorescence intensity of the nizatidine-dansyl derivative were studied and conditions were optimized. The method was validated as per ICH guidelines. The fluorescence-concentration plot was rectilinear over the range of 25-300 ng/mL. The limit of detection and limit of quantification were calculated as 11.71 and 35.73 ng/mL, respectively. The proposed method was successfully applied to pharmaceutical preparations.

  14. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Qu, H; Yu, N; Stephans, K; Xia, P [Cleveland Clinic, Cleveland, OH (United States)

    2014-06-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations from two phases of quiet breathing and two phases of extreme breathing. For quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction on a coronal image with the maximum lung cross-section. The ratio of cumulative ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns and thus different tidal volumes can be removed.
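
    A sketch of the regional normalization idea, assuming slice-first (z, y, x) arrays and an even three-way split of the lung along the longitudinal axis; the names and conventions are illustrative, not the authors':

```python
import numpy as np

def regional_normalized_ventilation(vent_map, lung_mask):
    """Normalize a ventilation map by the cumulative ventilation of the middle
    third of the lung along the longitudinal (slice) axis."""
    z_idx = np.where(lung_mask.any(axis=(1, 2)))[0]   # slices containing lung
    thirds = np.array_split(z_idx, 3)                 # top / middle / bottom thirds
    middle = np.zeros_like(lung_mask, dtype=bool)
    middle[thirds[1]] = lung_mask[thirds[1]]
    return vent_map / vent_map[middle].sum()

# The study's top-to-middle cumulative ventilation ratio is then
# vent_map[top].sum() / vent_map[middle].sum(), compared across breathing patterns.
```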

  15. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    International Nuclear Information System (INIS)

    Qu, H; Yu, N; Stephans, K; Xia, P

    2014-01-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations from two phases of quiet breathing and two phases of extreme breathing. For quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction on a coronal image with the maximum lung cross-section. The ratio of cumulative ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing was different from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns and thus different tidal volumes can be removed

  16. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    Full Text Available MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  17. MODEL OF METHODS OF FORMING BIOLOGICAL PICTURE OF THE WORLD OF SECONDARY SCHOOL PUPILS

    Directory of Open Access Journals (Sweden)

    Mikhail A. Yakunchev

    2016-12-01

    Full Text Available Introduction: the problem of development of a model of methods of forming the biological picture of the world of pupils as a multicomponent and integrative expression of the complete educational process is considered in the article. It is stated that the results of the study have theoretical and practical importance for effective subject preparation of senior pupils based on acquiring of systematic and generalized knowledge about wildlife. The correspondence of the main idea of the article to the scientific profile of the journal “Integration of Education” determines the choice of the periodical for publication. Materials and Methods: the results of the analysis of materials on modeling of the educational process, on specific models of the formation of a complete comprehension of the scientific picture of the world and its biological component make it possible to suggest a lack of elaboration of the aspect of pedagogical research under study. Therefore, the search for methods to overcome these gaps and to substantiate a particular model, relevant for its practical application by a teacher, is important. The study was based on the use of methods of theoretical level, including the analysis of pedagogical and methodological literature, modeling and generalized expression of the model of forming the biological picture of the world of secondary school senior pupils, which were of higher priority. Results: the use of models of organization of subject preparation of secondary school pupils takes a priority position, as they help to achieve the desired results of training, education and development. The model of methods of forming a biological picture of the world is represented as a theoretical construct in the unity of objective, substantive, procedural, diagnostic and effective blocks. Discussion and Conclusions: in a generalized form the article expresses the model of methods of forming the biological picture of the world of secondary school

  18. A new general method for canonically transforming a Hamiltonian into another one of a given form

    International Nuclear Information System (INIS)

    Gomez T, A.

    2002-01-01

    The most general method to canonically transform a Hamiltonian into another one of a given form is based on the repeated use of the Hamilton-Jacobi equation. This is usually a tedious technique that leads to some particular solutions of the problem. We present a new general method which does not rely on the Hamilton-Jacobi equation and, moreover, gives all the possible solutions. (Author)

  19. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming Analysis, Simulation and Engineering Applications

    CERN Document Server

    Hu, Ping; Liu, Li-zhong; Zhu, Yi-guo

    2013-01-01

    Over the last 15 years, the application of innovative steel concepts in the automotive industry has increased steadily. Numerical simulation technology of hot forming of high-strength steel allows engineers to modify the formability of hot forming steel metals and to optimize die design schemes. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming focuses on hot and cold forming theories, numerical methods, relative simulation and experiment techniques for high-strength steel forming and die design in the automobile industry. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming introduces the general theories of cold forming, then expands upon advanced hot forming theories and simulation methods, including: • the forming process, • constitutive equations, • hot boundary constraint treatment, and • hot forming equipment and experiments. Various calculation methods of cold and hot forming, based on the authors’ experience in commercial CAE software f...

  20. Method of forming a ceramic superconducting composite wire using a molten pool

    International Nuclear Information System (INIS)

    Geballe, T.H.; Feigelson, R.S.; Gazit, D.

    1991-01-01

    This paper describes a method for making a flexible superconductive composite wire. It comprises: drawing a wire of noble metal through a molten material, formed by melting a solid formed by pressing powdered Bi₂O₃, CaCO₃, SrCO₃ and CuO, in a ratio of components necessary for forming a Bi-Sr-Ca-Cu-O superconductor, into the solid and sintering at a temperature in the range of 750-800 °C for 10-20 hours, whereby the wire is coated by the molten material; and cooling the coated wire to solidify the molten material to form the superconductive flexible composite wire without need of further annealing

  1. Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method

    Science.gov (United States)

    Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na

    2013-08-01

    After FY-2F was successfully launched on January 13, 2012, the total number of in-orbit operating FengYun-2 geostationary meteorological satellites reached three. For accurate and efficient application of multi-satellite observation data, the study of multi-satellite normalization of the visible detectors was urgent. The method was required not to rely on in-orbit calibration, so as to validate the calibration results from before and after launch, to calculate the daily-updated surface bidirectional reflectance distribution function (BRDF), and at the same time to track long-term decay of the detectors' linearity and responsivity. Through research on typical BRDF models, a normalization method was designed that effectively removes the interference of directional surface reflectance characteristics without relying on in-orbit calibration of the visible detector: the Median Vertical Plane (MVP) method. The MVP method is based on the symmetry about the principal plane of the directional reflective properties of general surface targets. Two geostationary satellites are taken as the endpoints of a segment; targets on the line where the segment's median vertical plane intersects the earth's surface can be used as normalization reference targets (NRT). Observation of an NRT by the two satellites at the moment the sun passes through the MVP yields the same observation zenith and solar zenith angles and opposite relative direction angles. At that moment, the linear regression coefficients between the two satellites' output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and implemented. The results showed that the differences in responsivity between satellites could reach 10.1% (FY-2E to FY-2F); the differences in output reflectance calculated from the broadcast calibration look-up table could reach 21.1% (FY-2D to FY-2F); the differences in the output…

  2. Future of the Learning Activities in Teenage School: Content, Methods, and Forms

    Directory of Open Access Journals (Sweden)

    Vorontsov A.B.

    2015-11-01

    Full Text Available In the early 1990s their scientific research results took shape in the educational system and began to be used in general primary school. However, despite the widespread use of developmental education in elementary school, further studies on the age capabilities of adolescents and the content of their education were not completed. Targeted research was organized again under the leadership of B.D. Elkonin only in 2000. The design of the teenage school within the framework of the principles and ideology of this system started at the same time at the Psychological Institute of the Russian Academy of Education and in many other educational institutions. The article presents hypothetical ideas about the content, forms and methods of organization of the educational process in the second stage of schooling. Particular attention is paid to the fate of learning activity in the teenage school, as well as to the methods and forms of organization of other activities in the adolescent school.

  3. Interaction between droplets in a ternary microemulsion evaluated by the relative form factor method

    International Nuclear Information System (INIS)

    Nagao, Michihiro; Seto, Hideki; Yamada, Norifumi L.

    2007-01-01

    This paper describes the concentration dependence of the interaction between water droplets coated by a surfactant monolayer, using the contrast-variation small-angle neutron scattering technique. In the first part, we explain the idea of how to extract a relatively model-free structure factor from the scattering data, which is called the relative form factor method. In the second part, the experimental results for the shape of the droplets (form factor) are described. In the third part the relatively model-free structure factor is shown, and finally the concentration dependence of the interaction potential between droplets is discussed. The result indicates the validity of the relative form factor method, and the importance of the estimation of the model-free structure factor for discussing the nature of structure formation in microemulsion systems

  4. Simultaneous sound velocity and thickness measurement by the ultrasonic pitch-catch method for corrosion-layer-forming polymeric materials.

    Science.gov (United States)

    Kusano, Masahiro; Takizawa, Shota; Sakai, Tetsuya; Arao, Yoshihiko; Kubouchi, Masatoshi

    2018-01-01

    Since thermosetting resins have excellent resistance to chemicals, fiber-reinforced plastics composed of such resins and reinforcement fibers are widely used as construction materials for equipment in chemical plants. Such equipment is usually used for several decades under severe corrosive conditions, so that failure due to degradation may result. One of the degradation behaviors of thermosetting resins under chemical solutions is "corrosion-layer-forming" degradation. In this type of degradation, surface resins in contact with a solution corrode, and some of them remain as a corrosion layer on the pristine part. It is difficult to precisely measure the thickness of the pristine part of such degraded materials by conventional pulse-echo ultrasonic testing, because the sound velocity depends on the degree of corrosion of the polymeric material. In addition, the ultrasonic reflection interface between the pristine part and the corrosion layer is obscure. Thus, we propose a pitch-catch method using a pair of normal and angle probes to measure four parameters: the thicknesses of the pristine part and the corrosion layer, and their respective sound velocities. The validity of the proposed method was confirmed by measuring a two-layer sample and a sample including corroded parts. The results demonstrate that the pitch-catch method can successfully measure the four parameters and evaluate the residual thickness of the pristine part in a corrosion-layer-forming sample. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Optimization of instruction and training process through content, form and methods

    International Nuclear Information System (INIS)

    Rozinek, P.

    1983-01-01

    The content orientation and the development of forms and methods of nuclear power plant personnel training are described. The subject matter consisted of two units: group and professional. The professional unit was divided into specialized sub-units: the primary circuit part, the secondary circuit part, electrical systems, chemistry, and dosimetry. The system of final examinations is described. (J.P.)

  6. Method for forming nuclear fuel containers of a composite construction and the product thereof

    International Nuclear Information System (INIS)

    Cheng, B.-C.; Rosenbaum, H.S.; Armijo, J.S.

    1981-01-01

    An improved method of producing a composite nuclear fuel container is described which comprises a casing or fuel sheath of zirconium or its alloy with a lining cladding of deposited copper superimposed over the inside surface of the zirconium or alloy and a layer of oxide of the zirconium or alloy formed on the inside surface of the casing or sheath. (U.K.)

  7. Catalyst support structure, catalyst including the structure, reactor including a catalyst, and methods of forming same

    Science.gov (United States)

    Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.

    2017-05-09

    Structures, catalysts, and reactors suitable for use for a variety of applications, including gas-to-liquid and coal-to-liquid processes and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.

  8. A numerical method for the design of free-form reflectors for lighting applications

    NARCIS (Netherlands)

    Prins, C.R.; Thije Boonkkamp, ten J.H.M.; Roosmalen, van J.; IJzerman, W.L.; Tukker, T.W.

    2013-01-01

    In this article we present a method for the design of fully free-form reflectors for illumination systems. We derive an elliptic partial differential equation of the Monge-Ampère type for the surface of a reflector that converts an arbitrary parallel beam of light into a desired intensity output.

  9. AN ELECTROPLATING METHOD OF FORMING PLATINGS OF NICKEL, COBALT, NICKEL ALLOYS OR COBALT ALLOYS

    DEFF Research Database (Denmark)

    1997-01-01

    An electroplating method of forming platings of nickel, cobalt, nickel alloys or cobalt alloys with reduced stresses in an electrodepositing bath of the type: Watt's bath, chloride bath or a combination thereof, by employing pulse plating with periodic reverse pulse and a sulfonated naphthalene...

  10. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    Science.gov (United States)

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  11. Generalization of opinions for forms and methods of teaching used od lower stages of universities

    Directory of Open Access Journals (Sweden)

    Rudolf Šrámek

    2005-01-01

    Experts report that more than 90% of teachers at Czech elementary and secondary schools, from the 1990s to the present, have used and still use stereotyped forms and methods of teaching: procedures that do little to foster autonomy, personal development, or social skills, that hand knowledge to students ready-made, and that underestimate having students deduce knowledge for themselves.

  12. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold, defines a minimum distance function to the boundary of the obstacle region via a Lagrange multiplier method, and computes the distance numerically. Simulations were executed to compare the performance of distance determination methods; our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility, or unpredictable obstacle motion.
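
    The computation described can be sketched in a few lines. The following Python fragment is an illustration, not the authors' code: the obstacle parameters and the density threshold are invented. It treats a 2D Gaussian obstacle, derives the region where the collision probability density exceeds the threshold (an ellipse), and finds the minimum distance from the robot to that region's boundary with a constrained minimization, which is equivalent to the Lagrange multiplier formulation in the abstract.

        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([2.0, 1.0])                   # obstacle mean position (example values)
        Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])  # position covariance (example values)
        Sinv = np.linalg.inv(Sigma)
        p_th = 0.05                                 # assumed collision probability density threshold

        # pdf(x) >= p_th  <=>  (x - mu)^T Sinv (x - mu) <= r2  (an ellipse)
        r2 = -2.0 * np.log(p_th * 2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))

        robot = np.array([5.0, 4.0])
        # minimize ||x - robot|| subject to x lying on the obstacle-region boundary
        res = minimize(lambda x: np.linalg.norm(x - robot),
                       x0=mu + np.array([1.0, 0.0]),
                       constraints=[{"type": "eq",
                                     "fun": lambda x: (x - mu) @ Sinv @ (x - mu) - r2}])
        print("closest boundary point:", res.x)
        print("distance:", np.linalg.norm(res.x - robot))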

  13. Novel Approach to Design Ultra Wideband Microwave Amplifiers: Normalized Gain Function Method

    Directory of Open Access Journals (Sweden)

    R. Kopru

    2013-09-01

    In this work, we propose a novel approach, the "Normalized Gain Function (NGF) method", to design low/medium power single stage ultra-wideband microwave amplifiers based on linear S-parameters of the active device. The normalized gain function is defined as the ratio TNGF = T/|S21|^2, where T is the desired shape or frequency response of the gain function of the amplifier to be designed and |S21|^2 is the shape of the transistor forward gain function. Synthesis of the input/output matching networks (IMN/OMN) of the amplifier requires mathematically generated target gain functions to be tracked in two different nonlinear optimization processes. In this manner, the NGF not only provides a mathematical basis for splitting the amplifier gain function into two such distinct target gain functions, but also allows their precise computation in terms of TNGF = T/|S21|^2 at the very beginning of the design. The amplifier presented as the design example operates over 800-5200 MHz to target GSM, UMTS, Wi-Fi and WiMAX applications. An SRFT (Simplified Real Frequency Technique) based design example, supported by simulations in MWO (MicroWave Office, from AWR Corporation), is given using a 1400 mW pHEMT transistor, TGF2021-01 from TriQuint Semiconductor.
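
    As a minimal illustration of the quantity the method is built on, the sketch below computes TNGF = T/|S21|^2 over a frequency grid. The |S21| trace and the flat target gain T are made-up placeholders, not values from the paper:

        import numpy as np

        f = np.linspace(0.8e9, 5.2e9, 45)        # frequency grid over the 800-5200 MHz band
        s21 = 3.0 * 10 ** (-0.5 * (f / 5.2e9))   # placeholder |S21| of the transistor, rolling off
        T = np.full_like(f, 12.0)                # placeholder desired flat amplifier gain (linear)

        T_ngf = T / np.abs(s21) ** 2             # normalized gain function, TNGF = T / |S21|^2
        # T_ngf would then be split between the input and output matching-network
        # target gain functions tracked in the two optimizations.
        print(T_ngf[:5])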

  14. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Three new simple, economical spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method determines nabumetone at its absorption maximum, 330 nm; the second uses the area under the curve in the wavelength range 326-334 nm; and the third uses the first-order derivative spectrum with scaling factor 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were 0.9997, 0.9998 and 0.9998 for the absorption maximum, area under the curve and first-order derivative methods, respectively. Results of analysis were validated statistically and by recovery studies; the mean percent recoveries were satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied for the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
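
    A hedged sketch of the calibration arithmetic common to all three methods: fit absorbance against concentration by least squares over the linear (Beer's law) range and read unknowns off the line. The concentration-absorbance pairs below are invented for illustration:

        import numpy as np

        conc = np.array([10., 15., 20., 25., 30.])       # ug/mL, within the stated 10-30 range
        absb = np.array([0.21, 0.31, 0.42, 0.52, 0.63])  # invented absorbance readings

        slope, intercept = np.polyfit(conc, absb, 1)     # least-squares calibration line
        r = np.corrcoef(conc, absb)[0, 1]                # correlation coefficient
        unknown_abs = 0.47                               # reading for an unknown sample
        print("estimated concentration:", (unknown_abs - intercept) / slope, "ug/mL")
        print("correlation coefficient:", r)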

  15. Application of in situ current normalized PIGE method for determination of total boron and its isotopic composition

    International Nuclear Information System (INIS)

    Chhillar, Sumit; Acharya, R.; Sodaye, S.; Pujari, P.K.

    2014-01-01

    A particle induced gamma-ray emission (PIGE) method using a proton beam has been standardized for determination of the isotopic composition of natural boron and enriched boron samples. Target pellets of the boron standard and samples were prepared in a cellulose matrix. The prompt gamma rays of 429 keV, 718 keV and 2125 keV were measured from the ¹⁰B(p,αγ)⁷Be, ¹⁰B(p,p'γ)¹⁰B and ¹¹B(p,p'γ)¹¹B nuclear reactions, respectively. To normalize beam current variations, the in situ current normalization method was used. The method was validated using synthetic samples of boron carbide, borax, borazine and lithium metaborate in a cellulose matrix. (author)

  16. Five-point form of the nodal diffusion method and comparison with finite-difference

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
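
    For readers unfamiliar with the FD side of the comparison, the following sketch assembles the traditional five-point, edge-centered stencil for a one-group diffusion problem -D∇²φ + Σaφ = S on a small uniform grid. Uniform coefficients, a uniform source and a zero-flux-value (φ = 0) boundary are simplifying assumptions for illustration only:

        import numpy as np

        n, h = 20, 1.0                 # n x n interior grid, mesh size h (cm)
        D, siga, src = 1.0, 0.1, 1.0   # diffusion coefficient, absorption, uniform source

        N = n * n
        A = np.zeros((N, N))
        b = np.full(N, src)
        for i in range(n):
            for j in range(n):
                k = i * n + j
                A[k, k] = 4.0 * D / h**2 + siga        # center of the five-point stencil
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        A[k, ii * n + jj] = -D / h**2  # edge-centered neighbor couplings
                    # a dropped neighbor term amounts to phi = 0 on the boundary

        phi = np.linalg.solve(A, b).reshape(n, n)
        print("peak flux:", phi.max())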

  17. Staining Methods for Normal and Regenerative Myelin in the Nervous System.

    Science.gov (United States)

    Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano

    2017-01-01

    Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded materials. The first is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of osmication of previously fixed tissues. Osmication is performed prior to embedding the tissues in paraffin, giving a permanent positive reaction for myelin as well as other lipids present in the tissue.

  18. Using stable isotopes to monitor forms of sulfur during desulfurization processes: A quick screening method

    Science.gov (United States)

    Liu, Chao-Li; Hackley, Keith C.; Coleman, D.D.; Kruse, C.W.

    1987-01-01

    A method using stable isotope ratio analysis to monitor the reactivity of sulfur forms in coal during thermal and chemical desulfurization processes has been developed at the Illinois State Geological Survey. The method is based upon the fact that in some coals a significant difference exists between the ³⁴S/³²S ratios of the pyritic and organic sulfur. A screening method for determining the suitability of coal samples for use in isotope ratio analysis is described. Making these special coals available from coal sample programs would assist research groups in sorting out the complex sulfur chemistry which accompanies thermal and chemical processing of high sulfur coals.
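
    The monitoring idea reduces to two-endmember mixing arithmetic: if pyritic and organic sulfur carry distinct δ³⁴S signatures, the δ³⁴S of sulfur released during processing gives the fraction contributed by each form. A sketch with invented endmember values (real coals must first pass the screening described above):

        # invented endmember signatures (per mil); not values from the paper
        d34s_pyritic, d34s_organic = 15.0, 2.0
        d34s_released = 10.0   # measured signature of sulfur removed during processing

        # linear two-endmember mixing
        f_pyritic = (d34s_released - d34s_organic) / (d34s_pyritic - d34s_organic)
        print(f"fraction of released sulfur from pyrite: {f_pyritic:.2f}")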

  19. USING THE METHOD KINESIOTAPING IN REHABILITATION OF CHILDREN WITH HEMIPARETIC FORM OF CEREBRAL PALSY

    Directory of Open Access Journals (Sweden)

    Vladimir Evgenevich Tuchkov

    2016-08-01

    The study examines the impact of a new kind of intervention in the rehabilitation of the hemiparetic form of cerebral palsy: the kinesiotaping method «Concept 4 tapes». Within this framework, the patient's receptor apparatus is gradually engaged, resulting in a restructuring of the abnormal movement program and creating conditions for other methods to act more efficiently and deeply. The advantage of the kinesiotaping technique is its standardized approach, which allows the application scheme of the method to be used with all patients without loss of therapeutic efficacy.

  20. A Validated RP-HPLC Method for the Determination of Atazanavir in Pharmaceutical Dosage Form

    Directory of Open Access Journals (Sweden)

    K. Srinivasu

    2011-01-01

    A validated RP-HPLC method is described for the estimation of atazanavir in capsule dosage form on a YMC ODS 150 × 4.6 mm, 5 μm column, using a mobile phase of ammonium dihydrogen phosphate buffer (pH 2.5) and acetonitrile (55:45 v/v). The flow rate was maintained at 1.5 mL/min with UV detection at 288 nm. The retention time obtained for atazanavir was 4.7 min. The detector response was linear in the concentration range 30-600 μg/mL. The method has been validated and shown to be specific, sensitive, precise, linear, accurate, rugged, robust and fast. Hence, it can be applied for routine quality control of atazanavir in capsule dosage forms as well as in bulk drug.

  1. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  3. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Lessons learned from the recent DREAM competitions include: the search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t…

  4. Measurement of plasma histamine: description of an improved method and normal values

    International Nuclear Information System (INIS)

    Dyer, J.; Warren, K.; Merlin, S.; Metcalfe, D.D.; Kaliner, M.

    1982-01-01

    The single isotopic-enzymatic assay of histamine was modified to increase its sensitivity and to facilitate measurement of plasma histamine levels. The modification involved extracting ³H-1-methylhistamine (generated by the enzyme N-methyltransferase acting on histamine in the presence of S-[methyl-³H]-adenosyl-L-methionine) into chloroform and isolating the ³H-1-methylhistamine by thin-layer chromatography (TLC). The TLC was developed in acetone:ammonium hydroxide (95:10), and the methylhistamine spot (Rf = 0.50) was identified with an o-phthalaldehyde spray, scraped from the plate, and assayed in a scintillation counter. The assay in plasma demonstrated a linear relationship from 200 to 5000 pg histamine/ml. Plasma always had higher readings than buffer, and dialysis of plasma returned these values to the same level as buffer, suggesting that the baseline elevations might be attributable to histamine. However, all histamine standard curves were run in dialyzed plasma to negate any additional influences plasma might exert on the assay. The arithmetic mean (± SEM) of normal plasma histamine was 318.4 ± 25 pg/ml (n = 51), and the geometric mean was 280 ± 35 pg/ml. Plasma histamine was significantly elevated by infusion of histamine at 0.05 to 1.0 micrograms/kg/min or by cold immersion of the hand of a cold-urticaria patient. Therefore this modified isotopic-enzymatic assay of histamine is extremely sensitive, capable of measuring fluctuations in plasma histamine levels within the normal range, and potentially useful in analysis of the role histamine plays in human physiology.

  5. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    Science.gov (United States)

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism in short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components obtained through singular value decomposition. Robustness is shown by calculating the proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal; the method detects determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is sufficient evidence of determinism. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
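
    The abstract does not give the index in closed form; the sketch below only shows the general shape of such a computation, under assumptions of our own: delay-embed the series, take the SVD of the trajectory matrix, and quantify how smoothly the dominant component evolves (here via its lag-one autocorrelation, one plausible smoothness measure, not the authors'):

        import numpy as np

        def determinism_index(x, dim=10):
            # trajectory (delay-embedding) matrix, rows are windows of length dim
            X = np.lib.stride_tricks.sliding_window_view(x, dim)
            # principal signal components via SVD of the centered matrix
            U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
            c = U[:, 0] * s[0]                        # trajectory of the dominant component
            c = c - c.mean()
            return np.corrcoef(c[:-1], c[1:])[0, 1]   # lag-1 autocorrelation as smoothness

        t = np.arange(2000)
        clean = np.sin(0.07 * t)                      # deterministic test signal
        noisy = clean + np.random.randn(t.size)       # heavy additive white noise
        print(determinism_index(clean), determinism_index(noisy))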

  6. Evaluating new methods for direct measurement of the moderator temperature coefficient in nuclear power plants during normal operation

    International Nuclear Information System (INIS)

    Makai, M.; Kalya, Z.; Nemes, I.; Pos, I.; Por, G.

    2007-01-01

    The moderator temperature coefficient of reactivity is not monitored during fuel cycles in WWER reactors, because it is difficult or impossible to measure without disturbing normal operation. Two new methods were tested at a WWER-type nuclear power plant; both enable this safety-relevant parameter to be measured during the fuel cycle. One is based on small perturbations and requires only small changes in operation; the other is based on noise methods and therefore does not interfere with reactor operation at all. Both methods are novel in that they use plant computer (VERONA) data and signals calculated by the C-PORCA diffusion code. (Authors)

  7. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative…
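
    For readers unfamiliar with FORM itself, the sketch below runs the standard Hasofer-Lind-Rackwitz-Fiessler iteration on a toy limit state in standard normal space. The limit state function here is invented; the pultrusion LSFs from the abstract would replace it, evaluated through the process model:

        import numpy as np
        from scipy.stats import norm

        def g(u):                        # toy limit state; g < 0 means failure
            return 3.0 - u[0] - 0.5 * u[1] ** 2

        def grad(u, eps=1e-6):           # central-difference gradient of g
            e = np.eye(len(u))
            return np.array([(g(u + eps * e[i]) - g(u - eps * e[i])) / (2 * eps)
                             for i in range(len(u))])

        u = np.zeros(2)
        for _ in range(50):              # HL-RF fixed-point iteration
            gr = grad(u)
            u_new = (gr @ u - g(u)) * gr / (gr @ gr)
            if np.linalg.norm(u_new - u) < 1e-10:
                u = u_new
                break
            u = u_new

        beta = np.linalg.norm(u)         # reliability index = distance to design point
        print("beta =", beta, " Pf ~", norm.cdf(-beta))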

  8. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work an easy solution based directly on linear algebra is proposed to obtain the relation between a spectrum and a spectrum base. The solution rests on the algebraic determination of an unknown spectrum's coordinates with respect to a spectral library base. The identification capacity of this algebraic method is compared with that of the linear correlation method using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows quantitative detection of a mixture of several substances in a sample and, consequently, impurities can be taken into account to improve the identification.
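
    A minimal sketch of the algebraic idea (with invented Gaussian-peak spectra standing in for a measured polymer library): express the unknown spectrum in the library base by least squares, so mixture fractions appear directly as coordinates:

        import numpy as np

        wl = np.linspace(200, 800, 300)                  # wavelength grid (nm)
        lib = np.stack([np.exp(-((wl - c) / 40.0) ** 2)  # toy library of three "substances"
                        for c in (300, 450, 600)])
        unknown = 0.7 * lib[0] + 0.3 * lib[2]            # a mixture with an "impurity"

        coords, *_ = np.linalg.lstsq(lib.T, unknown, rcond=None)
        print("coordinates w.r.t. library base:", coords.round(3))
        # compare: linear correlation against each library entry alone
        print("correlations:", [round(float(np.corrcoef(unknown, s)[0, 1]), 3) for s in lib])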

  9. A Gauss-Newton method for the integration of spatial normal fields in shape Space

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    …to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly…

  10. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    International Nuclear Information System (INIS)

    Adrich, Przemysław

    2016-01-01

    In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.

  12. Geometric Methods in the Algebraic Theory of Quadratic Forms : Summer School

    CERN Document Server

    2004-01-01

    The geometric approach to the algebraic theory of quadratic forms is the study of projective quadrics over arbitrary fields. Function fields of quadrics have been central to the proofs of fundamental results since the renewal of the theory by Pfister in the 1960's. Recently, more refined geometric tools have been brought to bear on this topic, such as Chow groups and motives, and have produced remarkable advances on a number of outstanding problems. Several aspects of these new methods are addressed in this volume, which includes - an introduction to motives of quadrics by Alexander Vishik, with various applications, notably to the splitting patterns of quadratic forms under base field extensions; - papers by Oleg Izhboldin and Nikita Karpenko on Chow groups of quadrics and their stable birational equivalence, with application to the construction of fields which carry anisotropic quadratic forms of dimension 9, but none of higher dimension; - a contribution in French by Bruno Kahn which lays out a general fra...

  13. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.

    1995-01-01

    Proposed waste form performance criteria and testing methods were developed as guidance in judging the suitability of solidified waste as a physico-chemical barrier to releases of radionuclides and RCRA-regulated hazardous components. The criteria follow from the assumption that release of contaminants by leaching is the single most important property for judging the effectiveness of a waste form. A two-tier regimen is proposed. The first tier consists of a leach test designed to determine the net forward leach rate of the solidified waste and a leach test required by the Environmental Protection Agency (EPA). The second tier of tests determines whether a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. In the absence of site-specific performance assessments (PAs), two generic modeling exercises are described which were used to calculate proposed acceptable leachates.

  14. Flexible barrier film, method of forming same, and organic electronic device including same

    Science.gov (United States)

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  15. A strand specific high resolution normalization method for chip-sequencing data employing multiple experimental control measurements

    DEFF Research Database (Denmark)

    Enroth, Stefan; Andersson, Claes; Andersson, Robin

    2012-01-01

    High-throughput sequencing is becoming the standard tool for investigating protein-DNA interactions or epigenetic modifications. However, the data generated will always contain noise due to e.g. repetitive regions or non-specific antibody interactions. The noise will appear in the form of a background signal. Typically, the background is only used to adjust peak calling and not as a pre-processing step that aims at discerning the signal from the background noise. A normalization procedure that extracts the signal of interest would be of universal use when investigating genomic patterns.

  16. A novel method for spectrophotometric determination of pregabalin in pure form and in capsules

    Directory of Open Access Journals (Sweden)

    Gaur Prateek

    2011-10-01

    Background: Pregabalin, a γ-amino-n-butyric acid derivative, is an antiepileptic drug not yet official in any pharmacopeia, and the development of analytical procedures for this drug in bulk and formulation forms is a necessity. We report here a new, simple, extraction-free, cost-effective, sensitive and reproducible spectrophotometric method for the determination of pregabalin. Results: Pregabalin, as a primary amine, was reacted with ninhydrin in phosphate buffer (pH 7.4) to form a blue-violet chromogen which could be measured spectrophotometrically at λmax 402.6 nm. The method was validated with respect to linearity, accuracy, precision and robustness. It showed linearity in the wide concentration range of 50-1000 μg/mL with a good correlation coefficient (0.992). The limit of detection was 6.0 μg/mL and the quantitation limit was 20.0 μg/mL. The method was applied to the determination of the drug in capsules; no interference was observed from the additives, and the percentage recovery was 100.43 ± 1.24. Conclusion: The developed method was successfully validated and applied to the determination of pregabalin in bulk and pharmaceutical formulations without interference from common excipients. Hence, this method can be useful for routine laboratory analysis of pregabalin.

  17. Reliability of different methods used for forming of working samples in the laboratory for seed testing

    Directory of Open Access Journals (Sweden)

    Opra Branislava

    2000-01-01

    The testing of seed quality starts from the moment a sample is formed in a warehouse during processing or packaging of the seed. Seed sampling, as the process of obtaining the working sample, also underlies each step undertaken during testing in the laboratory. For proper formation of a seed sample in the laboratory, the use of a seed divider is prescribed for large-seeded species, i.e. seed the size of wheat or larger (ISTA Rules, 1999). The aim of this paper was to compare different methods for obtaining working samples of maize and wheat seed using conical, soil and centrifugal dividers. The number of seeds of added admixtures confirmed the reliability of working sample formation. To each maize sample (1000 g), 10 seeds of each of the following admixtures were added: Zea mays L. (red pericarp), Hordeum vulgare L., Triticum aestivum L., and Glycine max (L.) Merr. Two methods were used for formation of the maize seed working sample. To each wheat sample (1000 g), 10 seeds of each of the following species were added: Avena sativa (hulled seeds), Hordeum vulgare L., Galium tricorne Stokes, and Polygonum lapathifolium L. For formation of the wheat seed working samples four methods were used. An optimum of 9, but not fewer than 7, seeds of admixture were expected in the maize seed working sample, while for wheat at least one seed of admixture was expected in the working sample. The obtained results confirmed that the formation of the maize seed working samples was the most reliable when the centrifugal divider and the first method were used (average admixture count 9.37). Of the observed admixtures, the seed of Triticum aestivum L. was the most uniformly distributed, again with the first method (6.93). The second method gives high average values satisfying the given criterion, but it should be used with prior homogenization of the sample being tested. The forming of wheat seed working samples is the most reliable if the…

  18. Normal boundary intersection method for suppliers' strategic bidding in electricity markets: An environmental/economic approach

    International Nuclear Information System (INIS)

    Vahidinasab, V.; Jadid, S.

    2010-01-01

    In this paper the problem of developing optimal bidding strategies for participants in oligopolistic energy markets is studied. Special attention is given to the impact of suppliers' emission of pollutants on their bidding strategies. The proposed methodology employs a supply function equilibrium (SFE) model to represent the strategic behavior of each supplier, and a locational marginal pricing mechanism for market clearing. The optimal bidding strategies are developed mathematically via a bilevel optimization problem in which the upper-level subproblem maximizes the individual supplier payoff and the lower-level subproblem solves the Independent System Operator's market clearing problem. To solve the market clearing problem, multiobjective optimal power flow is used with supplier emission of pollutants as an extra objective, subject to the suppliers' physical constraints. The paper uses the normal boundary intersection (NBI) approach to generate the Pareto-optimal set and then fuzzy decision making to select the best compromise solution. The developed algorithm is applied to an IEEE 30-bus test system. Numerical results demonstrate the potential and effectiveness of the proposed multiobjective approach for developing successful bidding strategies in energy markets that minimize generation cost and emission of pollutants simultaneously.
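
    A compact sketch of the NBI mechanics on a toy bi-objective problem. The paper applies this inside an optimal power flow with cost and emission objectives; here the two objectives, the anchor points and the quasi-normal direction are illustrative only. NBI finds the single-objective anchors, then for each point on the line joining them pushes as far as possible along the shared normal to land on the Pareto boundary:

        import numpy as np
        from scipy.optimize import minimize

        f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2    # stand-ins for cost / emission
        f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2
        F = lambda x: np.array([f1(x), f2(x)])

        # anchor points: each objective minimized alone
        x1 = minimize(f1, [0.0, 0.0]).x
        x2 = minimize(f2, [0.0, 0.0]).x
        F1, F2 = F(x1), F(x2)
        nrm = -np.array([1.0, 1.0]) / np.sqrt(2)      # quasi-normal pointing toward the front

        pareto = []
        for w in np.linspace(0.05, 0.95, 7):          # points on the line between anchors
            P = w * F1 + (1 - w) * F2
            z0 = np.concatenate([(x1 + x2) / 2, [0.0]])   # variables: (x, t)
            res = minimize(lambda z: -z[2], z0,       # maximize distance t along nrm
                           constraints=[{"type": "eq",
                                         "fun": lambda z, P=P: F(z[:2]) - (P + z[2] * nrm)}])
            pareto.append(F(res.x[:2]))
        print(np.array(pareto).round(3))              # sampled Pareto-optimal objective pairs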

  19. The Normalization of Surface Anisotropy Effects Present in SEVIRI Reflectances by Using the MODIS BRDF Method

    Science.gov (United States)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal; Fensholt, Rasmus; Rasmussen, Mads Olander; Shisanya, Chris; Mutero, Wycliffe; Mbow, Cheikh; Anyamba, Assaf; Pak, Ed

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. We present early and provisional daily nadir BRDF-adjusted reflectance (NBAR) data in the visible and near-infrared MSG channels. These utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared with the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations, primarily because of an insufficient spread of angular data due to the fixed sensor location or localized cloud contamination.

  1. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Science.gov (United States)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.

  2. General form of the Euler-Poisson-Darboux equation and application of the transmutation method

    Directory of Open Access Journals (Sweden)

    Elina L. Shishkina

    2017-07-01

    In this article, we find solution representations in compact integral form for the Cauchy problem for a general form of the Euler-Poisson-Darboux equation with Bessel operators, via generalized translation and spherical mean operators, for all values of the parameter k, including the exceptional odd negative values not studied before. We use a Hankel transform method to prove the results in a unified way. Under additional conditions we prove that a distributional solution is also a classical one. A transmutation property for the connected generalized spherical mean is proved, and the importance of applying transmutation methods to differential equations with Bessel operators is emphasized. The paper also contains a short historical introduction to differential equations with Bessel operators and a rather detailed reference list of monographs and papers on the mathematical theory and applications of this class of differential equations.
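
    For orientation, the classical Euler-Poisson-Darboux equation, of which the paper studies a Bessel-operator generalization, is usually written as

        \[ \frac{\partial^2 u}{\partial t^2} + \frac{k}{t}\,\frac{\partial u}{\partial t} = \Delta u, \qquad u = u(x,t),\ k \in \mathbb{R}. \]

    In the general form considered, second-derivative operators are replaced by Bessel operators of the type B_γ = ∂²/∂x² + (γ/x) ∂/∂x (our notation for orientation; the paper's parameters and conventions may differ).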

  3. Influences of rolling method on deformation force in cold roll-beating forming process

    Science.gov (United States)

    Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan

    2018-03-01

    The gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of radial and tangential deformation was analyzed under different rolling methods, and the variation of the deformation force during complete forming of the racks and on a single roll in the steady state was analyzed for the different rolling modes. The results show that between up-beating and down-beating the average radial single-point forces are similar, while the gap between the average tangential single-point forces is relatively large: the tangential force in direct beating is large, with direction opposite to that in down-beating. With direct beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, it loads slowly and unloads quickly.

  4. Radial arrays of nano-electrospray ionization emitters and methods of forming electrosprays

    Science.gov (United States)

    Kelly, Ryan T [West Richland, WA; Tang, Keqi [Richland, WA; Smith, Richard D [Richland, WA

    2010-10-19

    Electrospray ionization emitter arrays, as well as methods for forming electrosprays, are described. The arrays are characterized by a radial configuration of three or more nano-electrospray ionization emitters without an extractor electrode. The methods are characterized by distributing fluid flow of the liquid sample among three or more nano-electrospray ionization emitters, forming an electrospray at outlets of the emitters without utilizing an extractor electrode, and directing the electrosprays into an entrance to a mass spectrometry device. Each of the nano-electrospray ionization emitters can have a discrete channel for fluid flow. The nano-electrospray ionization emitters are circularly arranged such that each is shielded substantially equally from an electrospray-inducing electric field.

  5. Rare Earth Oxide Fluoride Nanoparticles And Hydrothermal Method For Forming Nanoparticles

    Science.gov (United States)

    Fulton, John L.; Hoffmann, Markus M.

    2003-12-23

    A hydrothermal method for forming nanoparticles of a rare earth element, oxygen and fluorine has been discovered. Nanoparticles comprising a rare earth element, oxygen and fluorine are also described. These nanoparticles can exhibit excellent refractory properties as well as remarkable stability in hydrothermal conditions. The nanoparticles can exhibit excellent properties for numerous applications including fiber reinforcement of ceramic composites, catalyst supports, and corrosion resistant coatings for high-temperature aqueous solutions.

  6. DIAGNOSTIC CHARACTERISTICS OF THE COMPUTER TESTS FORMED BY METHOD OF RESTORED FRAGMENTS

    OpenAIRE

    Oleksandr O. Petkov

    2013-01-01

    The definition of the validity and reliability of tests formed by the method of restored fragments is considered in the article. The structure of the controlled theoretical material of the limited field of knowledge, the language expressions that describe the subject of control, and the reliability of the test are analyzed. A technique is given for determining the most important components of the reliability of such tests: reliability of the quantitative determination of the coefficient of assimilation and te…

  7. A Simple Method for Forming Hybrid Core-Shell Nanoparticles Suspended in Water

    Directory of Open Access Journals (Sweden)

    Jean-Christophe Daigle

    2008-01-01

    …addition-fragmentation chain-transfer (RAFT) polymerization as dispersant. Then, the resulting dispersion is engaged in a radical emulsion polymerization process whereby a hydrophobic organic monomer (styrene and butyl acrylate) is polymerized to form the shell of the hybrid nanoparticle. This method is extremely versatile, allowing the preparation of a variety of nanocomposites with metal oxides (alumina, rutile, anatase, barium titanate, zirconia, copper oxide), metals (Mo, Zn), and even inorganic nitrides (Si3N4).

  8. On the asymptotic form of the recursion method basis vectors for periodic Hamiltonians

    International Nuclear Information System (INIS)

    O'Reilly, E.P.; Weaire, D.

    1984-01-01

    The authors present the first detailed study of the recursion method basis vectors for the case of a periodic Hamiltonian. In the examples chosen, the probability density scales linearly with n as n → infinity, whenever the local density of states is bounded. Whenever it is unbounded and the recursion coefficients diverge, different scaling behaviour is found. These findings are explained and a scaling relationship between the asymptotic forms of the recursion coefficients and basis vectors is proposed. (author)

  9. A method for autoradiographic studies of single clones of plaque forming cells

    International Nuclear Information System (INIS)

    Andersen, V.; Lefkovits, I.; Rigshospitalet, Copenhagen

    1977-01-01

    By limiting dilution of B lymphocytes from spleens of immunized mice, microcultures were obtained that contained only one clone of plaque forming cells (PFC). The cultured cells were labelled with [¹⁴C]thymidine for varying periods of time. Plaques were obtained in monolayers of sheep erythrocytes in plastic dishes. After fixation with glutaraldehyde, the bottoms of the dishes were stripped off and autoradiograms prepared. By this method, it is possible to determine the proportion of labelled PFC within a given clone and to quantitate the incorporation of label. The method can also be applied to study the incorporation of other labelled molecules and to cytochemical investigations.

  10. Normal Science and the Paranormal: The Effect of a Scientific Method Course on Students' Beliefs.

    Science.gov (United States)

    Morier, Dean; Keeports, David

    1994-01-01

    A study investigated the effects of an interdisciplinary course on the scientific method on the attitudes of 34 college students toward the paranormal. Results indicated that the course substantially reduced belief in the paranormal, relative to a control group. Student beliefs in their own paranormal powers, however, did not change. (Author/MSE)

  11. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random

    Czech Academy of Sciences Publication Activity Database

    Beres, Michal; Domesová, Simona

    2017-01-01

    Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280

  12. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis, the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North… Software to carry out the analyses is available from the authors' websites.

  13. Numerical form-finding method for large mesh reflectors with elastic rim trusses

    Science.gov (United States)

    Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli

    2018-06-01

    Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Owing to the large aperture, light weight and high accuracy required of spaceborne reflectors, the rim truss deformation is in fact not negligible. To design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. The form-finding method is then embedded in an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the form-finding of the cable net with boundary conditions updated according to the rim truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical one.
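
    The abstract does not spell out its form-finding scheme; the classical force density method is the usual starting point for cable nets and shows the structure of such an iteration's inner step. With fixed force densities q, the equilibrium positions of the free nodes solve a linear system (toy single-cable example below; in the reflector case the fixed boundary coordinates would be updated from the rim-truss model on each outer iteration):

        import numpy as np

        # toy net: 5 nodes along a cable, nodes 0 and 4 fixed, 4 segments
        edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
        q = np.array([1.0, 1.0, 1.0, 1.0])   # force densities (force/length), assumed values
        fixed = {0: np.array([0.0, 0.0, 0.0]), 4: np.array([4.0, 0.0, 1.0])}
        free = [1, 2, 3]

        # branch-node (connectivity) matrix C, split into free and fixed columns
        C = np.zeros((len(edges), 5))
        for e, (i, j) in enumerate(edges):
            C[e, i], C[e, j] = 1.0, -1.0
        Cn, Cf = C[:, free], C[:, [0, 4]]
        Q = np.diag(q)
        xf = np.stack([fixed[0], fixed[4]])   # fixed-node coordinates, one row per node

        p = np.zeros((3, 3))                  # external loads on free nodes (none here)
        # equilibrium: (Cn^T Q Cn) x_free = p - Cn^T Q Cf x_fixed
        x_free = np.linalg.solve(Cn.T @ Q @ Cn, p - Cn.T @ Q @ Cf @ xf)
        print(x_free)                         # free nodes fall evenly between the anchors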

  14. Normal mode analysis as a method to derive protein dynamics information from the Protein Data Bank.

    Science.gov (United States)

    Wako, Hiroshi; Endo, Shigeru

    2017-12-01

    Normal mode analysis (NMA) can facilitate quick and systematic investigation of protein dynamics using data from the Protein Data Bank (PDB). We developed an elastic network model-based NMA program using dihedral angles as independent variables. Compared to the NMA programs that use Cartesian coordinates as independent variables, key attributes of the proposed program are as follows: (1) chain connectivity related to the folding pattern of a polypeptide chain is naturally embedded in the model; (2) the full-atom system is acceptable, and owing to a considerably smaller number of independent variables, the PDB data can be used without further manipulation; (3) the number of variables can be easily reduced by some of the rotatable dihedral angles; (4) the PDB data for any molecule besides proteins can be considered without coarse-graining; and (5) individual motions of constituent subunits and ligand molecules can be easily decomposed into external and internal motions to examine their mutual and intrinsic motions. Its performance is illustrated with an example of a DNA-binding allosteric protein, a catabolite activator protein. In particular, the focus is on the conformational change upon cAMP and DNA binding, and on the communication between their binding sites remotely located from each other. In this illustration, NMA creates a vivid picture of the protein dynamics at various levels of the structures, i.e., atoms, residues, secondary structures, domains, subunits, and the complete system, including DNA and cAMP. Comparative studies of the specific protein in different states, e.g., apo- and holo-conformations, and free and complexed configurations, provide useful information for studying structurally and functionally important aspects of the protein.
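
    The program described uses dihedral angles as independent variables; as a baseline illustration of what any elastic-network NMA produces, the Cartesian anisotropic-network sketch below builds a Hessian from pairwise springs within a cutoff and extracts the softest internal modes. Random coordinates stand in for PDB Cα positions, and the cutoff and spring constant are conventional assumed values:

        import numpy as np

        rng = np.random.default_rng(0)
        xyz = rng.uniform(0, 30, size=(60, 3))   # stand-in for C-alpha coordinates (angstrom)
        cutoff, gamma = 12.0, 1.0                # typical ANM cutoff radius and spring constant

        n = len(xyz)
        H = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = xyz[j] - xyz[i]
                r2 = d @ d
                if r2 < cutoff ** 2:
                    blk = -gamma * np.outer(d, d) / r2    # 3x3 off-diagonal super-element
                    H[3*i:3*i+3, 3*j:3*j+3] = blk
                    H[3*j:3*j+3, 3*i:3*i+3] = blk
                    H[3*i:3*i+3, 3*i:3*i+3] -= blk        # diagonal blocks keep H singular
                    H[3*j:3*j+3, 3*j:3*j+3] -= blk        # only for rigid-body motions

        w, v = np.linalg.eigh(H)
        # the first six ~zero eigenvalues are rigid-body motions; modes 7+ are internal
        print("lowest internal-mode eigenvalues:", w[6:9])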

  15. Development of polymer film dosage forms of lidocaine for buccal administration: II. Comparison of preparation methods.

    Science.gov (United States)

    Okamoto, Hirokazu; Nakamori, Takahiko; Arakawa, Yotaro; Iida, Kotaro; Danjo, Kazumi

    2002-11-01

    In previous studies, we prepared film dosage forms of lidocaine (LC) with hydroxypropylcellulose (HPC) as a film base using the solvent evaporation (SE) method. However, from the viewpoint of environmental issues, a reduction in organic solvent use in pharmaceutical and other industries is required. In this study, we prepared the LC films by direct compression of the physical mixture (DCPM method) and by direct compression of the spray-dried powder (DCSD method). Magnesium stearate, which was required as a lubricant for direct compression, showed no effect on the LC release rate. The LC release rate (%/h) was independent of the compression pressure, but a higher pressure was preferable to easily remove the film from the punches. An increase in the film weight decreased the LC release rate expressed in %/h, whereas no significant effect of film weight was observed on the LC release rate from unit surface area expressed in mg/h/cm(2). The LC release rate (%/h) was independent of the LC content, suggesting that the LC release rate (mg/h) can be quantitatively controlled by changing the LC content in the formulation. The LC release rate and penetration rate were also affected by the preparation method.

  16. Kinetic spectrophotometric method for the determination of perindopril erbumine in pure and commercial dosage forms

    Directory of Open Access Journals (Sweden)

    Nafisur Rahman

    2017-02-01

    A kinetic spectrophotometric method has been developed for the determination of perindopril erbumine in pure and commercial dosage forms. The method is based on the reaction of the drug with potassium permanganate in alkaline medium at room temperature (30 ± 1 °C). The reaction was followed spectrophotometrically by measuring the increase in absorbance with time at 603 nm, and the initial rate, fixed time (at 8.0 min) and equilibrium time (at 90.0 min) methods were adopted for constructing the calibration graphs. All the calibration graphs are linear in the concentration range of 5.0-50.0 μg/ml. The limits of detection for the initial rate, fixed time and equilibrium time methods were 0.752, 0.882 and 1.091 μg/ml, respectively. The activation parameters Ea, ΔH‡, ΔS‡ and ΔG‡ were also determined for the reaction and found to be 60.93 kJ/mol, 56.45 kJ/mol, 74.16 J/K mol and −6.53 kJ/mol, respectively. The variables were optimized and the proposed methods validated as per ICH guidelines. The method has been further applied to the determination of perindopril erbumine in commercial dosage forms. The analytical results of the proposed methods, when compared with those of the reference method, show no significant difference in accuracy and precision and have acceptable bias.

  17. Single-Phase Full-Wave Rectifier as an Effective Example to Teach Normalization, Conduction Modes, and Circuit Analysis Methods

    Directory of Open Access Journals (Sweden)

    Predrag Pejovic

    2013-12-01

    Application of a single-phase rectifier as an example in teaching circuit modeling, normalization, operating modes of nonlinear circuits, and circuit analysis methods is proposed. The rectifier, supplied from a voltage source through an inductive impedance, is analyzed in the discontinuous as well as the continuous conduction mode. A completely analytical solution for the continuous conduction mode is derived. Appropriate numerical methods are proposed to obtain the circuit waveforms in both operating modes and to compute the performance parameters. Source code of the program that performs this computation is provided.
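
    A minimal numeric companion to the teaching example, under deliberate simplifications of our own (ideal diodes, purely resistive load, everything normalized to the source amplitude, so the inductive feed of the paper's circuit is ignored): compute the normalized output waveform and the usual performance parameters:

        import numpy as np

        theta = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
        v_out = np.abs(np.sin(theta))           # full-wave rectified output, normalized to Vm

        v_avg = v_out.mean()                    # theory: 2/pi ~ 0.6366
        v_rms = np.sqrt((v_out ** 2).mean())    # theory: 1/sqrt(2) ~ 0.7071
        ripple = np.sqrt(v_rms ** 2 - v_avg ** 2) / v_avg
        print(f"Vavg={v_avg:.4f}  Vrms={v_rms:.4f}  ripple factor={ripple:.4f}")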

  18. Normalization of shielding structure quality and the method of its studying

    International Nuclear Information System (INIS)

    Bychkov, Ya.A.; Lavdanskij, P.A.

    1987-01-01

    A method for evaluating the quality of nuclear facility radiation shields is suggested. Indexes of shielding structure radiation efficiency and face efficiency are used as the quality indexes. The first index is connected with the radiation dose rate to personnel behind the shield, and the second with the stresses in the shielding structure. Introducing these indexes allows objective evaluation of the quality of nuclear facility shielding structure design, construction and operation, and economizes labour and material resources.

  19. Low flow measurement for infusion pumps: implementation and uncertainty determination of the normalized method

    International Nuclear Information System (INIS)

    Cebeiro, J; Musacchio, A; Sardá, E Fernández

    2011-01-01

    Intravenous drug delivery is a standard practice for hospitalized patients. As the blood concentration reached depends directly on the infusion rate, it is important to use safe devices that guarantee output accuracy. In pediatric intensive care units, low infusion rates (i.e. lower than 10.0 ml/h) are frequently used; thus, control programs are needed to search for deviations in this flow range. We describe the implementation of a gravimetric method to test infusion pumps at low flow rates. The procedure recommended by the ISO/IEC 60601-2-24 standard was used, a reasonable option among the methods frequently used in hospitals, such as infusion pump analyzers and volumetric cylinders. The main uncertainty sources affecting this method are reviewed, and a numerical and graphical uncertainty analysis is presented to show their dependence on flow. Additionally, the obtained uncertainties are compared to those presented by an automatic flow analyzer. Finally, the results of a series of tests performed on a syringe infusion pump operating at low rates are shown.
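
    The gravimetric computation behind such a test, with first-order uncertainty propagation (the uncertainty magnitudes below are placeholders, not the standard's figures): flow is the collected mass over density and time, and for a product/quotient the relative variances add:

        import numpy as np

        m, u_m = 2.50, 0.005        # collected mass (g) and its standard uncertainty (assumed)
        rho, u_rho = 0.998, 0.0005  # water density (g/mL) and uncertainty (assumed)
        t, u_t = 1800.0, 1.0        # collection time (s) and uncertainty (assumed)

        q = m / (rho * t) * 3600.0  # flow rate in mL/h
        u_q = q * np.sqrt((u_m / m) ** 2 + (u_rho / rho) ** 2 + (u_t / t) ** 2)
        print(f"Q = {q:.3f} mL/h  +/- {u_q:.3f} mL/h (k=1)")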

  20. Estimating the carbohydrate content of various forms of tobacco by phenol-sulfuric acid method.

    Science.gov (United States)

    Jain, Vardhaman Mulchand; Karibasappa, Gundabaktha Nagappa; Dodamani, Arun Suresh; Mali, Gaurao Vasant

    2017-01-01

    Due to the consumption of various forms of tobacco in large amounts by the Indian population, tobacco has become a cause of concern for major oral diseases. In 2008, the WHO named tobacco as the world's single greatest cause of preventable death. It is also known that certain amounts of carbohydrates are incorporated into processed tobacco to make it acceptable for consumption. Thus, its role in oral diseases becomes an important question. This study attempts to determine the carbohydrate content of various forms of tobacco by the phenol-sulfuric acid method. Tobacco products selected for the study were Nandi hookah tambakhu (A), photo brand budhaa Punjabi snuff (B), Miraj (C), Gai-chhap tambakhu (D), Hanuman-chhap Pandharpuri tambakhu (E), and Hathi-chhap Bidi (F). The samples were decoded, transported to the laboratory, and tested at various concentrations by the phenol-sulfuric acid method followed by ultraviolet spectrophotometry to determine their absorbance. The present study showed that Hathi-chhap Bidi (sample F), a smoking form of tobacco, had the maximum absorbance (1.995) at 10 μg/ml, followed by the smokeless forms: sample C (0.452), sample B (0.253), sample D (0.077), sample E (-0.018), and sample A (-0.127). As the concentration of a tobacco sample increases, its absorbance increases, which in turn suggests an increase in its carbohydrate concentration. Carbohydrates in the form of sugars, either inherently present or added during manufacturing, can serve as a risk factor for a higher incidence of dental caries.

  1. Detection of normal plantar fascia thickness in adults via the ultrasonographic method.

    Science.gov (United States)

    Abul, Kadir; Ozer, Devrim; Sakizlioglu, Secil Sezgin; Buyuk, Abdul Fettah; Kaygusuz, Mehmet Akif

    2015-01-01

    Heel pain is a prevalent concern in orthopedic clinics, and there are numerous pathologic abnormalities that can cause heel pain. Plantar fasciitis is the most common cause of heel pain, and the plantar fascia thickens in this process. It has been found that thickening to greater than 4 mm in ultrasonographic measurements can be accepted as meaningful in diagnoses. Herein, we aimed to measure normal plantar fascia thickness in adults using ultrasonography. We used ultrasonography to measure the plantar fascia thickness of 156 healthy adults in both feet between April 1, 2011, and June 30, 2011. These adults had no previous heel pain. The 156 participants comprised 88 women (56.4%) and 68 men (43.6%) (mean age, 37.9 years; range, 18-65 years). The weight, height, and body mass index of the participants were recorded, and statistical analyses were conducted. The mean ± SD (range) plantar fascia thickness measurements for subgroups of the sample were as follows: 3.284 ± 0.56 mm (2.4-5.1 mm) for male right feet, 3.3 ± 0.55 mm (2.5-5.0 mm) for male left feet, 2.842 ± 0.42 mm (1.8-4.1 mm) for female right feet, and 2.8 ± 0.44 mm (1.8-4.3 mm) for female left feet. The overall mean ± SD (range) thickness for the right foot was 3.035 ± 0.53 mm (1.8-5.1 mm) and for the left foot was 3.053 ± 0.54 mm (1.8-5.0 mm). There was a statistically significant and positive correlation between plantar fascia thickness and participant age, weight, height, and body mass index. The plantar fascia thickness of adults without heel pain was measured to be less than 4 mm in most participants (~92%). There was no statistically significant difference between the thickness of the right and left foot plantar fascia.
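
    The reported associations are ordinary Pearson correlations; a minimal sketch on synthetic data (the linear link between BMI and thickness below is invented purely for illustration):

```python
# Hypothetical sketch: Pearson correlation between plantar fascia thickness
# and BMI, as used for the associations reported above (data are made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bmi = rng.normal(25.0, 4.0, 156)                        # kg/m^2
thickness = 1.5 + 0.05 * bmi + rng.normal(0, 0.3, 156)  # mm, synthetic link

r, p = stats.pearsonr(bmi, thickness)
print(f"r = {r:.3f}, p = {p:.2e}")   # significant positive correlation
```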

  2. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    International Nuclear Information System (INIS)

    Adrich, Przemysław

    2016-01-01

    In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while being computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. For this, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, so that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to the application of the new method, as well as its possible extensions, are discussed.
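
    The scan-based design idea can be illustrated with a grid search over foil thicknesses against a joint figure of merit; the dose model below is a crude analytic stand-in for the paper's Monte Carlo calculation, and every parameter is invented.

```python
# Sketch of the scan-based design idea: evaluate a figure of merit over a
# grid of dual-foil parameters and keep the best configuration. The dose
# model is a toy analytic placeholder, not the physics used in the paper.
import numpy as np

def flatness_and_transmission(t1, t2):
    """Toy model: thicker foils flatten the profile but absorb more beam."""
    r = np.linspace(0.0, 10.0, 101)                  # off-axis distance, cm
    sigma = 2.0 + 8.0 * (t1 + 0.5 * t2)              # scattering width
    dose = np.exp(-0.5 * (r / sigma) ** 2)
    flatness = dose[r <= 5.0].min() / dose[0]        # 1.0 = perfectly flat
    transmission = np.exp(-3.0 * (t1 + t2))          # fraction transmitted
    return flatness, transmission

best = None
for t1 in np.linspace(0.05, 0.5, 10):                # foil 1 thickness, mm
    for t2 in np.linspace(0.05, 0.5, 10):            # foil 2 thickness, mm
        fl, tr = flatness_and_transmission(t1, t2)
        score = min(fl, 0.95) + tr                   # joint figure of merit
        if best is None or score > best[0]:
            best = (score, t1, t2, fl, tr)
print("best (t1, t2):", best[1:3],
      "flatness:", round(best[3], 3), "transmission:", round(best[4], 3))
```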

  3. Spectrophotometric method for simultaneous estimation of atazanavir sulfate and ritonavir in tablet dosage form

    Directory of Open Access Journals (Sweden)

    Disha A Patel

    2015-01-01

    Full Text Available Background: Ritonavir (RTV) and atazanavir sulfate (ATV) are protease inhibitors, and RTV is mostly used as a booster to increase the bioavailability of other protease inhibitors such as ATV. Aims: Quality assessment of the new dosage form of RTV and ATV (i.e., tablets) is essential; hence this work aims to develop a sensitive, simple and precise method for simultaneous estimation of ATV and RTV in tablet dosage form by the absorbance correction method. Materials and Methods: The work was carried out on a Shimadzu ultraviolet (UV)-1700 double-beam spectrophotometer with 1 cm path length (Shimadzu, model 1700, Japan); UV-Probe software, version 2.31, was used for spectral measurements with 10 mm matched quartz cells. Standard ATV and RTV were supplied by Cipla Pharmaceutical Ltd. Methanol was purchased from Finar Chemicals Pvt. Ltd. Results and Conclusion: The λmax, or absorption maxima, for ATV and RTV were found to be 279 and 240 nm, respectively, in methanol as solvent. The drugs follow Beer-Lambert's law in the concentration ranges 30-90 and 10-30 μg/mL for ATV and RTV, respectively. The percentage recovery was found to be 100-100.33% and 100-101.5% for ATV and RTV, respectively. The method was validated for different parameters as per the International Conference on Harmonization guidelines.
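
    The absorbance-correction approach is closely related to the classical two-wavelength simultaneous-equation (Vierordt) formulation sketched below; the absorptivity and absorbance values are hypothetical, not taken from the paper.

```python
# Sketch of the two-wavelength simultaneous-equation (Vierordt) formulation,
# a close relative of the absorbance-correction method described above.
import numpy as np

# E[i, j]: absorptivity (A per ug/mL, 1 cm path) of drug j at wavelength i;
# rows = 279 nm and 240 nm, columns = ATV and RTV (made-up values).
E = np.array([[0.0150, 0.0020],
              [0.0030, 0.0280]])
A = np.array([0.92, 0.71])          # measured mixture absorbances (made up)

c_atv, c_rtv = np.linalg.solve(E, A)   # solve E @ c = A for concentrations
print(f"ATV = {c_atv:.1f} ug/mL, RTV = {c_rtv:.1f} ug/mL")
```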

  4. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithms based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT applications.
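
    The regularized inversion step has the familiar Tikhonov closed form; below is a generic sketch with a random placeholder sensitivity matrix, not a real ECT forward model or the paper's combined normalization model.

```python
# Generic sketch of a regularized linear reconstruction step as used in ECT:
# permittivity distribution g from normalized capacitances c via a
# sensitivity matrix S (random placeholder; ECT is underdetermined, so the
# regularized solution is only a smooth approximation of g_true).
import numpy as np

rng = np.random.default_rng(1)
m, n = 66, 812                      # e.g. 12-electrode pair count x pixels
S = rng.random((m, n))              # placeholder sensitivity matrix
g_true = rng.random(n)
c = S @ g_true                      # simulated normalized capacitance data

lam = 1e-2                          # regularization parameter
# Tikhonov solution: g = (S^T S + lam I)^-1 S^T c
g = np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)
print("relative error:", np.linalg.norm(g - g_true) / np.linalg.norm(g_true))
```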

  5. Validated spectrophotometric methods for simultaneous determination of troxerutin and carbazochrome in dosage form

    Science.gov (United States)

    Khattab, Fatma I.; Ramadan, Nesrin K.; Hegazy, Maha A.; Al-Ghobashy, Medhat A.; Ghoniem, Nermine S.

    2015-03-01

    Four simple, accurate, sensitive and precise spectrophotometric methods were developed and validated for simultaneous determination of Troxerutin (TXN) and Carbazochrome (CZM) in their bulk powders, laboratory prepared mixtures and pharmaceutical dosage forms. Method A is first derivative spectrophotometry (D1), where TXN and CZM were determined at 294 and 483.5 nm, respectively. Method B is first derivative of ratio spectra (DD1), where the peak amplitudes at 248 nm for TXN and 439 nm for CZM were used for their determination. Method C is ratio subtraction (RS), in which TXN was determined at its λmax (352 nm) in the presence of CZM, which was determined by D1 at 483.5 nm. Method D is mean centering of the ratio spectra (MCR), in which the mean centered values at 300 nm and 340.0 nm were used for the two drugs, respectively. The two compounds were simultaneously determined in the concentration ranges of 5.00-50.00 μg mL-1 and 0.5-10.0 μg mL-1 for TXN and CZM, respectively. The methods were validated according to the ICH guidelines and the results were statistically compared to the manufacturer's method.
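
    Methods A and B reduce to numerical differentiation of a spectrum and of a ratio spectrum; the toy sketch below uses synthetic Gaussian bands, not the real TXN/CZM spectra.

```python
# Sketch of methods A and B on synthetic spectra: a first-derivative
# spectrum (D1) via np.gradient, and a first derivative of the ratio
# spectrum (DD1) obtained by dividing by a standard of the other component.
import numpy as np

wl = np.linspace(220.0, 520.0, 601)                      # wavelengths, nm
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)  # synthetic band

mix = 0.8 * gauss(352.0, 30.0) + 0.2 * gauss(470.0, 25.0)  # TXN + CZM (toy)
czm_std = gauss(470.0, 25.0)                               # CZM standard

d1 = np.gradient(mix, wl)                 # method A: D1 spectrum
ratio = mix / (czm_std + 1e-6)            # avoid division by zero
dd1 = np.gradient(ratio, wl)              # method B: DD1 spectrum
print("D1 amplitude near 294 nm:", d1[np.argmin(np.abs(wl - 294.0))])
```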

  6. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Full Text Available Abstract Background Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification had never been carried out to evaluate whether they can be used interchangeably, as commonly assumed. Results The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to different kinds of biases, which may result in an overestimation of M/S putative hybrids, as follows: (i) incorrect match of M and S specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M and S-specific IGS arrays in single individuals in areas of secondary contact between the two forms. Conclusions The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  7. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.; Bates, S.; Peters, R.

    1994-08-01

    This document describes proposed waste form performance criteria and testing methods that could be used as guidance in judging the viability of a waste form as a physico-chemical barrier to releases of radionuclides and RCRA-regulated hazardous components. It is assumed that release of contaminants by leaching is the single most important property by which the effectiveness of a waste form is judged. A two-tier regimen is proposed. The first tier includes a leach test required by the Environmental Protection Agency and a leach test designed to determine the net forward leach rate for a variety of materials. The second tier of tests determines whether a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. It is recommended that the first-tier tests be performed first to determine acceptability. Only on passing the given specifications for the leach tests should other tests be performed. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leach rates.

  8. A high pressure liquid chromatography method for separation of prolactin forms.

    Science.gov (United States)

    Bell, Damon A; Hoad, Kirsten; Leong, Lillian; Bakar, Juwaini Abu; Sheehan, Paul; Vasikaran, Samuel D

    2012-05-01

    Prolactin has multiple forms, and macroprolactin, which is thought not to be bioavailable, can cause a raised serum prolactin concentration. Gel filtration chromatography (GFC) is currently the gold standard method for separating macroprolactin, but is labour-intensive. Polyethylene glycol (PEG) precipitation is suitable for routine use but may not always be accurate. We developed a high pressure liquid chromatography (HPLC) assay for macroprolactin measurement. Chromatography was carried out using an Agilent Zorbax GF-250 (9.4 × 250 mm, 4 μm) size exclusion column and 50 mmol/L Tris buffer with 0.15 mmol/L NaCl at pH 7.2 as mobile phase, with a flow rate of 1 mL/min. Serum or plasma was diluted 1:1 with mobile phase, filtered, and 100 μL was injected. Fractions of 155 μL were collected for prolactin measurement and the elution profile plotted. The area under the curve of each prolactin peak was calculated to quantify each prolactin form and compared with GFC. Clear separation of monomeric, big and macroprolactin forms was achieved. Quantification was comparable to GFC and precision was acceptable. Total time from injection to collection of the final fraction was 16 min. We have developed an HPLC method for quantification of macroprolactin which is rapid and easy to perform, and can therefore be used for routine measurement.
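
    The peak quantification step is a windowed trapezoid-rule integration of the elution profile; the sketch below uses synthetic peaks and hypothetical fraction windows (155 μL at 1 mL/min corresponds to 0.155 min per fraction).

```python
# Sketch of the peak quantification step: integrate the prolactin elution
# profile over each peak's fraction window with the trapezoid rule.
# Peak positions, widths and heights are hypothetical.
import numpy as np

t = np.arange(0.0, 16.0, 0.155)                  # fraction times, min
profile = (3.0 * np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)     # macroprolactin
           + 1.0 * np.exp(-0.5 * ((t - 9.0) / 0.3) ** 2)   # big prolactin
           + 6.0 * np.exp(-0.5 * ((t - 12.0) / 0.3) ** 2))  # monomeric

windows = {"macro": (4.0, 6.0), "big": (8.0, 10.0), "mono": (11.0, 13.0)}
areas = {name: np.trapz(profile[(t >= a) & (t <= b)], t[(t >= a) & (t <= b)])
         for name, (a, b) in windows.items()}
total = sum(areas.values())
for name, area in areas.items():
    print(f"{name}: {100.0 * area / total:.1f}%")
```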

  9. Development and Validation of UV Spectrophotometric Method For Estimation of Dolutegravir Sodium in Tablet Dosage Form

    International Nuclear Information System (INIS)

    Balasaheb, B.G.

    2015-01-01

    A simple, rapid, precise and accurate spectrophotometric method has been developed for the quantitative analysis of Dolutegravir sodium in tablet formulations. The initial stock solution of Dolutegravir sodium was prepared in methanol, and subsequent dilution was done in water. The standard solution of Dolutegravir sodium in water showed maximum absorption at a wavelength of 259.80 nm. The drug obeyed Beer-Lambert's law in the concentration range of 5-40 μg/mL with a coefficient of correlation (R²) of 0.9992. The method was validated as per the ICH guidelines. The developed method can be adopted in routine analysis of Dolutegravir sodium in bulk or tablet dosage form; it involves relatively low-cost solvents and no complex extraction techniques. (author)

  10. Development of alternative methods for the determination of raloxifene hydrochloride in tablet dosage form

    Directory of Open Access Journals (Sweden)

    Fernanda Rodrigues Salazar

    2015-06-01

    Full Text Available Three methods are proposed for the quantitative determination of raloxifene hydrochloride in pharmaceutical dosage form: ultraviolet spectrophotometry (UV), high performance liquid chromatography (HPLC) and micellar electrokinetic chromatography (MEKC). These methods were developed and validated and showed good linearity, precision and accuracy. They also demonstrated to be specific and robust. The HPLC and MEKC methods were tested for their stability-indicating capability and were shown to have this attribute. The UV method used methanol as solvent and an optimal wavelength of 284 nm, obeying the Beer-Lambert law under these conditions. The chromatographic conditions for the HPLC method included: NST C18 column (250 x 4.6 mm, 5 µm), mobile phase water:acetonitrile:triethylamine (67:33:0.3 v/v, pH 3.5), flow rate 1.0 mL min-1, injection volume 20.0 µL, UV detection at 287 nm and analysis temperature 30 °C. The MEKC method was performed on a fused-silica capillary (40 cm effective length x 50 µm i.d.) using as background electrolyte 35.0 mmol L-1 borate buffer and 50.0 mmol L-1 of the anionic detergent sodium dodecyl sulfate (SDS) at pH 8.8. The capillary temperature was 32 °C, the applied voltage 25 kV, UV detection at 280 nm, and injection was performed at 45 mbar for 4 s in hydrodynamic mode. In this MEKC method, diclofenac potassium (200.0 µg mL-1) was used as internal standard. All these methods were statistically analyzed and demonstrated to be equivalent for the quantitative analysis of RLX in tablets, and were successfully applied for the determination of the drug.

  11. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy in the wavelength range of 30-650 Å has been improved by installing a high frame rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in intensities of EUV emission among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high resolution 2-D intensity distributions have been obtained from CIV (384.176 Å), CV (2×40.27 Å), CVI (2×33.73 Å) and HeII (303.78 Å). (author)
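
    The normalization idea is a per-shot division by the simultaneous monitor intensity, so that shot-to-shot source variations cancel; a minimal synthetic sketch (all arrays invented):

```python
# Sketch of the intensity normalization idea: divide each frame's signal by
# the simultaneous line intensity from a monitor spectrometer so that
# shot-to-shot source variations cancel. All arrays are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_shots, n_pix = 5, 64
true_profile = np.sin(np.linspace(0, np.pi, n_pix))   # fixed spatial profile
shot_factor = rng.uniform(0.7, 1.3, n_shots)          # source variation

frames = shot_factor[:, None] * true_profile          # measured 2-D data
monitor = shot_factor * 1.0                           # monitor line signal

normalized = frames / monitor[:, None]                # variation removed
print("residual shot-to-shot spread:", normalized.std(axis=0).max())
```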

  12. METHOD OF GROUP OBJECTS FORMING FOR SPACE-BASED REMOTE SENSING OF THE EARTH

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    Full Text Available Subject of Research. Research findings on the specific application of space-based optical-electronic and radar means for Earth remote sensing are considered. The subject matter of the study is the current planning of object surveys on the underlying surface in order to increase the effectiveness of the sensing system through the rational use of its resources. Method. New concepts of a group object, stochastic swath and stochastic length of the route are introduced. An overview of models for single and group objects and their parameters is given. The criterion for the existence of a group object based on two single objects is formulated. A method for group object formation during current survey planning has been developed and its description is presented. The method comprises several processing stages for data about objects, with the calculation of new parameters and the stochastic characteristics of the space means, and validates the spatial size of the object against the values of the stochastic swath and stochastic length of the route. A strict mathematical description of techniques for creating a group object model based on data about single objects and onboard special complex facilities under difficult conditions of spatial data registration is given. Main Results. The developed method is implemented on the basis of a modern geographic information system in the form of a software tool layout with advanced tools for processing and analysis of spatial data in vector format. Experimental studies of the group object forming method were carried out on different real object environments using the parameters of the modern national Earth remote sensing detailed observation systems Canopus-B and Resurs-P. Practical Relevance. The proposed models and method are focused on practical implementation using vector spatial data models and modern geoinformation technologies. Practical value lies in the reduction in the amount of consumable resources by means of

  13. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
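
    A toy version of the core idea behind qsmooth is quantile normalization applied separately within biological groups; the real package smoothly interpolates between within-group and global references, which this sketch deliberately does not.

```python
# Minimal sketch of quantile normalization applied within biological groups,
# the idea generalized by qsmooth (this toy version ignores ties and does
# not interpolate toward a global reference as the real method does).
import numpy as np

def quantile_normalize(X):
    """Columns = samples. Replace each column by the mean sorted profile."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    mean_sorted = np.sort(X, axis=0).mean(axis=1)
    return mean_sorted[ranks]

rng = np.random.default_rng(3)
X = rng.lognormal(0.0, 1.0, size=(1000, 6))     # 1000 features, 6 samples
groups = np.array([0, 0, 0, 1, 1, 1])           # two biological conditions

Xn = X.copy()
for g in np.unique(groups):                     # normalize within each group
    Xn[:, groups == g] = quantile_normalize(X[:, groups == g])
print(np.median(Xn, axis=0))                    # medians now match per group
```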

  14. RP-HPLC Method for the Estimation of Nebivolol in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    M. K. Sahoo

    2009-01-01

    Full Text Available A reverse phase HPLC method is described for the determination of nebivolol in tablet dosage form. Chromatography was carried out on a Hypersil ODS C18 column using a mixture of methanol and water (80:20 v/v) as the mobile phase at a flow rate of 1.0 mL/min with detection at 282 nm. Chlorzoxazone was used as the internal standard. The retention times were 3.175 min and 4.158 min for nebivolol and chlorzoxazone, respectively. The detector response was linear over the concentration range of 1-400 μg/mL. The limit of detection and limit of quantification were 0.0779 and 0.2361 μg/mL, respectively. The percentage assay of nebivolol was 99.974%. The method was validated by determining its sensitivity, accuracy and precision. The proposed method is simple, fast, accurate and precise, and hence can be applied for routine quality control of nebivolol in bulk and tablet dosage form.

  15. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    Science.gov (United States)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cable. Firstly, the suspension cable is discretized into elements by space points, and the mass and external forces of the suspension cable are lumped at the space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering application.
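
    The time-stepping kernel of such an approach is a central-difference update of lumped nodes under gravity and segment elasticity. The sketch below uses illustrative parameters and a simple mass-proportional damping so the cable settles; it is not the paper's model.

```python
# Sketch of a central-difference update for lumped cable nodes: each node
# (space point) is advanced from gravity, damping, and the elastic forces
# of its neighboring segments. All parameters are illustrative.
import numpy as np

n, dt, steps = 21, 1e-4, 100000
m, k, g, c = 1.0, 5e5, 9.81, 2.0     # nodal mass, stiffness, gravity, damping
L0 = 1.0                             # unstretched segment length

x = np.zeros((n, 2))
x[:, 0] = np.arange(n) * L0          # straight, taut initial cable
x_prev = x.copy()

for _ in range(steps):
    f = np.zeros_like(x)
    f[:, 1] -= m * g                              # gravity on every node
    f -= c * m * (x - x_prev) / dt                # mass-proportional damping
    d = x[1:] - x[:-1]                            # segment vectors
    ln = np.linalg.norm(d, axis=1, keepdims=True)
    t = k * (ln - L0) * d / ln                    # elastic force (no slack check)
    f[:-1] += t
    f[1:] -= t
    x_new = 2 * x - x_prev + (dt ** 2 / m) * f    # central-difference step
    x_new[0], x_new[-1] = x[0], x[-1]             # pinned end supports
    x_prev, x = x, x_new

print("mid-span sag (m):", -x[n // 2, 1])
```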

  16. The study of forms of bonding marshmallow moisture with different composition by method of thermal analysis

    Directory of Open Access Journals (Sweden)

    G. O. Magomedov

    2017-01-01

    Full Text Available Marshmallow is a sugar confectionery product with increased sugar content and energy value because of the significant content of carbohydrates, in particular granulated sugar. The main drawback of marshmallow is the rapid process of its drying during storage due to the crystallization of sucrose and the gradual removal of moisture from the product. A method for obtaining marshmallow without sugar, on the basis of high-conversion glucose syrup, was developed. In this work, experimental studies were carried out to determine the content and ratio of free and bound forms of moisture in marshmallow made with sugar and marshmallow made with high-conversion glucose syrup, by Differential Scanning Calorimetry (DSC) and Thermogravimetry (TG). To study the patterns of thermal effects on the properties of the marshmallow samples, the non-isothermal analysis method and a synchronous thermal analysis instrument (TG-DTA/DSC, STA 449 F3 Jupiter) were used. In the process of thermal exposure, sugars and other organic compounds in the samples decompose, as a result of which the sample weight decreases due to evaporation of moisture. The dehydration process in the control marshmallow sample made with sugar occurs over a narrower temperature range than in the sample based on high-conversion glucose syrup, which indicates a greater degree of moisture bonding in the developed sample. A quantitative evaluation of the forms of moisture bonding in the samples was carried out using the experimental curves obtained by the TG method. From the temperature curves, the endothermic effects were determined, which correspond to the release of moisture with different bonding forms and energies. Substituting syrup for sugar in the marshmallow formula reduces the share of free moisture and increases the shelf life of the product without signs of staling.

  17. Stability Indicating LC-Method for Estimation of Paracetamol and Lornoxicam in Combined Dosage Form

    OpenAIRE

    Shah, Dimal A.; Patel, Neel J.; Baldania, Sunil L.; Chhalotiya, Usman K.; Bhatt, Kashyap K.

    2011-01-01

    A simple, specific and stability-indicating reversed phase high performance liquid chromatographic method was developed for the simultaneous determination of paracetamol and lornoxicam in tablet dosage form. A Brownlee C18 column (250×4.6 mm i.d., 5 μm) was used in isocratic mode, with a mobile phase containing 0.05 M potassium dihydrogen phosphate:methanol (40:60, v/v). The flow rate was 1.0 ml/min and effluents were monitored at 266 nm. The retention times of paracetamol and lornoxicam ...

  18. Analytical Method Development and Validation of Solifenacin in Pharmaceutical Dosage Forms by RP-HPLC

    OpenAIRE

    Shaik, Rihana Parveen; Puttagunta, Srinivasa Babu; Kothapalli Bannoth, Chandrasekar; Challa, Bala Sekhara Reddy

    2014-01-01

    A new, accurate, precise, and robust HPLC method was developed and validated for the determination of solifenacin in tablet dosage form. The chromatographic separation was achieved on an Inertsil ODS 3V C18 (150 mm × 4.6 mm, 5 μm) stationary phase maintained at ambient temperature with a mobile phase combination of monobasic potassium phosphate (pH 3.5) containing 0.1% triethylamine and methanol (gradient mode) at a flow rate of 1.5 mL/min, and the detection was carried out by using UV detect...

  19. Study by the disco method of critical components of a P.W.R. normal feedwater system

    International Nuclear Information System (INIS)

    Duchemin, B.; Villeneuve, M.J. de; Vallette, F.; Bruna, J.G.

    1983-03-01

    The objective of the DISCO (Determination of Importance Sensitivity of COmponents) method is to rank the components of a system in order to identify the most important ones with respect to availability. This method uses the fault tree description of the system and the cut set technique. It ranks the components by ordering the importances attributed to each one. The DISCO method was applied to the study of the 900 MWe P.W.R. normal feedwater system with insufficient flow in the steam generator. In order to take operating experience into account, several data banks were used and the results compared. This study made it possible to determine the most critical component (the turbo-pumps) and to propose and quantify modifications of the system in order to improve its availability.
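
    Cut-set-based component ranking of this kind can be illustrated with the Fussell-Vesely importance measure under the rare-event approximation; the fault tree and failure probabilities below are invented for illustration (the abstract does not publish DISCO's cut sets or its exact importance measure).

```python
# Sketch of cut-set-based component ranking in the spirit of DISCO: the
# Fussell-Vesely importance of a component is the share of system
# unavailability contributed by minimal cut sets containing it.

p = {"pumpA": 1e-2, "pumpB": 1e-2, "valve": 5e-3, "control": 1e-4}
cut_sets = [{"pumpA", "pumpB"}, {"valve"}, {"control"}]  # minimal cut sets

def cs_prob(cs):
    prob = 1.0
    for comp in cs:
        prob *= p[comp]
    return prob

# Rare-event approximation: system unavailability ~ sum of cut-set probs.
q_sys = sum(cs_prob(cs) for cs in cut_sets)
fv = {c: sum(cs_prob(cs) for cs in cut_sets if c in cs) / q_sys for c in p}
for comp, imp in sorted(fv.items(), key=lambda kv: -kv[1]):
    print(f"{comp}: FV = {imp:.3f}")
```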

  20. A non-Hertzian method for solving wheel-rail normal contact problem taking into account the effect of yaw

    Science.gov (United States)

    Liu, Binbin; Bruni, Stefano; Vollebregt, Edwin

    2016-09-01

    A novel approach is proposed in this paper to deal with non-Hertzian normal contact in wheel-rail interface, extending the widely used Kik-Piotrowski method. The new approach is able to consider the effect of the yaw angle of the wheelset against the rail on the shape of the contact patch and on pressure distribution. Furthermore, the method considers the variation of profile curvature across the contact patch, enhancing the correspondence to CONTACT for highly non-Hertzian contact conditions. The simulation results show that the proposed method can provide more accurate estimation than the original algorithm compared to Kalker's CONTACT, and that the influence of yaw on the contact results is significant under certain circumstances.

  1. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data, and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that the histone modifications H3K27me3 and H3K4me3 act as a repressor and an activator of genes, respectively. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.

  2. Validation of HPLC and UV spectrophotometric methods for the determination of meropenem in pharmaceutical dosage form.

    Science.gov (United States)

    Mendez, Andreas S L; Steppe, Martin; Schapoval, Elfrides E S

    2003-12-04

    A high-performance liquid chromatographic method and a UV spectrophotometric method for the quantitative determination of meropenem, a highly active carbapenem antibiotic, in powder for injection were developed in the present work. The parameters linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation were studied according to International Conference on Harmonization guidelines. Chromatography was carried out by the reversed-phase technique on an RP-18 column with a mobile phase composed of 30 mM monobasic phosphate buffer and acetonitrile (90:10; v/v), adjusted to pH 3.0 with orthophosphoric acid. The UV spectrophotometric method was performed at 298 nm. The samples were prepared in water, and the stability of meropenem in aqueous solution at 4 and 25 degrees C was studied. The results were satisfactory, with good stability after 24 h at 4 degrees C. Statistical analysis by Student's t-test showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and can be used for the reliable quantitation of meropenem in pharmaceutical dosage form.
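
    The final comparison between the two methods is a two-sample Student's t-test; a sketch with invented recovery percentages (not the paper's data):

```python
# Sketch of the statistical comparison step: a two-sample Student's t-test
# on assay results from the HPLC and UV methods (values are invented).
from scipy import stats

hplc = [99.8, 100.2, 99.5, 100.4, 99.9, 100.1]   # % recovery, method 1
uv = [100.0, 99.7, 100.3, 99.6, 100.2, 99.8]     # % recovery, method 2

t, p_value = stats.ttest_ind(hplc, uv)
print(f"t = {t:.3f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```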

  3. A validated stability-indicating UPLC method for desloratadine and its impurities in pharmaceutical dosage forms.

    Science.gov (United States)

    Rao, Dantu Durga; Satyanarayana, N V; Malleswara Reddy, A; Sait, Shakil S; Chakole, Dinesh; Mukkanti, K

    2010-02-05

    A novel stability-indicating gradient reverse phase ultra-performance liquid chromatographic (RP-UPLC) method was developed for the determination of the purity of desloratadine in the presence of its impurities and forced degradation products. The method was developed using a Waters Acquity BEH C18 column with a mobile phase containing a gradient mixture of solvents A and B. The eluted compounds were monitored at 280 nm. The run time was 8 min, within which desloratadine and its five impurities were well separated. Desloratadine was subjected to the stress conditions of oxidative, acid, base, hydrolytic, thermal and photolytic degradation. Desloratadine was found to degrade significantly under oxidative and thermal stress conditions and to be stable under acid, base, hydrolytic and photolytic degradation conditions. The degradation products were well resolved from the main peak and its impurities, thus proving the stability-indicating power of the method. The developed method was validated as per ICH guidelines with respect to specificity, linearity, limit of detection, limit of quantification, accuracy, precision and robustness. This method is also suitable for the assay determination of desloratadine in pharmaceutical dosage forms.

  4. Methods and data for HTGR fuel performance and radionuclide release modeling during normal operation and accidents for safety analysis

    International Nuclear Information System (INIS)

    Verfondern, K.; Martin, R.C.; Moormann, R.

    1993-01-01

    The previous status report released in 1987 on reference data and calculation models for fission product transport in High-Temperature, Gas-Cooled Reactor (HTGR) safety analyses has been updated to reflect the current state of knowledge in the German HTGR program. The content of the status report has been expanded to include information from other national programs in HTGRs to provide comparative information on methods of analysis and the underlying database for fuel performance and fission product transport. The release and transport of fission products during normal operating conditions and during the accident scenarios of core heatup, water and air ingress, and depressurization are discussed. (orig.) [de]

  5. Method for the determination of the equation of state of advanced fuels based on the properties of normal fluids

    International Nuclear Information System (INIS)

    Hecht, M.J.; Catton, I.; Kastenberg, W.E.

    1976-12-01

    An equation of state based on the properties of normal fluids, the law of rectilinear averages, and the second law of thermodynamics can be derived for advanced LMFBR fuels on the basis of the vapor pressure, enthalpy of vaporization, change in heat capacity upon vaporization, and liquid density at the melting point. The method consists of estimating an equation of state by means of the law of rectilinear averages and the second law of thermodynamics, integrating by means of the second law until an instability is reached, and then extrapolating by means of a self-consistent estimation of the enthalpy of vaporization.
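
    Assuming the "law of rectilinear averages" refers to the Cailletet-Mathias law of rectilinear diameters, the two classical relations underlying such a construction can be written as:

```latex
% Cailletet-Mathias law of rectilinear diameters: the mean of the saturated
% liquid and vapor densities is linear in temperature (a, b fitted constants)
\frac{\rho_\ell(T) + \rho_v(T)}{2} = a - b\,T

% Clausius-Clapeyron relation linking the vapor pressure curve to the
% enthalpy of vaporization and the specific volumes of the two phases
\frac{dp_{\mathrm{sat}}}{dT} = \frac{\Delta H_{\mathrm{vap}}}{T\,(v_v - v_\ell)}
```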

  6. Differentiation between spore-forming and asporogenic bacteria using a PCR and southern hybridization based method

    Energy Technology Data Exchange (ETDEWEB)

    Brill, J.A.; Wiegel, J. [Univ. of Georgia, Athens, GA (United States)

    1997-12-31

    A set of molecular probes was devised to develop a method for screening for the presence of sequences homologous to three representative genes exclusively involved in endosporulation. Based on known gene sequences, degenerate PCR primers were designed against spo0A and ssp. Experimental conditions were devised under which homologs of both genes were consistently detected in endospore-forming bacteria, but not in asporogenic bacteria. The PCR amplification products and dpaA/B from Bacillus subtilis were used as hybridization probes for Southern blots. Identical conditions were used with the genomic DNA from endospore-forming and asporogenic bacteria. We therefore concluded that the probes specifically detect the targeted sporulation genes and we obtained no indication that genes homologous to ssp, spo0A and dpaA/B are present in asporogenic bacteria. Thus, this assay can potentially be used to detect spore-forming bacteria in various kinds of samples and to distinguish between bacteria containing sporulation genes and those who do not regardless of whether sporulation is observed or not. 43 refs., 3 figs., 1 tab.

  7. CEMS Investigations of Fe-Silicide Phases Formed by the Method of Concentration Controlled Phase Selection

    Energy Technology Data Exchange (ETDEWEB)

    Moodley, M. K.; Bharuth-Ram, K. [University of Durban-Westville, Physics Department (South Africa); Waal, H. de; Pretorius, R. [University of Stellenbosch, Physics Department (South Africa)

    2002-03-15

    Conversion electron Moessbauer spectroscopy (CEMS) measurements have been made on Fe-silicide samples formed using the method of concentration controlled phase selection. To prepare the samples, a 10 nm layer of Fe₃₀M₇₀ (M = Cr, Ni) was evaporated onto Si(100) surfaces, followed by evaporation of a 60 nm Fe layer. Diffusion of the Fe into the Si substrate and the formation of different Fe-Si phases were achieved by subjecting the evaporated samples to a series of heating stages, which consisted of (a) a 10 min anneal at 800 °C plus an etch of the residual surface layer, (b) a further 3 hr anneal at 800 °C, (c) a 60 mJ excimer laser anneal to an energy density of 0.8 J/cm², and (d) a final 3 hr anneal at 800 °C. CEMS measurements were used to track the Fe-silicide phases formed. The CEMS spectra consisted of doublets which, based on established hyperfine parameters, could be assigned to α- or β-FeSi₂ or cubic FeSi. The spectra showed that β-FeSi₂ had formed already at the first annealing stage. Excimer laser annealing resulted in the formation of a phase with hyperfine parameters consistent with those of α-FeSi₂. A further 3 hr anneal at 800 °C resulted in complete reversal to the semiconducting β-FeSi₂ phase.

  8. Methods of acicular ferrite forming in the weld bead metal (Brief analysis

    Directory of Open Access Journals (Sweden)

    Володимир Олександрович Лебедєв

    2016-11-01

    Full Text Available A brief analysis of the methods of forming acicular ferrite, the most preferable structural component in the weld metal, is presented. The term «acicular ferrite» denotes a structure that forms during austenite decomposition at temperatures between the pearlite and martensite transformations. Acicular ferrite is a packet structure consisting of laths of bainitic ferrite, with no cementite particles inside these laths at all. The chemical elements that most effectively influence the formation of acicular ferrite are considered, as well as their combined effect. It is shown, in particular, that the most effective chemical element in terms of the relation between impact toughness and cost is manganese. In addition, the results of multipass surfacing with pulsed and constant feed of a low-alloy steel wire electrode are considered. According to these results, acicular ferrite forms in both cases. However, with pulsed feed of the electrode wire, high mechanical properties of the surfaced layer were obtained from the first passes, the form of the acicular ferrite crystallites was improved, and the volume shares of polygonal and lamellar ferrite were reduced. It is suggested that acicular ferrite in the surfaced layer may also be obtained by superimposing mechanical low-frequency oscillations on the welding torch or the weld pool instead of the periodic thermal effect of periodic electrode wire feed.

  9. Method of forming a package for MEMS-based fuel cell

    Science.gov (United States)

    Morse, Jeffrey D; Jankowski, Alan F

    2013-05-21

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  10. Selective laser pyrolysis of metallo-organics as a method of forming patterned thin film superconductors

    International Nuclear Information System (INIS)

    Mantese, J.V.; Catalan, A.B.; Sell, J.A.; Meyer, M.S.; Mance, A.M.

    1990-01-01

    This patent describes a method for forming patterned films of superconductive materials. A solution is formed from the neodecanoates of yttrium, barium and copper, the neodecanoates yielding an oxide mixture that exhibits superconductive properties upon subsequent thermal decomposition, wherein the oxide mixture is characterized by a yttrium:barium:copper ratio of approximately 1:2:4; the solution comprises an organic solvent such as xylene. An appropriate dye is added to the solution, and a film of the solution containing the dye is deposited onto a strontium titanate substrate. Selective regions of the film are exposed with an Argon laser emitting the appropriate wavelength of light, such that the exposed regions of the film become insoluble in xylene. The film is immersed in xylene so that the soluble, unexposed regions of the film are removed from the substrate. The film is then heated to thermally decompose the neodecanoates into a film containing yttrium, barium and copper oxides, to promote recrystallization and grain growth of the metal oxides within the film, and to induce a change therein by which the film exhibits superconducting properties.

  11. A simple identification method for spore-forming bacteria showing high resistance against γ-rays

    International Nuclear Information System (INIS)

    Koshikawa, Tomihiko; Sone, Koji; Kobayashi, Toshikazu

    1993-01-01

    A simple identification method was developed for spore-forming bacteria which are highly resistant to γ-rays. Among the 23 species of Bacillus studied, the spores of Bacillus megaterium, B. cereus, B. thuringiensis, B. pumilus and B. aneurinolyticus showed high resistance to γ-rays as compared with the spores of other Bacillus species. The combination of seven kinds of biochemical tests, namely the citrate utilization test, nitrate reduction test, starch hydrolysis test, Voges-Proskauer reaction test, gelatine hydrolysis test, mannitol utilization test and xylose utilization test, showed a characteristic pattern for each species of Bacillus. The combined pattern of the above tests, with a few supplementary tests if necessary, was useful for identifying Bacillus species showing high radiation resistance to γ-rays. The method is specific for B. megaterium, B. thuringiensis and B. pumilus, and highly selective for B. aneurinolyticus and B. cereus. (author)

  12. Thermal and Isothermal Methods in Development of Sustained Release Dosage Forms of Ketorolac Tromethamine

    Directory of Open Access Journals (Sweden)

    Dimple Chopra

    2008-01-01

    Full Text Available Differential scanning calorimetry (DSC) is a rapid, convenient and conclusive method for screening drug-polymer blends during preformulation studies, as it allows polymer incompatibility to be established instantaneously. Various batches of matrix tablets of ketorolac tromethamine (KTM) with a series of compatible polymers were prepared. Batches of tablets which gave the desired sustained release profile were subjected to stability testing according to ICH guidelines. The analysis for drug content was done using a high performance liquid chromatography (HPLC) method. The results revealed that there was no statistically significant change in drug content after storage of the matrix tablets at an elevated temperature of 40°C and 75% relative humidity. From our study we conclude that, with careful selection of different polymers and their combinations, a stable sustained release oral dosage form of ketorolac tromethamine can be achieved.

  13. DIAGNOSTIC CHARACTERISTICS OF THE COMPUTER TESTS FORMED BY METHOD OF RESTORED FRAGMENTS

    Directory of Open Access Journals (Sweden)

    Oleksandr O. Petkov

    2013-03-01

    Full Text Available The determination of the validity and reliability of tests formed by the method of restored fragments is considered in the article. The structure of the controlled theoretical material of the limited field of knowledge, the language expressions that describe the subject of control, and the reliability of the test are analyzed. A technique is given for determining the most important components of the reliability of the considered tests: the reliability of the quantitative determination of the coefficient of assimilation, and the technological reliability. The results of the pedagogical experiments conducted have proved that tests of this class allow the control of mastery of theoretical material at the reproduction level in any field of knowledge with high reliability. It is shown that the validity of tests with restored fragments is mainly determined by the degree of structuring and methodical elaboration of the controlled material, and can achieve preset parameters, up to the level of absolute validity.

  14. Linking the Organizational Forms of Teaching and Teaching Methods in the Methodological Class

    Directory of Open Access Journals (Sweden)

    Graciela Nápoles-Quiñones

    2016-05-01

    Full Text Available A descriptive study was conducted to show the link between the organizational forms of teaching and teaching methods, to expound the pedagogical theory, and to deepen the teaching-learning process through the methodological class. The main content of the work of teachers is preparation and professional growth, which requires the selection and use of working methods, ways and procedures in accordance with the real and objective conditions of the staff receiving the action, and conducive to teaching work. Teachers should be aware that they need to master the content they teach, be aware of the level of development of their students and the specific characteristics of the group and of each student, and be competent to connect the content they teach with reality.

  15. Plasma spraying method for forming diamond and diamond-like coatings

    Science.gov (United States)

    Holcombe, Cressie E.; Seals, Roland D.; Price, R. Eugene

    1997-01-01

    A method and composition for the deposition of a thick layer (10) of diamond or diamond-like material. The method includes high temperature processing wherein a selected composition (12), including at least glassy carbon, is heated in a direct current plasma arc device to a selected temperature above the softening point, in an inert atmosphere, and is propelled onto a selected substrate (20), where it is quickly quenched. The softened or molten composition (18) crystallizes on the substrate (20) to form a thick deposition layer (10) comprising at least a diamond or diamond-like material. The selected composition (12) includes at least glassy carbon as a primary constituent (14) and may include at least one secondary constituent (16). Preferably, the secondary constituents (16) are selected from the group consisting of at least diamond powder, boron carbide (B₄C) powder and mixtures thereof.

  16. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    Science.gov (United States)

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers which were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
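
    One of the five compared methods, probabilistic quotient normalization (PQN), is compact enough to sketch on synthetic data: each sample is divided by the median quotient of its features relative to a reference (median) spectrum.

```python
# Sketch of probabilistic quotient normalization (PQN), one of the five
# methods compared above. Data and dilution factors are synthetic.
import numpy as np

rng = np.random.default_rng(4)
X = rng.lognormal(0.0, 0.5, size=(20, 200))      # 20 samples x 200 features
dilution = rng.uniform(0.5, 2.0, 20)             # simulated dilution factors
X = X * dilution[:, None]

reference = np.median(X, axis=0)                 # reference spectrum
quotients = X / reference                        # feature-wise quotients
factors = np.median(quotients, axis=1)           # one factor per sample
X_pqn = X / factors[:, None]                     # dilution effect removed
print("corr(factors, dilution):", np.corrcoef(factors, dilution)[0, 1])
```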

  17. Method for Forming Pulp Fibre Yarns Developed by a Design-driven Process

    Directory of Open Access Journals (Sweden)

    Tiia-Maria Tenhunen

    2016-01-01

    Full Text Available A simple and inexpensive method for producing water-stable pulp fibre yarns using a deep eutectic mixture composed of choline chloride and urea (ChCl/urea was developed in this work. Deep eutectic solvents (DESs are eutectic mixtures consisting of two or more components that together have a lower melting point than the individual components. DESs have been previously studied with respect to cellulose dissolution, functionalisation, and pre-treatment. This new method uses a mixture of choline chloride and urea, which is used as a swelling and dispersing agent for the pulp fibres in the yarn-forming process. Although the pulp seemed to form a gel when dispersed in ChCl/urea, the ultrastructure of the pulp was not affected. To enable water stability, pulp fibres were crosslinked by esterification using polyacrylic acid. ChCl/urea could be easily recycled and reused by distillation. The novel process described in this study enables utilisation of pulp fibres in textile production without modification or dissolution and shortening of the textile value chain. An interdisciplinary approach was used, where potential applications were explored simultaneously with material development from process development to the early phase prototyping.

  18. A method for the quantification of model form error associated with physical systems.

    Energy Technology Data Exchange (ETDEWEB)

    Wallen, Samuel P.; Brake, Matthew Robert

    2014-03-01

    In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and that of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.

  19. A method of LED free-form tilted lens rapid modeling based on scheme language

    Science.gov (United States)

    Dai, Yidan

    2017-10-01

    Based on nonimaging optics principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language is proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme language programming were used to generate the lens model, respectively. With the help of optical simulation software, a light source with a size of 1 mm × 1 mm × 1 mm was used in the experiment, with a lateral migration distance of the illumination area of 0.5 m, and a total of one million rays were traced to obtain the simulated results for both models. The simulation output shows that the Scheme language approach prevents the model deformation problems caused by model transfer; the degree of illumination uniformity reaches 82% and the offset angle is 26°. The efficiency of the modeling process is also greatly increased by using the Scheme language.

  20. A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form.

    Science.gov (United States)

    Thiruvengada, Rajan Vs; Mohamed, Saleem Ts; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C

    2010-10-01

    An isocratic reversed phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 stainless steel column (250 mm × 4.6 mm, 5 μm particle size). The mobile phase comprised two solvents (Solvent A: buffer, 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine; Solvent B: acetonitrile). The ratio of Solvent A:Solvent B was 75:25 v/v. The flow rate was 1.0 mL/min with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulation.

  1. Radioactive waste immobilization in protective ceramic forms by the HIP method at high pressures

    International Nuclear Information System (INIS)

    Sayenko, S.Yu.; Kantsedal, V.P.; Tarasov, R.V.; Starchenko, V.A.; Lyubtsev, R.I.

    1993-01-01

    Intense research activities have been carried out in recent years at the Kharkov Institute of Physics and Technology (KIPT) to develop the method of hot isostatic pressing (HIP) for immobilizing radioactive (primarily high-level) wastes. With this method, the radioactive material is immobilized in a matrix under the simultaneous action of high pressures (up to 6,000 atm) and appropriate temperatures. The process has two variants: (1) radioactive wastes are treated as powders of oxides resulting from calcination during chemical treatment of spent fuel; in this case the radioactive material enters into the crystalline structure of the immobilizing matrix or is distributed in the matrix as a homogeneous mixture; (2) protective barrier layers are pressed onto spent fuel rods or their pieces, as radioactive wastes, by the HIP method (fuel rod encapsulation in a protective form). Based on numerous results from various studies, the authors suggest that various ceramic compositions should be used as protective materials. Here the authors report on two directions of their investigations: (1) development of ecologically clean process equipment for radioactive waste treatment by the HIP method; (2) manufacture of promising protective ceramic compositions and investigation of their physico-mechanical properties.

  2. Probing the effect of human normal sperm morphology rate on cycle outcomes and assisted reproductive methods selection.

    Directory of Open Access Journals (Sweden)

    Bo Li

    Sperm morphology is the best predictor of fertilization potential and provides critical information for selecting assisted reproductive methods. Given its predictive value and the decline in semen quality in recent years, the threshold of normal sperm morphology rate (NSMR) has been repeatedly revised and remains controversial, from the 4th edition (14%) to the 5th edition (4%). We retrospectively analyzed 4756 infertility patients treated with conventional IVF (c-IVF) or ICSI, divided into three groups according to NSMR: ≥14%, 4%-14% and <4%. We demonstrate that, with decreasing NSMR (≥14%, 4%-14%, <4%), in the c-IVF group the rates of fertilization, normal fertilization, high-quality embryos and multi-pregnancy, as well as the birth weight of twins, decreased significantly (P<0.05), while the miscarriage rate increased significantly (p<0.01); implantation rate, clinical pregnancy rate, ectopic pregnancy rate, preterm birth rate, live birth rate, sex ratio, and singleton birth weight showed no significant change. In the ICSI group, with decreasing NSMR (≥14%, 4%-14%, <4%), the high-quality embryo rate, multi-pregnancy rate and birth weight of twins decreased significantly (p<0.05), while other parameters showed no significant difference. Regarding the selection of clinical assisted methods: in the NSMR ≥14% group, the normal fertilization rate of c-IVF was significantly higher than in the ICSI group (P<0.05); in the 4%-14% group, the birth weight of twins with c-IVF was significantly higher than in the ICSI group; in the <4% group, the miscarriage rate of IVF was significantly higher than in the ICSI group. We therefore conclude that NSMR is positively related to embryo reproductive potential; when NSMR<4% (5th edition), ICSI should be considered first, while for NSMR≥4%, c-IVF assisted reproduction might be preferred.

  3. Spectrophotometric methods for the simultaneous estimation of ofloxacin and tinidazole in bulk and pharmaceutical dosage form

    Directory of Open Access Journals (Sweden)

    Kareti Srinivasa Rao

    2011-01-01

    Aim: This work deals with the simultaneous estimation of ofloxacin (OFL) and tinidazole (TNZ) in bulk and pharmaceutical dosage form, without prior separation, by three different techniques (simultaneous equation, absorbance ratio and first-order derivative methods). Materials and Methods: The work was carried out on a Shimadzu UV-1800 double-beam UV-Visible spectrophotometer. The absorption spectra of reference and test solutions were recorded in 1 cm matched quartz cells over the range of 200-400 nm. Standard gift samples of OFL and TNZ were obtained from Torrent Pharmaceuticals Ltd., Baddi, Himachal Pradesh. Combined OFL and TNZ tablets were purchased from the local market. Methanol from Merck Ltd and distilled water were used as solvents. Results: The first method is the application of simultaneous equations, for which the linearity ranges for OFL and TNZ were 5-30 μg/ml and 10-50 μg/ml, respectively. The second method is based on the ratio of absorbances at 278 nm, the absorption maximum of TNZ, and at the isosbestic wavelength of 283 nm; the linearity ranges for OFL and TNZ were again 5-30 μg/ml and 10-50 μg/ml. The third method is the first-order derivative method, with the same linearity ranges. The results of the analysis have been validated statistically and by recovery studies; the percentage recovery was found to be 100.9±0.49 and 97.30±0.20 using the simultaneous equation method, 98±0.45 and 100.4±0.48 using the graphical absorbance ratio method, and 99.10±0.40 and 84.70±0.70 using the first-derivative method, for OFL and TNZ respectively. Conclusions: The proposed procedures are rapid, simple, require no preliminary separation steps and can be used for routine analysis of both drugs in quality control laboratories.
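
    The simultaneous-equation (Vierordt) technique used in the first method reduces to solving a 2×2 linear system assembled from the absorptivities of the two drugs at the two analytical wavelengths. Below is a minimal Python sketch of that step; the absorptivity and absorbance values are illustrative placeholders, not the measured constants of this study.

        import numpy as np

        # Beer's law for a two-component mixture at two wavelengths:
        #   A1 = ax1*Cx + ay1*Cy
        #   A2 = ax2*Cx + ay2*Cy
        def vierordt(A1, A2, ax1, ay1, ax2, ay2):
            E = np.array([[ax1, ay1],
                          [ax2, ay2]])       # absorptivity matrix
            A = np.array([A1, A2])           # mixture absorbances
            Cx, Cy = np.linalg.solve(E, A)   # concentrations of drugs X and Y
            return Cx, Cy

        # Hypothetical numbers for illustration only:
        print(vierordt(A1=0.52, A2=0.34, ax1=0.030, ay1=0.012, ax2=0.008, ay2=0.025))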

  4. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations. For the normal mode analysis, the methods are based on the assumption that in low-frequency modes, domain motions can be well approximated by

  5. Three forms of relativity

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1992-01-01

    The physical sense of three forms of relativity is discussed. The first, the instant form, reflects in fact the traditional approach based on the concept of instant distance. The normal form corresponds to the radar formulation, which is based on light, or retarded, distances. The front form in the special case is characterized by 'observable' variables, and the known k-coefficient method is its obvious expression. 16 refs

  6. Ultra-low power thin film transistors with gate oxide formed by nitric acid oxidation method

    International Nuclear Information System (INIS)

    Kobayashi, H.; Kim, W. B.; Matsumoto, T.

    2011-01-01

    We have developed a low-temperature fabrication method for the SiO2/Si structure by use of nitric acid, i.e., the nitric acid oxidation of Si (NAOS) method, and applied it to thin film transistors (TFT). A silicon dioxide (SiO2) layer formed by the NAOS method at room temperature is 1.8 nm thick, and its leakage current density is as low as that of a thermally grown SiO2 layer of the same thickness formed at ~900 °C. The fabricated TFTs possess an ultra-thin NAOS SiO2/CVD SiO2 stacked gate dielectric structure. The ultrathin NAOS SiO2 layer effectively blocks the gate leakage current, and thus the thickness of the gate oxide layer can be decreased from 80 to 20 nm. The thin gate oxide layer makes it possible to decrease the operation voltage to 2 V (cf. the conventional operation voltage of TFTs with 80 nm gate oxide: 12 V) because of the low threshold voltages, i.e., -0.5 V for P-ch TFTs and 0.5 V for N-ch TFTs; the consumed power thus decreases to 1/36 of that of conventional TFTs. The drain current increases rapidly with the gate voltage, and the sub-threshold swing is ~80 mV/dec. The low sub-threshold swing is attributable to the thin gate oxide and the low interface state density of the NAOS SiO2 layer. (authors)

  7. Susceptibility screening of hyphae-forming fungi with a new, easy, and fast inoculum preparation method.

    Science.gov (United States)

    Schmalreck, Arno; Willinger, Birgit; Czaika, Viktor; Fegeler, Wolfgang; Becker, Karsten; Blum, Gerhard; Lass-Flörl, Cornelia

    2012-12-01

    In vitro susceptibility testing of clinically important fungi is becoming more and more essential due to the rising number of fungal infections in patients with impaired immune systems. Existing standardized microbroth dilution methods for in vitro testing of molds (CLSI, EUCAST) are not intended for routine testing; they are very time-consuming and depend on the sporulation of hyphomycetes. In this multicentre study, a new inoculum preparation method that is independent of sporulation (containing a mixture of vegetative cells, hyphae, and conidia) was evaluated. Minimal inhibitory concentrations (MIC) of amphotericin B, posaconazole, and voriconazole for 180 molds were determined with two different culture media (YST and RPMI 1640) according to the DIN (Deutsches Institut für Normung) microdilution assay. The 24- and 48-h MICs of quality control strains, tested in each test run and prepared with the new inoculum method, were within the DIN range. The YST and RPMI 1640 media showed similar MIC distributions for all molds tested. MIC readings at 48 versus 24 h yielded 1 log2 higher MIC values, and more than 90% of the MICs read at 24 and 48 h were within ±2 log2 dilutions. MIC end-point comparison (log2 MIC-RPMI 1640 minus log2 MIC-YST) of the two media demonstrated a tendency to slightly lower MICs with RPMI 1640 medium. This study reports the results of a new, time-saving, and easy-to-perform method of inoculum preparation for routine susceptibility testing that can be applied to all types of spore-/non-spore- and hyphae-forming fungi.

  8. INFORMATION SUPPORT OF THE PROJECT AS A METHOD OF FORMING METASUBJECT KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    И Ф Зыкова

    2016-12-01

    One of the fastest growing forms of education is online training, which is aimed at the remote delivery of quality knowledge in different subjects and disciplines. Given the prospects for the development of this teaching aid, we have considered the possibility of its integration into traditional school systems. In particular, we have demonstrated the implementation of this method of learning through project-based learning: students create educational materials that are posted on the Internet. The article analyzes different software tools that allow all stages of project development to be implemented with the help of cloud computing, that is, providing a collaborative and interactive way to work on a project: online boards for the initial stages of project development, software for the management and structured storage of documents, and platforms that enable an aesthetic and creative presentation of the project. Given the metasubject nature of topology and the integrative character of the project method, we have created an online tutorial on 'A method of solving mazes', which answers our goal: to investigate the possibility of forming metasubject knowledge by studying topological elements in the school course.

  9. Method for distinctive estimation of stored acidity forms in acid mine wastes.

    Science.gov (United States)

    Li, Jun; Kawashima, Nobuyuki; Fan, Rong; Schumann, Russell C; Gerson, Andrea R; Smart, Roger St C

    2014-10-07

    Jarosites and schwertmannite can be formed in the unsaturated oxidation zone of sulfide-containing mine waste rock and tailings together with ferrihydrite and goethite. They are also widely found in process wastes from electrometallurgical smelting and metal bioleaching and within drained coastal lowland soils (acid-sulfate soils). These secondary minerals can temporarily store acidity and metals or remove and immobilize contaminants through adsorption, coprecipitation, or structural incorporation, but release both acidity and toxic metals at pH above about 4. Therefore, they have significant relevance to environmental mineralogy through their role in controlling pollutant concentrations and dynamics in contaminated aqueous environments. Most importantly, they have widely different acid release rates at different pHs and strongly affect drainage water acidity dynamics. A procedure for estimation of the amounts of these different forms of nonsulfide stored acidity in mining wastes is required in order to predict acid release rates at any pH. A four-step extraction procedure to quantify jarosite and schwertmannite separately with various soluble sulfate salts has been developed and validated. Corrections to acid potentials and estimation of acid release rates can be reliably based on this method.

  10. Forms And Methods Of Modern Russian Youth Involvement Into The Electoral Process

    Directory of Open Access Journals (Sweden)

    Aleksey D. Maslov

    2015-03-01

    In the present article the authors analyze forms and methods of involving modern Russian youth in the electoral process. Involving young people in the electoral process is directly related to the problem of raising the level of political culture in society. The article presents the main forms of work used to attract young people to participate in elections in Russia, according to the Central Election Commission (CEC) of Russia, some of the regional election commissions, and the Russian Public Opinion Research Center (WCIOM). The authors note that at present there are more than one hundred and sixty legislative acts of the Russian Federation that reflect certain aspects of state youth policy. All these measures stimulate the political activity of young people but, in the authors' opinion, are not enough. They conclude that a fundamental change in the attitude of young people to politics and to the institution of elections is possible only when young people feel themselves a real part and subject of the transformation processes in the country, and only when the state really, and not merely formally, prioritizes youth policy. Young people should have everyday state support for education, starting a business, applying acquired skills for a decent wage, starting a family, buying a house, etc.

  11. Aqueous sulfomethylated melamine gel-forming compositions and methods of use

    Energy Technology Data Exchange (ETDEWEB)

    Meltz, C.N.; Guetzmacher, G.D.; Chang, P.W.

    1989-04-18

    A method is described for the selective modification of the permeability of the strata of a subterranean hydrocarbon-containing reservoir, consisting of introducing into a well in communication with the reservoir an aqueous gel-forming composition comprising a 1.0-60.0 weight percent sulfomethylated melamine polymer solution. The solution is prepared from 1.0 molar equivalent of a melamine reacted with 3.0-6.7 molar equivalents of formaldehyde or a dialdehyde containing 2-6 carbon atoms; 0.25-1.25 molar equivalents of an alkali metal or ammonium salt of sulfurous acid; and 0.01-1.5 molar equivalents of a gel-modifying agent.

  12. Pedagogical terms of forming of healthy method of life of modern pupils

    Directory of Open Access Journals (Sweden)

    Odarchenko V.I.

    2012-10-01

    The pedagogical conditions for forming a healthy lifestyle among pupils of general educational establishments are examined. The experiment involved 156 pupils aged from 6 to 17 years. It was found that the characteristic bodily state of health of the pupils is the result of the protracted unfavorable influence of socio-economic, ecological and pedagogical factors. The idea is put forward that the search for new approaches to organizing the educational process at school should be directed toward the humanization of education. This would create optimum conditions for the spiritual growth of the personality, the full realization of psychophysical possibilities, and the maintenance and strengthening of health. It is shown that realization of a personality-oriented approach taking into account basic valeological principles positively influences the process of educating a responsible attitude toward one's own health as the greatest individual and public value.

  13. Optical characterization of Er-implanted ZnO films formed by sol-gel method

    International Nuclear Information System (INIS)

    Fukudome, T.; Kaminaka, A.; Isshiki, H.; Saito, R.; Yugo, S.; Kimura, T.

    2003-01-01

    In this paper, we report on the 1.54 μm photoluminescence (PL) of Er-implanted ZnO thin films formed by a sol-gel method on Si substrates. In spite of the polycrystalline structure of the sol-gel ZnO thin films, they showed strong PL emission due to near-band-edge recombination at 375 nm as well as the Er-related luminescence at 1.54 μm. The Er-related luminescence showed no decrease (quenching) in intensity up to an Er concentration of 1.5 × 10²¹ cm⁻³. The PL intensity of Er-implanted ZnO at 1.54 μm was found to be as strong as that of Er-doped porous Si (PS) at 20 K, and the intensity was reduced to 1/3 at room temperature.

  14. Method of forming a continuous polymeric skin on a cellular foam material

    Science.gov (United States)

    Duchane, David V.; Barthell, Barry L.

    1985-01-01

    Hydrophobic cellular material is coated with a thin hydrophilic polymer skin which stretches tightly over the outer surface of the foam but which does not fill the cells of the foam, resulting in a polymer-coated foam structure having a smoothness which was not possible in the prior art. In particular, when the hydrophobic cellular material is a specially chosen hydrophobic polymer foam and is formed into arbitrarily chosen shapes prior to the coating with hydrophilic polymer, inertial confinement fusion (ICF) targets of arbitrary shapes can be produced by subsequently coating the shapes with metal or with any other suitable material. New articles of manufacture are produced, including improved ICF targets, improved integrated circuits, and improved solar reflectors and solar collectors. In the coating method, the cell size of the hydrophobic cellular material, the viscosity of the polymer solution used for coating, and the surface tension of the polymer solution are all very important to the coating.

  15. Non normal and non quadratic anisotropic plasticity coupled with ductile damage in sheet metal forming: Application to the hydro bulging test

    International Nuclear Information System (INIS)

    Badreddine, Houssem; Saanouni, Khemaies; Dogui, Abdelwaheb

    2007-01-01

    In this work an improved material model is proposed that shows good agreement with experimental data for both hardening curves and plastic strain ratios in uniaxial and equibiaxial proportional loading paths for steel sheet metal up to final fracture. The model is based on a non-associative, non-normal flow rule using two different orthotropic equivalent stresses in the yield criterion and the plastic potential functions. For the plastic potential the classical quadratic Hill 1948 equivalent stress is considered, while for the yield criterion the non-quadratic Karafillis and Boyce 1993 equivalent stress is used, taking into account non-linear mixed (kinematic and isotropic) hardening. Applications are made to hydro bulging tests using both circular and elliptical dies. The results obtained with different particular cases of the model, such as the normal quadratic and the non-normal non-quadratic cases, are compared and discussed with respect to the experimental results.
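
    For reference, the quadratic Hill 1948 equivalent stress used here for the plastic potential has the standard form (F, G, H, L, M, N being the orthotropic anisotropy coefficients, identified in practice from the plastic strain ratios):

        \bar{\sigma}^2 = F(\sigma_{yy}-\sigma_{zz})^2 + G(\sigma_{zz}-\sigma_{xx})^2
                       + H(\sigma_{xx}-\sigma_{yy})^2 + 2L\sigma_{yz}^2 + 2M\sigma_{zx}^2 + 2N\sigma_{xy}^2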

  16. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    Science.gov (United States)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid 1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations which appear on the continuation curves when passing through the depth of the anomalous body. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield the depth of the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on either side of the central negative Slingram anomaly. These two maxima also locate the maximum anomaly gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for a horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was large.
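
    For readers unfamiliar with the technique, the NFG of a field U downward-continued to depth z is, in its commonly stated form, the full gradient normalized by its mean over the M points of the profile (V_x and V_z denote the horizontal and vertical derivatives of U):

        G_N(x_j, z) = \frac{\sqrt{V_x^2(x_j,z) + V_z^2(x_j,z)}}
                           {\frac{1}{M}\sum_{i=1}^{M}\sqrt{V_x^2(x_i,z) + V_z^2(x_i,z)}}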

  17. The research of moisture forms in the baking yeast by the thermogravimetric analysis method

    Directory of Open Access Journals (Sweden)

    S. V. Lavrov

    2016-01-01

    Thermogravimetry is one of the few absolute methods of analysis, which makes it one of the most accurate. In this research, thermogravimetric analysis of baking yeast (Saccharomyces cerevisiae) was carried out. It allowed temperature zones to be identified that correspond to moisture with various binding energies, as well as operating parameters of the dehydration process to be predicted and the most effective dehydration method to be chosen. The studies were conducted in the laboratory of the collective use center "Control and management of energy efficient projects" of the Voronezh State University of Engineering Technologies on a simultaneous thermal analysis device, model STA 449 F3 (NETZSCH, Germany). The device records the change in the mass of a substance and the difference in heat flow between the crucible containing the sample and the crucible containing the reference. The analyzer's working principle is based on continuous recording of the dependence of the material's mass on time or temperature as it is heated according to a selected temperature program in a specified gas atmosphere. The release or absorption of heat by the sample due to phase transitions or chemical reactions is recorded simultaneously. The study was performed under the following conditions: atmospheric pressure, a maximum temperature of 588 K, and a heating rate of 5 K/min. The experiments were performed in aluminum crucibles with a total weight of 12 mg. The NETZSCH Proteus software was used for processing the obtained TG and DTG curves. Analysis of the obtained data made it possible to identify the periods of water dehydration and solids transformation under thermal action on baking yeast, and to identify temperature zones corresponding to the release of moisture with different binding forms and energies.
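
    The DTG curve referred to above is simply the derivative of the TG mass signal with respect to temperature (or time); its minima locate the temperature zones of fastest moisture release. A small Python sketch of this post-processing step on synthetic data (the real curves come from the NETZSCH Proteus export, whose format is not assumed here):

        import numpy as np

        # Synthetic TG data: temperature (K) and sample mass (mg) with one smooth loss step.
        T = np.linspace(300.0, 588.0, 500)
        m = 12.0 - 2.5 / (1.0 + np.exp(-(T - 380.0) / 12.0))

        dm_dT = np.gradient(m, T)      # DTG curve, mg/K
        peak = T[np.argmin(dm_dT)]     # fastest mass loss marks the centre of a release zone
        print(f"DTG peak near {peak:.0f} K, total mass loss {m[0] - m[-1]:.2f} mg")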

  18. Stability Indicating Reverse Phase HPLC Method for Estimation of Rifampicin and Piperine in Pharmaceutical Dosage Form.

    Science.gov (United States)

    Shah, Umang; Patel, Shraddha; Raval, Manan

    2018-01-01

    High performance liquid chromatography is an integral analytical tool in assessing drug product stability. HPLC methods should be able to separate, detect, and quantify the various drug-related degradants that can form on storage or manufacturing, and detect any drug-related impurities that may be introduced during synthesis. A simple, economic, selective, precise, and stability-indicating HPLC method has been developed and validated for the analysis of Rifampicin (RIFA) and Piperine (PIPE) in bulk drug and in formulation. Reversed-phase chromatography was performed on a C18 column with buffer (potassium dihydrogen orthophosphate, pH 6.5) and acetonitrile (30:70, % v/v) as mobile phase at a flow rate of 1 mL min-1. Detection was performed at 341 nm, and sharp peaks were obtained for RIFA and PIPE at retention times of 3.3 ± 0.01 min and 5.9 ± 0.01 min, respectively. The detection limits were found to be 2.385 ng/ml and 0.107 ng/ml, and the quantification limits 7.228 ng/ml and 0.325 ng/ml, for RIFA and PIPE, respectively. The method was validated for accuracy, precision, reproducibility, specificity, robustness, and detection and quantification limits, in accordance with ICH guidelines. A stress study was performed on RIFA and PIPE, and both were found to degrade appreciably under all applied chemical and physical conditions. Thus, the developed RP-HPLC method was found to be suitable for the determination of both drugs in bulk as well as in stability samples of capsules containing various excipients.
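
    The detection and quantification limits quoted here are conventionally obtained from the ICH Q2 expressions, where \sigma is the standard deviation of the response and S the slope of the calibration curve:

        LOD = \frac{3.3\,\sigma}{S}, \qquad LOQ = \frac{10\,\sigma}{S}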

  19. APPLICATION OF FORMS AND METHODS OF COMMERCIALIZATION OF INNOVATIONS THROUGH THE INTERACTION OF UNIVERSITIES AND ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Svetlana E. Sitnikova

    2018-03-01

    In the conditions of a forming knowledge-based economy, a key condition for successful development is the effective integration of science, education and business. Currently, the commercialization of innovation is becoming a necessary factor for the sustainable development of universities and one of their main tools for increasing competitiveness in the market of educational services and products. The efficiency of the innovation commercialization process is conditioned by the rational choice of forms and methods for implementing it. The interaction of universities and enterprises in innovation clusters provides benefits such as effective dissemination of information about the creation of innovations and the relevant areas of research, and the granting of preferences to cluster participants. The creation of technoparks presupposes a specially allocated site housing many new businesses, so existing institutions can only be partners of the technopark. From the point of view of commercializing university innovation, the cluster appears to be a more attractive method compared with the creation of technology parks, but its application requires adjusting the approach to state policy in the field of innovation management. The interaction of universities and enterprises via an intermediary - the chamber of commerce and industry - can be described as an obstacle to the commercialization of university innovations. By contrast, a technological innovation center protects the interests of universities rather than enterprises, as it is created to facilitate the commercialization of university innovation. Direct interaction of universities and enterprises becomes possible with the help of the methods of a single transaction, regular cooperation, and a contract for the supply of innovation. The choice of methods of commercialization of innovations should be made on the basis of existing university

  20. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans of a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions matched the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) but only partially for the similar group (average QF = 1.058, 33% match rate, n = 12). We therefore propose a normal tissue complication probability (NTCP) based plan-scoring index for the verification of different plans in personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-at-risk sparing using the treatment-planning decision-support software we developed for prostate cancer.
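
    The abstract does not specify which NTCP model underlies the score; a common choice in prostate planning is the Lyman-Kutcher-Burman (LKB) model. The Python sketch below computes an LKB NTCP from a differential dose-volume histogram; the parameter values and the two-bin DVH are illustrative assumptions, not data from the study.

        import math

        def lkb_ntcp(doses, volumes, n, m, td50):
            """LKB NTCP from a differential DVH (doses in Gy, volumes as fractions)."""
            a = 1.0 / n                                            # gEUD volume exponent
            geud = sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
            t = (geud - td50) / (m * td50)
            return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))      # standard normal CDF

        # Illustrative rectum-like parameters (n=0.09, m=0.13, TD50=76.9 Gy) and a toy DVH:
        print(lkb_ntcp(doses=[40.0, 70.0], volumes=[0.7, 0.3], n=0.09, m=0.13, td50=76.9))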

  1. Development of the Parent Form of the Preschool Children's Communication Skills Scale and Comparison of the Communication Skills of Children with Normal Development and with Autism Spectrum Disorder

    Science.gov (United States)

    Aydin, Aydan

    2016-01-01

    This study aims at developing an assessment scale for identifying preschool children's communication skills, at distinguishing children with communication deficiencies and at comparing the communication skills of children with normal development (ND) and those with autism spectrum disorder (ASD). Participants were 427 children of up to 6 years of…

  2. Novel two-step method to form silk fibroin fibrous hydrogel

    International Nuclear Information System (INIS)

    Ming, Jinfa; Li, Mengmeng; Han, Yuhui; Chen, Ying; Li, Han; Zuo, Baoqi; Pan, Fukui

    2016-01-01

    Hydrogels prepared from silk fibroin solution have been widely studied; however, mimicking the nanofibrous structure of the extracellular matrix when fabricating biomaterials remains a challenge. Here, a novel two-step method was applied to prepare fibrous hydrogels using regenerated silk fibroin solution containing nanofibrils in a range of tens to hundreds of nanometers. When gelation of the silk solution occurred, it showed a top-down type of gel formation within 30 min. After gelation, the silk fibroin fibrous hydrogels exhibited a nanofiber network morphology with β-sheet structure. Moreover, the compressive stress and modulus of fibrous hydrogels formed using 2.0 wt.% solutions were 31.9 ± 2.6 and 2.8 ± 0.8 kPa, respectively. In addition, the fibrous hydrogels supported BMSC attachment and proliferation over 12 days. This study provides important insight into the in vitro processing of silk fibroin into useful new materials. - Highlights: • SF fibrous hydrogel was prepared by a novel two-step method. • SF solution containing nanofibrils in a range of tens to hundreds of nanometers was prepared. • The gelation process was a top-down type gel forming within several minutes. • SF fibrous hydrogels exhibited nanofiber network morphology with β-sheet structure. • Fibrous hydrogels had higher compressive stresses, superior to porous hydrogels.

  3. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    Energy Technology Data Exchange (ETDEWEB)

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
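
    A minimal Python sketch of the traditional IFORM contour construction discussed above: points on a circle of reliability index beta in standard normal space are mapped back to (Hs, Te) through a Rosenblatt transformation. The lognormal Hs marginal and the conditional normal Te model are illustrative assumptions standing in for the distributions fitted to hindcast data in the paper.

        import numpy as np
        from scipy import stats

        # Reliability index for a 100-year contour built from 3-hour sea states.
        T_return_yr, state_hours = 100.0, 3.0
        p_exceed = 1.0 / (T_return_yr * 365.25 * 24.0 / state_hours)
        beta = stats.norm.ppf(1.0 - p_exceed)

        theta = np.linspace(0.0, 2.0 * np.pi, 361)
        u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

        # Rosenblatt transform back to physical space (illustrative marginals):
        hs = stats.lognorm.ppf(stats.norm.cdf(u1), s=0.6, scale=2.5)   # Hs in metres
        te = stats.norm.ppf(stats.norm.cdf(u2),
                            loc=6.0 + 1.2 * np.sqrt(hs), scale=1.0)    # Te | Hs, in seconds
        print(hs.max().round(2), te.max().round(2))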

  4. Transport methods: general. 8. Formulation of Transport Equation in a Split Form

    International Nuclear Information System (INIS)

    Stancic, V.

    2001-01-01

    The singular eigenfunction expansion method has enabled the application of functional analysis methods in transport theory. When applying it, however, users have been discouraged because in most problems, including slab problems, an extra difficulty occurs: it becomes necessary to solve a Fredholm integral equation in order to determine the expansion coefficients. There are several reasons for this difficulty. One reason might be the use of full-range expansion techniques even in regions where the function is singular. An example is the free boundary condition, which requires the distribution to be equal to zero. Moreover, at μ = 0 the transport equation becomes an integral one. Both reasons motivated us to redefine the transport equation in a more natural way. Similarly to scattering theory, we define the flux distribution as a direct sum of forward- and backward-directed neutrons, i.e., μ ≥ 0 and μ < 0, respectively. As a result, the plane-geometry transport equation is split into a coupled pair of equations. Further, using an appropriate transformation, this pair of equations reduces to a self-adjoint form of the same shape as the known full-range single-flux equation. It is interesting that all the methods of full-range theory are applicable here, provided the flux as well as the transformed transport operator are treated as two-dimensional matrices. Applying this to the slab problem, we find explicit expressions for the reflected and transmitted particles caused by an arbitrary plane source; that is the news in this paper. Because of space constraints, only the fundamentals of this approach are presented here. We assume that the reader is familiar with this field; therefore, the applications are noted only at the end. (author)
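
    In standard plane-geometry notation (optical depth x, mean number of secondaries per collision c, isotropic scattering), the splitting described above turns the one-speed transport equation into a coupled pair for the forward and backward fluxes \psi^{\pm}(x,\mu) with \mu \in (0,1]; this is a sketch of the idea, not the author's exact operator form:

        \mu\,\frac{\partial \psi^{+}}{\partial x} + \psi^{+}(x,\mu)
            = \frac{c}{2}\int_{0}^{1}\left[\psi^{+}(x,\mu') + \psi^{-}(x,\mu')\right]\mathrm{d}\mu'

        -\mu\,\frac{\partial \psi^{-}}{\partial x} + \psi^{-}(x,\mu)
            = \frac{c}{2}\int_{0}^{1}\left[\psi^{+}(x,\mu') + \psi^{-}(x,\mu')\right]\mathrm{d}\mu'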

  5. A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals

    Directory of Open Access Journals (Sweden)

    Marcello Manfredi

    2014-07-01

    In this paper we propose a reliable surface imaging method for the non-invasive detection of morphological changes in paintings. Usually, the evaluation and quantification of changes and defects result mostly from an optical and subjective assessment, through comparison of the previous and subsequent states of conservation and by means of condition reports. Using quantitative Reflectance Transformation Imaging (RTI) we obtain detailed information on the geometry and morphology of the painting surface with a fast, precise and non-invasive method. Accurate and quantitative measurements of deterioration were acquired after the painting experienced artificial damage. Morphological changes were documented using normal vector images, while the intensity map succeeded in highlighting, quantifying and describing the physical changes. We estimate that the technique can detect morphological damage slightly smaller than 0.3 mm, which would be difficult to detect by eye, considering the painting size. This non-invasive tool could be very useful, for example, to examine paintings and artwork before they travel on loan or during a restoration. The method lends itself to automated analysis of large images and datasets. Quantitative RTI thus eases the transition of extending human vision into the realm of measuring change over time.
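
    The core quantitative step is a per-pixel comparison of the surface normals recovered by RTI before and after the damage. A hedged Python sketch of that comparison (the (H, W, 3) unit-normal layout and the change threshold are assumptions for illustration, not the paper's processing pipeline):

        import numpy as np

        def change_map(n_before, n_after, deg_threshold=2.0):
            """Per-pixel angle (degrees) between two unit-normal maps of shape (H, W, 3)."""
            dot = np.clip(np.sum(n_before * n_after, axis=-1), -1.0, 1.0)
            angle = np.degrees(np.arccos(dot))
            return angle, angle > deg_threshold   # intensity map and binary damage mask

        # Toy example: a flat surface in which one pixel has been tilted.
        before = np.zeros((2, 2, 3)); before[..., 2] = 1.0
        after = before.copy(); after[0, 0] = [0.1, 0.0, np.sqrt(1.0 - 0.01)]
        angles, mask = change_map(before, after)
        print(angles.round(2)); print(mask)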

  6. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    Science.gov (United States)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  7. A novel vector-based method for exclusive overexpression of star-form microRNAs.

    Directory of Open Access Journals (Sweden)

    Bo Qu

    The roles of microRNAs (miRNAs) as important regulators of gene expression have been studied intensively. Although most of these investigations have involved the more highly expressed of the two mature miRNA species, increasing evidence points to essential roles for star-form microRNAs (miRNA*), which are usually expressed at much lower levels. Owing to the nature of miRNA biogenesis, it is challenging to use plasmids containing miRNA coding sequences for gain-of-function experiments concerning the roles of miRNA* species. Synthetic microRNA mimics can introduce specific miRNA* species into cells, but this transient overexpression system has many shortcomings. Here, we report that specific miRNA* species can be overexpressed by introducing artificially designed stem-loop sequences into short hairpin RNA (shRNA) overexpression vectors. With our prototypic plasmid, designed to overexpress hsa-miR-146b-3p, we successfully expressed high levels of hsa-miR-146b-3p without detectable change in hsa-miR-146b-5p. Functional analysis involving luciferase reporter assays showed that, like natural miRNAs, the overexpressed hsa-miR-146b-3p inhibited target gene expression by 3'UTR seed pairing. Our demonstration that this method could overexpress two other miRNAs suggests that the approach should be broadly applicable. This novel strategy opens the way for the exclusive, stable overexpression of miRNA* species and the analysis of their unique functions both in vitro and in vivo.

  8. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution by the standard normal distribution using the Johnson transformation. This transformation enables a wide spectrum of continuous distributions to be approximated by a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained by the backward transformation of the standard normal ...
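
    A brief Python illustration of the idea, using the Johnson SU family as implemented in SciPy (the article estimates the parameters from percentiles; SciPy's maximum-likelihood fit stands in for that step here):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.gamma(shape=2.0, scale=1.5, size=5000)      # a skewed empirical sample

        # Fit Johnson SU; SciPy's (a, b, loc, scale) correspond to (gamma, delta, xi, lambda).
        a, b, loc, scale = stats.johnsonsu.fit(x)
        z = a + b * np.arcsinh((x - loc) / scale)           # forward transform -> approx N(0, 1)
        print(z.mean().round(3), z.std().round(3))          # should be near 0 and 1

        x_back = loc + scale * np.sinh((z - a) / b)         # backward transform recovers x
        print(np.allclose(x, x_back))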

  9. Method of thermally processing superplastically formed aluminum-lithium alloys to obtain optimum strengthening

    Science.gov (United States)

    Anton, Claire E. (Inventor)

    1993-01-01

    Optimum strengthening of a superplastically formed aluminum-lithium alloy structure is achieved via a thermal processing technique which eliminates the conventional step of solution heat-treating immediately following the step of superplastic forming of the structure. The thermal processing technique involves quenching of the superplastically formed structure using static air, forced air or water.

  10. A novel mean-centering method for normalizing microRNA expression from high-throughput RT-qPCR data

    Directory of Open Access Journals (Sweden)

    Wylie Dennis

    2011-12-01

    Background: Normalization is critical for accurate gene expression analysis. A significant challenge in the quantitation of gene expression from biofluid samples is the inability to quantify RNA concentration prior to analysis, underscoring the need for robust normalization tools for this sample type. In this investigation, we evaluated various methods of normalization to determine the optimal approach for quantifying microRNA (miRNA) expression from biofluid and tissue samples when using the TaqMan® Megaplex™ high-throughput RT-qPCR platform with low RNA inputs. Findings: We compared seven normalization methods in the analysis of variation of miRNA expression from biofluid and tissue samples. We developed a novel variant of the common mean-centering normalization strategy, herein referred to as mean-centering restricted (MCR) normalization, which is adapted to the TaqMan Megaplex RT-qPCR platform but is likely applicable to other high-throughput RT-qPCR-based platforms. Our results indicate that MCR normalization performs comparably to or better than both standard mean-centering and other normalization methods. We also propose an extension of this method to be used when migrating biomarker signatures from Megaplex to singleplex RT-qPCR platforms, based on the identification of a small number of normalizer miRNAs that closely track the mean of expressed miRNAs. Conclusions: We developed the MCR method for normalizing miRNA expression from biofluid samples when using the TaqMan Megaplex RT-qPCR platform. Our results suggest that normalization based on the mean of all fully observed (fully detected) miRNAs minimizes technical variance in normalized expression values, and that a small number of normalizer miRNAs can be selected when migrating from Megaplex to singleplex assays. In our study, we find that normalization methods that focus on a restricted set of miRNAs tend to perform better than methods that focus on all miRNAs, including
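
    Reading from the description above, MCR normalization centers each sample on the mean Cq of only those miRNAs detected in every sample. A hedged Python sketch of that logic; the samples-by-miRNAs layout and the NaN-for-undetected convention are assumptions for illustration:

        import numpy as np

        def mcr_normalize(cq):
            """cq: (samples x miRNAs) array of Cq values, NaN where not detected."""
            fully_detected = ~np.isnan(cq).any(axis=0)           # miRNAs seen in all samples
            sample_means = cq[:, fully_detected].mean(axis=1)    # restricted per-sample mean
            return cq - sample_means[:, None]                    # mean-centered (delta-Cq) values

        cq = np.array([[25.0, 30.0, np.nan],
                       [26.0, 31.0, 33.0]])
        print(mcr_normalize(cq))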

  11. Automated method to compute Evans index for diagnosis of idiopathic normal pressure hydrocephalus on brain CT images

    Science.gov (United States)

    Takahashi, Noriyuki; Kinoshita, Toshibumi; Ohmura, Tomomi; Matsuyama, Eri; Toyoshima, Hideto

    2017-03-01

    The early diagnosis of idiopathic normal pressure hydrocephalus (iNPH), considered a treatable dementia, is important. iNPH causes enlargement of the lateral ventricles (LVs). The degree of enlargement of the LVs on CT or MR images is evaluated by means of a diagnostic imaging criterion, the Evans index, defined as the ratio of the maximal width of the frontal horns (FH) of the LVs to the maximal width of the inner skull (IS). The Evans index is the most commonly used parameter for the evaluation of ventricular enlargement; however, its manual measurement is a time-consuming process. In this study, we present an automated method to compute the Evans index on brain CT images. The algorithm consists of five major steps: standardization of the CT data to an atlas, extraction of the FH and IS regions, a search for the outermost points of the bilateral FH regions, determination of the maximal widths of both the FH and the IS, and calculation of the Evans index. The standardization to the atlas was performed by using linear affine transformation and non-linear warping techniques. The FH regions were segmented by using a three-dimensional region growing technique. This scheme was applied to CT scans from 44 subjects, including 13 iNPH patients. The average difference in the Evans index between the proposed method and manual measurement was 0.01 (1.6%), and the correlation coefficient of these data was 0.98. This computerized method may therefore have the potential to accurately compute the Evans index for the diagnosis of iNPH on CT images.
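
    Once the FH and IS regions are segmented (steps 1-3 above), the index itself is a ratio of maximal left-right extents. A Python sketch of that final step on binary masks, assuming the last array axis is the left-right direction; the segmentation itself is taken as given:

        import numpy as np

        def max_width(mask):
            """Maximal left-right extent (in voxels) of a binary mask."""
            other_axes = tuple(i for i in range(mask.ndim - 1))
            cols = np.flatnonzero(np.any(mask, axis=other_axes))
            return cols[-1] - cols[0] + 1 if cols.size else 0

        def evans_index(frontal_horns, inner_skull):
            return max_width(frontal_horns) / max_width(inner_skull)

        fh = np.zeros((10, 10), bool); fh[4:6, 3:7] = True   # toy frontal-horn mask
        sk = np.zeros((10, 10), bool); sk[1:9, 1:9] = True   # toy inner-skull mask
        print(evans_index(fh, sk))                           # 4 / 8 = 0.5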

  12. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller-scale, higher-density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low-mass but high-velocity macrons, many of the difficulties encountered with liner implosion power technology are eliminated. The undertaking described in this proposal is to evaluate the feasibility of achieving fusion conditions with this simple and low-cost approach. During phase I, the design and testing of the key components for the creation of the macron formed liner were successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  13. Spectrophotometric methods for the determination of benazepril hydrochloride in its single and multi-component dosage forms.

    Science.gov (United States)

    El-Yazbi, F A; Abdine, H H; Shaalan, R A

    1999-06-01

    Three sensitive and accurate methods are presented for the determination of benazepril in its dosage forms. The first method uses derivative spectrophotometry to resolve the interference due to the formulation matrix. The second method depends on the color formed by the reaction of the drug with bromocresol green (BCG). The third utilizes the reaction of benazepril, after alkaline hydrolysis, with 3-methyl-2-benzothiazolinone hydrazone (MBTH), the produced color being measured at 593 nm. The latter method was extended to develop a stability-indicating method for this drug. Moreover, the derivative method was applied to the determination of benazepril in combination with hydrochlorothiazide. The proposed methods were applied to the analysis of benazepril in pure form and in tablets. The coefficient of variation was less than 2%.

  14. Substantion of Choosing the Method of Surgical Treatment of Complicated Forms of Chronic Pancreatitis

    Directory of Open Access Journals (Sweden)

    I.Ya. Budzak

    2013-04-01

    In the Institute's clinic during 2010–2012, 43 patients were operated on for complicated forms of chronic pancreatitis. Based on data from computed tomography and endoscopic retrograde cholangiopancreatography, the variants of pancreatic pathology significant for the selection of the operation method were identified. Evaluation of intraoperative biopsies showed that the main manifestation of chronic pancreatitis in all cases was evident fibrosis of the gland tissue: in patients with III degree fibrosis, fibrous tissue made up 68.2–76.4% of the area of the pancreas and exocrine tissue 16.2–24.8%; in patients with IV degree fibrosis, 79.5–95.5% and 2.3–10.8%, respectively. Indications for organ-preserving resections, resection-draining interventions and isolated operations draining the pancreatic ductal system are given. The specific weight of combined resection-draining interventions with preservation of the duodenum was 30.2%, the overall mortality rate 2.3%, and the duration of the postoperative stay 9.1 ± 0.8 bed-days.

  15. Obtaining tetracalcium phosphate and hydroxyapatite in powder form by wet method

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Sara Verusca de; Fook, Marcus Vinicius Lia; Araujo, Elaine Patricia; Medeiros, Keila Machado; Rabello, Guilherme Portela; Barbosa, Renata; Araujo, Edcleide Maria [Universidade Federal de Campina Grande (UAEMa/CCT/UFCG), Campina Grande, PB (Brazil)]

    2009-07-01

    The development of research in the area of advanced materials and tissue engineering has increased greatly in recent years, and bioceramics - mainly the calcium phosphate ceramics - have proved outstanding in the replacement and regeneration of bone tissue. The objective of this research is to obtain calcium phosphates with Ca/P = 1.67 and 2.0 and to observe the phases formed after subjecting these materials to heat treatment. The calcium phosphate was produced by the wet method using a direct neutralization reaction and characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and X-ray microanalysis (EDS). The XRD results confirm the presence of the hydroxyapatite phase in the sample with Ca/P = 1.67, whereas the phosphates prepared with the Ca/P = 2.0 ratio show a combination of hydroxyapatite and the β-tricalcium phosphate phase. The micrographs obtained are characteristic of the ceramic materials known as calcium phosphates. EDS confirmed the presence of Ca, P and O in the material. (author)

  17. Forming a method mindset: The role of knowledge and preference in facilitating heuristic method usage in design

    DEFF Research Database (Denmark)

    Daalhuizen, Jaap; Person, Oscar; Gattol, Valentin

    2013-01-01

    Both systematic and heuristic methods are common practice when designing. Yet, in teaching students how to design, heuristic methods are typically granted only a secondary role. So, how do designers and students develop a mindset for using heuristic methods? In this paper, we study how prior knowledge (about heuristic methods and their usage) and preference (for using heuristic methods) relate to the reported use of heuristic methods when designing. Drawing on a survey among 304 students enrolled in a master-level course on design theory and methodology, we investigated method usage for five ... indirectly influenced method usage through a 'complementary' mediation of method preference.

  18. Polymer compositions, polymer films and methods and precursors for forming same

    Science.gov (United States)

    Klaehn, John R; Peterson, Eric S; Orme, Christopher J

    2013-09-24

    Stable, high performance polymer compositions including polybenzimidazole (PBI) and a melamine-formaldehyde polymer, such as methylated poly(melamine-co-formaldehyde), for forming structures such as films, fibers and bulky structures. The polymer compositions may be formed by combining polybenzimidazole with the melamine-formaldehyde polymer to form a precursor. The polybenzimidazole may be reacted and/or intertwined with the melamine-formaldehyde polymer to form the polymer composition. For example, a stable, free-standing film having a thickness of between about 5 μm and about 30 μm may be formed from the polymer composition. Such films may be used as gas separation membranes and may be submerged in water for extended periods without crazing and cracking. The polymer composition may also be used as a coating on substrates, such as metals and ceramics, or may be used for spinning fibers. Precursors for forming such polymer compositions are also disclosed.

  19. Method for making a low density polyethylene waste form for safe disposal of low level radioactive material

    Science.gov (United States)

    Colombo, P.; Kalb, P.D.

    1984-06-05

    In the method of the invention, low density polyethylene pellets are mixed in a predetermined ratio with radioactive particulate material; the mixture is then fed through a screw-type extruder that melts the low density polyethylene under a predetermined pressure and temperature to form a homogeneous matrix that is extruded and separated into solid monolithic waste forms. The solid waste forms are adapted to be safely handled, stored for a short time, and safely disposed of in approved depositories.

  20. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    Science.gov (United States)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages in simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within acceptable limits. The proposed method is good enough to be applied to the assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. This method could be used successfully for routine quality control testing.

  1. Development and Validation of a UV Spectrophotometric and a RP-HPLC Methods for Moexipril Hydrochloride in Pure Form and Pharmaceutical Dosage Form

    International Nuclear Information System (INIS)

    Mastiholimath, V.S.; Gupte, P.P.; Mannur, V.S.

    2012-01-01

    Simple and reliable UV spectrophotometric and high-performance liquid chromatography (HPLC) methods were developed and validated for Moexipril hydrochloride in pure form and pharmaceutical dosage form. The RP-HPLC method was developed on an Agilent Eclipse C18 column (150 mm x 4.6 mm, 5 μm) with a mobile phase gradient system of 60% (methanol:acetonitrile (70:30% v/v)) : 40% 20 mM ammonium acetate buffer pH 4.5 (v/v), and the UV spectrophotometric method was developed in phosphate buffer pH 6.8. The effluent was monitored by an SPD-M20A Prominence PDA detector at 210 nm. Calibration curves were linear over the concentration ranges of 10-35 μg/ml and 1-9 μg/ml for RP-HPLC and UV, respectively, with a regression coefficient of 0.999. For the RP-HPLC method, inter-day and intra-day precision %RSD values were found to be 1.00078% and 1.49408%, respectively; for the UV method, inter-day precision ranged from 0.73386% to 1.44111% and intra-day precision from 0.453864% to 1.15542%. Recovery of Moexipril hydrochloride was found to be in the range of 99.8538% to 101.5614% and 100.5297586% to 100.6431587% for UV and RP-HPLC, respectively. The limits of detection (LOD) and quantification (LOQ) for HPLC were 0.98969 and 2.99907 μg/ml, respectively. The developed RP-HPLC and UV spectrophotometric methods were successfully applied for the quantitative determination of Moexipril hydrochloride in pharmaceutical dosage. (author)
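
    The quoted LOD and LOQ follow the usual ICH calibration-based estimates, LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the calibration slope and sigma the residual standard deviation of the calibration line. A minimal sketch (Python) with illustrative placeholder data, not the study's measurements:

        import numpy as np

        conc = np.array([10, 15, 20, 25, 30, 35], dtype=float)  # ug/mL
        resp = np.array([0.21, 0.32, 0.42, 0.52, 0.63, 0.73])   # detector response

        slope, intercept = np.polyfit(conc, resp, 1)
        residuals = resp - (slope * conc + intercept)
        sigma = residuals.std(ddof=2)  # two fitted parameters

        lod = 3.3 * sigma / slope
        loq = 10.0 * sigma / slope
        print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")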

  2. A comparison of methods used to calculate normal background concentrations of potentially toxic elements for urban soil

    Energy Technology Data Exchange (ETDEWEB)

    Rothwell, Katherine A., E-mail: k.rothwell@ncl.ac.uk; Cooke, Martin P., E-mail: martin.cooke@ncl.ac.uk

    2015-11-01

    To meet the requirements of regulation and to provide realistic remedial targets there is a need for the background concentration of potentially toxic elements (PTEs) in soils to be considered when assessing contaminated land. In England, normal background concentrations (NBCs) have been published for several priority contaminants for a number of spatial domains, however updated regulatory guidance places the responsibility on Local Authorities to set NBCs for their jurisdiction. Due to the unique geochemical nature of urban areas, Local Authorities need to define NBC values specific to their area, which the national data is unable to provide. This study aims to calculate NBC levels for Gateshead, an urban Metropolitan Borough in the North East of England, using freely available data. The ‘median + 2MAD’, boxplot upper whisker and English NBC (according to the method adopted by the British Geological Survey) methods were compared for test PTEs lead, arsenic and cadmium. Due to the lack of systematically collected data for Gateshead in the national soil chemistry database, the use of site investigation (SI) data collected during the planning process was investigated. 12,087 SI soil chemistry data points were incorporated into a database and 27 comparison samples were taken from undisturbed locations across Gateshead. The SI data gave high resolution coverage of the area and Mann–Whitney tests confirmed statistical similarity for the undisturbed comparison samples and the SI data. SI data was successfully used to calculate NBCs for Gateshead and the median + 2MAD method was selected as most appropriate by the Local Authority according to the precautionary principle as it consistently provided the most conservative NBC values. The use of this data set provides a freely available, high resolution source of data that can be used for a range of environmental applications.
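
    Two of the compared estimators are simple enough to state exactly; a minimal sketch (Python), assuming the 'median + 2MAD' statistic and the boxplot upper whisker (the largest observation at or below Q3 + 1.5*IQR). The lead concentrations below are synthetic placeholders, not the Gateshead data.

        import numpy as np

        def median_2mad(x):
            x = np.asarray(x, dtype=float)
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            return med + 2.0 * mad

        def upper_whisker(x):
            x = np.asarray(x, dtype=float)
            q1, q3 = np.percentile(x, [25, 75])
            fence = q3 + 1.5 * (q3 - q1)
            return x[x <= fence].max()

        pb_ppm = np.random.default_rng(0).lognormal(mean=4.5, sigma=0.8, size=500)
        print("median+2MAD NBC:", round(median_2mad(pb_ppm), 1))
        print("upper-whisker NBC:", round(upper_whisker(pb_ppm), 1))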

  3. Forms and methods of stimulation of innovative activities in the restructuring of production program

    Directory of Open Access Journals (Sweden)

    I. I. Emtcova

    2016-01-01

    Full Text Available In the Russian economy, not every business entity implements innovative activities. This situation stems from the complexity of perceiving, and practically transitioning to, an innovative economic system. The development of innovative activities is affected by the overall condition of the economy and the state of material production. The research demonstrates that the resource potential of innovative activities has in recent years tended towards absolute quantitative reduction and qualitative deterioration. The decrease in the level and quality of resource provision for innovative activity is due to the lack of necessary financial resources. Currently, innovation has become the primary means of increasing the profit of economic entities, by better meeting market demand and reducing production costs relative to competitors. Given the complexity facing businesses, there is a need for state stimulation of innovative activity, carried out through its main directions, forms and methods. Within the system of direct state effects on business innovation is the stimulation of the development of technopark structures; their main goal is to create the most favourable conditions for innovative enterprises and to provide various services. In the food processing industry, the largest share of investment currently comes from companies' own sources of funding, including the use of depreciation. To finance industry-wide, cross-sectoral and regional scientific and technical problems, extra-budgetary funds can be created for financing R&D and innovation support. Encouraging regional interest in innovation is a task available to local authorities. In the financial provision of innovative activity, credit plays a role: a bank loan allows the efficiency of innovation activity to be increased. The article concludes that these measures to stimulate innovative activity can effectively influence the activity of the company: will

  4. Chemical bridges for enhancing hydrogen storage by spillover and methods for forming the same

    Science.gov (United States)

    Yang, Ralph T.; Li, Yingwei; Qi, Gongshin; Lachawiec, Jr., Anthony J.

    2012-12-25

    A composition for hydrogen storage includes a source of hydrogen atoms, a receptor, and a chemical bridge formed between the source and the receptor. The chemical bridge is formed from a precursor material. The receptor is adapted to receive hydrogen spillover from the source.

  5. Optimal assignment methods in three-form planned missing data designs for longitudinal panel studies

    NARCIS (Netherlands)

    Jorgensen, T.D.; Rhemtulla, M.; Schoemann, A.; McPherson, B.; Wu, W.; Little, T.D.

    2014-01-01

    Planned missing designs are becoming increasingly popular, but because there is no consensus on how to implement them in longitudinal research, we simulated longitudinal data to distinguish between strategies of assigning items to forms and of assigning forms to participants across measurement

  6. Validated sensitive spectrofluorimetric method for determination of antihistaminic drug azelastine HCl in pure form and in pharmaceutical dosage forms: application to stability study.

    Science.gov (United States)

    El-Masry, Amal A; Hammouda, Mohammed E A; El-Wasseef, Dalia R; El-Ashry, Saadia M

    2017-03-01

    A highly sensitive, simple and rapid spectrofluorimetric method was developed for the determination of azelastine HCl (AZL) in either its pure state or pharmaceutical dosage form. The proposed method was based on measuring the native fluorescence of the studied drug in 0.2 M H₂SO₄ at λem = 364 nm after excitation at λex = 275 nm. Different experimental parameters were studied and optimized carefully to obtain the highest fluorescence intensity. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over a concentration range of 10-250 ng/mL, with a limit of detection of 1.52 ng/mL and limit of quantitation of 4.61 ng/mL. Moreover, the method was successfully applied to pharmaceutical preparations, with percent recovery values (± SD) of 99.96 (± 0.4) and 100.1 (± 0.52) for nasal spray and eye drops, respectively. The results were in good agreement with those obtained by the comparison method, as revealed by Student's t-test and the variance ratio F-test. The method was extended to study the stability of AZL under stress conditions, where the drug was exposed to neutral, acidic, alkaline, oxidative and photolytic degradation according to International Conference on Harmonization (ICH) guidelines. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    Science.gov (United States)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, in which each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with respect to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)³ (M: number of basis functions).

  8. Effect of psychological intervention in the form of relaxation and guided imagery on cellular immune function in normal healthy subjects. An overview

    DEFF Research Database (Denmark)

    Zachariae, R; Kristensen, J S; Hokland, P

    1991-01-01

    The present study measured the effects of relaxation and guided imagery on cellular immune function. During a period of 10 days, 10 healthy subjects were given one 1-hour relaxation procedure and one combined relaxation and guided imagery procedure, instructing the subjects to imagine their immune… on the immune defense, and could form the basis of further studies on psychological intervention and immunological status.

  9. Matrix forming characteristics of inner and outer human meniscus cells on 3D collagen scaffolds under normal and low oxygen tensions.

    Science.gov (United States)

    Croutze, Roger; Jomha, Nadr; Uludag, Hasan; Adesida, Adetola

    2013-12-13

    Limited intrinsic healing potential of the meniscus and a strong correlation between meniscal injury and osteoarthritis have prompted investigation of surgical repair options, including the implantation of functional bioengineered constructs. Cell-based constructs appear promising; however, the generation of meniscal constructs is complicated by the presence of diverse cell populations within this heterogeneous tissue and gaps in the information concerning their response to manipulation of oxygen tension during cell culture. Four human lateral menisci were harvested from patients undergoing total knee replacement. Inner and outer meniscal fibrochondrocytes (MFCs) were expanded to passage 3 in growth medium supplemented with basic fibroblast growth factor (FGF-2), then embedded in porous collagen type I scaffolds and chondrogenically stimulated with transforming growth factor β3 (TGF-β3) under 21% (normal or normoxic) or 3% (hypoxic) oxygen tension for 21 days. Following scaffold culture, constructs were analyzed biochemically for glycosaminoglycan production, histologically for deposition of extracellular matrix (ECM), as well as at the molecular level for expression of characteristic mRNA transcripts. Constructs cultured under normal oxygen tension expressed higher levels of collagen type II (p = 0.05) and aggrecan (p < 0.05) than constructs cultured under hypoxic oxygen tension. There was no significant difference in expression of these genes between scaffolds seeded with MFCs isolated from inner or outer regions of the tissue following 21 days of chondrogenic stimulation (p > 0.05). Cells isolated from inner and outer regions of the human meniscus demonstrated equivalent differentiation potential toward the chondrogenic phenotype and ECM production. Oxygen tension played a key role in modulating the redifferentiation of meniscal fibrochondrocytes on a 3D collagen scaffold in vitro.

  10. Normalized Tritium Quantification Approach (NoTQA) a Method for Quantifying Tritium Contaminated Trash and Debris at LLNL

    International Nuclear Information System (INIS)

    Dominick, J.L.; Rasmussen, C.L.

    2008-01-01

    Several facilities and many projects at LLNL work exclusively with tritium. These operations have the potential to generate large quantities of Low-Level Radioactive Waste (LLW) with the same or similar radiological characteristics. A standardized documented approach to characterizing these waste materials for disposal as radioactive waste will enhance the ability of the Laboratory to manage them in an efficient and timely manner while ensuring compliance with all applicable regulatory requirements. This standardized characterization approach couples documented process knowledge with analytical verification and is very conservative, overestimating the radioactivity concentration of the waste. The characterization approach documented here is the Normalized Tritium Quantification Approach (NoTQA). This document will serve as a Technical Basis Document which can be referenced in radioactive waste characterization documentation packages such as the Information Gathering Document. In general, radiological characterization of waste consists of both developing an isotopic breakdown (distribution) of radionuclides contaminating the waste and using an appropriate method to quantify the radionuclides in the waste. Characterization approaches require varying degrees of rigor depending upon the radionuclides contaminating the waste and the concentration of the radionuclide contaminants as related to regulatory thresholds. Generally, as activity levels in the waste approach a regulatory or disposal facility threshold the degree of required precision and accuracy, and therefore the level of rigor, increases. In the case of tritium, thresholds of concern for control, contamination, transportation, and waste acceptance are relatively high. Due to the benign nature of tritium and the resulting higher regulatory thresholds, this less rigorous yet conservative characterization approach is appropriate. The scope of this document is to define an appropriate and acceptable

  11. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    Science.gov (United States)

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as, life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...

  12. Reconstructing Normality

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Fristed, Peter Billeskov

    2012-01-01

    Forensic psychiatry is an area of priority for the Danish Government. As the field expands, this calls for increased knowledge about mental health nursing practice, as this is part of the forensic psychiatry treatment offered. However, only sparse research exists in this area. The aim of this study … was to investigate the characteristics of forensic mental health nursing staff interaction with forensic mental health inpatients and to explore how staff give meaning to these interactions. The project included 32 forensic mental health staff members, with over 307 hours of participant observations, 48 informal … The intention is to establish a trusting relationship to form behaviour and perceptual-corrective care, which is characterized by staff's endeavours to change, halt, or support the patient's behaviour or perception in relation to staff's perception of normality. The intention is to support and teach the patient…

  13. Turbine component having surface cooling channels and method of forming same

    Science.gov (United States)

    Miranda, Carlos Miguel; Trimmer, Andrew Lee; Kottilingam, Srikanth Chandrudu

    2017-09-05

    A component for a turbine engine includes a substrate that includes a first surface, and an insert coupled to the substrate proximate the substrate first surface. The component also includes a channel. The channel is defined by a first channel wall formed in the substrate and a second channel wall formed by at least one coating disposed on the substrate first surface. The component further includes an inlet opening defined in flow communication with the channel. The inlet opening is defined by a first inlet wall formed in the substrate and a second inlet wall defined by the insert.

  14. Method of forming electronically conducting polymers on conducting and nonconducting substrates

    Science.gov (United States)

    Murphy, Oliver J. (Inventor); Hitchens, G. Duncan (Inventor); Hodko, Dalibor (Inventor); Clarke, Eric T. (Inventor); Miller, David L. (Inventor); Parker, Donald L. (Inventor)

    2001-01-01

    The present invention provides electronically conducting polymer films formed from photosensitive formulations of pyrrole and an electron acceptor that have been selectively exposed to UV light, laser light, or electron beams. The formulations may include photoinitiators, flexibilizers, solvents and the like. These solutions can be used in applications including printed circuit boards and through-hole plating and enable direct metallization processes on non-conducting substrates. After forming the conductive polymer patterns, a printed wiring board can be formed by sensitizing the polymer with palladium and electrolytically depositing copper.

  15. Forming a method mindset : The role of knowledge and preference in facilitating heuristic method usage in design

    NARCIS (Netherlands)

    Daalhuizen, J.J.; Person, F.E.O.K.; Gattol, V.

    2013-01-01

    Both systematic and heuristic methods are common practice when designing. Yet, in teaching students how to design, heuristic methods are typically only granted a secondary role. So, how do designers and students develop a mindset for using heuristic methods? In this paper, we study how prior

  16. Method for fabricating five-level microelectromechanical structures and microelectromechanical transmission formed

    Science.gov (United States)

    Rodgers, M. Steven; Sniegowski, Jeffry J.; Miller, Samuel L.; McWhorter, Paul J.

    2000-01-01

    A process for forming complex microelectromechanical (MEM) devices having five layers or levels of polysilicon, including four structural polysilicon layers wherein mechanical elements can be formed, and an underlying polysilicon layer forming a voltage reference plane. A particular type of MEM device that can be formed with the five-level polysilicon process is a MEM transmission for controlling or interlocking mechanical power transfer between an electrostatic motor and a self-assembling structure (e.g. a hinged pop-up mirror for use with an incident laser beam). The MEM transmission is based on an incomplete gear train and a bridging set of gears that can be moved into place to complete the gear train to enable power transfer. The MEM transmission has particular applications as a safety component for surety, and for this purpose can incorporate a pin-in-maze discriminator responsive to a coded input signal.

  17. Method for fabricating five-level microelectromechanical structures and microelectromechanical transmission formed

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, M.S.; Sniegowski, J.J.; Miller, S.L.; McWhorter, P.J.

    2000-07-04

    A process is disclosed for forming complex microelectromechanical (MEM) devices having five layers or levels of polysilicon, including four structural polysilicon layers wherein mechanical elements can be formed, and an underlying polysilicon layer forming a voltage reference plane. A particular type of MEM device that can be formed with the five-level polysilicon process is a MEM transmission for controlling or interlocking mechanical power transfer between an electrostatic motor and a self-assembling structure (e.g. a hinged pop-up mirror for use with an incident laser beam). The MEM transmission is based on an incomplete gear train and a bridging set of gears that can be moved into place to complete the gear train to enable power transfer. The MEM transmission has particular applications as a safety component for surety, and for this purpose can incorporate a pin-in-maze discriminator responsive to a coded input signal.

  18. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Ren, M J; Cheung, C F; Kong, L B

    2012-01-01

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of the associated uncertainty in the characterization results that may arise from the characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with a specific sampling strategy. Three factors are considered in this study: measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement.
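
    As a minimal sketch (Python) of the least-squares form characterization being analyzed, assuming for simplicity a plane as the reference geometry (the paper targets freeform surfaces, for which the fitted model would be more elaborate): fit the reference surface to the sampled points and report the peak-to-valley of the residuals as the form error.

        import numpy as np

        def plane_form_error(points):
            """points: (N, 3) array of measured x, y, z coordinates."""
            xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
            coeffs, *_ = np.linalg.lstsq(xy1, points[:, 2], rcond=None)
            residuals = points[:, 2] - xy1 @ coeffs
            return residuals.max() - residuals.min()  # peak-to-valley form error

        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(200, 2))
        z = 0.01 * xy[:, 0] + 0.02 * xy[:, 1] + rng.normal(0, 1e-4, 200)
        print("PV form error:", plane_form_error(np.column_stack([xy, z])))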

  19. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A [Duke University Medical Center, Durham, NC, Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States)

    2015-06-15

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125 kVp, 400 mAs) illuminated the sample with collimated fan beams of varying widths (3 mm to 25 mm). Scatter data were collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10 mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10 mm fan beam compared to a 1.5 mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times whilst sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded

  20. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia

    Directory of Open Access Journals (Sweden)

    Hubert S. Gabryś

    2018-03-01

    Full Text Available Purpose: The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods: A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results: NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods.
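
    A minimal sketch (Python) of the univariate screening step described above: rank candidate features by the AUC of a one-feature logistic regression. The feature names and data are hypothetical placeholders, not the study's cohort.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(42)
        n = 153  # cohort size quoted in the abstract
        features = {"parotid_volume": rng.normal(25, 8, n),
                    "dose_gradient_rl": rng.normal(1.2, 0.5, n)}
        y = rng.integers(0, 2, n)  # xerostomia yes/no (placeholder labels)

        for name, feat in features.items():
            prob = cross_val_predict(LogisticRegression(), feat.reshape(-1, 1), y,
                                     cv=5, method="predict_proba")[:, 1]
            print(f"{name}: AUC = {roc_auc_score(y, prob):.2f}")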

  1. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters; La distribuzione di probabilità degli inquinanti atmosferici: confronto tra quattro metodi per la determinazione dei parametri della distribuzione log-normale

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility to interpolate the measured concentrations of CO, NO, NO{sub 2}, O{sub 3} and SO{sub 2} during one year (1995) at the 13 stations of the air quality monitoring station network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given in choosing the method for the determination of the log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, Kolmogorov-Smirnov index and Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally an example of application is given: the effect of an emission reduction strategy in Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.
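
    Two of the four compared estimators are standard enough to sketch (Python): the method of moments matches the sample mean and variance of the concentrations, while maximum likelihood uses the mean and standard deviation of the log-concentrations. The data below are synthetic placeholders.

        import numpy as np

        def lognormal_moments(c):
            """Method of moments: match the sample mean and variance of c."""
            c = np.asarray(c, dtype=float)
            m, v = c.mean(), c.var()
            sigma2 = np.log(1.0 + v / m**2)
            return np.log(m) - 0.5 * sigma2, np.sqrt(sigma2)

        def lognormal_mle(c):
            """Maximum likelihood: mean and std of the log-concentrations."""
            logc = np.log(np.asarray(c, dtype=float))
            return logc.mean(), logc.std(ddof=0)

        co = np.random.default_rng(3).lognormal(0.5, 0.9, size=8760)  # hourly, one year
        print("moments:", lognormal_moments(co))
        print("MLE:    ", lognormal_mle(co))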

  2. Solitary wave solutions to the modified form of Camassa-Holm equation by means of the homotopy analysis method

    International Nuclear Information System (INIS)

    Abbasbandy, S.

    2009-01-01

    Solitary wave solutions to the modified form of the Camassa-Holm (CH) equation are sought. In this work, the homotopy analysis method (HAM), one of the most effective methods, is applied to obtain the soliton wave solutions with and without continuity of the first derivatives at the crest.
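
    For context, the modified Camassa-Holm equation referred to here is commonly written (following Wazwaz; stated as an assumption, since the record does not reproduce it) by replacing the convective term 3uu_x of the standard CH equation with a cubic nonlinearity:

        \begin{equation}
          u_t + 2\kappa u_x - u_{xxt} + a\,u^2 u_x = 2\,u_x u_{xx} + u\,u_{xxx},
        \end{equation}

    where \kappa and a are constants; solitary wave solutions of the form u(x,t) = f(x - ct) are then constructed with HAM.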

  3. New Spectrophotometric and Conductometric Methods for Macrolide Antibiotics Determination in Pure and Pharmaceutical Dosage Forms Using Rose Bengal

    Directory of Open Access Journals (Sweden)

    Rania A. Sayed

    2013-01-01

    Full Text Available Two simple, accurate, precise, and rapid spectrophotometric and conductometric methods were developed for the estimation of erythromycin thiocyanate (I), clarithromycin (II), and azithromycin dihydrate (III) in both pure and pharmaceutical dosage forms. The spectrophotometric procedure depends on the reaction of rose bengal and copper with the cited drugs to form stable ternary complexes which are extractable with methylene chloride, and the absorbances were measured at 558, 557, and 560 nm for (I), (II), and (III), respectively. The conductometric method depends on the formation of an ion-pair complex between the studied drug and rose bengal. For the spectrophotometric method, Beer's law was obeyed. The correlation coefficient (r) for the studied drugs was found to be 0.9999. The molar absorptivity (ε), Sandell's sensitivity, limit of detection (LOD), and limit of quantification (LOQ) were also calculated. The proposed methods were successfully applied for the determination of certain pharmaceutical dosage forms containing the studied drugs.

  4. Sequence analysis of annually normalized citation counts: an empirical analysis based on the characteristic scores and scales (CSS) method.

    Science.gov (United States)

    Bornmann, Lutz; Ye, Adam Y; Ye, Fred Y

    2017-01-01

    In bibliometrics, only a few publications have focused on the citation histories of publications, where the citations for each citing year are assessed. In this study, therefore, annual categories of field- and time-normalized citation scores (based on the characteristic scores and scales method: 0 = poorly cited, 1 = fairly cited, 2 = remarkably cited, and 3 = outstandingly cited) are used to study the citation histories of papers. As our dataset, we used all articles published in 2000 and their annual citation scores until 2015. We generated annual sequences of citation scores (e.g., [Formula: see text]) and compared the sequences of annual citation scores of six broader fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, social sciences, and humanities). In agreement with previous studies, our results demonstrate that sequences with poorly cited (0) and fairly cited (1) elements dominate the publication set; sequences with remarkably cited (2) and outstandingly cited (3) periods are rare. The highest percentages of constantly poorly cited papers can be found in the social sciences; the lowest percentages are in the agricultural sciences and humanities. The largest group of papers with remarkably cited (2) and/or outstandingly cited (3) periods shows an increasing impact over the citing years with the following orders of sequences: [Formula: see text] (6.01%), which is followed by [Formula: see text] (1.62%). Only 0.11% of the papers (n = 909) are constantly on the outstandingly cited level.
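
    A minimal sketch (Python) of the characteristic scores and scales (CSS) classification, assuming the usual recursive-mean construction: each threshold is the mean of the citation counts at or above the previous threshold, yielding classes 0 (poorly cited) through 3 (outstandingly cited). The citation counts below are synthetic placeholders.

        import numpy as np

        def css_classes(citations, n_classes=4):
            c = np.asarray(citations, dtype=float)
            thresholds, pool = [], c
            for _ in range(n_classes - 1):
                t = pool.mean()
                thresholds.append(t)
                pool = pool[pool >= t]  # recurse on the upper tail
            return np.searchsorted(thresholds, c, side="right"), thresholds

        cites = np.random.default_rng(7).poisson(5, size=10000)
        classes, th = css_classes(cites)
        print("thresholds:", np.round(th, 1), "class counts:", np.bincount(classes))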

  5. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    Science.gov (United States)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS-PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with
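
    A minimal sketch (Python) of two of the compared normalizations for a spectral-count matrix (proteins x runs): TSpC scales each run to a common total count, while NSAF additionally divides each protein's count by its length before rescaling within the run. The counts and lengths below are illustrative placeholders.

        import numpy as np

        def tspc_normalize(spc):
            """Scale each run (column) so all runs share the mean total count."""
            totals = spc.sum(axis=0)
            return spc * (totals.mean() / totals)

        def nsaf(spc, lengths):
            """NSAF_ij = (SpC_ij / L_i) / sum_k (SpC_kj / L_k)."""
            saf = spc / lengths[:, None]
            return saf / saf.sum(axis=0)

        spc = np.array([[10, 14, 9], [55, 60, 40], [3, 2, 5]], dtype=float)
        lengths = np.array([250, 900, 120], dtype=float)  # residues per protein
        print(tspc_normalize(spc))
        print(nsaf(spc, lengths))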

  6. Light extinction method for diagnostics of particles sizes formed in magnetic field

    Science.gov (United States)

    Myshkin, Vyacheslav; Izhoykin, Dmitry; Grigoriev, Alexander; Gamov, Denis; Leonteva, Daria

    2018-03-01

    The results of laser diagnostics of dispersed particles formed upon cooling of Zn vapor are presented. The attenuation of radiation in the wavelength range 420-630 nm, with a step of 0.3 nm, was registered. The spectral dependence of the attenuation coefficients was processed using known algorithms for solving the integral equation. Ten groups of 8 attenuation coefficients were formed; each group was processed taking into account the previous solutions. After processing the 10th group of data, the calculations were repeated from the first one. Data on the particle sizes formed in magnetic fields of 0, 44 and 76 mT are given. A model of the physical processes in a magnetic field is discussed.

  7. Solar cell modules with improved backskin and methods for forming same

    Science.gov (United States)

    Hanoka, Jack I.

    1998-04-21

    A laminated solar cell module with a backskin layer that reduces the materials and labor required during the manufacturing process. The solar cell module includes a rigid front support layer formed of light transmitting material having first and second surfaces. A transparent encapsulant layer has a first surface disposed adjacent the second surface of the front support layer. A plurality of interconnected solar cells have a first surface disposed adjacent a second surface of the transparent encapsulant layer. The backskin layer is formed of a thermoplastic olefin, which includes a first ionomer, a second ionomer, glass fiber, and carbon black. A first surface of the backskin layer is disposed adjacent a second surface of the interconnected solar cells. The transparent encapsulant layer and the backskin layer, in combination, encapsulate the interconnected solar cells. An end portion of the backskin layer can be wrapped around the edge of the module for contacting the first surface of the front support layer to form an edge seal.

  8. Methods of generalizing and classifying layer structures of a special form

    Energy Technology Data Exchange (ETDEWEB)

    Viktorova, N P

    1981-09-01

    An examination is made of the problem of classifying structures represented by weighted multilayer graphs of a special form with connections between the vertices of each layer. The classification of such structures is based on constructing resolving sets of graphs by generalizing the elements of the training sample of each class, and then testing whether an input object is isomorphic (with allowance for the weights) to the structures of the resolving set or not. 4 references.

  9. Validated UV-Spectrophotometric Methods for Determination of Gemifloxacin Mesylate in Pharmaceutical Tablet Dosage Forms

    Directory of Open Access Journals (Sweden)

    R. Rote Ambadas

    2010-01-01

    Full Text Available Two simple, economical and accurate UV spectrophotometric methods have been developed for the determination of gemifloxacin mesylate in pharmaceutical tablet formulation. The first UV-spectrophotometric method depends upon measurement of the absorption at a wavelength of 263.8 nm. In the second, area-under-curve method, the wavelength range selected for detection was 258.5-268.5 nm. Beer's law was obeyed in the range of 2 to 12 μg mL-1 for both methods. The proposed methods were validated statistically and applied successfully to the determination of gemifloxacin mesylate in pharmaceutical formulation.
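
    A minimal sketch (Python) of the area-under-curve variant: integrate the absorbance over the stated 258.5-268.5 nm window with the trapezoidal rule and calibrate the resulting area against concentration. The spectrum below is a synthetic placeholder.

        import numpy as np

        def band_area(wavelengths, absorbance, lo=258.5, hi=268.5):
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            return np.trapz(absorbance[mask], wavelengths[mask])

        wl = np.linspace(240, 290, 501)
        spectrum = 0.45 * np.exp(-0.5 * ((wl - 263.8) / 6.0) ** 2)  # synthetic band
        print("AUC(258.5-268.5 nm):", round(band_area(wl, spectrum), 4))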

  10. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia.

    Science.gov (United States)

    Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark

    2018-01-01

    The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior-posterior (AUC = 0.72) and the right-left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated

  11. Leveraging social and digital media for participant recruitment: A review of methods from the Bayley Short Form Formative Study

    OpenAIRE

    Burke-Garcia, Amelia; Mathew, Sunitha

    2017-01-01

    Introduction: Social media is increasingly being used in research, including recruitment. Methods: For the Bayley Short Form Formative Study, which was conducted under the National Children's Study, traditional methods of recruitment proved to be ineffective; therefore, digital media were identified as potential channels for recruitment. Results: Results included successful recruitment of over 1800 infant and toddler participants to the Study. Conclusions: This paper outlines the methods, res...

  12. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    OpenAIRE

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of the normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results s...
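
    The record does not reproduce the new correlation itself; as a point of reference using the same three inputs (Tb, Tc, Pc), the classic Riedel equation estimates the vaporization enthalpy at the normal boiling point. A minimal sketch (Python):

        import math

        R = 8.314  # J/(mol K)

        def riedel_dhvap(tb_k, tc_k, pc_bar):
            """Riedel: dHvb = 1.093 * R * Tc * Tbr * (ln Pc - 1.013) / (0.930 - Tbr)."""
            tbr = tb_k / tc_k
            return 1.093 * R * tc_k * tbr * (math.log(pc_bar) - 1.013) / (0.930 - tbr)

        # Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar
        print(riedel_dhvap(373.15, 647.1, 220.6) / 1000)  # ~42 kJ/mol vs ~40.7 measured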

  13. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    Science.gov (United States)

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data
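
    A minimal sketch (Python) in the spirit of the approach described above (not the authors' exact algorithm): derive a per-array correction from the spike-in control probes only, then apply it to all probes, so that a genuine global decrease in miRNA expression is preserved rather than normalized away.

        import numpy as np

        def spikein_normalize(log_intensities, spike_rows):
            """log_intensities: probes x arrays (log2); spike_rows: spike-in probe indices."""
            spike_med = np.median(log_intensities[spike_rows, :], axis=0)
            offset = spike_med - spike_med.mean()  # per-array offset in log space
            return log_intensities - offset

        rng = np.random.default_rng(5)
        data = rng.normal(8, 1, size=(300, 6))         # 300 probes on 6 arrays
        data[:10] = rng.normal(10, 0.2, size=(10, 6))  # rows 0-9 act as spike-ins
        print(spikein_normalize(data, np.arange(10)).shape)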

  14. Separating bathymetric data representing multiscale rhythmic bed forms : a geostatistical and spectral method compared

    NARCIS (Netherlands)

    van Dijk, Thaiënne A.G.P.; Lindenbergh, Roderik C.; Egberts, Paul J.P.

    2008-01-01

    The superimposition of rhythmic bed forms of different spatial scales is a common and natural phenomenon on sandy seabeds. The dynamics of such seabeds may interfere with different offshore activities and are therefore of interest to both scientists and offshore developers. State-of-the-art echo
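
    A minimal sketch (Python) of the spectral route to separating superimposed bed forms: Fourier-filter a bathymetric profile so that wavelengths longer than a cutoff (e.g. sand waves) and shorter than it (e.g. megaripples) end up in separate components. The cutoff wavelength is a tunable assumption.

        import numpy as np

        def split_bedforms(depth, dx, cutoff_wavelength):
            """depth: 1D profile (m); dx: sample spacing (m)."""
            spectrum = np.fft.rfft(depth - depth.mean())
            freqs = np.fft.rfftfreq(len(depth), d=dx)
            spectrum[freqs > 1.0 / cutoff_wavelength] = 0.0  # keep long wavelengths
            large_scale = np.fft.irfft(spectrum, n=len(depth)) + depth.mean()
            return large_scale, depth - large_scale

        x = np.arange(0, 5000.0, 5.0)
        profile = -20 + 2*np.sin(2*np.pi*x/700) + 0.3*np.sin(2*np.pi*x/40)
        sand_waves, megaripples = split_bedforms(profile, dx=5.0, cutoff_wavelength=200.0)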

  15. Effect of Bottoming on Material Property during Sheet Forming Process through Finite Element Method

    Science.gov (United States)

    Akinlabi, Stephen A.; Fatoba, Olawale S.; Mashinini, Peter M.; Akinlabi, Esther T.

    2018-03-01

    Metal forming is one of the conventional manufacturing processes that remains of immense relevance today, even though modern manufacturing processes have evolved over the years. It is well known that material tends to return, or spring back, to its original form during forming or bending. The phenomenon is managed in various manufacturing processes by compensating for the springback through overbending and bottoming. Overbending is bending the material beyond the desired shape to allow the material to spring back to the expected shape. Bottoming, on the other hand, is a process of undergoing plastic deformation at the point of bending. This study reports on a finite element analysis of the effect of bottoming on the material property during the sheet forming process, with the aim of optimising the process. The results of the analysis revealed that the generated plastic strains range between about 1.750e-01 at the peak of the bend and 3.604e-02 at the early stage of the bending.

  16. The Optimal Conditions for Form-Focused Instruction: Method, Target Complexity, and Types of Knowledge

    Science.gov (United States)

    Kim, Jeong-eun

    2012-01-01

    This dissertation investigates optimal conditions for form-focused instruction (FFI) by considering effects of internal (i.e., timing and types of FFI) and external (i.e., complexity and familiarity) variables of FFI when it is offered within a primarily meaning-focused context of adult second language (L2) learning. Ninety-two Korean-speaking…

  17. Current algebra method for form factors and strong decays with hard pions and kaons

    International Nuclear Information System (INIS)

    Srivastava, P.P.

    1969-01-01

    The F_K/F_π ratio between the kaon and pion decay couplings into one lepton pair, sum rules for the Weinberg spectral functions, the form factor renormalization of the K_l3 decay due to SU(3) symmetry violation, and calculations of the strong decays of the K* and K_A strange resonances are presented and discussed. (L.C.)

  18. Adaptive Sampling based 3D Profile Measuring Method for Free-Form Surface

    Science.gov (United States)

    Duan, Xianyin; Zou, Yu; Gao, Qiang; Peng, Fangyu; Zhou, Min; Jiang, Guozhang

    2018-03-01

    In order to solve the problems of adaptability and scanning efficiency of current surface profile detection devices, a high-precision and high-efficiency detection approach based on self-adaptability is proposed for the surface contours of free-form surface parts. A contact mechanical probe and a non-contact laser probe are integrated according to the sampling approach of adaptive front-end path detection. First, the front-end path is measured by the non-contact laser probe, and the detection path is planned by the internal algorithm of the measuring instrument. Then, reasonable measurement sampling is completed along the planned path by the contact mechanical probe. The detection approach can effectively improve the measurement efficiency of free-form surface contours and can simultaneously detect the surface contours of unknown free-form surfaces with different curvatures and even different rates of curvature. The detection approach proposed in this paper also has important reference value for free-form surface contour detection.
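
    A minimal sketch (Python) of curvature-adaptive sampling in the spirit of the approach above (an illustration, not the instrument's internal algorithm): after a coarse laser pre-scan of a profile, place more contact-probe points where the estimated curvature is high.

        import numpy as np

        def adaptive_sample_positions(x, z, n_points):
            """x, z: coarse pre-scan of the profile; returns n_points refined x locations."""
            curvature = np.abs(np.gradient(np.gradient(z, x), x))
            weights = curvature + 0.05 * curvature.max()  # keep some points in flat regions
            cdf = np.cumsum(weights)
            cdf /= cdf[-1]
            return np.interp(np.linspace(0, 1, n_points), cdf, x)

        x = np.linspace(0, 100, 400)
        z = 0.5 * np.sin(x / 5.0) + 0.002 * (x - 50) ** 2
        print(adaptive_sample_positions(x, z, 60)[:5])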

  19. Forming method of a functional layer-built film by micro-wave plasma CVD

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Keishi

    1988-11-18

    In forming an amorphous semiconductor material film, microwave plasma CVD generally cannot be used because of such demerits as film separation, low yield, columnar structure in the film, and problems in the optical and electrical properties. In this invention, a specific substrate is placed in a layer-built film forming unit capable of maintaining vacuum; raw material gas for the film formation is introduced; and plasma is generated by microwave energy to decompose the raw material gas, thus forming the layer-built film on the substrate. A film is then made by adding a specific amount of chalcogenide-containing gas to the raw material gas. By this, the utilization efficiency of the raw material gas reaches roughly 100%, and both the adhesion to the substrate and the structural flexibility of the layer-built film increase, enhancing the yield of various functional elements (sensors, solar cells, thin-film transistors, etc.) and thus greatly reducing the production cost. 6 figs., 7 tabs.

  20. Investigation into complexing of pentavalent actinide forms with some anions of organic acids by the coprecipitation method

    International Nuclear Information System (INIS)

    Moskvin, A.I.; Poznyakov, A.N.; AN SSSR, Moscow. Inst. Geokhimii i Analiticheskoj Khimii)

    1979-01-01

    Complexing of the pentavalent forms of the actinides Np, Pu and Am with anions of acetic and oxalic acids and EDTA is studied using the method of coprecipitation with iron hydroxide. The compositions and stability constants of the actinide complexes formed are determined. The acid anions are arranged, in order of decreasing complexing tendency, as: EDTA anion > C₂O₄²⁻ > CH₃COO⁻