WorldWideScience

Sample records for qm-based end-point method

  1. Multiscale Free Energy Simulations: An Efficient Method for Connecting Classical MD Simulations to QM or QM/MM Free Energies Using Non-Boltzmann Bennett Reweighting Schemes

    Science.gov (United States)

    2015-01-01

The reliability of free energy simulations (FES) is limited by two factors: (a) the need for correct sampling and (b) the accuracy of the computational method employed. Classical methods (e.g., force fields) are typically used for FES and present a myriad of challenges, with parametrization being a principal one. On the other hand, parameter-free quantum mechanical (QM) methods tend to be too computationally expensive for adequate sampling. One widely used approach is a combination of methods, where the free energy difference between the two end states is computed by, e.g., molecular mechanics (MM), and the end states are corrected by more accurate methods, such as QM or hybrid QM/MM techniques. Here we report two new approaches that significantly improve the aforementioned scheme, with a focus on how to compute corrections between, e.g., the MM and the more accurate QM calculations. First, a molecular dynamics trajectory that properly samples relevant conformational degrees of freedom is generated. Next, potential energies of each trajectory frame are generated with a QM or QM/MM Hamiltonian. Free energy differences are then calculated based on the QM or QM/MM energies using either a non-Boltzmann Bennett approach (QM-NBB) or non-Boltzmann free energy perturbation (NB-FEP). Both approaches are applied to calculate relative and absolute solvation free energies in explicit and implicit solvent environments. Solvation free energy differences (relative and absolute) between ethane and methanol in explicit solvent are used as the initial test case for QM-NBB. Next, implicit solvent methods are employed in conjunction with both QM-NBB and NB-FEP to compute absolute solvation free energies for 21 compounds. These compounds range from small molecules such as ethane and methanol to fairly large, flexible solutes, such as triacetyl glycerol. Several technical aspects were investigated. Ultimately, some best practices are suggested for improving methods that seek to connect
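The MM-to-QM end-state correction described in this record reduces, in its simplest one-sided form, to the Zwanzig exponential-averaging (FEP) formula; the full QM-NBB estimator of the paper additionally reweights a Bennett acceptance-ratio calculation. A minimal sketch of the one-sided correction only (function name, energies, and kT value are illustrative assumptions):

```python
import numpy as np

def fep_endpoint_correction(u_mm, u_qm, kT=0.593):
    """One-sided (Zwanzig) free energy correction A_QM - A_MM from
    frames sampled on the MM surface:
        dA = -kT * ln < exp(-(U_QM - U_MM)/kT) >_MM
    u_mm, u_qm : energies (kcal/mol) of the same trajectory frames
                 evaluated with the MM and QM Hamiltonians.
    kT         : thermal energy, ~0.593 kcal/mol at 298 K.
    """
    du = np.asarray(u_qm, dtype=float) - np.asarray(u_mm, dtype=float)
    du0 = du.min()  # shift for numerical stability of the exponential
    return du0 - kT * np.log(np.mean(np.exp(-(du - du0) / kT)))
```

This estimator converges only when the MM ensemble overlaps the QM one; the non-Boltzmann reweighting schemes discussed above are designed precisely to improve on that limitation.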

  2. Development and application of QM/MM methods to study the solvation effects and surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Dibya, Pooja Arora [Iowa State Univ., Ames, IA (United States)

    2010-01-01

Quantum mechanical (QM) calculations have the advantage of attaining high-level accuracy; however, QM calculations become computationally inefficient as the size of the system grows. Solving complex molecular problems on large systems and ensembles by using quantum mechanics still poses a challenge in terms of the computational cost. Methods that are based on classical mechanics are an inexpensive alternative, but they lack accuracy. A good trade-off between accuracy and efficiency is achieved by combining QM methods with molecular mechanics (MM) methods to use the robustness of the QM methods in terms of accuracy and the MM methods to minimize the computational cost. Two types of QM combined with MM (QM/MM) methods are the main focus of the present dissertation: the application and development of QM/MM methods for solvation studies and reactions on the Si(100) surface. The solvation studies were performed using a discrete solvation model that is largely based on first principles, called the effective fragment potential (EFP) method. The main idea of combining the EFP method with quantum mechanics is to accurately treat the solute-solvent and solvent-solvent interactions, such as electrostatics, polarization, dispersion and charge transfer, that are important in correctly calculating solvent effects on systems of interest. A second QM/MM method called SIMOMM (surface integrated molecular orbital molecular mechanics) is a hybrid QM/MM embedded cluster model that mimics the real surface. This method was employed to calculate the potential energy surfaces for reactions of atomic O on the Si(100) surface. The hybrid QM/MM method is a computationally inexpensive approach for studying reactions on larger surfaces in a reasonably accurate and efficient manner. This thesis comprises four chapters: Chapter 1 describes the general overview and motivation of the dissertation and gives a broad background of the computational methods that have been employed in this work

  3. Extracting dimer structures from simulations of organic-based materials using QM/MM methods

    Energy Technology Data Exchange (ETDEWEB)

    Pérez-Jiménez, A.J., E-mail: aj.perez@ua.es; Sancho-García, J.C., E-mail: jc.sancho@ua.es

    2015-09-28

    Highlights: • DFT geometries of isolated dimers in organic crystals differ from experimental ones. • This can be corrected using QM/MM geometry optimizations. • The QM = B3LYP–D3(ZD)/cc-pVDZ and MM = GAFF combination works reasonably well. - Abstract: The functionality of weakly bound organic materials, either in Nanoelectronics or in Materials Science, is known to be strongly affected by their morphology. Theoretical predictions of the underlying structure–property relationships are frequently based on calculations performed on isolated dimers, but the optimized structure of the latter may significantly differ from experimental data even when dispersion-corrected methods are used for it. Here, we address this problem on two organic crystals, namely coronene and 5,6,11,12-tetrachlorotetracene, concluding that it is caused by the absence of the surrounding monomers present in the crystal, and that it can be efficiently cured when the dimer is embedded into a general Quantum Mechanics/Molecular Mechanics (QM/MM) geometry optimization scheme. We also investigate how the size of the MM region affects the results. These findings may be helpful for the simulation of the morphology of active materials in crystalline or glassy samples.

  4. Tuned and Balanced Redistributed Charge Scheme for Combined Quantum Mechanical and Molecular Mechanical (QM/MM) Methods and Fragment Methods: Tuning Based on the CM5 Charge Model.

    Science.gov (United States)

    Wang, Bo; Truhlar, Donald G

    2013-02-12

    Tuned and balanced redistributed charge schemes have been developed for modeling the electrostatic fields of bonds that are cut by a quantum mechanical-molecular mechanical boundary in combined quantum mechanical and molecular mechanical (QM/MM) methods. First, the charge is balanced by adjusting the charge on the MM boundary atom to conserve the total charge of the entire QM/MM system. In the balanced smeared redistributed charge (BSRC) scheme, the adjusted MM boundary charge is smeared with a smearing width of 1.0 Å and is distributed in equal portions to the midpoints of the bonds between the MM boundary atom and the MM atoms bonded to it; in the balanced redistributed charge-2 (BRC2) scheme, the adjusted MM boundary charge is distributed as point charges in equal portions to the MM atoms that are bonded to the MM boundary atom. The QM subsystem is capped by a fluorine atom that is tuned to reproduce the sum of partial atomic charges of the uncapped portion of the QM subsystem. The new aspect of the present study is a new way to carry out the tuning process; in particular, the CM5 charge model, rather than the Mulliken population analysis applied in previous studies, is used for tuning the capping atom that terminates the dangling bond of the QM region. The mean unsigned error (MUE) of the QM/MM deprotonation energy for a 15-system test suite of deprotonation reactions is 2.3 kcal/mol for the tuned BSRC scheme (TBSRC) and 2.4 kcal/mol for the tuned BRC2 scheme (TBRC2). As was the case for the original tuning method based on Mulliken charges, the new tuning method performs much better than using conventional hydrogen link atoms, which have an MUE on this test set of about 7 kcal/mol. However, the new scheme eliminates the need to use small basis sets, which can be problematic, and it allows one to be more consistent by tuning the parameters with whatever basis set is appropriate for applications. (Alternatively, since the tuning parameters and partial charges
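The balancing-and-redistribution step described above can be sketched in a few lines. The charges, indices, and bonding pattern below are hypothetical, and the real schemes additionally smear the redistributed charge (BSRC) and tune the capping atom against CM5 charges, neither of which is shown:

```python
def brc2_redistribute(charges, boundary, neighbors, target_total=0.0):
    """BRC2-style sketch: adjust the MM boundary atom's charge so the
    whole QM/MM system keeps its target total charge, then move that
    charge in equal point-charge portions onto the MM atoms bonded to
    the boundary atom.  (Illustrative only.)"""
    q = dict(charges)
    # Balance: absorb any excess or deficit into the boundary atom.
    q[boundary] += target_total - sum(q.values())
    # Redistribute the adjusted boundary charge equally to its MM neighbors.
    share = q[boundary] / len(neighbors)
    for j in neighbors:
        q[j] += share
    q[boundary] = 0.0
    return q
```

The point of the balancing step is visible in the code: whatever charge is removed or added at the boundary, the system total is conserved by construction.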

  5. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. Performance of the methods will be compared and presented in this paper.
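One common way to apply the Levenberg-Marquardt algorithm to this problem is to fit an idealized sigmoidal titration curve to all the data and take the fitted inflection volume as the end point. A sketch using SciPy's `curve_fit`, whose default method for unbounded problems is Levenberg-Marquardt; the functional form and starting guesses are assumptions, not necessarily the authors' exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(v, e0, de, v_ep, w):
    """Idealized titration curve E(V) with inflection at v_ep."""
    return e0 + de / (1.0 + np.exp(-(v - v_ep) / w))

def end_point_lm(volumes, potentials):
    """Levenberg-Marquardt fit of the whole curve; the end point is
    the fitted inflection volume rather than a local derivative."""
    p0 = [potentials.min(), np.ptp(potentials),
          volumes[np.argmax(np.gradient(potentials, volumes))], 0.1]
    popt, _ = curve_fit(sigmoid, volumes, potentials, p0=p0)
    return popt[2]
```

Because the fit uses every data point, it averages out noise that a pointwise second derivative amplifies, which is consistent with the accuracy advantage reported above.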

  6. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. Performance of the methods will be compared and presented in this paper.

  7. A simple and effective solution to the constrained QM/MM simulations

    Science.gov (United States)

    Takahashi, Hideaki; Kambe, Hiroyuki; Morita, Akihiro

    2018-04-01

It is a promising extension of the quantum mechanical/molecular mechanical (QM/MM) approach to incorporate the solvent molecules surrounding the QM solute into the QM region to ensure an adequate description of the electronic polarization of the solute. However, the solvent molecules in the QM region inevitably diffuse into the MM bulk during the QM/MM simulation. In this article, we developed a simple and efficient method, referred to as the "boundary constraint with correction (BCC)," to prevent the diffusion of the solvent water molecules by means of a constraint potential. The point of the BCC method is to compensate for the error in a statistical property due to the bias potential by adding a correction term obtained through a set of QM/MM simulations. The BCC method is designed so that the effect of the bias potential completely vanishes when the QM solvent is identical to the MM solvent. Furthermore, the desirable conditions, that is, the continuity of energy and force and the conservation of energy and momentum, are fulfilled in principle. We applied the QM/MM-BCC method to a hydronium ion (H3O+) in aqueous solution to construct the radial distribution function (RDF) of the solvent around the solute. It was demonstrated that the correction term fairly compensated for the error and brought the RDF into good agreement with the result given by an ab initio molecular dynamics simulation.
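A constraint potential of the kind described above can be as simple as a flat-bottom radial wall around the solute. The functional form and parameter values below are a generic illustration, not necessarily the paper's exact potential, and the essential BCC ingredient, the statistical correction term, is not shown:

```python
import numpy as np

def flat_bottom_wall(r, r0=5.0, k=10.0):
    """Generic boundary-constraint potential: zero for r <= r0, so the
    solvation structure near the solute is unbiased, and harmonic for
    r > r0 to keep QM waters from diffusing into the MM bulk.
    r, r0 in angstrom; k in kcal/mol/A^2 (assumed units)."""
    dr = np.maximum(0.0, np.asarray(r, dtype=float) - r0)
    return 0.5 * k * dr * dr
```

Inside the wall the force vanishes, which is why statistical properties evaluated close to the solute need only the residual correction the BCC method supplies.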

  8. Derivation of Reliable Geometries in QM Calculations of DNA Structures: Explicit Solvent QM/MM and Restrained Implicit Solvent QM Optimizations of G-Quadruplexes.

    Science.gov (United States)

    Gkionis, Konstantinos; Kruse, Holger; Šponer, Jiří

    2016-04-12

    Modern dispersion-corrected DFT methods have made it possible to perform reliable QM studies on complete nucleic acid (NA) building blocks having hundreds of atoms. Such calculations, although still limited to investigations of potential energy surfaces, enhance the portfolio of computational methods applicable to NAs and offer considerably more accurate intrinsic descriptions of NAs than standard MM. However, in practice such calculations are hampered by the use of implicit solvent environments and truncation of the systems. Conventional QM optimizations are spoiled by spurious intramolecular interactions and severe structural deformations. Here we compare two approaches designed to suppress such artifacts: partially restrained continuum solvent QM and explicit solvent QM/MM optimizations. We report geometry relaxations of a set of diverse double-quartet guanine quadruplex (GQ) DNA stems. Both methods provide neat structures without major artifacts. However, each one also has distinct weaknesses. In restrained optimizations, all errors in the target geometries (i.e., low-resolution X-ray and NMR structures) are transferred to the optimized geometries. In QM/MM, the initial solvent configuration causes some heterogeneity in the geometries. Nevertheless, both approaches represent a decisive step forward compared to conventional optimizations. We refine earlier computations that revealed sizable differences in the relative energies of GQ stems computed with AMBER MM and QM. We also explore the dependence of the QM/MM results on the applied computational protocol.

  9. Advances in quantum and molecular mechanical (QM/MM) simulations for organic and enzymatic reactions.

    Science.gov (United States)

    Acevedo, Orlando; Jorgensen, William L

    2010-01-19

    Application of combined quantum and molecular mechanical (QM/MM) methods focuses on predicting activation barriers and the structures of stationary points for organic and enzymatic reactions. Characterization of the factors that stabilize transition structures in solution and in enzyme active sites provides a basis for design and optimization of catalysts. Continued technological advances allowed for expansion from prototypical cases to mechanistic studies featuring detailed enzyme and condensed-phase environments with full integration of the QM calculations and configurational sampling. This required improved algorithms featuring fast QM methods, advances in computing changes in free energies including free-energy perturbation (FEP) calculations, and enhanced configurational sampling. In particular, the present Account highlights development of the PDDG/PM3 semi-empirical QM method, computation of multi-dimensional potentials of mean force (PMF), incorporation of on-the-fly QM in Monte Carlo (MC) simulations, and a polynomial quadrature method for efficient modeling of proton-transfer reactions. The utility of this QM/MM/MC/FEP methodology is illustrated for a variety of organic reactions including substitution, decarboxylation, elimination, and pericyclic reactions. A comparison to experimental kinetic results on medium effects has verified the accuracy of the QM/MM approach in the full range of solvents from hydrocarbons to water to ionic liquids. Corresponding results from ab initio and density functional theory (DFT) methods with continuum-based treatments of solvation reveal deficiencies, particularly for protic solvents. 
Also summarized in this Account are three specific QM/MM applications to biomolecular systems: (1) a recent study that clarified the mechanism for the reaction of 2-pyrone derivatives catalyzed by macrophomate synthase as a tandem Michael-aldol sequence rather than a Diels-Alder reaction, (2) elucidation of the mechanism of action of fatty

  10. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM).

    Science.gov (United States)

    Sinitskiy, Anton V; Voth, Gregory A

    2018-01-07

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  11. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)

    Science.gov (United States)

    Sinitskiy, Anton V.; Voth, Gregory A.

    2018-01-01

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  12. Relative Free Energies for Hydration of Monovalent Ions from QM and QM/MM Simulations.

    Science.gov (United States)

    Lev, Bogdan; Roux, Benoît; Noskov, Sergei Yu

    2013-09-10

Methods directly evaluating the hydration structure and thermodynamics of physiologically relevant ions (Na(+), K(+), Cl(-), etc.) have wide-ranging applications in the fields of inorganic, physical, and biological chemistry. All-atom simulations based on accurate potential energy surfaces appear to offer a viable option for assessing the chemistry of ion solvation. Although MD and free energy simulations of ion solvation with classical force fields have proven their usefulness, a number of challenges still remain. One of them is the difficulty of benchmarking and validating force fields against structural and thermodynamic data obtained for a condensed phase. Hybrid quantum mechanical/molecular mechanical (QM/MM) models combined with sampling algorithms have the potential to provide an accurate solvation model and to incorporate the effects of the surroundings, which are often missing in gas-phase ab initio computations. Herein, we report the results from QM/MM free energy simulations of Na(+)/K(+) and Cl(-)/Br(-) hydration in which we simultaneously characterized the relative thermodynamics of ion solvation and changes in the solvation structure. The Flexible Inner Region Ensemble Separator (FIRES) method was used to impose a spatial separation between the QM region and the outer sphere of solvent molecules treated with the CHARMM27 force field. FEP calculations based on QM/MM simulations utilizing the CHARMM/deMon2k interface were performed with different basis set combinations for K(+)/Na(+) and Cl(-)/Br(-) perturbations to establish the dependence of the computed free energies on the basis set level. The dependence of the computed relative free energies on the size of the QM and MM regions is discussed. The current methodology offers an accurate description of structural and thermodynamic aspects of the hydration of alkali and halide ions in neat solvents and can be used to obtain thermodynamic data on ion solvation in condensed phase along with underlying

  13. Gran method for end point anticipation in monosegmented flow titration

    Directory of Open Access Journals (Sweden)

    Aquino Emerson V

    2004-01-01

An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second-derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point, and three points before and after the stoichiometric point are used for the end-point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1-5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end-point determination procedures, respectively. The proposed system reduces the time needed to perform a titration, ensuring low sample and reagent consumption and fully automatic sampling and titrant addition in a calibration-free titration protocol.
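The Gran approach works by linearizing the pre-equivalence branch of the titration curve so the end point can be anticipated by extrapolation from only a few points, which is what allows the three-to-four-aliquot estimate above. A sketch for a potentiometric titration, assuming an ideal Nernstian electrode response (the slope value and sign convention are assumptions):

```python
import numpy as np

def gran_end_point(v, emf, v0, slope=59.16):
    """Gran linearization: before the equivalence point the function
        g(v) = (v0 + v) * 10**(emf / slope)
    is (ideally) linear in titrant volume v and extrapolates to zero
    at the end point.  v0 is the initial sample volume (same units as
    v); slope is the Nernstian slope in mV per decade."""
    g = (v0 + np.asarray(v)) * 10.0 ** (np.asarray(emf) / slope)
    a, b = np.polyfit(v, g, 1)  # linear fit g ~ a*v + b
    return -b / a               # volume where g crosses zero
```

Because only a straight-line fit and an intercept are needed, a controller can update the end-point estimate after every aliquot and stop dosing as soon as the extrapolation stabilizes.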

  14. Combined quantum and molecular mechanics (QM/MM).

    Science.gov (United States)

    Friesner, Richard A

    2004-12-01

We describe the current state of the art of mixed quantum mechanics/molecular mechanics (QM/MM) methodology, with a particular focus on modeling of enzymatic reactions. Over the past decade, the effectiveness of these methods has increased dramatically, based on improved quantum chemical methods, advances in the description of the QM/MM interface, and reductions in the cost/performance of computing hardware. Two examples of pharmaceutically relevant applications, cytochrome P450 and class C β-lactamase, are presented.

  15. Systematic Quantum Mechanical Region Determination in QM/MM Simulation.

    Science.gov (United States)

    Karelina, Maria; Kulik, Heather J

    2017-02-14

Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely used in enzyme simulation. Over the past several years, more than ten convergence studies of QM/MM methods have revealed that key energetic and structural properties approach asymptotic limits only with very large (ca. 500-1000 atom) QM regions. This slow convergence has been observed to be due in part to significant charge transfer between the core active site and the surrounding protein environment, which cannot be addressed by improvement of MM force fields or of the embedding method employed within QM/MM. Given this slow convergence, it becomes essential to identify strategies for the most atom-economical determination of optimal QM regions and to gain insight into the crucial interactions captured only in large QM regions. Here, we extend and develop two methods for the quantitative determination of QM regions. First, in the charge shift analysis (CSA) method, we probe the reorganization of electron density when core active-site residues are removed completely, as determined by large-QM-region QM/MM calculations. Second, we introduce the highly parallelizable Fukui shift analysis (FSA), which identifies how core/substrate frontier states are altered by the presence of an additional QM residue in smaller initial QM regions. We demonstrate that the FSA and CSA approaches are complementary and consistent on three test-case enzymes: catechol O-methyltransferase, cytochrome P450cam, and hen egg-white lysozyme. We also introduce validation strategies and test the sensitivities of the two methods to geometric structure, basis set size, and electronic structure methodology. Both methods represent promising approaches for the systematic, unbiased determination of quantum mechanical effects in enzymes and other large systems that necessitate multiscale modeling.

  16. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence-point category, such as Gran or Fortuin.

  17. Condensed phase QM/MM simulations utilizing the exchange core functions to describe exchange repulsions at the QM boundary region

    International Nuclear Information System (INIS)

    Umino, Satoru; Takahashi, Hideaki; Morita, Akihiro

    2016-01-01

In a recent work, we developed a method [H. Takahashi et al., J. Chem. Phys. 143, 084104 (2015)] referred to as the exchange-core function (ECF) approach, to compute the exchange repulsion E_ex between solute and solvent in the framework of the quantum mechanical (QM)/molecular mechanical (MM) method. The ECF, represented with a Slater function, plays an essential role in determining E_ex on the basis of the overlap model. In the work of Takahashi et al. [J. Chem. Phys. 143, 084104 (2015)], it was demonstrated that our approach is successful in computing the hydrogen bond energies of minimal QM/MM systems including a cationic QM solute. We provide in this paper the extension of the ECF approach to the free energy calculation in condensed phase QM/MM systems by combining the ECF and the QM/MM-ER approach [H. Takahashi et al., J. Chem. Phys. 121, 3989 (2004)]. By virtue of the theory of solutions in energy representation, the free energy contribution δμ_ex from the exchange repulsion was naturally formulated. We found that the ECF approach in combination with QM/MM-ER gives a substantial improvement on the calculation of the hydration free energy of a hydronium ion. This can be attributed to the fact that the ECF reasonably realizes the contraction of the electron density of the cation due to the deficit of an electron.

  18. QM/MM and classical molecular dynamics simulation of histidine-tagged peptide immobilization on nickel surface

    Energy Technology Data Exchange (ETDEWEB)

Yang Zhenyu [State Key Laboratory of Nonlinear Mechanics (LNM), Institute of Mechanics, Chinese Academy of Sciences, Beijing 100080 (China)]; Zhao Yapu [State Key Laboratory of Nonlinear Mechanics (LNM), Institute of Mechanics, Chinese Academy of Sciences, Beijing 100080 (China)]. E-mail: yzhao@lnm.imech.ac.cn

    2006-05-15

The hybrid quantum mechanics (QM) and molecular mechanics (MM) method is employed to simulate His-tagged peptide adsorption onto an ionized region of a nickel surface. Based on previous experiments, the interaction of the peptide with one Ni ion is considered. In the QM/MM calculation, the imidazoles on the side chain of the peptide and the metal ion with several neighboring water molecules are treated as the QM part, calculated with GAMESS, and the remaining atoms are treated as the MM part, calculated with TINKER. The integrated molecular orbital/molecular mechanics (IMOMM) method is used to treat the QM part containing the transition metal. Using the QM/MM method, we optimize the structure of the synthetic peptide chelating a Ni ion. Different chelate structures are considered. The geometry parameters of the QM subsystem obtained by the QM/MM calculation are consistent with the available experimental results. We also perform a classical molecular dynamics (MD) simulation with the experimental parameters for the synthetic peptide adsorbed on a neutral Ni(100) surface. We find that half of the His-tags are almost parallel to the substrate, which enhances the binding strength. Peeling of the peptide from the Ni substrate is simulated in aqueous solvent and in vacuum, respectively. The critical peeling forces in the two environments are obtained. The results show that the imidazole rings are attached to the substrate more tightly than the other bases in this peptide.

  19. Condensed phase QM/MM simulations utilizing the exchange core functions to describe exchange repulsions at the QM boundary region

    Energy Technology Data Exchange (ETDEWEB)

    Umino, Satoru; Takahashi, Hideaki, E-mail: hideaki@m.tohoku.ac.jp; Morita, Akihiro [Department of Chemistry, Graduate School of Science, Tohoku University, Sendai, Miyagi 980-8578 (Japan)

    2016-08-28

    In a recent work, we developed a method [H. Takahashi et al., J. Chem. Phys. 143, 084104 (2015)] referred to as exchange-core function (ECF) approach, to compute exchange repulsion E{sub ex} between solute and solvent in the framework of the quantum mechanical (QM)/molecular mechanical (MM) method. The ECF, represented with a Slater function, plays an essential role in determining E{sub ex} on the basis of the overlap model. In the work of Takahashi et al. [J. Chem. Phys. 143, 084104 (2015)], it was demonstrated that our approach is successful in computing the hydrogen bond energies of minimal QM/MM systems including a cationic QM solute. We provide in this paper the extension of the ECF approach to the free energy calculation in condensed phase QM/MM systems by combining the ECF and the QM/MM-ER approach [H. Takahashi et al., J. Chem. Phys. 121, 3989 (2004)]. By virtue of the theory of solutions in energy representation, the free energy contribution δμ{sub ex} from the exchange repulsion was naturally formulated. We found that the ECF approach in combination with QM/MM-ER gives a substantial improvement on the calculation of the hydration free energy of a hydronium ion. This can be attributed to the fact that the ECF reasonably realizes the contraction of the electron density of the cation due to the deficit of an electron.

  20. The mean photon energy anti E{sub F} at the point of measurement determines the detector-specific radiation quality correction factor k{sub Q,M} in {sup 192}Ir brachytherapy dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Chofor, Ndimofor; Harder, Dietrich; Selbach, Hans-Joachim; Poppe, Bjoern [University of Oldenburg and Pius-Hospital Oldenburg (Germany). Medical Radiation Physics Group

    2016-11-01

    The application of various radiation detectors for brachytherapy dosimetry has motivated this study of the energy dependence of the radiation quality correction factor k{sub Q,M}, the quotient of the detector responses under calibration conditions at a {sup 60}Co unit and under the given non-reference conditions at the point of measurement, M, occurring in photon brachytherapy. The investigated detectors comprise TLD, radiochromic film, ESR, Si diode, plastic scintillator and diamond crystal detectors as well as ionization chambers of various sizes, whose measured response-energy relationships, taken from the literature, served as input data. Brachytherapy photon fields were Monte-Carlo simulated for an ideal isotropic {sup 192}Ir point source, a model spherical {sup 192}Ir source with steel encapsulation and a commercial HDR GammaMed Plus source. The radial source distance was varied within cylindrical water phantoms with outer radii ranging from 10 to 30 cm and heights from 20 to 60 cm. By application of this semiempirical method - originally developed for teletherapy dosimetry - it has been shown that the factor k{sub Q,M} is closely correlated with a single variable, the fluence-weighted mean photon energy anti E{sub F} at the point of measurement. The radial profiles of anti E{sub F} obtained with either the commercial {sup 192}Ir source or the two simplified source variants show little variation. The observed correlations between k{sub Q,M} and anti E{sub F} are represented by fitting formulae for all investigated detectors, and further variation of the detector type is foreseen. The close correlation thus established between the radiation quality correction factor k{sub Q,M} and the local mean photon energy anti E{sub F} can be regarded as a simple regularity, facilitating the practical application of the correction factor k{sub Q,M} for in-phantom dosimetry around {sup 192}Ir brachytherapy sources. anti E{sub F} values can be assessed by Monte Carlo simulation or

  1. QM/MM hybrid calculation of biological macromolecules using a new interface program connecting QM and MM engines

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, Yohsuke; Tateno, Masaru [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8571 (Japan); Ohta, Takehiro [Center for Computational Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8577 (Japan)], E-mail: tateno@ccs.tsukuba.ac.jp

    2009-02-11

    An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between the QM and MM regions are truncated in the QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from the QM regions are introduced into the one-electron integral terms of the QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms, using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding the QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of the electronic structures of biological macromolecules.
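    The 'additive' scheme in this abstract folds the MM electrostatics into the one-electron part of the QM Hamiltonian. A minimal sketch of the underlying quantity, in atomic units and with hypothetical data structures (the abstract does not specify an interface):

```python
import math

def embedding_potential(mm_charges, point, cutoff=None):
    """Electrostatic potential (atomic units) at a QM-region position
    generated by MM partial charges, optionally truncated at a cutoff
    distance. In an additive electrostatic-embedding scheme, terms of
    this form are added to the one-electron part of the QM Hamiltonian;
    truncating them recovers the 'subtractive' behavior described above.
    mm_charges: iterable of (q, (x, y, z)) tuples."""
    v = 0.0
    for q, pos in mm_charges:
        d = math.dist(point, pos)
        if cutoff is None or d <= cutoff:
            v += q / d
    return v
```

    Summing this potential over QM nuclei and grid points is what couples the MM partial charges to the QM electronic structure; the perturbation of the electronic structure reported above comes precisely from these terms.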

  2. QM/MM free energy simulations: recent progress and challenges

    Science.gov (United States)

    Lu, Xiya; Fang, Dong; Ito, Shingo; Okamoto, Yuko; Ovchinnikov, Victor

    2016-01-01

    Due to the higher computational cost relative to pure molecular mechanical (MM) simulations, hybrid quantum mechanical/molecular mechanical (QM/MM) free energy simulations particularly require a careful consideration of balancing computational cost and accuracy. Here we review several recent developments in free energy methods most relevant to QM/MM simulations and discuss several topics motivated by these developments using simple but informative examples that involve processes in water. For chemical reactions, we highlight the value of invoking enhanced sampling techniques (e.g., replica-exchange) in umbrella sampling calculations and the value of including collective environmental variables (e.g., hydration level) in metadynamics simulations; we also illustrate the sensitivity of string calculations, especially free energy along the path, to various parameters in the computation. Alchemical free energy simulations with a specific thermodynamic cycle are used to probe the effect of including the first solvation shell into the QM region when computing solvation free energies. For cases where high-level QM/MM potential functions are needed, we analyze two different approaches: the QM/MM-MFEP method of Yang and co-workers and perturbative correction to low-level QM/MM free energy results. For the examples analyzed here, both approaches seem productive, although care needs to be exercised when analyzing the perturbative corrections. PMID:27563170
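    The "perturbative correction to low-level QM/MM free energy results" mentioned here is, in its simplest one-sided form, a Zwanzig exponential average over frames sampled at the low level. A hedged sketch (the review does not prescribe this exact implementation; the default kT is an assumption corresponding to 298 K in kcal/mol):

```python
import math

def fep_correction(e_low, e_high, kT=0.5961):
    """One-sided free energy perturbation (Zwanzig) correction from a
    low-level to a high-level potential, using frames sampled at the
    low level:  dA = -kT * ln < exp(-(E_high - E_low)/kT) >_low.
    A numerically stable log-sum-exp evaluation is used."""
    dE = [(h - l) / kT for h, l in zip(e_high, e_low)]
    m = min(dE)
    avg = sum(math.exp(-(d - m)) for d in dE) / len(dE)
    return kT * (m - math.log(avg))
```

    The "care" the review urges shows up here as well: the exponential average is dominated by frames with small E_high - E_low, so poor overlap between the low- and high-level ensembles makes the estimate noisy.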

  3. Extended representations of observables and states for a noncontextual reinterpretation of QM

    International Nuclear Information System (INIS)

    Garola, Claudio; Sozzo, Sandro

    2012-01-01

    A crucial and problematical feature of quantum mechanics (QM) is the nonobjectivity of properties. The ESR model restores objectivity by reinterpreting quantum probabilities as conditional on detection and embedding the mathematical formalism of QM into a broader noncontextual (hence local) framework. We propose here an improved presentation of the ESR model containing a more complete mathematical representation of the basic entities of the model. We also extend the model to mixtures, showing that the mathematical representations of proper mixtures do not coincide with the mathematical representation of mixtures provided by QM, while the representation of improper mixtures does. This feature of the ESR model entails that some interpretative problems arising in QM when dealing with mixtures are avoided. From an empirical point of view, the predictions of the ESR model depend on some parameters, and for suitable parameter values they are very close to the predictions of QM in most cases. But the nonstandard representation of proper mixtures allows us to propose the scheme of an experiment that could check whether the predictions of QM or the predictions of the ESR model are correct. (paper)

  4. Determination of excited states of quantum systems by finite difference time domain method (FDTD) with supersymmetric quantum mechanics (SUSY-QM)

    Energy Technology Data Exchange (ETDEWEB)

    Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id [Physics Study Program, University of Mataram, Jln. Majapahit 62 Mataram, NTB (Indonesia)

    2016-04-19

    We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited energies of one-dimensional quantum systems. The theoretical basis of FDTD, SUSY-QM, a numerical algorithm and an illustrative example for a particle in a one-dimensional square-well potential are given in this paper. It is shown that the numerical results are in excellent agreement with theoretical results. Numerical errors produced by the SUSY-QM procedure were due to errors in the estimation of the superpotentials and supersymmetric partner potentials.
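    The SUSY-QM step above builds, from the ground state, a partner potential whose ground-state energy equals the first excited energy of the original problem. A minimal numerical sketch in units ħ = 2m = 1; the grid and the harmonic-oscillator check below are illustrative assumptions, not the paper's square-well example:

```python
import numpy as np

def partner_potential(x, psi0, e0):
    """Given a nodeless ground-state wavefunction psi0 on grid x
    (units hbar = 2m = 1, H = -d2/dx2 + V), build the superpotential
    W = -psi0'/psi0 = -(ln psi0)' and the SUSY partner potential
    V2 = W**2 + W' + E0, whose ground-state energy equals the first
    excited energy of the original potential V1 = W**2 - W' + E0."""
    w = -np.gradient(np.log(psi0), x, edge_order=2)
    dw = np.gradient(w, x, edge_order=2)
    return w, w**2 + dw + e0
```

    For the harmonic oscillator V = x**2 (psi0 = exp(-x**2/2), E0 = 1), the construction gives W = x and V2 = x**2 + 2, i.e. the same oscillator shifted up by the level spacing, as expected. The finite-difference estimate of W is exactly the kind of quantity whose numerical error the abstract identifies as the accuracy bottleneck.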

  5. A QM/MM refinement of an experimental DNA structure with metal-mediated base pairs.

    Science.gov (United States)

    Kumbhar, Sadhana; Johannsen, Silke; Sigel, Roland K O; Waller, Mark P; Müller, Jens

    2013-10-01

    A series of hybrid quantum mechanical/molecular mechanical (QM/MM) calculations was performed on models of a DNA duplex with artificial silver(I)-mediated imidazole base pairs. The optimized structures were compared to the original experimental NMR structure (Nat. Chem. 2 (2010) 229-234). The metal⋯metal distances are significantly shorter (~0.5 Å) in the QM/MM model than in the original NMR structure. As a result, argentophilic interactions are feasible between the silver(I) ions of neighboring metal-mediated base pairs. Using the computationally determined metal⋯metal distances, a re-refined NMR solution structure of the DNA duplex was obtained. In this new NMR structure, all experimental constraints remain fulfilled. The new NMR structure shows less deviation from the regular B-type conformation than the original one. This investigation shows that the application of QM/MM models to generate additional constraints to be used during NMR structural refinements represents an elegant approach to obtaining high-resolution NMR structures. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS

    Directory of Open Access Journals (Sweden)

    Z. Akbari

    2014-10-01

    Determining network paths is important for different purposes such as determining road traffic, the average speed of vehicles, and other network analyses. One of the required inputs is information about the network path. Nevertheless, the data collected by positioning systems often consist of discrete points. Converting these points to a network path has become a challenge for which different researchers have presented many solutions. This study investigates geometry-based methods for estimating network paths from the obtained points and improves an existing point-to-curve method. To this end, several geometry-based methods have been studied, their weaknesses described and illustrated, and an improved method proposed by applying conditions to the best of them.
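    A point-to-curve method of the kind this study improves projects each observed point onto the nearest segment of a candidate network path. A minimal planar sketch (the function name and 2-D geometry are assumptions; the paper's improved conditions are not reproduced here):

```python
import math

def snap_to_path(pt, path):
    """Point-to-curve matching: project pt onto each segment of a
    polyline (list of (x, y) vertices) and return the smallest distance
    together with the corresponding projected point."""
    best = (float("inf"), None)
    px, py = pt
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg2 = dx * dx + dy * dy
        # clamp the projection parameter to stay on the segment
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg2))
        qx, qy = x1 + t * dx, y1 + t * dy
        d = math.hypot(px - qx, py - qy)
        if d < best[0]:
            best = (d, (qx, qy))
    return best
```

    Pure point-to-curve matching can snap successive points to the wrong nearby road; the conditions added in the improved method address exactly such failure cases.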

  7. UO3 deactivation end point criteria

    International Nuclear Information System (INIS)

    Stefanski, L.D.

    1994-01-01

    The UO3 Deactivation End Point Criteria are necessary to facilitate the transfer of the UO3 Facility from the Office of Facility Transition and Management (EM-60) to the Office of Environmental Restoration (EM-40). The criteria were derived from a logical process for determining end points for the systems and spaces at the UO3 Facility based on the objectives, tasks, and expected future uses pertinent to each system or space. Furthermore, the established criteria meet the intent of, and support, the draft guidance for acceptance criteria prepared by EM-40, "U.S. Department of Energy Office of Environmental Restoration (EM-40) Decontamination and Decommissioning Guidance Document (Draft)." For the UO3 Facility, the overall objective of deactivation is to achieve a safe, stable and environmentally sound condition, suitable for an extended period, as quickly and economically as possible. Once deactivated, the facility is kept in its stable condition by means of a methodical surveillance and maintenance (S&M) program, pending ultimate decontamination and decommissioning (D&D). Deactivation work involves a range of tasks, such as removal of hazardous material, elimination or shielding of radiation fields, partial decontamination to permit access for inspection, and installation of monitors and alarms. It is important that the end point of each of these tasks be established clearly and in advance, for the following reasons: (1) End points must be such that the central element of the deactivation objective - to achieve stability - is unquestionably achieved. (2) Much of the deactivation work involves worker exposure to radiation or dangerous materials. This can be minimized by avoiding unnecessary work. (3) Each task is, in effect, competing for resources with other deactivation tasks and other facilities. By assuring that each task is appropriately bounded, DOE's overall resources can be used most fully and effectively

  8. End points for validating early warning scores in the context of rapid response systems

    DEFF Research Database (Denmark)

    Pedersen, N. E.; Oestergaard, D.; Lippert, A.

    2016-01-01

    INTRODUCTION: When investigating early warning scores and similar physiology-based risk stratification tools, death, cardiac arrest and intensive care unit admission are traditionally used as end points. A large proportion of the patients identified by these end points cannot be saved, even with optimal treatment. This could pose a limitation to studies using these end points. We studied current expert opinion on end points for validating tools for the identification of patients in hospital wards at risk of imminent critical illness. METHODS: The Delphi consensus methodology was used. We...

  9. Grid-Based Projector Augmented Wave (GPAW) Implementation of Quantum Mechanics/Molecular Mechanics (QM/MM) Electrostatic Embedding and Application to a Solvated Diplatinum Complex.

    Science.gov (United States)

    Dohn, A O; Jónsson, E Ö; Levi, G; Mortensen, J J; Lopez-Acevedo, O; Thygesen, K S; Jacobsen, K W; Ulstrup, J; Henriksen, N E; Møller, K B; Jónsson, H

    2017-12-12

    A multiscale density functional theory-quantum mechanics/molecular mechanics (DFT-QM/MM) scheme is presented, based on an efficient electrostatic coupling between the electronic density obtained from a grid-based projector augmented wave (GPAW) implementation of density functional theory and a classical potential energy function. The scheme is implemented in a general fashion and can be used with various choices for the descriptions of the QM or MM regions. Tests on H{sub 2}O clusters ranging from dimer to decamer show that no systematic energy errors are introduced by the coupling that exceed the differences in the QM and MM descriptions. Over 1 ns of liquid-water Born-Oppenheimer QM/MM molecular dynamics (MD) is sampled, combining 10 parallel simulations and showing consistent liquid water structure over the QM/MM border. The method is applied in extensive parallel MD simulations of an aqueous solution of the diplatinum complex [Pt{sub 2}(P{sub 2}O{sub 5}H{sub 2}){sub 4}]{sup 4-} (PtPOP), spanning a total time period of roughly half a nanosecond. An average Pt-Pt distance deviating only 0.01 Å from experimental results, and a ground-state Pt-Pt oscillation frequency deviating by <2% from experimental results, were obtained. The simulations highlight a remarkable harmonicity of the Pt-Pt oscillation, while also showing clear signs of Pt-H hydrogen bonding and directional coordination of water molecules along the Pt-Pt axis of the complex.

  10. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    International Nuclear Information System (INIS)

    Asada, Toshio; Koseki, Shiro; Ando, Kanta

    2015-01-01

    An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described; it accounts for polarization effects of the quantum mechanical (QM) region by using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and the energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method

  11. Atomistic insight into the catalytic mechanism of glycosyltransferases by combined quantum mechanics/molecular mechanics (QM/MM) methods.

    Science.gov (United States)

    Tvaroška, Igor

    2015-02-11

    Glycosyltransferases catalyze the formation of glycosidic bonds by assisting the transfer of a sugar residue from donors to specific acceptor molecules. Although structural and kinetic data have provided insight into mechanistic strategies employed by these enzymes, molecular modeling studies are essential for the understanding of glycosyltransferase-catalyzed reactions at the atomistic level. For such modeling, combined quantum mechanics/molecular mechanics (QM/MM) methods have emerged as crucial. These methods allow the modeling of enzymatic reactions by using quantum mechanical methods for the calculation of the electronic structure of active-site models while treating the remaining enzyme environment by faster molecular mechanics methods. Herein, the application of QM/MM methods to glycosyltransferase-catalyzed reactions is reviewed, and the insights from modeling of glycosyl transfer into the mechanisms and transition-state structures of both inverting and retaining glycosyltransferases are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. End points and assessments in esthetic dental treatment.

    Science.gov (United States)

    Ishida, Yuichi; Fujimoto, Keiko; Higaki, Nobuaki; Goto, Takaharu; Ichikawa, Tetsuo

    2015-10-01

    There are two key considerations for successful esthetic dental treatments. This article systematically describes the two key considerations: the end points of esthetic dental treatments and assessments of esthetic outcomes, which are also important for acquiring clinical skill in esthetic dental treatments. The end point and assessment of esthetic dental treatment were discussed through literature reviews and clinical practices. Before designing a treatment plan, the end point of dental treatment should be established. The section entitled "End point of esthetic dental treatment" discusses treatments for maxillary anterior teeth and the restoration of facial profile with prostheses. The process of assessing treatment outcomes entitled "Assessments of esthetic dental treatment" discusses objective and subjective evaluation methods. Practitioners should reach an agreement regarding desired end points with patients through medical interviews, and continuing improvements and developments of esthetic assessments are required to raise the therapeutic level of esthetic dental treatments. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  13. QM/MM investigations of organic chemistry oriented questions.

    Science.gov (United States)

    Schmidt, Thomas C; Paasche, Alexander; Grebner, Christoph; Ansorg, Kay; Becker, Johannes; Lee, Wook; Engels, Bernd

    2014-01-01

    About 35 years after its first suggestion, QM/MM has become the standard theoretical approach to investigate enzymatic structures and processes. The success is due to the ability of QM/MM to provide an accurate atomistic picture of enzymes and related processes. This picture can even be turned into a movie if nuclear dynamics are taken into account to describe enzymatic processes. In the field of organic chemistry, QM/MM methods are used to a much lesser extent, although almost all relevant processes happen in condensed matter or are influenced by complicated interactions between substrate and catalyst. Such effects have seemed less important for theoretical organic chemistry, since the influence of nonpolar solvents is rather weak and the effect of polar solvents can often be accurately described by continuum approaches. Catalytic processes (homogeneous and heterogeneous) can often be reduced to truncated model systems, which are so small that pure quantum-mechanical approaches can be employed. However, since QM/MM is becoming more and more efficient due to successes in software and hardware development, it is increasingly used in theoretical organic chemistry to study effects which result from the molecular nature of the environment. It is shown by many examples discussed in this review that the influence can be tremendous, even for nonpolar reactions. The importance of environmental effects in theoretical spectroscopy was already known. Due to its benefits, QM/MM can be expected to experience ongoing growth for the next decade. In the present chapter we give an overview of QM/MM developments and their importance in theoretical organic chemistry, and review applications which give impressions of the possibilities and the importance of the relevant effects. Since there are already a number of excellent reviews dealing with QM/MM, we will discuss the fundamental ingredients and developments of QM/MM very briefly, with a focus on very recent progress. For the applications we follow a similar

  14. QM Automata: A New Class of Restricted Quantum Membrane Automata.

    Science.gov (United States)

    Giannakis, Konstantinos; Singh, Alexandros; Kastampolidou, Kalliopi; Papalitsas, Christos; Andronikos, Theodore

    2017-01-01

    The term "Unconventional Computing" describes the use of non-standard methods and models in computing. It is a recently established field, with many interesting and promising results. In this work we combine notions from quantum computing with aspects of membrane computing to define what we call QM automata. Specifically, we introduce a variant of quantum membrane automata that operate in accordance with the principles of quantum computing. We explore the functionality and capabilities of the QM automata through indicative examples. Finally we suggest future directions for research on QM automata.

  15. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

    This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and are processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end point
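    One classical way to predict a calorimeter's end point, assuming the denoised signal approaches its equilibrium value exponentially, is to extrapolate the asymptote from three equally spaced samples. This is an illustrative textbook algorithm under that assumption, not necessarily one of the Rocky Flats algorithms:

```python
def predict_endpoint(y1, y2, y3):
    """Predict the asymptotic (end-point) value of an exponential
    approach y(t) = A - B * exp(-k * t) from three equally spaced
    samples, using the identity A = (y1*y3 - y2**2) / (y1 + y3 - 2*y2),
    which is exact for noise-free exponential data."""
    denom = y1 + y3 - 2.0 * y2
    if abs(denom) < 1e-12:
        raise ValueError("samples do not define an exponential approach")
    return (y1 * y3 - y2 * y2) / denom
```

    Because the formula amplifies measurement noise through the small denominator, noise reduction of the kind described above is essential before applying any such extrapolation.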

  16. Why the tautomerization of the G·C Watson-Crick base pair via the DPT does not cause point mutations during DNA replication? QM and QTAIM comprehensive analysis.

    Science.gov (United States)

    Brovarets', Ol'ha O; Hovorun, Dmytro M

    2014-01-01

    The ground-state tautomerization of the G·C Watson-Crick base pair by the double proton transfer (DPT) was comprehensively studied in vacuo and in the continuum with a low dielectric constant (ϵ = 4), corresponding to a hydrophobic interface of protein-nucleic acid interactions, using DFT and MP2 levels of quantum-mechanical (QM) theory and quantum theory "Atoms in molecules" (QTAIM). Based on the sweeps of the electron-topological, geometric, polar, and energetic parameters, which describe the course of the G·C ↔ G*·C* tautomerization (mutagenic tautomers of the G and C bases are marked with an asterisk) through the DPT along the intrinsic reaction coordinate (IRC), it was proved that it is, strictly speaking, a concerted asynchronous process both at the DFT and MP2 levels of theory, in which protons move with a small time gap in vacuum, while this time delay noticeably increases in the continuum with ϵ = 4. It was demonstrated using the conductor-like polarizable continuum model (CPCM) that the continuum with ϵ = 4 does not qualitatively affect the course of the tautomerization reaction. The DPT in the G·C Watson-Crick base pair occurs without any intermediates both in vacuum and in the continuum with ϵ = 4 at the DFT/MP2 levels of theory. The nine key points along the IRC of the G·C base pair tautomerization, which could be considered as electron-topological "fingerprints" of a concerted asynchronous process of the tautomerization via the DPT, have been identified and fully characterized. These key points have been used to define the reactant, transition state, and product regions of the DPT reaction in the G·C base pair. Analysis of the energetic characteristics of the H-bonds allows us to arrive at a definite conclusion that the middle N1H⋯N3/N3H⋯N1 and the lower N2H⋯O2/N2H⋯O2 parallel H-bonds in the G·C/G*·C* base pairs, respectively, are anticooperative, that is, the strengthening of the middle H-bond is accompanied

  17. On the difference between additive and subtractive QM/MM calculations

    Science.gov (United States)

    Cao, Lili; Ryde, Ulf

    2018-04-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e. the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended.

  19. Progresses in Ab Initio QM/MM Free Energy Simulations of Electrostatic Energies in Proteins: Accelerated QM/MM Studies of pKa, Redox Reactions and Solvation Free Energies

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-03-01

    overall perspective of the potential of QM/MM calculations in general evaluations of electrostatic free energies, pointing out that our approach should provide a very powerful and accurate tool to predict the electrostatics of not only solution but also enzymatic reactions, as well as the solvation free energies of even larger systems, such as nucleic acid bases incorporated into DNA.

  20. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. II

    International Nuclear Information System (INIS)

    Harangozo, M.; Jombik, J.; Schiller, P.; Toelgyessy, J.

    1981-01-01

    A method for the determination of citric, tartaric and undecylenic acids based on radiometric titration with 0.1 or 0.05 mol.l{sup -1} NaOH was developed. As the end-point indicator, the radioactive kryptonate of glass was used. The experimental technique, the results of the determinations, as well as other possible applications of the radioactive kryptonate of glass for end-point determination in alkalimetric analyses of officinal pharmaceuticals are discussed. (author)

  1. A step towards standardization: A method for end-point titer determination by fluorescence index of an automated microscope.

    Science.gov (United States)

    Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito

    2018-05-01

    Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, the high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand for the computer-aided immunofluorescence microscope. Previous studies proposed the quantification of fluorescence intensity as an alternative to the classical end-point titer evaluation. However, the different distribution of bright/dark light linked to the nature of the self-antigen and its location in the cells results in different mean fluorescence intensities. The aim of the present study was to correlate the Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA testing on HEp-2000 cells using the Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between quantitative reading of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale of F.I. to end-point titers for each recognized ANA pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows quantifying ANA titers by using just one sample dilution. It could represent a valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.

  2. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. I

    International Nuclear Information System (INIS)

    Toelgyessy, J.; Dillinger, P.; Harangozo, M.; Jombik, J.

    1980-01-01

    A method was developed for the determination of salicylic, acetylsalicylic and benzoic acids in officinal pharmaceuticals, based on radiometric titration with 0.1 mol·l⁻¹ NaOH. The end-point was detected with the aid of a radioactive glass kryptonate. After the end-point, the excess titrant attacks the glass surface layers, releasing ⁸⁵Kr and consequently decreasing the radioactivity of the kryptonate employed. The radioactive kryptonate used as an indicator was prepared by bombardment of glass with accelerated ⁸⁵Kr ions. The developed method is simple, accurate and correct. (author)

  3. PENGENALAN POLA SIDIK JARI RIDGE ENDING DAN BIFURCATION POINT DENGAN EKSTRAKSI MINUSI METODE CROSSING NUMBER

    Directory of Open Access Journals (Sweden)

    I Putu Dody Lesmana

    2012-09-01

    Abstract: Biometrics is the development of basic identification methods that use natural human characteristics. One of the most frequently used biometric systems is the fingerprint. Fingerprint matching can be achieved by extracting minutiae information, which yields ridge endings and bifurcation points. The technique offered in this paper is based on the extraction of minutiae from a fingerprint image using the crossing number (CN) method, scanning each ridge point to detect ridge endings and bifurcation points. False minutiae structures may be introduced into the fingerprint image by hole and spur structures, so it is necessary to test the validity of each minutiae point to eliminate false minutiae. Experiments were first conducted to assess how well the crossing number method extracts minutiae points. The minutiae validation algorithm was then evaluated to see how effective it is in detecting false minutiae. The experimental results using the crossing number method show that all ridge points corresponding to ridge endings and bifurcation points were detected successfully. However, there are a few cases where the extracted minutiae do not correspond to true minutiae points because of hole and spur structures. Applying the minutiae validation algorithm cancels out the false ridge endings created by spur structures and the false bifurcations created by hole structures.
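
    The crossing-number test described in this record can be sketched directly: CN is half the sum of absolute differences between successive pixels in the 8-neighbourhood of a ridge pixel (traversed cyclically), with CN = 1 marking a ridge ending and CN = 3 a bifurcation. The toy skeleton below is illustrative, not real fingerprint data.

```python
# Minimal crossing-number (CN) sketch on a binary ridge skeleton.
# CN == 1 -> ridge ending, CN == 3 -> bifurcation.

def crossing_number(img, r, c):
    # 8 neighbours in clockwise order, starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    p = [img[r + dr][c + dc] for dr, dc in offsets]
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2

def classify(img, r, c):
    """Return 'ending', 'bifurcation', or None for a ridge pixel."""
    if img[r][c] != 1:
        return None
    cn = crossing_number(img, r, c)
    return {1: "ending", 3: "bifurcation"}.get(cn)

# Toy skeleton: a ridge that ends at (1, 1) and bifurcates at (2, 3)
skel = [
    [0, 0, 0, 0, 1, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
```

    In a full pipeline this classification runs over every skeleton pixel, followed by the validation step the paper describes to discard minutiae arising from holes and spurs.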

  4. Structural analysis of recombinant human protein QM

    International Nuclear Information System (INIS)

    Gualberto, D.C.H.; Fernandes, J.L.; Silva, F.S.; Saraiva, K.W.; Affonso, R.; Pereira, L.M.; Silva, I.D.C.G.

    2012-01-01

    Full text: The ribosomal protein QM belongs to a family of ribosomal proteins that is highly conserved from yeast to humans. The presence of the QM protein is necessary for joining of the 60S and 40S subunits in a late step of the initiation of mRNA translation. Although the exact extra-ribosomal functions of QM are not yet fully understood, it has been identified as a putative tumor suppressor. The protein has been reported to interact with the transcription factor c-Jun and thereby prevent c-Jun from activating genes involved in cellular growth. In this study, the human QM protein was expressed in a bacterial system in soluble form, and its structure was analyzed by circular dichroism and fluorescence spectroscopy. The circular dichroism results showed that the protein contains less alpha helix than beta sheet, as described in the literature. The QM protein does not contain a leucine zipper region; however, zinc ions are necessary for the binding of QM to c-Jun. We therefore analyzed the relationship between the removal of zinc ions and the folding of the protein. Preliminary fluorescence results showed a gradual increase in fluorescence upon addition of increasing concentrations of EDTA, suggesting that zinc is important for the tertiary structure of the protein. Further studies are under way to better understand these results. (author)

  5. Consensus Statement on Diagnostic End Points for Infant Tuberculosis Vaccine Trials

    NARCIS (Netherlands)

    Hatherill, Mark; Verver, Suzanne; Mahomed, Hassan; Barker, Lew; Behr, Marcel; Cardenas, Vicky; Eisele, Bernd; Douoguih, Macaya; Evans, Thomas G.; Eskola, Juhani; Fourie, Bernard; Grewal, Harleen; Grode, Leander; Hawkridge, Tony; Hesseling, Anneke; Hussey, Gregory; Kiringa, Grace; Landry, Bernard; Lockhart, Stephen; Marais, Ben; Måseide, Kårstein; Mayanja, Harriet; McClain, Bruce; McShane, Helen; Moyo, Sizulu; Ofori, Opokua; Parida, Shreemanta K.; Ryall, Robert P.; Sacarlal, Jahit; Sadoff, Jerry; Shea, Jacqui; Tameris, Michele; van Rie, Annelies; von Reyn, C. Fordham; Wajja, Anne; Walker, Bob; Walzl, Gerhard; Wilkinson, Robert J.

    2012-01-01

    Background. Definition of clinical trial end points for childhood tuberculosis is hindered by lack of a standard case definition. We aimed to identify areas of consensus or debate on potential end points for tuberculosis vaccine trials among human immunodeficiency virus-uninfected children. Methods.

  6. Validation of intermediate end points in cancer research.

    Science.gov (United States)

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
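
    As a hedged illustration of the validation criterion above, the attributable proportion can be computed with the standard attributable-fraction formula AP = p(RR - 1)/(1 + p(RR - 1)), where p is the prevalence of the intermediate end point and RR its relative risk for cancer. The numbers and the 0.9 cut-off below are assumptions for the sketch, not values from the paper.

```python
# Hedged sketch of the attributable-proportion criterion: AP close to
# 1.0 supports valid surrogacy, AP close to 0 argues against it.
# Formula: standard population attributable fraction; inputs illustrative.

def attributable_proportion(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def is_valid_surrogate(ap, threshold=0.9):
    """Illustrative cut-off for 'close to 1.0'; the paper sets no number."""
    return ap >= threshold

# A common intermediate end point with a strong cancer association
ap_strong = attributable_proportion(0.5, 21.0)   # ~0.91
# A rare marker with a weak association
ap_weak = attributable_proportion(0.1, 2.0)      # ~0.09
```

    The intermediate case the authors emphasize, AP near neither 1.0 nor 0, is exactly where the additional mediation requirement in the abstract comes into play.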

  7. A QM/MM–Based Computational Investigation on the Catalytic Mechanism of Saccharopine Reductase

    OpenAIRE

    Almasi, Joel N.; Bushnell, Eric A.C.; Gauld, James W.

    2011-01-01

    Saccharopine reductase from Magnaporthe grisea, an NADPH-containing enzyme in the α-aminoadipate pathway, catalyses the formation of saccharopine, a precursor to L-lysine, from the substrates glutamate and α-aminoadipate-δ-semialdehyde. Its catalytic mechanism has been investigated using quantum mechanics/molecular mechanics (QM/MM) ONIOM-based approaches. In particular, the overall catalytic pathway has been elucidated and the effects of electron correlation and the anisotropic polar protein...

  8. Isolation and identification of the immune-relevant ribosomal protein L10 (RPL10/QM-like gene) from the large yellow croaker Pseudosciaena crocea (Pisces: Sciaenidae).

    Science.gov (United States)

    Chen, X; Su, Y Q; Wang, J; Liu, M; Niu, S F; Zhong, S P; Qiu, F

    2012-10-15

    In order to investigate the immune role of ribosomal protein L10 (RPL10/QM-like gene) in marine fish, we challenged the large yellow croaker Pseudosciaena (= Larimichthys) crocea, the most important marine fish culture species in China, by injection with a mixture of the bacteria Vibrio harveyi and V. parahaemolyticus (3:1 in volume). Microarray analysis and real-time PCR were performed 24 and 48 h post-challenge to isolate and identify the QM-like gene from the gill P. crocea (designated PcQM). The expression level of the PcQM gene did not changed significantly at 24 h post-challenge, but was significantly downregulated at 48 h post-challenge, suggesting that the gene had an immune-modulatory effect in P. crocea. Full-length PcQM cDNA and genomic sequences were obtained by rapid amplification of cDNA ends (RACE)-PCR. The sequence of the PcQM gene clustered together with those of other QM-like genes from other aquatic organisms, indicating that the QM-like gene is highly conserved in teleosts.

  9. End-point sharpness in thermometric titrimetry.

    Science.gov (United States)

    Tyrrell, H J

    1967-07-01

    It is shown that the sharpness of an end-point in a thermometric titration in which the simple reaction A + B ⇌ AB takes place depends on the product K·c(A), where K is the equilibrium constant for the reaction and c(A) is the total concentration of the titrand A in the reaction mixture. The end-point is sharp if (i) the enthalpy change of the reaction is not negligible, and (ii) K·c(A) > 10³. This shows that it should, for example, be possible to titrate 0.1 M acid with pK(A) = 10 using a thermometric end-point. Some aspects of thermometric titrimetry when K·c(A) < 10³ are also considered.
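
    The sharpness criterion can be checked numerically. For titration of a weak acid HA with strong base, HA + OH⁻ ⇌ A⁻ + H₂O has K = Ka/Kw, so the paper's own example (0.1 M acid, pKa = 10) gives K·c(A) = 10⁴ × 0.1 = 10³, right at the threshold. A minimal sketch:

```python
# Numerical check of the thermometric sharpness criterion K*c(A) > 1e3
# for acid-base titration with strong base: K = Ka / Kw.

def titration_K(pKa, pKw=14.0):
    """Equilibrium constant of HA + OH- <=> A- + H2O."""
    return 10.0 ** (pKw - pKa)

def end_point_is_sharp(pKa, c_acid, threshold=1e3):
    return titration_K(pKa) * c_acid >= threshold

# 0.1 M acid with pKa = 10: K = 1e4, K*c = 1e3 -> just sharp enough;
# a pKa = 12 acid at the same concentration fails the criterion.
```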

  10. Theoretical study of electron transfer mechanism in biological systems with a QM (MRSCI+DFT)/MM method

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Toshikazu [Research Program for Computational Science, RIKEN 2-1, Hirosawa, Wako, Saitama 351-0198 (Japan)

    2007-07-15

    The goal of this project is to understand the charge-separation mechanisms in biological systems using molecular orbital theories. Specifically, the charge separation in the photosynthetic reaction center is focused on, since its efficiency in the use of solar energy is extraordinary and the reason for this remains unknown. Here, a QM/MM theoretical scheme is employed to take into account the effects of the surrounding proteins on the pigments. To describe the relevant excited electronic structures, a unified MRSCI+DFT theory has been newly developed. For atoms in the MM space, a new sampling method based on statistical physics has also been created. Using this theoretical framework, the excited and positively charged states of the special pair, i.e., the chlorophyll dimer, are planned to be calculated this year.

  11. Theoretical study of electron transfer mechanism in biological systems with a QM (MRSCI+DFT)/MM method

    International Nuclear Information System (INIS)

    Takada, Toshikazu

    2007-01-01

    The goal of this project is to understand the charge-separation mechanisms in biological systems using molecular orbital theories. Specifically, the charge separation in the photosynthetic reaction center is focused on, since its efficiency in the use of solar energy is extraordinary and the reason for this remains unknown. Here, a QM/MM theoretical scheme is employed to take into account the effects of the surrounding proteins on the pigments. To describe the relevant excited electronic structures, a unified MRSCI+DFT theory has been newly developed. For atoms in the MM space, a new sampling method based on statistical physics has also been created. Using this theoretical framework, the excited and positively charged states of the special pair, i.e., the chlorophyll dimer, are planned to be calculated this year

  12. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. II. Citric, tartaric, undecylenic acids

    Energy Technology Data Exchange (ETDEWEB)

    Harangozo, M.; Jombik, J.; Schiller, P. (Komenskeho Univ., Bratislava (Czechoslovakia). Farmaceuticka Fakulta); Toelgyessy, J. (Slovenska Vysoka Skola Technicka, Bratislava (Czechoslovakia). Chemickotechnologicka Fakulta)

    1981-01-01

    A method was developed for the determination of citric, tartaric and undecylenic acids based on radiometric titration with 0.1 or 0.05 mol·l⁻¹ NaOH. Radioactive glass kryptonate was used as the end-point indicator. The experimental technique, the results of the determinations, and other possible applications of radioactive glass kryptonate for end-point detection in alkalimetric analyses of officinal pharmaceuticals are discussed.

  13. Grid-Based Projector Augmented Wave (GPAW) Implementation of Quantum Mechanics/Molecular Mechanics (QM/MM) Electrostatic Embedding and Application to a Solvated Diplatinum Complex

    DEFF Research Database (Denmark)

    Dohn, A. O.; Jónsson, E. Ö.; Levi, Gianluca

    2017-01-01

    A multiscale density functional theory-quantum mechanics/molecular mechanics (DFT-QM/MM) scheme is presented, based on an efficient electrostatic coupling between the electronic density obtained from a grid-based projector augmented wave (GPAW) implementation of density functional theory...... and a classical potential energy function. The scheme is implemented in a general fashion and can be used with various choices for the descriptions of the QM or MM regions. Tests on H2O clusters, ranging from dimer to decamer, show that no systematic energy errors are introduced by the coupling that exceeds...

  14. Modeling Chemical Reactions by QM/MM Calculations: The Case of the Tautomerization in Fireflies Bioluminescent Systems.

    Science.gov (United States)

    Berraud-Pache, Romain; Garcia-Iriepa, Cristina; Navizet, Isabelle

    2018-01-01

    In less than half a century, the hybrid QM/MM method has become one of the most widely used techniques for modeling molecules embedded in a complex environment. A well-known application of the QM/MM method is to biological systems: nowadays one can understand how enzymatic reactions work or compute spectroscopic properties, such as the wavelength of emission. Here, we have tackled the issue of modeling chemical reactions inside proteins. We have studied a bioluminescent system, that of fireflies, and examined whether a keto-enol tautomerization is possible inside the protein. The two tautomers are candidates for the emissive molecule of the bioluminescence, but no consensus has been reached. One hypothesis is that a keto-enol tautomerization occurs, as has already been observed in water. A joint approach combining extensive MD simulations with the computation of key structures, such as transition states (TS), using QM/MM calculations is presented in this publication. We also describe the procedure and the difficulties met during this approach, in order to provide a guide for treating this kind of chemical reaction with QM/MM methods.

  15. Modelling chemical reactions by QM/MM calculations: the case of the tautomerization in fireflies bioluminescent systems

    Science.gov (United States)

    Berraud-Pache, Romain; Garcia-Iriepa, Cristina; Navizet, Isabelle

    2018-04-01

    In less than half a century, the hybrid QM/MM method has become one of the most widely used techniques for modelling molecules embedded in a complex environment. A well-known application of the QM/MM method is to biological systems: nowadays one can understand how enzymatic reactions work or compute spectroscopic properties, such as the wavelength of emission. Here, we have tackled the issue of modelling chemical reactions inside proteins. We have studied a bioluminescent system, that of fireflies, and examined whether a keto-enol tautomerization is possible inside the protein. The two tautomers are candidates for the emissive molecule of the bioluminescence, but no consensus has been reached. One hypothesis is that a keto-enol tautomerization occurs, as has already been observed in water. A joint approach combining extensive MD simulations with the computation of key structures, such as transition states (TS), using QM/MM calculations is presented in this publication. We also describe the procedure and the difficulties met during this approach, in order to provide a guide for treating this kind of chemical reaction with QM/MM methods.

  16. Measurement of β-decay end point energy with planar HPGe detector

    Science.gov (United States)

    Bhattacharjee, T.; Pandit, Deepak; Das, S. K.; Chowdhury, A.; Das, P.; Banerjee, D.; Saha, A.; Mukhopadhyay, S.; Pal, S.; Banerjee, S. R.

    2014-12-01

    The β–γ coincidence measurement has been performed with a segmented planar Hyper-Pure Germanium (HPGe) detector and a single coaxial HPGe detector to determine the end point energies of nuclear β-decays. The experimental end point energies have been determined for some of the known β-decays in ¹⁰⁶Rh → ¹⁰⁶Pd. The end point energies corresponding to three weak branches of the ¹⁰⁶Rh → ¹⁰⁶Pd decay have been measured for the first time. The γ-ray and β-particle responses of the planar HPGe detector were simulated using the Monte Carlo-based code GEANT3. The experimentally obtained β spectra were successfully reproduced by the simulation.
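
    End-point energies of allowed β branches are conventionally extracted by linearizing the spectrum in a Kurie plot; the record does not spell out its analysis, so the following is a generic sketch. For an allowed shape N(T) ∝ pE(Q − T)², the Kurie function K(T) = √(N/(pE)) is linear in the kinetic energy T and crosses zero at the end point Q. The Fermi function is omitted for simplicity, and the 3541 keV value below is only a stand-in for the dominant ¹⁰⁶Rh branch.

```python
import math

M_E = 511.0  # electron rest energy, keV

def allowed_spectrum(T, Q):
    """Toy allowed beta shape N(T) ~ p*E*(Q - T)^2, Fermi function omitted."""
    E = T + M_E                        # total energy, keV
    p = math.sqrt(E * E - M_E * M_E)   # momentum, keV/c
    return p * E * (Q - T) ** 2

def kurie_endpoint(Ts, Ns):
    """Fit K(T) = sqrt(N/(p*E)) with a straight line; return its zero."""
    Ks = [math.sqrt(N / (math.sqrt((T + M_E) ** 2 - M_E ** 2) * (T + M_E)))
          for T, N in zip(Ts, Ns)]
    n = len(Ts)
    mt, mk = sum(Ts) / n, sum(Ks) / n
    slope = sum((t - mt) * (k - mk) for t, k in zip(Ts, Ks)) / \
            sum((t - mt) ** 2 for t in Ts)
    return mt - mk / slope  # T where the fitted line crosses K = 0

Q_true = 3541.0  # stand-in end point, keV
Ts = [100.0 * i for i in range(1, 35)]
Ns = [allowed_spectrum(T, Q_true) for T in Ts]
```

    With real coincidence-gated spectra, the same linear extrapolation is applied to the measured counts after detector-response corrections such as those the authors obtained from GEANT3.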

  17. Thermochemical Fragment Energy Method for Biomolecules: Application to a Collagen Model Peptide.

    Science.gov (United States)

    Suárez, Ernesto; Díaz, Natalia; Suárez, Dimas

    2009-06-09

    Herein, we first review different methodologies that have been proposed for computing the quantum mechanical (QM) energy and other molecular properties of large systems through a linear combination of subsystem (fragment) energies, which can be computed using conventional QM packages. Particularly, we emphasize the similarities among the different methods that can be considered as variants of the multibody expansion technique. Nevertheless, on the basis of thermochemical arguments, we propose yet another variant of the fragment energy methods, which could be useful for, and readily applicable to, biomolecules using either QM or hybrid quantum mechanical/molecular mechanics methods. The proposed computational scheme is applied to investigate the stability of a triple-helical collagen model peptide. To better address the actual applicability of the fragment QM method and to properly compare with experimental data, we compute average energies by carrying out single-point fragment QM calculations on structures generated by a classical molecular dynamics simulation. The QM calculations are done using a density functional level of theory combined with an implicit solvent model. Other free-energy terms such as attractive dispersion interactions or thermal contributions are included using molecular mechanics. The importance of correcting both the intermolecular and intramolecular basis set superposition error (BSSE) in the QM calculations is also discussed in detail. On the basis of the favorable comparison of our fragment-based energies with experimental data and former theoretical results, we conclude that the fragment QM energy strategy could be an interesting addition to the multimethod toolbox for biomolecular simulations in order to investigate those situations (e.g., interactions with metal clusters) that are beyond the range of applicability of common molecular mechanics methods.
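
    The multibody expansion these fragment methods share can be written, truncated at two-body terms, as E ≈ Σᵢ Eᵢ + Σᵢ﹤ⱼ (Eᵢⱼ − Eᵢ − Eⱼ). The sketch below uses a toy pairwise "energy" in place of real fragment QM calculations; because the toy model has no three-body terms, the truncated expansion reproduces it exactly.

```python
from itertools import combinations

def two_body_estimate(fragments, energy):
    """Two-body multibody expansion; energy() takes a tuple of fragments."""
    e1 = {f: energy((f,)) for f in fragments}
    total = sum(e1.values())
    for a, b in combinations(fragments, 2):
        total += energy((a, b)) - e1[a] - e1[b]
    return total

# Toy "QM" energy: per-fragment self terms plus fixed pair interactions,
# standing in for real fragment calculations. Values are illustrative.
SELF = {"A": -1.0, "B": -2.0, "C": -1.5}
PAIR = {frozenset("AB"): -0.3, frozenset("AC"): -0.1, frozenset("BC"): -0.2}

def toy_energy(frags):
    e = sum(SELF[f] for f in frags)
    for a, b in combinations(frags, 2):
        e += PAIR[frozenset((a, b))]
    return e
```

    For a real system the expansion is only approximate, which is why the authors stress thermochemical consistency and BSSE corrections when assembling fragment energies.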

  18. CaFE: a tool for binding affinity prediction using end-point free energy methods.

    Science.gov (United States)

    Liu, Hui; Hou, Tingjun

    2016-07-15

    Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among the methods for binding affinity prediction, end-point approaches such as MM/PBSA and LIE have been widely used because they achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin. It is a VMD plugin written in Tcl, and its usage is platform-independent. Contact: tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
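
    The end-point idea behind MM/PBSA-style tools reduces to averaging a per-frame difference, ΔG_bind ≈ ⟨G(complex) − G(receptor) − G(ligand)⟩, over snapshots of a single MD trajectory. The sketch below uses made-up per-frame energies and is not CaFE's actual API.

```python
# Single-trajectory end-point average: the per-frame G values would
# come from MM energies plus PB/SA solvation terms in a real workflow.
# All numbers here are illustrative (kcal/mol).

def binding_free_energy(frames):
    """frames: iterable of (G_complex, G_receptor, G_ligand) per frame."""
    diffs = [gc - gr - gl for gc, gr, gl in frames]
    return sum(diffs) / len(diffs)

frames = [
    (-1250.0, -900.0, -340.0),
    (-1248.5, -899.0, -339.5),
    (-1251.0, -901.5, -340.5),
]
dg_bind = binding_free_energy(frames)  # ~ -9.7 kcal/mol
```

    Because only the two end states are sampled, the approach avoids the alchemical intermediate states of FEP/TI, which is the accuracy-cost trade-off the abstract refers to.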

  19. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. I. Salicylic, acetylsalicylic, benzoic acids

    Energy Technology Data Exchange (ETDEWEB)

    Toelgyessy, J; Dillinger, P [Slovenska Vysoka Skola Technicka, Bratislava (Czechoslovakia). Chemickotechnologicka Fakulta; Harangozo, M; Jombik, J [Komenskeho Univ., Bratislava (Czechoslovakia). Farmaceuticka Fakulta

    1980-01-01

    A method was developed for the determination of salicylic, acetylsalicylic and benzoic acids in officinal pharmaceuticals, based on radiometric titration with 0.1 mol·l⁻¹ NaOH. The end-point was detected with the aid of a radioactive glass kryptonate. After the end-point, the excess titrant attacks the glass surface layers, releasing ⁸⁵Kr and consequently decreasing the radioactivity of the kryptonate employed. The radioactive kryptonate used as an indicator was prepared by bombardment of glass with accelerated ⁸⁵Kr ions. The developed method is simple, accurate and correct.

  20. Measurement of β-decay end point energy with planar HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharjee, T., E-mail: btumpa@vecc.gov.in [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Pandit, Deepak [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Das, S.K. [RCD-BARC, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Chowdhury, A.; Das, P. [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Banerjee, D. [RCD-BARC, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Saha, A.; Mukhopadhyay, S.; Pal, S.; Banerjee, S.R. [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India)

    2014-12-11

    The β–γ coincidence measurement has been performed with a segmented planar Hyper-Pure Germanium (HPGe) detector and a single coaxial HPGe detector to determine the end point energies of nuclear β-decays. The experimental end point energies have been determined for some of the known β-decays in ¹⁰⁶Rh → ¹⁰⁶Pd. The end point energies corresponding to three weak branches in the ¹⁰⁶Rh → ¹⁰⁶Pd decay have been measured for the first time. The γ ray and β particle responses for the planar HPGe detector were simulated using the Monte Carlo based code GEANT3. The experimentally obtained β spectra were successfully reproduced with the simulation.

  1. Can tautomerization of the A·T Watson-Crick base pair via double proton transfer provoke point mutations during DNA replication? A comprehensive QM and QTAIM analysis.

    Science.gov (United States)

    Brovarets, Ol'ha O; Hovorun, Dmytro M

    2014-01-01

    Trying to answer the question posed in the title, we have carried out a detailed theoretical investigation of the biologically important mechanism of the tautomerization of the A·T Watson-Crick DNA base pair, information that is hard to establish experimentally. By combining theoretical investigations at the MP2 and density functional theory levels of QM theory with quantum theory of atoms in molecules (QTAIM) analysis, the tautomerization of the A·T Watson-Crick base pair by double proton transfer (DPT) was comprehensively studied in vacuo and in a continuum with a low dielectric constant (ϵ = 4), corresponding to the hydrophobic interfaces of protein-nucleic acid interactions. Based on sweeps of the electron-topological, geometric, and energetic parameters that describe the course of the tautomerization along its intrinsic reaction coordinate (IRC), it was proved that the A·T → A*·T* tautomerization through the DPT is a concerted (i.e., without an intermediate) and asynchronous (i.e., the protons move with a time gap) process. The limiting stage of this phenomenon is the final PT along the N6H⋯O4 hydrogen bond (H-bond). The continuum with ϵ = 4 does not qualitatively affect the course of the tautomerization reaction: as in vacuo, it proceeds via a concerted asynchronous process with the same structure of the transition state (TS). For the first time, the nine key points along the IRC of the A·T base pair tautomerization, which can be considered electron-topological "fingerprints" of a concerted asynchronous tautomerization via the DPT, have been identified and fully characterized. These nine key points have been used to define the reactant, TS, and product regions of the DPT in the A·T base pair. Considering the energy dependence of each of the three H-bonds that stabilize the Watson-Crick and Löwdin's base pairs along the IRC of the tautomerization, it was found that all these H

  2. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, the clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year, in models with fewer health states; models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and the event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.
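
    One recurring technical step when trial event rates enter a Markov model with a different cycle length is the rate-probability conversion p = 1 − exp(−rt); the rates below are illustrative, not taken from any of the reviewed studies.

```python
import math

# Standard health-economics conversion between a constant event rate
# and a per-cycle transition probability. Illustrative inputs only.

def rate_to_probability(annual_rate, cycle_years):
    """p = 1 - exp(-r * t) for a constant underlying rate r."""
    return 1.0 - math.exp(-annual_rate * cycle_years)

def probability_to_rate(p, cycle_years):
    """Inverse conversion: r = -ln(1 - p) / t."""
    return -math.log(1.0 - p) / cycle_years

p_annual = rate_to_probability(0.05, 1.0)   # ~0.0488 per 1-year cycle
```

    Note that this conversion assumes a constant underlying rate, which is precisely the low-risk situation where the review recommends the simpler modeling approach.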

  3. FT-Raman and QM/MM study of the interaction between histamine and DNA

    International Nuclear Information System (INIS)

    Ruiz-Chica, A.J.; Soriano, A.; Tunon, I.; Sanchez-Jimenez, F.M.; Silla, E.; Ramirez, F.J.

    2006-01-01

    The interaction between histamine and highly polymerized calf-thymus DNA has been investigated using FT-Raman spectroscopy and the hybrid QM/MM (quantum mechanics/molecular mechanics) methodology. Raman spectra of solutions containing histamine and calf-thymus DNA at different molar ratios were recorded. The solutions were prepared at physiological pH and ionic strength, using both natural and heavy water as the solvent. Analysis of the changes in the DNA Raman spectra upon adding different concentrations of histamine allowed us to identify the reactive sites of DNA and histamine, which were used to build two minor-groove binding models and one intercalation model. These were then used as starting points for the QM/MM theoretical study; however, minimum-energy structures were reached only for the two minor-groove models. For each optimized structure, we calculated analytical force constants for the histamine molecule in order to perform the vibrational dynamics. The normal-mode descriptions allowed us to compare the calculated wavenumbers for DNA-interacting histamine with those measured in the Raman spectra of DNA-histamine solutions

  4. End point control of an actinide precipitation reactor

    International Nuclear Information System (INIS)

    Muske, K.R.

    1997-01-01

    The actinide precipitation reactors in the nuclear materials processing facility at Los Alamos National Laboratory are used to remove actinides and other heavy metals from the effluent streams generated during the purification of plutonium. These effluent streams consist of hydrochloric acid solutions, ranging from one to five molar in concentration, in which actinides and other metals are dissolved. The actinides present are plutonium and americium, with typical loadings of one to five grams per liter. The most prevalent heavy metals are iron, chromium, and nickel, originating from stainless steel. Removal of these metals from solution is accomplished by hydroxide precipitation during neutralization of the effluent. An end point control algorithm for the semi-batch actinide precipitation reactors at Los Alamos National Laboratory is described. The algorithm is based on an equilibrium solubility model of the chemical species in solution. This model is used to predict the amount of base hydroxide necessary to reach the end point of the actinide precipitation reaction, and the model parameters are updated by on-line pH measurements
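
    A hedged sketch of the kind of estimate such an end-point algorithm performs: predicting the base needed to neutralize the free acid and precipitate the dissolved metals as hydroxides. The real controller uses a full equilibrium solubility model plus on-line pH feedback; the species list, stoichiometries, and loadings below are illustrative only.

```python
# Stoichiometric lower bound on NaOH demand for a semi-batch cycle:
# neutralize free HCl, then supply OH- for metal hydroxide formation.
# Illustrative numbers, not plant data.

def base_required(acid_molarity, volume_l, metals):
    """metals: list of (moles, OH- stoichiometry), e.g. Pu(IV) -> 4."""
    free_acid = acid_molarity * volume_l            # mol H+ to neutralize
    precipitation = sum(n * z for n, z in metals)   # mol OH- to precipitate
    return free_acid + precipitation                # mol NaOH

# Hypothetical batch: 100 L of 2 M HCl with Pu(IV) and Fe(III) loadings
pu_moles = 0.0084 * 100   # ~2 g/L Pu as mol/L times volume
fe_moles = 0.01 * 100
naoh_moles = base_required(2.0, 100.0, [(pu_moles, 4), (fe_moles, 3)])
```

    In practice the acid term dominates, which is why the on-line pH measurements described above are used to refine the model parameters near the end point.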

  5. Physics-based scoring of protein-ligand interactions: explicit polarizability, quantum mechanics and free energies.

    Science.gov (United States)

    Bryce, Richard A

    2011-04-01

    The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.

  6. Performance assessment of semiempirical molecular orbital methods in describing halogen bonding: quantum mechanical and quantum mechanical/molecular mechanical-molecular dynamics study.

    Science.gov (United States)

    Ibrahim, Mahmoud A A

    2011-10-24

    The performance of semiempirical molecular-orbital methods--MNDO, MNDO-d, AM1, RM1, PM3 and PM6--in describing halogen bonding was evaluated, and the results were compared with molecular mechanical (MM) and quantum mechanical (QM) data. Three types of performance were assessed: (1) geometrical optimizations and binding energy calculations for 27 halogen-containing molecules complexed with various Lewis bases (Two of the tested methods, AM1 and RM1, gave results that agree with the QM data.); (2) charge distribution calculations for halobenzene molecules, determined by calculating the solvation free energies of the molecules relative to benzene in explicit and implicit generalized Born (GB) solvents (None of the methods gave results that agree with the experimental data.); and (3) appropriateness of the semiempirical methods in the hybrid quantum-mechanical/molecular-mechanical (QM/MM) scheme, investigated by studying the molecular inhibition of CK2 protein by eight halobenzimidazole and -benzotriazole derivatives using hybrid QM/MM molecular-dynamics (MD) simulations with the inhibitor described at the QM level by the AM1 method and the rest of the system described at the MM level. The pure MM approach, with inclusion of an extra point of positive charge on the halogen atom, gave better results than the hybrid QM/MM approach involving the AM1 method. Also, in comparison with the pure MM-GBSA (generalized Born surface area) binding energies and experimental data, the calculated QM/MM-GBSA binding energies of the inhibitors were improved by replacing the G(GB,QM/MM) solvation term with the corresponding G(GB,MM) term.
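
    The extra-point treatment mentioned above can be illustrated with a toy electrostatic calculation. The sketch below is not the study's force field: the collinear geometry and charges are invented for illustration, and only the standard MM Coulomb form is assumed. It shows how a small positive site placed on the C-X axis beyond the halogen (a sigma-hole surrogate) can turn an otherwise repulsive halogen-acceptor point-charge interaction attractive.

```python
import math

COULOMB_K = 332.06  # kcal*mol^-1*A*e^-2, the usual MM electrostatics constant

def coulomb(q1, q2, r):
    """Pairwise Coulomb energy in kcal/mol (charges in e, distance in angstroms)."""
    return COULOMB_K * q1 * q2 / r

# Hypothetical collinear geometry along a C-Br axis (positions in angstroms):
# Br at 1.9, extra point 0.3 A beyond Br on the axis, acceptor site at 5.0.
q_br, q_ep, q_acc = -0.05, +0.15, -0.40   # illustrative charges only
x_br, x_ep, x_acc = 1.9, 2.2, 5.0

plain = coulomb(q_br, q_acc, x_acc - x_br)            # repulsive: both sites negative
with_ep = plain + coulomb(q_ep, q_acc, x_acc - x_ep)  # extra point flips the sign

print(f"without extra point: {plain:+.2f} kcal/mol")
print(f"with extra point:    {with_ep:+.2f} kcal/mol")
```

    A point-charge-only halogen cannot reproduce the anisotropic sigma-hole, which is why the abstract's pure MM model adds the extra site rather than relying on the atomic charge alone.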

  7. End points for adjuvant therapy trials: has the time come to accept disease-free survival as a surrogate end point for overall survival?

    Science.gov (United States)

    Gill, Sharlene; Sargent, Daniel

    2006-06-01

    The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.

  8. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many accessories of metal stents and electrical equipment mounted on the tunnel walls, causes the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby reducing the accuracy of modeling and deformation monitoring. This paper proposed a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data was first projected onto a horizontal plane, and a searching algorithm was given to extract the edge points of both sides, which were used further to fit the tunnel central axis. Along the axis the point cloud was segmented regionally, and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enabled the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model-based method could effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around deformation of tunnel sections in routine subway operation and maintenance.
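
    The residual-based filtering idea can be sketched compactly. The code below is a simplified stand-in, not the paper's algorithm: it uses a circular cross-section in place of the elliptic cylindrical fit and a median radius as the robust model estimate, and `filter_section` with its tolerance are invented names/values.

```python
import math
import statistics

def filter_section(points, tol, max_iter=10):
    """Iteratively drop points whose radial residual against a fitted
    circular cross-section exceeds tol; a simplified stand-in for the
    paper's elliptic cylindrical surface fit."""
    kept = list(points)
    for _ in range(max_iter):
        cx = statistics.fmean(p[0] for p in kept)   # section center estimate
        cy = statistics.fmean(p[1] for p in kept)
        radii = [math.hypot(x - cx, y - cy) for x, y in kept]
        r = statistics.median(radii)                # robust radius estimate
        inliers = [p for p, ri in zip(kept, radii) if abs(ri - r) <= tol]
        if len(inliers) == len(kept):
            break                                   # converged: nothing removed
        kept = inliers
    return kept

# Synthetic cross-section: wall points on a unit circle plus two "non-points"
# (a fixture outside the wall and a bracket sticking inward).
wall = [(math.cos(0.1 * i), math.sin(0.1 * i)) for i in range(60)]
clean = filter_section(wall + [(2.0, 2.0), (0.1, 0.0)], tol=0.2)
```

    The real method fits an ellipse per segment along the recovered axis, but the reject-by-residual-and-refit loop is the same.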

  9. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    Science.gov (United States)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  10. A QM/MM–Based Computational Investigation on the Catalytic Mechanism of Saccharopine Reductase

    Directory of Open Access Journals (Sweden)

    James W. Gauld

    2011-10-01

    Full Text Available Saccharopine reductase from Magnaporthe grisea, an NADPH-containing enzyme in the α-aminoadipate pathway, catalyses the formation of saccharopine, a precursor to L-lysine, from the substrates glutamate and α-aminoadipate-δ-semialdehyde. Its catalytic mechanism has been investigated using quantum mechanics/molecular mechanics (QM/MM) ONIOM-based approaches. In particular, the overall catalytic pathway has been elucidated and the effects of electron correlation and the anisotropic polar protein environment have been examined via the use of the ONIOM(HF/6-31G(d):AMBER94) and ONIOM(MP2/6-31G(d)//HF/6-31G(d):AMBER94) methods within the mechanical embedding formalism, and ONIOM(MP2/6-31G(d)//HF/6-31G(d):AMBER94) and ONIOM(MP2/6-311G(d,p)//HF/6-31G(d):AMBER94) within the electronic embedding formalism. The results of the present study suggest that saccharopine reductase utilises a substrate-assisted catalytic pathway in which acid/base groups within the cosubstrates themselves facilitate the mechanistically required proton transfers. Thus, the enzyme appears to act most likely by binding the three required reactant molecules glutamate, α-aminoadipate-δ-semialdehyde and NADPH in a manner and polar environment conducive to reaction.

  11. A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.

    Science.gov (United States)

    Low, Ariana; Kok, Si Ling; Khong, Yuet Mei; Chan, Sui Yung; Gokhale, Rajeev

    2015-11-01

    No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign relative contributions of the total variability from the operator, sample or the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during product developmental stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  12. Combined quantum mechanics/molecular mechanics (QM/MM) simulations for protein-ligand complexes: free energies of binding of water molecules in influenza neuraminidase.

    Science.gov (United States)

    Woods, Christopher J; Shaw, Katherine E; Mulholland, Adrian J

    2015-01-22

    The applicability of combined quantum mechanics/molecular mechanics (QM/MM) methods for the calculation of absolute binding free energies of conserved water molecules in protein/ligand complexes is demonstrated. Here, we apply QM/MM Monte Carlo simulations to investigate binding of water molecules to influenza neuraminidase. We investigate five different complexes, including those with the drugs oseltamivir and peramivir. We investigate water molecules in two different environments, one more hydrophobic and one hydrophilic. We calculate the free-energy change for perturbation of a QM to MM representation of the bound water molecule. The calculations are performed at the BLYP/aVDZ (QM) and TIP4P (MM) levels of theory, which we have previously demonstrated to be consistent with one another for QM/MM modeling. The results show that the QM to MM perturbation is significant in both environments (greater than 1 kcal mol^-1) and larger in the more hydrophilic site. Comparison with the same perturbation in bulk water shows that this makes a contribution to binding. The results quantify how electronic polarization differences in different environments affect binding affinity and also demonstrate that extensive, converged QM/MM free-energy simulations, with good levels of QM theory, are now practical for protein/ligand complexes.
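
    The QM-to-MM perturbation described above is, in its simplest form, an exponential (Zwanzig) average of the MM-minus-QM energy gap over QM-sampled frames. The sketch below assumes only that textbook formula; the function name and the synthetic Gaussian energy gaps are illustrative stand-ins, not the authors' Monte Carlo machinery.

```python
import math
import random

KB = 0.0019872  # Boltzmann constant in kcal*mol^-1*K^-1

def zwanzig(gaps, T=298.15):
    """Free-energy change A_MM - A_QM from exponential averaging of the
    energy gaps dE = E_MM - E_QM over frames sampled in the QM ensemble."""
    beta = 1.0 / (KB * T)
    m = min(gaps)  # shift for numerical stability (log-sum-exp trick)
    avg = sum(math.exp(-beta * (d - m)) for d in gaps) / len(gaps)
    return m - math.log(avg) / beta

# Synthetic Gaussian energy gaps (kcal/mol); for a Gaussian distribution the
# exact answer is mean - beta*variance/2 (second-order cumulant expansion).
random.seed(1)
gaps = [random.gauss(2.0, 0.5) for _ in range(50000)]
dA = zwanzig(gaps)
```

    Because the exponential average is dominated by low-gap frames, converged estimates need overlap between the QM and MM ensembles, which is why schemes such as the non-Boltzmann Bennett variants exist.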

  13. On the problem of completeness of QM: von Neumann against Einstein, Podolsky, and Rosen

    OpenAIRE

    Khrennikov, Andrei

    2008-01-01

    We performed a comparative analysis of the arguments of Einstein, Podolsky and Rosen (EPR, 1935) against the completeness of QM and the theoretical formalism of QM (due to von Neumann, 1932). We found that the EPR considerations do not match von Neumann's theory at all. Thus EPR did not criticize the real theoretical model of QM. The root of EPR's paradoxical conclusion on the incompleteness of QM is the misuse of von Neumann's projection postulate. EPR applied this postulate to obser...

  14. The polarizable embedding coupled cluster method

    DEFF Research Database (Denmark)

    Sneskov, Kristian; Schwabe, Tobias; Kongsted, Jacob

    2011-01-01

    We formulate a new combined quantum mechanics/molecular mechanics (QM/MM) method based on a self-consistent polarizable embedding (PE) scheme. For the description of the QM region, we apply the popular coupled cluster (CC) method detailing the inclusion of electrostatic and polarization effects...

  15. Quantifying the mechanism of phosphate monoester hydrolysis in aqueous solution by evaluating the relevant ab initio QM/MM free-energy surfaces.

    Science.gov (United States)

    Plotnikov, Nikolay V; Prasad, B Ram; Chakrabarty, Suman; Chu, Zhen T; Warshel, Arieh

    2013-10-24

    Understanding the nature of the free-energy surfaces for phosphate hydrolysis is a prerequisite for understanding the corresponding key chemical reactions in biology. Here, the challenge has been to move to careful ab initio QM/MM (QM(ai)/MM) free-energy calculations, where obtaining converging results is very demanding and computationally expensive. This work describes such calculations, focusing on the free-energy surface for the hydrolysis of phosphate monoesters, paying special attention to the comparison between the one water (1W) and two water (2W) paths for the proton-transfer (PT) step. This issue has been explored before by energy minimization with implicit solvent models and by nonsystematic QM/MM energy minimization, as well as by nonsystematic free-energy mapping. However, no study has provided the needed reliable 2D (3D) surfaces that are necessary for reaching concrete conclusions. Here we report a systematic evaluation of the 2D (3D) free-energy maps for several relevant systems, comparing the results of QM(ai)/MM and QM(ai)/implicit solvent surfaces, and provide an advanced description of the relevant energetics. It is found that the 1W path for the hydrolysis of the methyl diphosphate (MDP) trianion is 6-9 kcal/mol higher than that of the 2W path. This difference becomes slightly larger in the presence of the Mg^2+ ion because this ion reduces the pKa of the conjugated acid form of the phosphate oxygen that accepts the proton. Interestingly, the BLYP approach (which has been used extensively in some studies) gives a much smaller difference between the 1W and 2W activation barriers. At any rate, it is worth pointing out that the 2W transition state for the PT is not much higher than the common plateau that serves as the starting point of both the 1W and 2W PT paths. Thus, the calculated catalytic effects of proteins based on the 2W PT mechanistic model are not expected to be different from the catalytic effects predicted using the 1W PT mechanistic

  16. QM/MM studies on the excited-state relaxation mechanism of a semisynthetic dTPT3 base.

    Science.gov (United States)

    Guo, Wei-Wei; Zhang, Teng-Shuo; Fang, Wei-Hai; Cui, Ganglong

    2018-02-14

    Semisynthetic alphabets can potentially increase the genetic information stored in DNA through the formation of unusual base pairs. Recent experiments have shown that near-visible-light irradiation of the dTPT3 chromophore could lead to the formation of a reactive triplet state and of singlet oxygen in high quantum yields. However, the detailed excited-state relaxation paths that populate the lowest triplet state are unclear. Herein, we have for the first time employed the QM(MS-CASPT2//CASSCF)/MM method to explore the spectroscopic properties and excited-state relaxation mechanism of the aqueous dTPT3 chromophore. On the basis of the results, we have found that (1) the S2(1ππ*) state of dTPT3 is the initially populated excited singlet state upon near-visible light irradiation; and (2) there are two efficient relaxation pathways to populate the lowest triplet state, i.e. T1(3ππ*). In the first one, the S2(1ππ*) system first decays to the S1(1nπ*) state near the S2/S1 conical intersection, which is followed by an efficient S1 → T1 intersystem crossing process at the S1/T1 crossing point; in the second one, an efficient S2 → T2 intersystem crossing takes place first, and then, the T2(3nπ*) system hops to the T1(3ππ*) state through an internal conversion process at the T2/T1 conical intersection. Moreover, an S2/S1/T2 intersection region is found to play a vital role in the excited-state relaxation. These new mechanistic insights help in understanding the photophysics and photochemistry of unusual base pairs.

  17. Analysis of Peer Review Comments: QM Recommendations and Feedback Intervention Theory

    Science.gov (United States)

    Schwegler, Andria F.; Altman, Barbara W.

    2015-01-01

    Because feedback is a critical component of the continuous improvement cycle of the Quality Matters (QM) peer review process, the present research analyzed the feedback that peer reviewers provided to course developers after a voluntary, nonofficial QM peer review of online courses. Previous research reveals that the effects of feedback on…

  18. Electrical breakthrough effect for end pointing in 90 and 45 nm node circuit edit

    International Nuclear Information System (INIS)

    Liu, Kun; Soskov, Alex; Scipioni, Larry; Bassom, Neil; Sijbrandij, Sybren; Smith, Gerald

    2006-01-01

    The interaction between high-energy Ga+ ions and condensed matter is studied for circuit edit applications. A new 'electrical breakthrough effect' due to charging of, and Ga+ penetration/doping into, dielectrics is discovered. This new effect is proposed for end pointing in 90 and 45 nm node circuit edits where integrated circuit device dimensions are of a few hundred nanometers. This new end point approach is very sensitive, reliable, and precise. Most importantly, it is not sensitive to device dimensions. A series of circuit edits involving milling holes of high aspect ratio (5-30) and small cross-section area (0.01-0.25 μm^2) on real chips has been successfully performed using the electrical breakthrough effect as the end point method.

  19. Benchmarking Quantum Mechanics/Molecular Mechanics (QM/MM) Methods on the Thymidylate Synthase-Catalyzed Hydride Transfer.

    Science.gov (United States)

    Świderek, Katarzyna; Arafet, Kemel; Kohen, Amnon; Moliner, Vicent

    2017-03-14

    Given the ubiquity of hydride-transfer reactions in enzyme-catalyzed processes, identifying the appropriate computational method for evaluating such biological reactions is crucial for performing theoretical studies of these processes. In this paper, the hydride-transfer step catalyzed by thymidylate synthase (TSase) is studied by examining hybrid quantum mechanics/molecular mechanics (QM/MM) potentials via multiple semiempirical methods and the M06-2X hybrid density functional. Calculations of protium and tritium transfer in these reactions across a range of temperatures allowed calculation of the temperature dependence of kinetic isotope effects (KIEs). Dynamics and quantum-tunneling effects are revealed to have little effect on the reaction rate, but are significant in determining the KIEs and their temperature dependence. Good agreement with experiments is found, especially for the RM1/MM simulations. The small temperature dependence of quantum tunneling corrections and the quasiclassical contribution term cancel each other, while the recrossing transmission coefficient seems to be temperature-independent over the interval of 5-40 °C.
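
    The temperature dependence of a KIE is commonly summarized by fitting ln(KIE) against 1/T to extract the isotopic difference in activation energy and the ratio of Arrhenius prefactors. The rate constants below are invented for illustration (they are not TSase data); only the standard Arrhenius analysis is assumed.

```python
import math

# Hypothetical H- and T-transfer rate constants (arbitrary units) over 5-40 C;
# invented numbers, chosen only to give a normal, modestly T-dependent KIE.
temps = [278.15, 288.15, 298.15, 308.15, 313.15]   # K
k_H = [2.1, 4.0, 7.2, 12.5, 16.0]
k_T = [0.30, 0.60, 1.13, 2.05, 2.70]

kies = [h / t for h, t in zip(k_H, k_T)]

# Least-squares fit of ln(KIE) = ln(A_H/A_T) + (Ea_T - Ea_H)/(R*T)
R = 1.9872e-3  # kcal*mol^-1*K^-1
xs = [1.0 / T for T in temps]
ys = [math.log(kie) for kie in kies]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
dEa = slope * R                              # Ea(T) - Ea(H), in kcal/mol
ratio_A = math.exp(ybar - slope * xbar)      # Arrhenius prefactor ratio A_H/A_T
```

    A nearly temperature-independent KIE (small dEa, prefactor ratio near the KIE itself) is the kind of signature the abstract's cancellation of tunneling and quasiclassical terms produces.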

  20. Quantum mechanical fragment methods based on partitioning atoms or partitioning coordinates.

    Science.gov (United States)

    Wang, Bo; Yang, Ke R; Xu, Xuefei; Isegawa, Miho; Leverentz, Hannah R; Truhlar, Donald G

    2014-09-16

    atoms for capping dangling bonds, and we have shown that they can greatly improve the accuracy. Finally we present a new approach that goes beyond QM/MM by combining the convenience of molecular mechanics with the accuracy of fitting a potential function to electronic structure calculations on a specific system. To make the latter practical for systems with a large number of degrees of freedom, we developed a method to interpolate between local internal-coordinate fits to the potential energy. A key issue for the application to large systems is that rather than assigning the atoms or monomers to fragments, we assign the internal coordinates to reaction, secondary, and tertiary sets. Thus, we make a partition in coordinate space rather than atom space. Fits to the local dependence of the potential energy on tertiary coordinates are arrayed along a preselected reaction coordinate at a sequence of geometries called anchor points; the potential energy function is called an anchor points reactive potential. Electrostatically embedded fragment methods and the anchor points reactive potential, because they are based on treating an entire system by quantum mechanical electronic structure methods but are affordable for large and complex systems, have the potential to open new areas for accurate simulations where combined QM/MM methods are inadequate.
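
    The anchor-points idea of interpolating between local fits arrayed along a reaction coordinate can be sketched with inverse-distance (Shepard-type) weighting of local Taylor expansions. The anchor data and function below are hypothetical: the actual anchor points reactive potential works with internal-coordinate fits, which this one-dimensional toy only gestures at.

```python
# Each anchor point along the reaction coordinate s carries a local
# quadratic fit V_i(s) = v_i + g_i*(s - s_i) + 0.5*h_i*(s - s_i)^2.
# The numbers are invented (reactant well, barrier top, product well).
anchors = [
    # (s_i, value, gradient, curvature)
    (0.0,  0.0, 0.0,  80.0),
    (0.5, 12.0, 0.0, -40.0),
    (1.0, -5.0, 0.0,  60.0),
]

def anchor_potential(s, power=2):
    """Shepard-style blend of the local fits: nearby anchors dominate,
    and exactly at an anchor the potential reduces to that local fit."""
    num = den = 0.0
    for s_i, v, g, h in anchors:
        ds = s - s_i
        if abs(ds) < 1e-12:
            return v
        local = v + g * ds + 0.5 * h * ds * ds   # local Taylor expansion
        w = 1.0 / abs(ds) ** power               # inverse-distance weight
        num += w * local
        den += w
    return num / den
```

    The weights make the surface exact at each anchor and smooth in between, which is the point of arraying the local fits along a preselected reaction coordinate.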

  1. Quality control for electron beam processing of polymeric materials by end-point analysis

    International Nuclear Information System (INIS)

    DeGraff, E.; McLaughlin, W.L.

    1981-01-01

    Properties of certain plastics, e.g. polytetrafluoroethylene, polyethylene, ethylene vinyl acetate copolymer, can be modified selectively by ionizing radiation. One of the advantages of this treatment over chemical methods is better control of the process and the end-product properties. The most convenient method of dosimetry for monitoring quality control is post-irradiation evaluation of the plastic itself, e.g., melt index and melt point determination. It is shown that by proper calibration in terms of total dose and sufficiently reproducible radiation effects, such product test methods provide convenient and meaningful analyses. Other appropriate standardized analytical methods include stress-crack resistance, stress-strain-to-fracture testing and solubility determination. Standard routine dosimetry over the dose and dose rate ranges of interest confirm that measured product end points can be correlated with calibrated values of absorbed dose in the product within uncertainty limits of the measurements. (author)
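
    The calibration idea above, mapping a measured end-point property back to absorbed dose, can be sketched as a simple least-squares fit. The melt-index/dose pairs below are invented, and a linear response is assumed purely for illustration; real calibrations use whatever functional form the product test actually follows.

```python
# Invented calibration data: absorbed dose (kGy) vs. measured melt index.
doses = [50.0, 100.0, 150.0, 200.0, 250.0]
melt_index = [1.8, 3.1, 4.2, 5.6, 6.9]

# Least-squares line dose = slope * melt_index + intercept
n = len(doses)
xbar = sum(melt_index) / n
ybar = sum(doses) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(melt_index, doses))
         / sum((x - xbar) ** 2 for x in melt_index))
intercept = ybar - slope * xbar

def dose_from_melt_index(mi):
    """Estimate absorbed dose from a post-irradiation melt-index reading."""
    return slope * mi + intercept

estimate = dose_from_melt_index(4.9)
```

    Once such a curve is established against routine dosimetry, the product measurement itself serves as the quality-control dosimeter, within the stated uncertainty limits.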

  2. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    Science.gov (United States)

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children. The calf model of cryptosporidiosis uses two fecal collection methods: CFC, which requires confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 x 10^7 C. parvum oocysts, and were followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  3. Detection of Bordetella pertussis from Clinical Samples by Culture and End-Point PCR in Malaysian Patients.

    Science.gov (United States)

    Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi

    2013-01-01

    Pertussis, or whooping cough, is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients ≤3 months old (77.1%) (P 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases than the culture method.

  4. Investigation into the Use of the Concept Laser QM System as an In-Situ Research and Evaluation Tool

    Science.gov (United States)

    Bagg, Stacey

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) is using a Concept Laser Fusing (Cusing) M2 powder bed additive manufacturing system for the build of space flight prototypes and hardware. NASA MSFC is collecting and analyzing data from the M2 QM Meltpool and QM Coating systems for builds. This data is intended to aid in understanding the powder-bed additive manufacturing process and in developing a thermal model for the process. The QM systems are marketed by Concept Laser GmbH as in-situ quality management modules. The QM Meltpool system uses both a high-speed near-IR camera and a photodiode to monitor the melt pool generated by the laser. The software determines from the camera images the size of the melt pool. The camera also measures the integrated intensity of the IR radiation, and the photodiode gives an intensity value based on the brightness of the melt pool. The QM coating system uses a high resolution optical camera to image the surface after each layer has been formed. The objective of this investigation was to determine the adequacy of the QM Meltpool system as a research instrument for in-situ measurement of melt pool size and temperature and its applicability to NASA's objectives in (1) Developing a process thermal model and (2) Quantifying feedback measurements with the intent of meeting quality requirements or specifications. Note that Concept Laser markets the system only as capable of giving an indication of changes between builds, not as an in-situ research and evaluation tool. A secondary objective of the investigation is to determine the adequacy of the QM Coating system as an in-situ layer-wise geometry and layer quality evaluation tool.

  5. Surrogate end points in clinical research: hazardous to your health.

    Science.gov (United States)

    Grimes, David A; Schulz, Kenneth F

    2005-05-01

    Surrogate end points in clinical research pose real danger. A surrogate end point is an outcome measure, commonly a laboratory test, that substitutes for a clinical event of true importance. Resistance to activated protein C, for example, has been used as a surrogate for venous thrombosis in women using oral contraceptives. Other examples of inappropriate surrogate end points in contraception include the postcoital test instead of pregnancy to evaluate new spermicides, breakage and slippage instead of pregnancy to evaluate condoms, and bone mineral density instead of fracture to assess the safety of depo-medroxyprogesterone acetate. None of these markers captures the effect of the treatment on the true outcome. A valid surrogate end point must both correlate with and accurately predict the outcome of interest. Although many surrogate markers correlate with an outcome, few have been shown to capture the effect of a treatment (for example, oral contraceptives) on the outcome (venous thrombosis). As a result, thousands of useless and misleading reports on surrogate end points litter the medical literature. New drugs have been shown to benefit a surrogate marker, but, paradoxically, triple the risk of death. Thousands of patients have died needlessly because of reliance on invalid surrogate markers. Researchers should avoid surrogate end points unless they have been validated; that requires at least one well done trial using both the surrogate and true outcome. The clinical maxim that "a difference to be a difference must make a difference" applies to research as well. Clinical research should focus on outcomes that matter.

  6. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  7. End-Stop Exemplar Based Recognition

    DEFF Research Database (Denmark)

    Olsen, Søren I.

    2003-01-01

    An approach to exemplar based recognition of visual shapes is presented. The shape information is described by attributed interest points (keys) detected by an end-stop operator. The attributes describe the statistics of lines and edges local to the interest point, the position of neighboring interest points, and (in the training phase) a list of recognition names. Recognition is made by a simple voting procedure. Preliminary experiments indicate that the recognition is robust to noise, small deformations, background clutter and partial occlusion.
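
    The voting step can be sketched in a few lines. The exemplar table and exact-match keys below are hypothetical simplifications of the attributed interest points; only the vote-and-take-the-winner logic from the abstract is assumed.

```python
from collections import Counter

# Hypothetical training result: each key (an attributed interest point,
# reduced here to a hashable descriptor) stores its recognition names.
exemplars = {
    ("corner", "dark-edge"):   ["cup"],
    ("corner", "bright-edge"): ["cup", "bowl"],
    ("t-junction", "dark-edge"): ["chair"],
}

def recognize(query_keys):
    """Each detected key votes for every name stored with the matching
    exemplar; the name with the most votes wins."""
    votes = Counter()
    for key in query_keys:
        for name in exemplars.get(key, []):
            votes[name] += 1
    return votes.most_common(1)[0][0] if votes else None
```

    Because every key votes independently, missing or spurious keys only dilute the winning tally rather than break the match, which is why the scheme tolerates clutter and partial occlusion.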

  8. Kinetic titration with differential thermometric determination of the end-point.

    Science.gov (United States)

    Sajó, I

    1968-06-01

    A method has been described for the determination of concentrations below 10^-4 M by applying catalytic reactions and using thermometric end-point determination. A reference solution, identical with the sample solution except for catalyst, is titrated with catalyst solution until the rates of reaction become the same, as shown by a null deflection on a galvanometer connected via bridge circuits to two opposed thermistors placed in the solutions.
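
    Numerically, titrating until the rates match is a root-finding problem on the differential (bridge) signal. The sketch below assumes a made-up monotonic response function standing in for the galvanometer deflection and uses plain bisection to locate the null point; nothing here comes from the original apparatus.

```python
def bisect_null(signal, lo, hi, tol=1e-6):
    """Find the titrant volume where the differential bridge signal
    crosses zero, i.e. where the two reaction rates match."""
    s_lo = signal(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        s_mid = signal(mid)
        if abs(s_mid) < tol or hi - lo < tol:
            return mid
        if (s_lo < 0) == (s_mid < 0):
            lo, s_lo = mid, s_mid   # null point lies in the upper half
        else:
            hi = mid                # null point lies in the lower half
    return 0.5 * (lo + hi)

# Invented response: deflection grows with (catalyst added - catalyst in the
# sample), with the unknown sample equivalent to 0.37 mL of titrant.
null_point = bisect_null(lambda v: (v - 0.37) ** 3, 0.0, 1.0)
```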

  9. QM/MM Molecular Dynamics Studies of Metal Binding Proteins

    Directory of Open Access Journals (Sweden)

    Pietro Vidossich

    2014-07-01

    Full Text Available Mixed quantum-classical (quantum mechanical/molecular mechanical, QM/MM) simulations have strongly contributed to providing insights into several structural and mechanistic aspects of biological molecules. They played a particularly important role in metal binding proteins, where the electronic effects of transition metals have to be explicitly taken into account for the correct representation of the underlying biochemical process. In this review, after a brief description of the basic concepts of the QM/MM method, we provide an overview of its capabilities using selected examples taken from our work. Specifically, we will focus on heme peroxidases, metallo-β-lactamases, α-synuclein and ligase ribozymes to show how this approach is capable of describing the catalytic and/or structural role played by transition (Fe, Zn or Cu) and main group (Mg) metals. Applications will reveal how metal ions influence the formation and reduction of high redox intermediates in catalytic cycles and enhance drug metabolism, amyloidogenic aggregate formation and nucleic acid synthesis. In turn, it will become manifest that the protein frame directs and modulates the properties and reactivity of the metal ions.

  10. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
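
    The retrieval pipeline described above (histogram features extracted from the segmented liver, weighted by learned relative importance, then ranked) can be sketched in outline. This is an illustrative reconstruction, not the authors' code; the function names and the weighted-L1 ranking rule are assumptions:

```python
def histogram_features(intensities, bins=8, lo=0, hi=256):
    """Normalized intensity histogram of a segmented region."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in intensities:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def rank_database(query_feat, db_feats, weights):
    """Order database scans by weighted L1 distance to the query features."""
    def dist(feat):
        return sum(w * abs(q - f) for w, q, f in zip(weights, query_feat, feat))
    return sorted(range(len(db_feats)), key=lambda i: dist(db_feats[i]))
```

    A scan with the same intensity profile as the query then ranks first, which is the behaviour the retrieval accuracy metric rewards.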

  11. A quantitative analysis of statistical power identifies obesity end points for improved in vivo preclinical study design.

    Science.gov (United States)

    Selimkhanov, J; Thompson, W C; Guo, J; Hall, K D; Musante, C J

    2017-08-01

    The design of well-powered in vivo preclinical studies is a key element in building the knowledge of disease physiology for the purpose of identifying and effectively testing potential antiobesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic end points such as food intake and body composition. This, combined with limitations inherent in the measurement of certain end points, presents challenges to study design that can have significant consequences for an antiobesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of the key metabolic end points. To demonstrate how conclusions can change as a function of study size, we show that a simulated preclinical study properly powered for one end point may lead to false conclusions based on secondary end points. We then propose guidelines for end-point selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design.
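
    The study-size question here comes down to statistical power for a chosen end point. A minimal Monte Carlo power estimate for a two-group comparison of a noisy end point can be sketched as follows; the normality assumption, the fixed z threshold, and the function name are illustrative choices, not taken from the paper:

```python
import math
import random
import statistics

def simulated_power(n, effect, sd, trials=2000, z_crit=1.96, seed=7):
    """Monte Carlo power estimate: fraction of simulated two-group
    experiments (n animals per arm) whose z statistic exceeds z_crit."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ctrl = [rng.gauss(0.0, sd) for _ in range(n)]
        trt = [rng.gauss(effect, sd) for _ in range(n)]
        se = math.sqrt(statistics.variance(ctrl) / n + statistics.variance(trt) / n)
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / se
        hits += abs(z) > z_crit
    return hits / trials
```

    A large effect with adequate n gives power near 1, while a small effect measured in few animals stays near the false-positive rate, which is the paper's warning about secondary end points in miniature.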

  12. Quantum Triple Point and Quantum Critical End Points in Metallic Magnets.

    Science.gov (United States)

    Belitz, D; Kirkpatrick, T R

    2017-12-29

    In low-temperature metallic magnets, ferromagnetic (FM) and antiferromagnetic (AFM) orders can exist, adjacent to one another or concurrently, in the phase diagram of a single system. We show that universal quantum effects qualitatively alter the known phase diagrams for classical magnets. They shrink the region of concurrent FM and AFM order, change various transitions from second to first order, and, in the presence of a magnetic field, lead to either a quantum triple point where the FM, AFM, and paramagnetic phases all coexist or a quantum critical end point.

  13. An application of the 'end-point' method to the minimum critical mass problem in two group transport theory

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2003-01-01

    A two group integral equation derived using transport theory, which describes the fuel distribution necessary for a flat thermal flux and minimum critical mass, is solved by the classical end-point method. This method has a number of advantages and in particular highlights the changing behaviour of the fissile mass distribution function in the neighbourhood of the core-reflector interface. We also show how the reflector thermal flux behaves and explain the origin of the maximum which arises when the critical size is less than that corresponding to minimum critical mass. A comparison is made with diffusion theory, and the necessary and somewhat artificial presence of surface delta functions in the fuel distribution is shown to be analogous to the edge transients that arise naturally in transport theory.

  14. Common QA/QM Criteria for Multinational Vendor Inspection

    International Nuclear Information System (INIS)

    2014-01-01

    This VICWG document provides the 'Common QA/QM Criteria' to be used in Multinational Vendor Inspection. The 'Common QA/QM Criteria' set out the basic considerations for performing a Vendor Inspection. These criteria have been developed in conformity with the international codes and standards (IAEA, ISO and others) that MDEP member countries have adopted. The purpose of the VICWG is to establish areas of co-operation in vendor inspection practices among MDEP member countries, as described in the MDEP issue-specific Terms of Reference (ToR). As part of this, a survey was performed from the beginning to identify areas of commonality and difference between the regulatory practices of member countries in the area of vendor inspection. The VICWG also collaborated by performing Witnessed Inspections and Joint Inspections. Through these activities, it was recognized that member countries commonly apply the IAEA safety standard (GS-R-3) to vendor inspection criteria, and almost all European member countries apply the ISO standard (ISO 9001). In the US, the NRC regulatory requirement in 10 CFR Part 50, Appendix B is used; South Korea uses the same criteria as the US. Based on the information obtained, a comparison table between codes and standards (IAEA GS-R-3, ISO 9001:2008, 10 CFR 50 Appendix B and ASME NQA-1) has been developed in order to inform the development of the 'Common QA/QM Criteria'. The result is documented in Table 1, 'MDEP CORE QA/QM Requirement and Comparison between Codes and Standards'. In addition, each country's criteria were compared using the US 10 CFR 50 Appendix B as a template; Table 2 shows the VICWG survey on Quality Assurance Program Requirements. Through these activities, it was considered that the core requirements should be consistent with both the IAEA safety standard and the ISO standard, and that the common requirements in US 10 CFR 50 Appendix B were used for the survey.

  15. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they are proactive of body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis, and in amputees, following stroke, spinal cord injury, Parkinson's disease, multiple sclerosis and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients because of poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This yields a fully automated calibration procedure producing several thousand calibration points, versus the dozen or so used in standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
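
    The calibration idea, fitting a mapping from tracker readings to known target positions using the thousands of points collected along the curve, can be illustrated with a much simpler per-axis linear least-squares fit. This is a hedged sketch, not the GT3D calibration model, and the function names are invented for illustration:

```python
def fit_axis(raw, true):
    """Least-squares line true ~ a*raw + b for one spatial axis."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(true) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, true))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def calibrate_3d(raw_pts, true_pts):
    """Fit one linear map per axis from many (tracker, target) pairs."""
    return [fit_axis([p[i] for p in raw_pts], [t[i] for t in true_pts])
            for i in range(3)]
```

    With many calibration points the fit averages out per-sample tracker noise, which is the advantage of the Peano-curve procedure over a dozen fixation targets.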

  16. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study

    Science.gov (United States)

    Erdman, Laura K.; D’Acremont, Valérie; Hayford, Kyla; Kilowoko, Mary; Kyungu, Esther; Hongoa, Philipina; Alamo, Leonor; Streiner, David L.; Genton, Blaise; Kain, Kevin C.

    2015-01-01

    Background Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. Methods We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. Results Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5–98.8), 80.8% specificity (72.6–87.1), positive likelihood ratio 4.9 (3.4–7
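
    The evaluation machinery used in this abstract (sensitivity, specificity, ROC analysis) is standard and can be sketched minimally. The code below illustrates the metrics themselves, not the authors' analysis; function names are illustrative:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule 'score >= threshold'."""
    tp = sum(1 for s, l in zip(scores, labels) if l and s >= threshold)
    fn = sum(1 for s, l in zip(scores, labels) if l and s < threshold)
    tn = sum(1 for s, l in zip(scores, labels) if not l and s < threshold)
    fp = sum(1 for s, l in zip(scores, labels) if not l and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Probability that a random positive scores above a random negative
    (ties count half): the ROC area used to compare biomarkers."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```
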

  17. European network on the determination of site end points for radiologically contaminated land

    International Nuclear Information System (INIS)

    Booth, Peter; Lennon, Chris

    2007-01-01

    Nexia Solutions are currently running a small European network entitled 'European Network on the Determination of Site End Points for Radiologically Contaminated Land (ENDSEP)'. Other network members include NRG (Netherlands), UKAEA (UK), CEA (France), SOGIN (Italy), Wismut (Germany), Saxon State Agency of Environment and Geology (Germany). The network is focused on the technical and socio-economical issues associated with the determination of end points for sites potentially, or actually, impacted by radiological contamination. Such issues will cover: - Those associated with the run up to establishing a site end point; - Those associated with verifying that the end points have been met; and Those associated with post closure. The network's current high level objectives can be summarized as follows: Share experience and best practice in the key issues running up to determining site end points; Gain a better understanding of the potential effects of recent and forthcoming EU legislation; Assess consistency between approaches; Highlight potential gaps within the remit of site end point determination and management; and - Consider the formulation of research projects with a view to sharing time and expense. The programme of work revolves around the following key tasks: - Share information, experience and existing good practice. - Look to determine sustainable approaches to contaminated land site end point management. - Through site visits, gain first hand experience of determining an appropriate end point strategy, and identifying and resolving end point issues. Highlight the key data gaps and consider the development of programmes to either close out these gaps or to build confidence in the approaches taken. Production of position papers on each technical are a highlighting how different countries approach/resolve a specific problem. (authors)

  18. Density-Dependent Formulation of Dispersion-Repulsion Interactions in Hybrid Multiscale Quantum/Molecular Mechanics (QM/MM) Models.

    Science.gov (United States)

    Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta

    2018-03-13

    Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW-TS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW-TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom-type-based parametrization strategies for the MM parameters, based either on vdW-TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.
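
    The "Lennard-Jones-like potential" underlying the dispersion and repulsion terms can be illustrated in its plain, density-independent form; the vdW-TS density dependence described in the abstract is not reproduced here:

```python
def lj_pair(r, sigma, epsilon):
    """Lennard-Jones pair energy split into a repulsion (r^-12) and a
    dispersion (-r^-6) term, as in the classical 12-6 form."""
    sr6 = (sigma / r) ** 6
    repulsion = 4.0 * epsilon * sr6 * sr6
    dispersion = -4.0 * epsilon * sr6
    return repulsion + dispersion
```

    The well minimum sits at r = 2^(1/6) sigma with depth epsilon; the density-dependent scheme of the paper effectively makes sigma- and epsilon-like quantities respond to the QM electronic density.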

  19. Prediction of the Chapman-Jouguet chemical equilibrium state in a detonation wave from first principles based reactive molecular dynamics.

    Science.gov (United States)

    Guo, Dezhou; Zybin, Sergey V; An, Qi; Goddard, William A; Huang, Fenglei

    2016-01-21

    The combustion or detonation of reacting materials at high temperature and pressure can be characterized by the Chapman-Jouguet (CJ) state that describes the chemical equilibrium of the products at the end of the reaction zone of the detonation wave for sustained detonation. This provides the critical properties and product kinetics for input to macroscale continuum simulations of energetic materials. We propose the ReaxFF Reactive Dynamics to CJ point protocol (Rx2CJ) for predicting the CJ state parameters, providing the means to predict the performance of new materials prior to synthesis and characterization, allowing simulation-based design to be done in silico. Our Rx2CJ method is based on atomistic reactive molecular dynamics (RMD) using the QM-derived ReaxFF force field. We validate this method here by predicting the CJ point and detonation products for three typical energetic materials. We find good agreement between the predicted and experimental detonation velocities, indicating that this method can reliably predict the CJ state using modest levels of computation.

  20. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  1. The use of a radioactive tracer for the determination of distillation end point in a coke oven

    International Nuclear Information System (INIS)

    Burgio, N.; Capannesi, G.; Ciavola, C.; Sedda, F.

    1995-01-01

    A novel high-precision detection method for the determination of the distillation end point of the coking process (usually in the 950 deg C range) has been developed. The system is based on the use of a metallic capsule that melts at a fixed temperature and releases a radioactive gas tracer (133Xe) into the stream of the distillation gas. A series of tests on a pilot oven confirmed the feasibility of the method on an industrial scale. Application of the radioactive tracer method to the staging and monitoring of the coking process appears to be possible. (author). 6 refs., 5 figs., 3 tabs

  2. Estimated GFR Decline as a Surrogate End Point for Kidney Failure

    DEFF Research Database (Denmark)

    Lambers Heerspink, Hiddo J; Weldegiorgis, Misghina; Inker, Lesley A

    2014-01-01

    A doubling of serum creatinine value, corresponding to a 57% decline in estimated glomerular filtration rate (eGFR), is used frequently as a component of a composite kidney end point in clinical trials in type 2 diabetes. The aim of this study was to determine whether alternative end points defin...

  3. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions. It therefore cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method". The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate both the preliminary calculations and the limited application scope. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP: the delivery path obtained by combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
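
    The two ingredients of the proposed method, a penalty value for infeasible candidates and a distance to a reference (ideal) point, can be sketched generically. The additive combination and the weight rho below are assumptions for illustration, not the paper's exact formulation:

```python
import math

def penalty(x, constraints):
    """Sum of constraint violations; each g returns <= 0 when feasible."""
    return sum(max(0.0, g(x)) for g in constraints)

def scalarize(objs, provisional_ideal, x, constraints, rho=1e3):
    """Score a candidate by the distance of its objective vector to a
    provisional ideal point, plus a penalty for infeasibility."""
    f = [obj(x) for obj in objs]
    d = math.sqrt(sum((fi - zi) ** 2 for fi, zi in zip(f, provisional_ideal)))
    return d + rho * penalty(x, constraints)
```

    Feasible candidates are then compared purely by closeness to the reference point, while infeasible ones are pushed to the back of the ranking.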

  4. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
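
    The conservation mechanism that the PFV method shares with standard finite volume schemes (cell averages are updated through face fluxes, so the total is preserved on a periodic grid) can be illustrated with a first-order upwind step for linear advection. This sketch is deliberately far simpler than the high-order PFV scheme itself:

```python
def upwind_step(u, c):
    """One conservative finite-volume step for u_t + a*u_x = 0 (a > 0)
    on a periodic grid of cell averages u; c = a*dt/dx is the CFL number."""
    # Upwind flux through the left face of cell i comes from cell i-1.
    fluxes = [c * u[i - 1] for i in range(len(u))]
    # New average = old average - (outgoing flux - incoming flux).
    return [ui - (c * ui - f) for ui, f in zip(u, fluxes)]
```

    Because each face flux is added to one cell and subtracted from its neighbour, the sum of cell averages is unchanged, which is the property the PFV update of cell-averaged values is designed to retain.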

  5. Final report of APMP.QM-S6: clenbuterol in porcine meat

    Science.gov (United States)

    Sin, D. W.-M.; Ho, C.; Yip, Y.-C.

    2016-01-01

    At the CCQM Organic Analysis Working Group (OAWG) meeting held in April 2012 and the APMP TCQM meeting held in November 2012, an APMP supplementary comparison (APMP.QM-S6) on the determination of clenbuterol in porcine meat was supported by the OAWG and APMP TCQM. The comparison was organized by the Government Laboratory, Hong Kong. To accommodate wider participation, a pilot study (APMP.QM-P22) was run in parallel with APMP.QM-S6. The study provided a means of assessing measurement capabilities for the determination of low-polarity measurands in a procedure that requires extraction, clean-up, analytical separation and selective detection in a food matrix. A total of 7 institutes registered for the supplementary comparison and 6 of them submitted results; 4 results were included in the SCRV calculation. All participating laboratories applied the Isotope Dilution Liquid Chromatography-Tandem Mass Spectrometry (ID-LC-MS/MS) technique, with clenbuterol-d9 spiked as internal standard for quantitation. Key words for search: APMP.QM-S6, clenbuterol. The main text of this paper appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  6. Efficient parallel implementations of QM/MM-REMD (quantum mechanical/molecular mechanics-replica-exchange MD) and umbrella sampling: isomerization of H2O2 in aqueous solution.

    Science.gov (United States)

    Fedorov, Dmitri G; Sugita, Yuji; Choi, Cheol Ho

    2013-07-03

    An efficient parallel implementation of QM/MM-based replica-exchange molecular dynamics (REMD) and umbrella sampling techniques is proposed, adopting the generalized distributed data interface (GDDI). A parallelization speed-up of 40.5 on 48 cores was achieved, making our QM/MM-MD engine a robust tool for studying complex chemical dynamics in solution. The two techniques were used comparatively to study the torsional isomerization of hydrogen peroxide in aqueous solution. QM/MM-REMD and QM/MM umbrella sampling yielded nearly identical potentials of mean force (PMFs) regardless of the particular QM theory used for the solute, showing that the overall dynamics are mainly determined by solvation. Although cisoid conformers incur an entropic penalty from solvent rearrangement, both strong intermolecular hydrogen bonding and dipole-dipole interactions preferentially stabilize them in solution, reducing the torsional free-energy barrier at 0° by about 3 kcal/mol compared with the gas phase.
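
    The replica-exchange step at the heart of REMD is a Metropolis criterion on the inverse temperatures and potential energies of the two replicas; a minimal sketch, independent of the QM/MM engine and with an illustrative function name, follows:

```python
import math
import random

def exchange_accept(beta_i, beta_j, e_i, e_j, rng=random):
    """Metropolis criterion for swapping configurations between replicas
    at inverse temperatures beta_i, beta_j with potential energies e_i, e_j:
    accept with probability min(1, exp[(beta_i - beta_j) * (e_i - e_j)])."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or rng.random() < math.exp(delta)
```

    A swap that hands the colder replica the lower energy is always accepted; the reverse is accepted only occasionally, preserving the Boltzmann distribution at each temperature.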

  7. Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points

    Directory of Open Access Journals (Sweden)

    Qiang Hou

    2018-01-01

    A new model is introduced for evaluating the efficiency values of decision making units (DMUs) with the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU takes a self-assessment perspective, following the concept of efficiency; the anti-ideal point DMU takes an other-assessment perspective, following the concept of fairness. The two distinctive ideal point models are introduced into the DEA method and combined using a variance ratio. The new model yields a reasonable result. Numerical examples are provided to illustrate the constructed model and to confirm its rationality through comparison with the traditional DEA model.

  8. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    Directory of Open Access Journals (Sweden)

    Zhao Hong-hao

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation and the Internet of Things, so end-to-end network traffic has become more important for traffic engineering. End-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes-theory-based method to model end-to-end network traffic. First, the end-to-end network traffic is described as an independent, identically distributed normal process. Bayes theory is then used to characterize the end-to-end network traffic; by estimating the parameters, the model is fully determined. Simulation results show that our approach is feasible and effective.
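
    The modeling chain in this abstract (treat traffic volumes as i.i.d. normal, then use Bayes' theorem to estimate the parameters) can be sketched with a conjugate update for the mean under a known noise variance. The normal prior and the known-variance assumption are illustrative simplifications, not the paper's exact model:

```python
def posterior_mean_var(samples, prior_mean, prior_var, noise_var):
    """Conjugate Bayesian update for the mean of i.i.d. normal traffic
    volumes with known noise variance: returns (posterior mean, variance)."""
    n = len(samples)
    precision = 1.0 / prior_var + n / noise_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + sum(samples) / noise_var)
    return post_mean, post_var
```

    With a diffuse prior the posterior mean approaches the sample mean, and the shrinking posterior variance quantifies how the traffic model becomes better determined as measurements accumulate.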

  9. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials).

    Science.gov (United States)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; Van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; De Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-11-01

    Using potential surrogate end-points for overall survival (OS) such as Disease-Free- (DFS) or Progression-Free Survival (PFS) is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined which largely contributes to a lack of homogeneity across trials, hampering comparison between them. The aim of the DATECAN (Definition for the Assessment of Time-to-event End-points in CANcer trials)-Pancreas project is to provide guidelines for standardised definition of time-to-event end-points in RCTs for pancreatic cancer. Time-to-event end-points currently used were identified from a literature review of pancreatic RCT trials (2006-2009). Academic research groups were contacted for participation in order to select clinicians and methodologists to participate in the pilot and scoring groups (>30 experts). A consensus was built after 2 rounds of the modified Delphi formal consensus approach with the Rand scoring methodology (range: 1-9). For pancreatic cancer, 14 time to event end-points and 25 distinct event types applied to two settings (detectable disease and/or no detectable disease) were considered relevant and included in the questionnaire sent to 52 selected experts. Thirty experts answered both scoring rounds. A total of 204 events distributed over the 14 end-points were scored. After the first round, consensus was reached for 25 items; after the second consensus was reached for 156 items; and after the face-to-face meeting for 203 items. The formal consensus approach reached the elaboration of guidelines for standardised definitions of time-to-event end-points allowing cross-comparison of RCTs in pancreatic cancer. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Stepwise catalytic mechanism via short-lived intermediate inferred from combined QM/MM MERP and PES calculations on retaining glycosyltransferase ppGalNAcT2.

    Science.gov (United States)

    Trnka, Tomáš; Kozmon, Stanislav; Tvaroška, Igor; Koča, Jaroslav

    2015-04-01

    The glycosylation of cell surface proteins plays a crucial role in a multitude of biological processes, such as cell adhesion and recognition. To understand the process of protein glycosylation, the reaction mechanisms of the participating enzymes need to be known. However, the reaction mechanism of retaining glycosyltransferases has not yet been sufficiently explained. Here we investigated the catalytic mechanism of human isoform 2 of the retaining glycosyltransferase polypeptide UDP-GalNAc transferase by coupling two different QM/MM-based approaches, namely a potential energy surface scan in two distance difference dimensions and a minimum energy reaction path optimisation using the Nudged Elastic Band method. Potential energy scan studies often suffer from inadequate sampling of reactive processes due to a predefined scan coordinate system. At the same time, path optimisation methods enable the sampling of a virtually unlimited number of dimensions, but their results cannot be unambiguously interpreted without knowledge of the potential energy surface. By combining these methods, we have been able to eliminate the most significant sources of potential errors inherent to each of these approaches. The structural model is based on the crystal structure of human isoform 2. In the QM/MM method, the QM region consists of 275 atoms, the remaining 5776 atoms were in the MM region. We found that ppGalNAcT2 catalyzes a same-face nucleophilic substitution with internal return (SNi). The optimized transition state for the reaction is 13.8 kcal/mol higher in energy than the reactant while the energy of the product complex is 6.7 kcal/mol lower. During the process of nucleophilic attack, a proton is synchronously transferred to the leaving phosphate. The presence of a short-lived metastable oxocarbenium intermediate is likely, as indicated by the reaction energy profiles obtained using high-level density functionals.

  11. Stepwise catalytic mechanism via short-lived intermediate inferred from combined QM/MM MERP and PES calculations on retaining glycosyltransferase ppGalNAcT2.

    Directory of Open Access Journals (Sweden)

    Tomáš Trnka

    2015-04-01

Full Text Available The glycosylation of cell surface proteins plays a crucial role in a multitude of biological processes, such as cell adhesion and recognition. To understand the process of protein glycosylation, the reaction mechanisms of the participating enzymes need to be known. However, the reaction mechanism of retaining glycosyltransferases has not yet been sufficiently explained. Here we investigated the catalytic mechanism of human isoform 2 of the retaining glycosyltransferase polypeptide UDP-GalNAc transferase by coupling two different QM/MM-based approaches, namely a potential energy surface scan in two distance difference dimensions and a minimum energy reaction path optimisation using the Nudged Elastic Band method. Potential energy scan studies often suffer from inadequate sampling of reactive processes due to a predefined scan coordinate system. At the same time, path optimisation methods enable the sampling of a virtually unlimited number of dimensions, but their results cannot be unambiguously interpreted without knowledge of the potential energy surface. By combining these methods, we have been able to eliminate the most significant sources of potential errors inherent to each of these approaches. The structural model is based on the crystal structure of human isoform 2. In the QM/MM method, the QM region consists of 275 atoms; the remaining 5776 atoms were in the MM region. We found that ppGalNAcT2 catalyzes a same-face nucleophilic substitution with internal return (SNi). The optimized transition state for the reaction is 13.8 kcal/mol higher in energy than the reactant, while the energy of the product complex is 6.7 kcal/mol lower. During the process of nucleophilic attack, a proton is synchronously transferred to the leaving phosphate. The presence of a short-lived metastable oxocarbenium intermediate is likely, as indicated by the reaction energy profiles obtained using high-level density functionals.

  12. PyCPR - a python-based implementation of the Conjugate Peak Refinement (CPR) algorithm for finding transition state structures.

    Science.gov (United States)

    Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias

    2016-10-01

Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method had so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. Finally, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical Abstract PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.
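The record above contrasts CPR with minimum-energy-path methods such as the nudged elastic band (NEB). As a rough illustration of the NEB idea only (our own toy sketch, not the PyCPR or pDynamo implementation; the double-well potential, spring constant, and step size are arbitrary choices), the following relaxes a band of images between two minima and locates the saddle point:

```python
import numpy as np

# Toy double-well surface: minima at (+/-1, 0), saddle at (0, 0), barrier = 1.
def V(p):
    x, y = p
    return (x**2 - 1)**2 + y**2

def grad_V(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1), 2.0 * y])

def neb(start, end, n_images=11, k=1.0, step=0.01, iters=2000):
    """Plain nudged-elastic-band relaxation by steepest descent:
    true force perpendicular to the path + spring force along it."""
    band = np.linspace(start, end, n_images)
    # small transverse perturbation so the band does not start on a symmetry line
    band[1:-1, 1] += 0.3 * np.sin(np.linspace(0.0, np.pi, n_images))[1:-1]
    for _ in range(iters):
        new = band.copy()
        for i in range(1, n_images - 1):
            tau = band[i + 1] - band[i - 1]
            tau = tau / np.linalg.norm(tau)          # local path tangent
            g = grad_V(band[i])
            f_perp = -(g - (g @ tau) * tau)          # true force, tangent removed
            f_par = k * (np.linalg.norm(band[i + 1] - band[i])
                         - np.linalg.norm(band[i] - band[i - 1])) * tau
            new[i] = band[i] + step * (f_perp + f_par)
        band = new
    return band
```

The highest-energy image of the converged band approximates the transition state; production codes add refinements such as climbing images and improved tangent estimates.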

  13. Marker list: QM109927 [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods] [Archive]

    Lifescience Database Archive (English)

    Full Text Available ulation VWP2 x VWP4 and pseudo-F2 population ... Chr12 ... 10.1007/s00438-005-1149-2 16021467 ... QM109927 Solanum lycopersicum Solanaceae L16L Others ATAGTGTCTACTTCAGGG CCATCAAACAGTTCTCT S. peruvianum pop

  14. Marker list: QM109928 [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods] [Archive]

    Lifescience Database Archive (English)

    Full Text Available GATGGCGTT S. peruvianum population VWP2 x VWP4 and pseudo-F2 population ... Chr12 ... 10.1007/s00438-005-1149-2 16021467 ... QM109928 Solanum lycopersicum Solanaceae DH8R Others TAGAGAGACTATCCTTTA CACATTCAGT

  15. Marker list: QM357356 [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods] [Archive]

    Lifescience Database Archive (English)

    Full Text Available QM357356 Solanum tuberosum Solanaceae toPt-437059 Others ... CIP703825 ... Chr10 ratio of tuber length to tuber... width trait and eye depth of tuber trait 1 10.1186/s12863-015-0213-0 26024857

  16. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    Science.gov (United States)

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  17. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore, MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the

  18. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    OpenAIRE

    Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

    2016-01-01

Recently, network traffic has been increasing exponentially due to all kinds of applications, such as mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. The end-to-end network traffic thus becomes more important for traffic engineering. Usually, end-to-end traffic estimation is highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distrib...
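The abstract above is truncated, but the core Bayesian idea can be illustrated with a minimal conjugate update (this toy is our own illustration, not the paper's algorithm): treating observed link volumes as Gaussian with a known observation variance, the posterior over the mean traffic combines a prior belief with the measurements.

```python
def update_normal_mean(prior_mu, prior_var, obs, obs_var):
    """Conjugate Bayes update for the mean of a Gaussian whose
    observation variance obs_var is assumed known.
    Posterior precision = prior precision + n * observation precision."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + sum(obs) / obs_var)
    return post_mu, post_var
```

With many observations the posterior mean approaches the sample mean and the posterior variance shrinks; with few observations it stays near the prior, which is the behaviour that makes Bayesian estimators attractive for sparse traffic measurements.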

  19. Solvent Boundary Potentials for Hybrid QM/MM Computations Using Classical Drude Oscillators: A Fully Polarizable Model.

    Science.gov (United States)

    Boulanger, Eliot; Thiel, Walter

    2012-11-13

Accurate quantum mechanical/molecular mechanical (QM/MM) treatments should account for MM polarization and properly include long-range electrostatic interactions. We report on a development that covers both these aspects. Our approach combines the classical Drude oscillator (DO) model for the electronic polarizability of the MM atoms with the generalized solvent boundary potential (GSBP) and the solvated macromolecule boundary potential (SMBP). These boundary potentials (BPs) are designed to capture the long-range effects of the outer region of a large system on its interior. They employ a finite difference approximation to the Poisson-Boltzmann equation for computing electrostatic interactions and take into account outer-region bulk solvent through a polarizable dielectric continuum (PDC). This approach thus leads to fully polarizable three-layer QM/MM-DO/BP methods. As the mutual responses of each of the subsystems have to be taken into account, we propose efficient schemes to converge the polarization of each layer simultaneously. For molecular dynamics (MD) simulations using GSBP, this is achieved by considering the MM polarizable model as a dynamical degree of freedom, and hence contributions from the boundary potential can be evaluated for a frozen state of polarization at every time step. For geometry optimizations using SMBP, we propose a dual self-consistent field approach for relaxing the Drude oscillators to their ideal positions and converging the QM wave function with the proper boundary potential. The chosen coupling schemes are evaluated with a test system consisting of a glycine molecule in a water ball. Both boundary potentials are capable of properly reproducing the gradients at the inner-region atoms and the Drude oscillators.
We show that the effect of the Drude oscillators must be included in all terms of the boundary potentials to obtain accurate results and that the use of a high dielectric constant for the PDC does not lead to a polarization

  20. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  1. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper presents a new method that uses a Kd-tree to organise the data, a k-nearest-neighbour search to examine each point's neighbourhood, and an appropriately chosen distance threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm deletes gross errors in point cloud data while decreasing memory consumption and improving efficiency.

  2. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

Full Text Available Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper presents a new method that uses a Kd-tree to organise the data, a k-nearest-neighbour search to examine each point's neighbourhood, and an appropriately chosen distance threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm deletes gross errors in point cloud data while decreasing memory consumption and improving efficiency.
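The procedure described in the two records above maps naturally onto standard library tools. A minimal sketch (our own, assuming SciPy's cKDTree is available; the choice of k and the mean-plus-n-sigma threshold rule are illustrative, not the authors' exact criterion):

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_sigma=2.0):
    """Flag points whose mean distance to their k nearest neighbours
    exceeds the global mean by more than n_sigma standard deviations."""
    tree = cKDTree(points)
    # query k+1 neighbours: the first hit is the point itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + n_sigma * mean_knn.std()
    keep = mean_knn <= thresh
    return points[keep], keep
```

Because the Kd-tree query is O(n log n) overall, this avoids the quadratic memory and time cost of brute-force neighbour searches mentioned in the abstract.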

  3. Insights into the Lactonase Mechanism of Serum Paraoxonase 1 (PON1): Experimental and Quantum Mechanics/Molecular Mechanics (QM/MM) Studies.

    Science.gov (United States)

    Le, Quang Anh Tuan; Kim, Seonghoon; Chang, Rakwoo; Kim, Yong Hwan

    2015-07-30

Serum paraoxonase 1 (PON1) is a versatile enzyme for the hydrolysis of various substrates (e.g., lactones, phosphotriesters) and for the formation of the promising chemical platform γ-valerolactone. Elucidation of the PON1-catalyzed lactonase reaction mechanism is very important for understanding the enzyme function and for engineering this enzyme for specific applications. Kinetic studies and a hybrid quantum mechanics/molecular mechanics (QM/MM) method were used to investigate the PON1-catalyzed lactonase reaction of γ-butyrolactone (GBL) and (R)-γ-valerolactone (GVL). The activation energies obtained from the QM/MM calculations were in good agreement with the experiments. Interestingly, the QM/MM energy barriers at the MP2/3-21G(d,p) level for the lactonase of GVL and GBL were respectively 14.3-16.2 and 11.5-13.1 kcal/mol, consistent with the experimental values (15.57 and 14.73 kcal/mol derived from respective kcat values of 36.62 and 147.21 s(-1)). The QM/MM energy barriers at the MP2/6-31G(d) and MP2/6-31G(d,p) levels were also in relatively good agreement with the experiments. Importantly, the differences in the QM/MM energy barriers at the MP2 level with all investigated basis sets for the lactonase of GVL and GBL were in excellent agreement with the experiments (0.9-3.1 and 0.8 kcal/mol, respectively). A detailed mechanism for the PON1-catalyzed lactonase reaction was also proposed in this study.

  4. Invalid-point removal based on epipolar constraint in the structured-light method

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
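The invalidation criterion described above (distance from the retrieved projector image coordinate to the epipolar line) is a few lines of linear algebra. A sketch in numpy (our own illustration; the fundamental matrix F, the homogeneous coordinates, and the pixel threshold are assumed inputs):

```python
import numpy as np

def epipolar_distance(F, x_cam, x_proj):
    """Distance from a projector image coordinate x_proj to the epipolar
    line l = F @ x_cam induced by the matching camera pixel
    (all points in homogeneous coordinates)."""
    l = F @ x_cam                       # line (a, b, c): a*u + b*v + c = 0
    return abs(l @ x_proj) / np.hypot(l[0], l[1])

def valid_mask(F, cam_pts, proj_pts, thresh=1.0):
    """Keep only pixels whose retrieved PIC satisfies the epipolar
    constraint to within thresh (in projector pixels)."""
    return np.array([epipolar_distance(F, c, p) <= thresh
                     for c, p in zip(cam_pts, proj_pts)])
```

For a rectified camera-projector pair the epipolar lines are horizontal, so the criterion reduces to comparing row coordinates, which makes the geometry easy to sanity-check.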

  5. Density-Dependent Formulation of Dispersion-Repulsion Interactions in Hybrid Multiscale Quantum/Molecular Mechanics (QM/MM) Models

    DEFF Research Database (Denmark)

    Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob

    2018-01-01

Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdWTS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdWTS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We...

  6. Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

    NARCIS (Netherlands)

    Asadi, A.R.; Roos, C.

    2015-01-01

    In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.

  7. Marker list: QM183663 [PGDBj Registered plant list, Marker list, QTL list, Plant DB link and Genome analysis methods] [Archive]

    Lifescience Database Archive (English)

    Full Text Available CG GCTCCAACTTCAATGCCTGT Gold Ball Livingston x Yellow Pear|San Marzano x Gold Ball Livingston|T1693 x Yellow Pear ... chr11 fruit shape 1 10.1038/hdy.2013.45 23673388 ... QM183663 Solanum lycopersicum Solanaceae 11EP186 CAPS/dCAPS TGGAAGCTTTAAACTTGTCGTT

  8. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model based on the change-point detection method is proposed for urban road traffic sensor data in this paper. A first-order differential operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search, which divides the travel time series into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and forecast travel time. The results show that the model has high accuracy in travel time forecasting.
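A minimal sketch of the differencing-plus-segmentation idea from the abstract (our own illustration, not the paper's ARIMA pipeline; the rolling window length and the n-sigma rule are arbitrary choices): a change point is declared where the first-order difference jumps far outside the recent distribution of differences.

```python
def change_points(series, window=5, n_sigma=3.0):
    """Flag indices where the first-order difference deviates from the
    mean of the last `window` differences by more than n_sigma standard
    deviations. Returns indices into the original series."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    cps = []
    for i in range(window, len(diffs)):
        hist = diffs[i - window:i]
        mu = sum(hist) / window
        sd = (sum((d - mu) ** 2 for d in hist) / window) ** 0.5
        if abs(diffs[i] - mu) > n_sigma * sd + 1e-9:
            cps.append(i + 1)       # diff i is series[i+1] - series[i]
        # the detected segments between change points would then each be
        # fitted by a separate (e.g. ARIMA) forecasting model
    return cps
```

Each detected segment can then be modelled separately, which is the role the ARIMA model plays in the record above.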

  9. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    Science.gov (United States)

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations by applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, 100 times longer than that of conventional QM-based samplings. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
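The diffusion coefficient mentioned above is conventionally obtained from the Einstein relation, MSD(t) = 2 d D t, by fitting the mean-squared displacement of the ion against lag time. A generic numpy sketch (our own illustration, not the SCMP code; the restriction to short lags is a common statistical choice):

```python
import numpy as np

def diffusion_coefficient(traj, dt, dim=3):
    """Estimate D from the Einstein relation MSD(t) = 2 * dim * D * t by a
    linear fit of the mean-squared displacement through the origin.
    traj: (n_steps, dim) array of positions sampled every dt."""
    n = len(traj)
    lags = np.arange(1, max(2, n // 10))     # short lags: best statistics
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                    for lag in lags])
    t = lags * dt
    slope = np.dot(t, msd) / np.dot(t, t)    # least-squares slope through 0
    return slope / (2.0 * dim)
```

On a synthetic random walk with a known diffusion constant, the estimator recovers the input value to within statistical error, which is a useful sanity check before applying it to real trajectories.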

  10. Is automated kinetic measurement superior to end-point for advanced oxidation protein product?

    Science.gov (United States)

    Oguz, Osman; Inal, Berrin Bercik; Emre, Turker; Ozcan, Oguzhan; Altunoglu, Esma; Oguz, Gokce; Topkaya, Cigdem; Guvenen, Guvenc

    2014-01-01

Advanced oxidation protein product (AOPP) was first described as an oxidative protein marker in chronic uremic patients and measured with a semi-automatic end-point method. Subsequently, a kinetic method was introduced for the AOPP assay. We aimed to compare these two methods by adapting them to a chemistry analyzer, and to investigate the correlation between AOPP and fibrinogen (the key molecule responsible for human plasma AOPP reactivity), microalbumin, and HbA1c in patients with type II diabetes mellitus (DM II). The effects of EDTA- and citrate-anticoagulated tubes on these two methods were incorporated into the study. This study included 93 DM II patients (36 women, 57 men) with HbA1c levels ≥7%, who were admitted to the diabetes and nephrology clinics. The samples were collected in EDTA- and in citrate-anticoagulated tubes. Both methods were adapted to a chemistry analyzer and the samples were studied in parallel. In both types of samples, we found a moderate correlation between the kinetic and the end-point methods (r = 0.611 for citrate-anticoagulated, r = 0.636 for EDTA-anticoagulated, p = 0.0001 for both). We found a moderate correlation between fibrinogen-AOPP and microalbumin-AOPP levels only in the kinetic method (r = 0.644 and 0.520 for citrate-anticoagulated; r = 0.581 and 0.490 for EDTA-anticoagulated, p = 0.0001). We conclude that adaptation of the end-point method to automation is more difficult and it has a higher between-run CV%, while application of the kinetic method is easier and it may be used in oxidative stress studies.

  11. Yoink: An interaction-based partitioning API.

    Science.gov (United States)

    Zheng, Min; Waller, Mark P

    2018-05-15

Herein, we describe the implementation details of our interaction-based partitioning API (application programming interface) called Yoink for QM/MM modeling and fragment-based quantum chemistry studies. Interactions are detected by computing density descriptors such as reduced density gradient, density overlap regions indicator, and single exponential decay detector. Only molecules having an interaction with a user-definable QM core are added to the QM region of a hybrid QM/MM calculation. Moreover, a set of molecule pairs having density-based interactions within a molecular system can be computed in Yoink, and an interaction graph can then be constructed. Standard graph clustering methods can then be applied to construct fragments for further quantum chemical calculations. The Yoink API is licensed under Apache 2.0 and can be accessed via yoink.wallerlab.org. © 2018 Wiley Periodicals, Inc.
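The record above describes building an interaction graph over molecule pairs and clustering it into fragments. The simplest such clustering is connected components; a self-contained union-find sketch (our own illustration, not Yoink's API, with molecules represented as integer indices):

```python
from collections import defaultdict

def fragments(n_molecules, interacting_pairs):
    """Group molecules into fragments = connected components of the
    interaction graph, via union-find with path compression."""
    parent = list(range(n_molecules))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in interacting_pairs:
        union(i, j)

    groups = defaultdict(list)
    for i in range(n_molecules):
        groups[find(i)].append(i)
    return sorted(groups.values())
```

In practice the pair list would come from the density-based descriptors mentioned in the abstract, and more sophisticated graph clustering can replace connected components when fragments must be capped in size.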

  12. A multiscale quantum mechanics/electromagnetics method for device simulations.

    Science.gov (United States)

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method that incorporates quantum mechanics into electronic device modeling, with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  13. Multiple environment single system quantum mechanical/molecular mechanical (MESS-QM/MM) calculations. 1. Estimation of polarization energies.

    Science.gov (United States)

    Sodt, Alexander J; Mei, Ye; König, Gerhard; Tao, Peng; Steele, Ryan P; Brooks, Bernard R; Shao, Yihan

    2015-03-05

    In combined quantum mechanical/molecular mechanical (QM/MM) free energy calculations, it is often advantageous to have a frozen geometry for the quantum mechanical (QM) region. For such multiple-environment single-system (MESS) cases, two schemes are proposed here for estimating the polarization energy: the first scheme, termed MESS-E, involves a Roothaan step extrapolation of the self-consistent field (SCF) energy; whereas the other scheme, termed MESS-H, employs a Newton-Raphson correction using an approximate inverse electronic Hessian of the QM region (which is constructed only once). Both schemes are extremely efficient, because the expensive Fock updates and SCF iterations in standard QM/MM calculations are completely avoided at each configuration. They produce reasonably accurate QM/MM polarization energies: MESS-E can predict the polarization energy within 0.25 kcal/mol in terms of the mean signed error for two of our test cases, solvated methanol and solvated β-alanine, using the M06-2X or ωB97X-D functionals; MESS-H can reproduce the polarization energy within 0.2 kcal/mol for these two cases and for the oxyluciferin-luciferase complex, if the approximate inverse electronic Hessians are constructed with sufficient accuracy.
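The flavor of the MESS-H idea (a one-shot Newton-Raphson correction using an inverse electronic Hessian that is constructed only once) can be shown on a toy quadratic model; this is our own sketch, not the authors' implementation, and the "electronic" degrees of freedom q are stand-ins for the real SCF variables:

```python
import numpy as np

def polarization_energy(A, b, q0):
    """Exact relaxation energy of E(q) = 0.5 q^T A q - b^T q, i.e. the
    energy gained by relaxing from the frozen state q0 to the minimum."""
    E = lambda q: 0.5 * q @ A @ q - b @ q
    q_star = np.linalg.solve(A, b)          # full 'SCF' solution
    return E(q_star) - E(q0)

def newton_estimate(H_inv_approx, A, b, q0):
    """One Newton step with a fixed (approximate) inverse Hessian:
    dE ~= -0.5 * g^T H^-1 g, so no fresh solve is needed per
    MM configuration -- only a gradient evaluation."""
    g = A @ q0 - b                          # gradient at the frozen state
    return -0.5 * g @ H_inv_approx @ g
```

For a quadratic energy the estimate is exact when the true inverse Hessian is used, and degrades gracefully as the approximate Hessian departs from the true one, which mirrors the accuracy trade-off reported in the record.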

  14. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W 2 ) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W 2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of much more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H 1 space, but in a G 1 space. By introducing the generalized gradient smoothing operation properly, the requirement on function is now further weakened upon the already weakened requirement for functions in a H 1 space and G 1 space can be viewed as a space of functions with weakened weak (W 2 ) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W 2 formulation, in which displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W 2 formulation of generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. 
It was found that the CS-PIM possesses the following attractive properties: (1) It is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  15. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. Part I. Application of the Huzinaga equation.

    Science.gov (United States)

    Ferenczy, György G

    2013-04-05

Mixed quantum mechanics/quantum mechanics (QM/QM) and quantum mechanics/molecular mechanics (QM/MM) methods make computations feasible for extended chemical systems by separating them into subsystems that are treated at different levels of sophistication. In many applications, the subsystems are covalently bound and the use of frozen localized orbitals at the boundary is a possible way to separate the subsystems and to ensure a sensible description of the electronic structure near the boundary. A complication in these methods is that orthogonality between optimized and frozen orbitals has to be guaranteed, and this is usually achieved by an explicit orthogonalization of the basis set to the frozen orbitals. An alternative to this approach is proposed by calculating the wave-function from the Huzinaga equation, which guarantees orthogonality to the frozen orbitals without basis set orthogonalization. The theoretical background and the practical aspects of the application of the Huzinaga equation in mixed methods are discussed. Forces have been derived to perform geometry optimization with wave-functions from the Huzinaga equation. Various properties have been calculated by applying the Huzinaga equation for the central QM subsystem, representing the environment by point charges and using frozen strictly localized orbitals to connect the subsystems. It is shown that a two- to three-bond separation of the chemical or physical event from the frozen bonds allows a very good reproduction (typically around 1 kcal/mol) of standard Hartree-Fock-Roothaan results. The proposed scheme provides an appropriate framework for mixed QM/QM and QM/MM methods. Copyright © 2012 Wiley Periodicals, Inc.
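For context, the Huzinaga (Huzinaga-Cantu) equation referred to above is commonly written as follows (our paraphrase of the standard form, not taken from the record): the Fock operator is augmented with projection terms built from the frozen orbitals, so that its eigenfunctions come out orthogonal to the frozen space without any basis-set orthogonalization.

```latex
\left(\hat{F} - \hat{F}\hat{P} - \hat{P}\hat{F}\right)\,\lvert \varphi_i \rangle
  = \varepsilon_i \,\lvert \varphi_i \rangle ,
\qquad
\hat{P} = \sum_{f \,\in\, \text{frozen}} \lvert \phi_f \rangle \langle \phi_f \rvert .
```

Since the modified operator is Hermitian (each of F, P is Hermitian and (FP)† = PF), its eigenvectors for distinct eigenvalues are orthogonal, and the projection terms push the frozen orbitals out of the variational space.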

  16. A Study on the Effect of the Contact Point and the Contact Force of a Glass Fiber under End-Face Polishing Process

    Directory of Open Access Journals (Sweden)

    Ying-Chien Tsai

    2015-01-01

    Full Text Available The offset between the center lines of the polished end-face and the fiber core has a significant effect on coupling efficiency. The initial contact point and the contact force are two of the most important parameters that induce the offset. This study proposes an image-assistant method to find the initial contact point and a mathematical model to estimate the contact force when fabricating the double-variable-curvature end-face of a single-mode glass fiber. The repeatability of finding the initial contact point via the vision assistant program is 0.3 μm. Based on a large-deflection assumption, a mathematical model is developed to study the relationship between the contact force and the displacement of the lapping film. In order to verify the feasibility of the mathematical model, experiments, as well as DEFORM simulations, are carried out. The results show that the contact forces are almost linearly proportional to the feed amounts of the lapping film and the errors are less than 9%. By using the method developed in this study, the offset between the grinding end-face and the center line of the fiber core is within 0.15 to 0.35 μm.
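
    The near-linear force-feed relation reported above can be illustrated with a deliberately simplified model. The sketch below replaces the paper's large-deflection analysis with the elementary small-deflection cantilever formula; all material and geometry values are illustrative assumptions, not the paper's.

```python
# Simplified sketch: contact force vs. lapping-film feed for a fiber modeled
# as a cantilever. The paper uses a large-deflection model; here the
# small-deflection formula F = 3*E*I*delta / L**3 is used purely to
# illustrate the near-linear force-feed relation the authors report.
# All numbers are illustrative assumptions, not values from the paper.
import math

E = 72e9            # Young's modulus of silica glass, Pa (typical value)
d = 125e-6          # fiber cladding diameter, m (standard single-mode fiber)
L = 5e-3            # assumed free (overhang) length of the fiber, m
I = math.pi * d**4 / 64   # second moment of area of a circular cross-section

def contact_force(feed_m):
    """Contact force (N) for a given film feed, small-deflection cantilever."""
    return 3 * E * I * feed_m / L**3

for feed_um in (10, 20, 30):
    print(f"feed {feed_um} um -> force {contact_force(feed_um * 1e-6) * 1e3:.3f} mN")
```

    Doubling the feed doubles the force in this linear model, which is consistent with (but much cruder than) the paper's finding that force is almost linearly proportional to feed.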

  17. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  18. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems due to the high space-bandwidth product (SBP) that must be produced. This paper builds on the point-cloud method and exploits two properties of Fresnel diffraction: propagation is reversible along the propagating direction, and the fringe pattern of a point source, known as a Gabor zone plate, has spatial symmetry, so it can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at a dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) were set up to demonstrate the validity of the proposed method. Under the premise of ensuring the quality of the 3D reconstruction, the proposed method can be applied to shorten the computational time and improve computational efficiency.
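
    The symmetry argument can be made concrete with a small sketch: the zone-plate fringe of an on-axis point depends only on the radial distance, so only one quadrant needs computing and the rest follows by mirroring. All optical parameters below are illustrative assumptions, not values from the paper.

```python
# Sketch of the spatial-symmetry idea behind point-cloud CGH: the Fresnel
# fringe of an on-axis point source (a Gabor zone plate) is radially
# symmetric, so one quadrant suffices and the remainder is obtained by
# mirroring. Wavelength/pitch/distance values are illustrative assumptions.
import numpy as np

wavelength = 532e-9   # m, assumed green laser
pitch = 8e-6          # m, assumed pixel pitch
z = 0.1               # m, assumed point-to-hologram distance
N = 256               # hologram side length in pixels (even)

# Compute only the upper-left quadrant of the zone plate.
half = N // 2
y, x = np.mgrid[0:half, 0:half]
r2 = ((x - half + 0.5) * pitch) ** 2 + ((y - half + 0.5) * pitch) ** 2
quadrant = np.cos(np.pi * r2 / (wavelength * z))   # Fresnel phase fringe

# Assemble the full fringe by mirroring the quadrant in both axes.
top = np.hstack([quadrant, quadrant[:, ::-1]])
fringe = np.vstack([top, top[::-1, :]])

# Symmetry check: the full pattern equals its own left-right and up-down flips.
assert np.allclose(fringe, fringe[:, ::-1])
assert np.allclose(fringe, fringe[::-1, :])
print(fringe.shape)   # → (256, 256)
```

    Computing one quadrant reduces the per-point fringe work by roughly a factor of four, which is the kind of saving the N-LUT acceleration builds on.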

  19. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials)

    NARCIS (Netherlands)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; de Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-01-01

    Using potential surrogate end-points for overall survival (OS), such as Disease-Free Survival (DFS) or Progression-Free Survival (PFS), is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across

  20. QM/MM Calculations with deMon2k

    Directory of Open Access Journals (Sweden)

    Dennis R. Salahub

    2015-03-01

    Full Text Available The density functional code deMon2k employs a fitted density throughout (Auxiliary Density Functional Theory, which offers a great speed advantage without sacrificing necessary accuracy. Powerful Quantum Mechanical/Molecular Mechanical (QM/MM approaches are reviewed. Following an overview of the basic features of deMon2k that make it efficient while retaining accuracy, three QM/MM implementations are compared and contrasted. In the first, deMon2k is interfaced with the CHARMM MM code (CHARMM-deMon2k; in the second MM is coded directly within the deMon2k software; and in the third the Chemistry in Ruby (Cuby wrapper is used to drive the calculations. Cuby is also used in the context of constrained-DFT/MM calculations. Each of these implementations is described briefly; pros and cons are discussed and a few recent applications are described briefly. Applications include solvated ions and biomolecules, polyglutamine peptides important in polyQ neurodegenerative diseases, copper monooxygenases and ultra-rapid electron transfer in cryptochromes.

  1. An improved phase-locked loop method for automatic resonance frequency tracing based on static capacitance broadband compensation for a high-power ultrasonic transducer.

    Science.gov (United States)

    Dong, Hui-juan; Wu, Jian; Zhang, Guang-yu; Wu, Han-fu

    2012-02-01

    The phase-locked loop (PLL) method is widely used for automatic resonance frequency tracing (ARFT) of high-power ultrasonic transducers, which are usually vibrating systems with high mechanical quality factor (Qm). However, a heavily-loaded transducer usually has a low Qm because the load has a large mechanical loss. In this paper, a series of theoretical analyses is carried out to detail why the traditional PLL method could cause serious frequency tracing problems, including loss of lock, antiresonance frequency tracing, and large tracing errors. The authors propose an improved ARFT method based on static capacitance broadband compensation (SCBC), which is able to address these problems. Experiments using a generator based on the novel method were carried out using crude oil as the transducer load. The results obtained have demonstrated the effectiveness of the novel method, compared with the conventional PLL method, in terms of improved tracing accuracy (±9 Hz) and immunity to antiresonance frequency tracing and loss of lock.
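
    The abstract does not give the compensation circuit, but the usual idea behind static-capacitance compensation can be sketched on a Butterworth-Van Dyke transducer model: subtracting the current through the static capacitance C0 from the total transducer current leaves the motional current, whose zero-phase frequency is the series resonance the PLL should lock to. All component values below are illustrative assumptions, not values from the paper.

```python
# Toy sketch of static-capacitance compensation for resonance tracking on a
# Butterworth-Van Dyke transducer model: total current = motional branch
# (R1, L1, C1 in series) + static branch (C0). Removing the static-branch
# current j*w*C0*V leaves the motional current, which is in phase with the
# drive voltage exactly at the series resonance. Values are illustrative.
import cmath, math

C0 = 4e-9                          # static capacitance, F
L1, C1, R1 = 0.05, 2e-10, 500.0    # motional branch (low-Qm, heavy load)

def currents(f, V=1.0):
    w = 2 * math.pi * f
    Zm = R1 + 1j * (w * L1 - 1 / (w * C1))
    i_total = V / Zm + 1j * w * C0 * V
    i_motional = i_total - 1j * w * C0 * V   # SCBC: remove static branch
    return i_total, i_motional

f_s = 1 / (2 * math.pi * math.sqrt(L1 * C1))  # series resonance of motional arm
_, i_m = currents(f_s)
# After compensation the motional current is in phase with the drive voltage.
print(f"f_s = {f_s:.0f} Hz, phase = {math.degrees(cmath.phase(i_m)):.2f} deg")
```

    Without the subtraction, the static-branch current shifts the zero-phase frequency away from the series resonance, which is one way the problems described above (tracking errors, antiresonance locking) can arise in a low-Qm system.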

  2. End-Point Contact Force Control with Quantitative Feedback Theory for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2012-12-01

    Full Text Available Robot force control is an important issue for intelligent mobile robotics. The end-point stiffness of a robot is a key and open problem in the research community. The control strategies are mostly dependent on both the specifications of the task and the environment of the robot. Due to the limited stiffness of the end-effector, we may use the inherent torque to feed back the oscillations of the controlled force. This paper proposes an effective control strategy built around a controller using quantitative feedback theory. The nested loop controllers take into account the physical limitations of the system's inner variables and harmful interference. The biggest advantage of the method is its simplicity in both the design process and the implementation of the control algorithm in engineering practice. Taking a one-link manipulator as an example, numerical experiments are carried out to verify the proposed control method. The results show satisfactory performance.

  3. Distributions of chain ends and junction points in ordered block copolymers

    International Nuclear Information System (INIS)

    Mayes, A.M.; Johnson, R.D.; Russell, T.P.; Smith, S.D.; Satija, S.K.; Majkrzak, C.F.

    1993-01-01

    Chain configurations in ordered symmetric poly(styrene-b-methyl methacrylate) diblock copolymers were examined by neutron reflectivity. In a thin-film geometry the copolymers organize into lamellar microdomains oriented parallel to the substrate surface. The copolymers were synthesized with small fractions of deuterated segments at either the chain ends or centers. This selective labeling permitted characterization of the spatial distribution of chain ends and junction points normal to the plane of the film. From the reflectivity analysis, the junction points are found to be confined to the PS/PMMA interfacial regions. The chain ends, however, are well distributed through their respective domains, exhibiting only a weak maximum in concentration at the center of the domains

  4. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  5. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method in the application of three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of the point cloud, determined by the normal vector of the local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum feature.
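
    The per-point normal estimation that the method relies on is commonly done by fitting a local plane to each point's nearest neighbours. The sketch below shows that standard PCA formulation; a brute-force neighbour search stands in for the kd-tree purely for brevity, and nothing here is taken from the paper beyond the general idea.

```python
# Sketch of per-point normal estimation: fit a local plane to the k nearest
# neighbours of each point and take the eigenvector belonging to the
# smallest eigenvalue of their covariance matrix (the smallest-variance
# direction). Brute-force neighbour search replaces the kd-tree for brevity.
import numpy as np

def estimate_normals(points, k=8):
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # k nearest, incl. the point
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
        normals[i] = v[:, 0]                      # smallest-variance direction
    return normals

# Usage: points sampled from the plane z = 0 should get normals ~ (0, 0, ±1).
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(0, 1, (50, 2)), np.zeros(50)]
n = estimate_normals(pts)
print(np.abs(n[:, 2]).min())   # close to 1 for a flat patch
```

    In the paper's setting, tracking how these normals vary across the cloud is what separates datum features (e.g., the tank wall) from deformed regions.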

  6. Standardized End Point Definitions for Coronary Intervention Trials: The Academic Research Consortium-2 Consensus Document.

    Science.gov (United States)

    Garcia-Garcia, Hector M; McFadden, Eugène P; Farb, Andrew; Mehran, Roxana; Stone, Gregg W; Spertus, John; Onuma, Yoshinobu; Morel, Marie-Angèle; van Es, Gerrit-Anne; Zuckerman, Bram; Fearon, William F; Taggart, David; Kappetein, Arie-Pieter; Krucoff, Mitchell W; Vranckx, Pascal; Windecker, Stephan; Cutlip, Donald; Serruys, Patrick W

    2018-06-14

    The Academic Research Consortium (ARC)-2 initiative revisited the clinical and angiographic end point definitions in coronary device trials, proposed in 2007, to make them more suitable for use in clinical trials that include increasingly complex lesion and patient populations and incorporate novel devices such as bioresorbable vascular scaffolds. In addition, recommendations for the incorporation of patient-related outcomes in clinical trials are proposed. Academic Research Consortium-2 is a collaborative effort between academic research organizations in the United States and Europe, device manufacturers, and European, US, and Asian regulatory bodies. Several in-person meetings were held to discuss the changes that have occurred in the device landscape and in clinical trials and regulatory pathways in the last decade. The consensus-based end point definitions in this document are endorsed by the stakeholders of this document and strongly advocated for clinical trial purposes. This Academic Research Consortium-2 document provides further standardization of end point definitions for coronary device trials, incorporating advances in technology and knowledge. Their use will aid interpretation of trial outcomes and comparison among studies, thus facilitating the evaluation of the safety and effectiveness of these devices.

  7. The Purification Method of Matching Points Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    DONG Yang

    2017-02-01

    Full Text Available The traditional purification method of matching points usually uses a small number of the points as initial input. Though it can meet most of the requirements of point constraints, the iterative purification solution easily falls into local extrema, which results in the loss of correct matching points. To solve this problem, we introduce the principal component analysis method and use the whole point set as the initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.
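
    The abstract does not spell out the algorithm, so the sketch below shows only the generic shape of such a purification loop: start from the whole matched set, fit a global transform by least squares, discard the worst-fitting pairs, and repeat. This is an illustrative robust-solving loop in the spirit of the abstract, not the authors' PCA formulation.

```python
# Illustrative sketch of iterative purification of matched point pairs:
# start from the WHOLE matched set (as the abstract advocates), fit a global
# 2-D affine transform by least squares, discard the worst-fitting pairs,
# and repeat. A generic stand-in, not the paper's exact PCA-based method.
import numpy as np

def purify(src, dst, keep_frac=0.8, rounds=3):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    idx = np.arange(len(src))
    for _ in range(rounds):
        A = np.c_[src[idx], np.ones(len(idx))]        # rows [x, y, 1]
        T, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)
        resid = np.linalg.norm(A @ T - dst[idx], axis=1)
        keep = np.argsort(resid)[: max(3, int(len(idx) * keep_frac))]
        idx = idx[keep]
    return np.sort(idx)

# Usage: inliers follow a translation (+1, +2); two pairs are gross mismatches.
rng = np.random.default_rng(1)
src = rng.uniform(0, 10, (20, 2))
dst = src + [1.0, 2.0]
dst[3] += 50
dst[7] -= 40                                # inject false matches
kept = purify(src, dst)
print(3 in kept, 7 in kept)                 # expect: False False
```

    Starting from the full set rather than a small seed is what the abstract argues reduces the omission of correct matches.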

  8. Guidelines for the definition of time-to-event end points in renal cell cancer clinical trials: results of the DATECAN project†.

    Science.gov (United States)

    Kramar, A; Negrier, S; Sylvester, R; Joniau, S; Mulders, P; Powles, T; Bex, A; Bonnetain, F; Bossi, A; Bracarda, S; Bukowski, R; Catto, J; Choueiri, T K; Crabb, S; Eisen, T; El Demery, M; Fitzpatrick, J; Flamand, V; Goebell, P J; Gravis, G; Houédé, N; Jacqmin, D; Kaplan, R; Malavaud, B; Massard, C; Melichar, B; Mourey, L; Nathan, P; Pasquier, D; Porta, C; Pouessel, D; Quinn, D; Ravaud, A; Rolland, F; Schmidinger, M; Tombal, B; Tosi, D; Vauleon, E; Volpe, A; Wolter, P; Escudier, B; Filleron, T

    2015-12-01

    In clinical trials, the use of intermediate time-to-event end points (TEEs) is increasingly common, yet their choice and definitions are not standardized. This limits the usefulness for comparing treatment effects between studies. The aim of the DATECAN Kidney project is to clarify and recommend definitions of TEE in renal cell cancer (RCC) through a formal consensus method for end point definitions. A formal modified Delphi method was used for establishing consensus. From a 2006-2009 literature review, the Steering Committee (SC) selected 9 TEE and 15 events in the nonmetastatic (NM) and metastatic/advanced (MA) RCC disease settings. Events were scored on a range from 1 (totally disagree to include) to 9 (totally agree to include) in the definition of each end point. Rating Committee (RC) experts were contacted for the scoring rounds. From these results, final recommendations were established for selecting pertinent end points and the associated events. Thirty-four experts scored 121 events for 9 end points. Consensus was reached for 31%, 43% and 85% of events during the first, second and third rounds, respectively. The experts recommend the use of three end points in the NM setting and two in the MA setting. In the NM setting: disease-free survival (contralateral RCC, appearance of metastases, local or regional recurrence, death from RCC or protocol treatment), metastasis-free survival (appearance of metastases, regional recurrence, death from RCC); and local-regional-free survival (local or regional recurrence, death from RCC). In the MA setting: kidney cancer-specific survival (death from RCC or protocol treatment) and progression-free survival (death from RCC, local, regional, or metastatic progression). The consensus method revealed that intermediate end points have not been well defined, because all of the selected end points had at least one event definition for which no consensus was obtained.
These clarified definitions of TEE should become standard practice in

  9. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...

  10. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume 'A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title 'Parametric methods'.) This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models; maximum likelihood inference is studied in Section 4, and Bayesian inference in Section 5. As the development in computer technology and computational statistics continues, computationally-intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial point processes.

  11. QM/MM study of dislocation—hydrogen/helium interactions in α-Fe

    International Nuclear Information System (INIS)

    Zhao, Yi; Lu, Gang

    2011-01-01

    Impurities such as hydrogen (H) and helium (He) interact strongly with dislocations in metals. Using a multiscale quantum-mechanics/molecular-mechanics (QM/MM) approach, we have examined the interactions of the impurities (H and He) with dislocations (edge and screw) in α-Fe. The impurity trapping at the dislocation core is examined by calculating the impurity-dislocation binding energy and the impurity solution energy. We find that in general both H and He prefer the tetrahedral sites at the dislocation core, as well as in the bulk; the exceptions are due to deformed structures at the dislocation cores. Both H and He have a greater solution energy and binding energy to the edge dislocation than to the screw dislocation. The impurity pipe diffusion along the dislocation core is investigated using the QM/MM nudged-elastic-band method. We find that the diffusion barrier along the screw dislocation is lower than the bulk value for both H and He impurities. For the edge dislocation, although H has diffusion barriers similar to those in the bulk, He has much higher diffusion energy barriers compared with the bulk. Finally, we have examined the impurity effect on the dislocation mobility. We find that both H and He can lower the Peierls energy barrier for the screw dislocation significantly. The H-enhanced dislocation mobility is consistent with experimental observations

  12. A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin

    2014-01-01

    Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface was partitioned into smaller areas and the probability of damage presence in each area was evaluated. The A0 mode of the Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Point-scan nondestructive methods can then be used once these areas are found to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damages, including a through-thickness hole and a crack, in aluminum plates. The obtained experimental results demonstrated the high potential of the developed method in defining the most probable locations of damage in structures.
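
    The exact index used in the paper is not given in the abstract; one plausible, clearly labelled stand-in is a normalized energy deviation of each area's response from a healthy baseline, sketched below.

```python
# Sketch of an energy-based damage presence probability index (DPPI): each
# scanned area is scored by the energy of the difference between its captured
# Lamb-wave response and a healthy baseline, normalized across areas. This
# normalized energy deviation is an illustrative stand-in, not the paper's
# exact definition.
import numpy as np

def dppi(baselines, responses):
    """Return one index per area, normalized to roughly [0, 1] across areas."""
    e = np.array([np.sum((r - b) ** 2) for b, r in zip(baselines, responses)])
    return (e - e.min()) / (e.max() - e.min() + 1e-12)

# Usage: area 2 carries an extra scattered wave packet -> highest DPPI.
t = np.linspace(0, 1, 500)
base = [np.sin(2 * np.pi * 50 * t)] * 4
resp = [b.copy() for b in base]
resp[2] = resp[2] + 0.3 * np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
scores = dppi(base, resp)
print(scores.argmax())   # → 2
```

    The highest-scoring areas are then handed over to a point-scan method for detailed inspection, which is the complementary workflow the paper proposes.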

  13. The physicochemical essence of the purine·pyrimidine transition mismatches with Watson-Crick geometry in DNA: A·C* versa A*·C. A QM and QTAIM atomistic understanding.

    Science.gov (United States)

    Brovarets', Ol'ha O; Hovorun, Dmytro M

    2015-01-01

    It was established for the first time by DFT and MP2 quantum-mechanical (QM) methods, both in vacuum and in the continuum with a low dielectric constant (ε = 4), typical for hydrophobic interfaces of specific protein-nucleic acid interactions, that the tautomerisation of the biologically important adenine · cytosine* (A · C*) mismatched DNA base pair, formed by the amino tautomer of the A and the imino mutagenic tautomer of the C, into the A* · C base mispair (∆G = 2.72 kcal mol(-1) obtained at the MP2 level of QM theory in the continuum with ε = 4), formed by the imino mutagenic tautomer of the A and the amino tautomer of the C, proceeds via the asynchronous concerted double proton transfer along two antiparallel H-bonds through the transition state (TSA · C* ↔ A* · C). The limiting stage of the A · C* → A* · C tautomerisation is the final proton transfer along the intermolecular N6H · · · N4 H-bond. It was found that the A · C*/A* · C DNA base mispairs with Watson-Crick geometry are associated by the N6H · · · N4/N4H · · · N6, N3H · · · N1/N1H · · · N3 and C2H · · · O2 H-bonds, respectively, while the TSA · C* ↔ A* · C is joined by the N6-H-N4 covalent bridge and the N1H · · · N3 and C2H · · · O2 H-bonds. It was revealed that the A · C* ↔ A* · C tautomerisation is assisted by the true C2H · · · O2 H-bond, which, in contrast to the two other conventional H-bonds, exists along the entire intrinsic reaction coordinate (IRC) range, while becoming stronger at the transition from vacuum to the continuum with ε = 4. To better understand the behavior of the intermolecular H-bonds and base mispairs along the IRC of the A · C* ↔ A* · C tautomerisation, the profiles of their electron-topological, energetical, geometrical, polar and charge characteristics are reported in this study. It was established based on the profiles of the H-bond energies that all three H-bonds are cooperative, mutually

  14. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point algorithm is adopted to derive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
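
    The core loop of Iterative Closest Point can be sketched in a few lines: alternate nearest-neighbour correspondence with a closed-form rigid (Kabsch/SVD) update. The paper's multithreaded, affine, contour-based variant is more elaborate; this shows only the basic mechanism, with a brute-force neighbour search for brevity.

```python
# Minimal sketch of rigid ICP on point clouds: alternate nearest-neighbour
# matching (brute force, for brevity) with a closed-form SVD (Kabsch) update
# of rotation R and translation t. Illustrative only; the paper's method is
# multithreaded and estimates an affine transform.
import numpy as np

def icp(src, dst, iters=20):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    R, t = np.eye(src.shape[1]), np.zeros(src.shape[1])
    cur = src.copy()
    for _ in range(iters):
        # 1. nearest-neighbour correspondences
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        match = dst[d.argmin(axis=1)]
        # 2. best rigid transform (Kabsch) for these correspondences
        mu_s, mu_d = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (match - mu_d))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # accumulate the transform
    return R, t

# Usage: recover a known small rotation + translation of a 2-D grid.
pts = np.array([[i, j] for i in range(5) for j in range(5)], float)
ang = 0.05
c, s = np.cos(ang), np.sin(ang)
R_true = np.array([[c, -s], [s, c]])
moved = pts @ R_true.T + [0.1, -0.05]
R, t = icp(pts, moved)
print(np.allclose(pts @ R.T + t, moved, atol=1e-8))   # → True
```

    In practice the pairwise-distance step dominates the cost, which is exactly where a kd-tree and the paper's multithreading pay off.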

  15. A Comparison of Quantum and Molecular Mechanical Methods to Estimate Strain Energy in Druglike Fragments.

    Science.gov (United States)

    Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto

    2017-06-26

    Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.
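
    The quantity being compared across all these methods has a simple operational definition: the strain of an observed dihedral is its scan energy minus the scan minimum. The sketch below illustrates this bookkeeping on a toy cosine-series torsion profile, which is an illustrative stand-in for a QM or force-field scan, not data from the paper.

```python
# Sketch of estimating torsional strain from a relaxed torsion scan: the
# strain of an observed dihedral angle is its scan energy minus the global
# scan minimum. The cosine-series profile below is an illustrative stand-in
# for a QM or force-field scan, not data from the paper.
import math

def scan_energy(angle_deg):
    """Toy three-fold torsion profile (kcal/mol), minima at +/-60 and 180 deg."""
    a = math.radians(angle_deg)
    return 1.4 * (1 + math.cos(3 * a))

angles = range(-180, 181, 10)                 # 10-degree scan grid
e_min = min(scan_energy(a) for a in angles)

def strain(angle_deg):
    return scan_energy(angle_deg) - e_min

print(round(strain(60), 3), round(strain(0), 3))   # → 0.0 2.8
```

    A crystal conformation sitting 2-3 kcal/mol above the scan minimum would be flagged as strained; the paper's point is that the flag is only as reliable as the method generating the profile.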

  16. Pharmaceutics, Drug Delivery and Pharmaceutical Technology: A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.

    Science.gov (United States)

    Low, Ariana; Kok, Si Ling; Khong, Yuetmei; Chan, Sui Yung; Gokhale, Rajeev

    2015-11-01

    No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign relative contributions of the total variability from the operator, sample or the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during product developmental stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 104:3893-3903, 2015.

  17. Simulation-based investigation of the paired-gear method in cod-end selectivity studies

    DEFF Research Database (Denmark)

    Herrmann, Bent; Frandsen, Rikke; Holst, René

    2007-01-01

    In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...

  18. Efficient Computational Research Protocol to Survey Free Energy Surface for Solution Chemical Reaction in the QM/MM Framework: The FEG-ER Methodology and Its Application to Isomerization Reaction of Glycine in Aqueous Solution.

    Science.gov (United States)

    Takenaka, Norio; Kitamura, Yukichi; Nagaoka, Masataka

    2016-03-03

    In solution chemical reactions, we often need to consider a multidimensional free energy (FE) surface (FES), which is analogous to a Born-Oppenheimer potential energy surface. To survey the FES, an efficient computational research protocol is proposed within the QM/MM framework: (i) we first obtain the stable states (or transition states) involved by optimizing their structures on the FES in a stepwise fashion, finally using the free energy gradient (FEG) method, and then (ii) we directly obtain the FE differences among arbitrary states on the FES, efficiently, by employing the QM/MM method with energy representation (ER), i.e., the QM/MM-ER method. To validate the calculation accuracy and efficiency, we applied the above FEG-ER methodology to a typical isomerization reaction of glycine in aqueous solution, and reproduced quite satisfactorily the experimental value of the reaction FE. Further, it was found that the structural relaxation of the solute in the QM/MM force field is not negligible for estimating the FES correctly. We believe that the present research protocol should become prevalent as a computational strategy and will play a promising and important role in solution chemistry toward solution reaction ergodography.
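
    The free energy gradient that drives step (i) is, in its standard form (stated here for orientation; the abstract does not reproduce it), the solvent-ensemble average of the gradient of the potential energy with respect to the solute coordinates:

```latex
% q: solute (QM) coordinates; x: solvent (MM) coordinates
\frac{\partial G(\mathbf{q})}{\partial \mathbf{q}}
  \;=\;
\left\langle
  \frac{\partial V(\mathbf{q}, \mathbf{x})}{\partial \mathbf{q}}
\right\rangle_{\mathbf{x}}
```

    Stable states and transition states are then located as stationary points of G(q) by following this averaged force, just as geometries are optimized on a potential energy surface; step (ii) supplies the FE differences between the states so located.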

  19. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    Science.gov (United States)

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD from the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural
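The refinement loop described above, comparing predicted and experimental chemical shifts probabilistically inside an MCMC simulation, can be sketched as follows. This is a toy illustration, not ProCS15: the linear `predict_shifts` stand-in, the error `sigma`, and the synthetic "experimental" shifts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a QM-based shift predictor (hypothetical): predicted
# shifts are a linear function of a 3-component "conformation" vector x.
A = rng.normal(size=(10, 3))

def predict_shifts(x):
    return A @ x

# Synthetic "experimental" shifts generated from a known conformation.
exp_shifts = predict_shifts(np.array([0.5, -1.0, 0.2])) + rng.normal(0, 0.1, 10)
sigma = 0.5  # assumed shift-prediction error (ppm); a Gaussian likelihood

def neg_log_likelihood(x):
    r = predict_shifts(x) - exp_shifts
    return 0.5 * np.sum(r**2) / sigma**2

def metropolis(x0, steps=5000, step_size=0.1):
    """Metropolis MCMC on the shift-based pseudo-energy."""
    x, e = x0.copy(), neg_log_likelihood(x0)
    chain, accepted = [], 0
    for _ in range(steps):
        xp = x + rng.normal(0, step_size, x.size)   # propose a small move
        ep = neg_log_likelihood(xp)
        if ep < e or rng.random() < np.exp(e - ep): # Metropolis criterion
            x, e, accepted = xp, ep, accepted + 1
        chain.append(x.copy())
    return np.array(chain), accepted / steps

chain, acc = metropolis(np.zeros(3))
```

A real refinement would propose structural moves and anneal the temperature; the acceptance rule and shift-based likelihood are the part this sketch keeps.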

  20. Evolution of Randomized Trials in Advanced/Metastatic Soft Tissue Sarcoma: End Point Selection, Surrogacy, and Quality of Reporting.

    Science.gov (United States)

    Zer, Alona; Prince, Rebecca M; Amir, Eitan; Abdul Razak, Albiruni

    2016-05-01

    Randomized controlled trials (RCTs) in soft tissue sarcoma (STS) have used varying end points. The surrogacy of intermediate end points, such as progression-free survival (PFS), response rate (RR), and 3-month and 6-month PFS (3moPFS and 6moPFS) with overall survival (OS), remains unknown. The quality of efficacy and toxicity reporting in these studies is also uncertain. A systematic review of systemic therapy RCTs in STS was performed. Surrogacy between intermediate end points and OS was explored using weighted linear regression for the hazard ratio for OS with the hazard ratio for PFS or the odds ratio for RR, 3moPFS, and 6moPFS. The quality of reporting for efficacy and toxicity was also evaluated. Fifty-two RCTs published between 1974 and 2014, comprising 9,762 patients, met the inclusion criteria. There were significant correlations between PFS and OS (R = 0.61) and between RR and OS (R = 0.51). Conversely, there were nonsignificant correlations between 3moPFS and 6moPFS with OS. A reduction in the use of RR as the primary end point was observed over time, favoring time-based events (P for trend = .02). In 14% of RCTs, the primary end point was not met, but the study was reported as being positive. Toxicity was comprehensively reported in 47% of RCTs, whereas 14% inadequately reported toxicity. In advanced STS, PFS and RR seem to be appropriate surrogates for OS. There is poor correlation between OS and both 3moPFS and 6moPFS. As such, caution is urged with the use of these as primary end points in randomized STS trials. The quality of toxicity reporting and interpretation of results is suboptimal. © 2016 by American Society of Clinical Oncology.
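The surrogacy analysis described above (weighted linear regression of the hazard ratio for OS on the hazard ratio for PFS across trials) can be illustrated on synthetic data. The effect sizes, noise level, and trial-size weights below are invented for the sketch; only the weighted-regression technique is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial log hazard ratios: the OS effect loosely tracks
# the PFS effect, as in a surrogate relationship.
log_hr_pfs = rng.normal(-0.2, 0.3, 52)
log_hr_os = 0.6 * log_hr_pfs + rng.normal(0, 0.1, 52)
weights = rng.integers(50, 500, 52).astype(float)   # trial sizes as weights

# Weighted least squares: scale both sides of X beta = y by sqrt(w).
w = np.sqrt(weights)
X = np.column_stack([np.ones_like(log_hr_pfs), log_hr_pfs])
beta, *_ = np.linalg.lstsq(X * w[:, None], log_hr_os * w, rcond=None)

# Weighted correlation coefficient R between the two end points.
def wmean(a, wt):
    return np.sum(a * wt) / np.sum(wt)

dp = log_hr_pfs - wmean(log_hr_pfs, weights)
do = log_hr_os - wmean(log_hr_os, weights)
R = wmean(dp * do, weights) / np.sqrt(wmean(dp**2, weights) * wmean(do**2, weights))
```

With these toy parameters the fitted slope recovers the generating value (0.6) and R is high, mirroring the kind of PFS-OS correlation the review reports.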

  1. Finite Elements on Point Based Surfaces

    NARCIS (Netherlands)

    Clarenz, U.; Rumpf, M.; Telea, A.

    2004-01-01

    We present a framework for processing point-based surfaces via partial differential equations (PDEs). Our framework efficiently and effectively brings well-known PDE-based processing techniques to the field of point-based surfaces. Our method is based on the construction of local tangent planes and

  2. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

Full Text Available The traditional methods of diagnosing dam service status are suited to a single measuring point. They reflect only the local status of a dam and do not merge multisource data effectively, so they are not suitable for diagnosing overall service status. This study proposes a new multiple-point method for diagnosing dam service status based on a joint distribution function. The function, incorporating monitoring data from multiple points, can be established with a t-copula function. The possibility, an important fused value over different measuring combinations, can then be calculated, and a corresponding diagnostic criterion is established using classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.
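A minimal sketch of the fusion idea, substituting a Gaussian copula for the paper's t-copula for simplicity: transform each measuring point's margin to standard normal, estimate the copula correlation, and flag an alarm when the joint probability of an observation this extreme falls below a small threshold. The measurement series, correlation structure, and 5% threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)

# Hypothetical displacement histories at three dam measuring points (mm),
# correlated through a shared load term.
load = rng.normal(0, 1, 1000)
X = np.column_stack([load + rng.normal(0, 0.3, 1000) for _ in range(3)])

# Transform each margin to standard normal via its empirical CDF (ranks),
# then estimate the correlation matrix of the Gaussian copula.
ranks = X.argsort(axis=0).argsort(axis=0)
U = (ranks + 0.5) / len(X)
Z = norm.ppf(U)
corr = np.corrcoef(Z, rowvar=False)

def joint_tail_probability(x_new):
    """P(all points simultaneously below x_new) under the fitted copula."""
    u = np.array([np.mean(X[:, j] <= x_new[j]) for j in range(X.shape[1])])
    z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    return multivariate_normal(mean=np.zeros(3), cov=corr).cdf(z)

# Small-probability criterion: alarm if the fused probability is under 5%.
p = joint_tail_probability(np.array([-2.0, -2.0, -2.0]))
alarm = p < 0.05
```

A t-copula would additionally fit a degrees-of-freedom parameter to capture tail dependence; the fusion-and-threshold logic is otherwise the same.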

  3. Medication overuse headache: a critical review of end points in recent follow-up studies

    DEFF Research Database (Denmark)

    Hagen, Knut; Jensen, Rigmor; Bøe, Magne Geir

    2010-01-01

No guidelines for performing and presenting the results of studies on patients with medication overuse headache (MOH) exist. The aim of this study was to review long-term outcome measures in follow-up studies published in 2006 or later. We included MOH studies with >6 months duration presenting a minimum of one predefined end point. In total, nine studies were identified. The 1,589 MOH patients (22% men) had an overall mean frequency of 25.3 headache days/month at baseline. Headache days/month at the end of follow-up was reported in six studies (mean 13.8 days/month). The decrease was more ... in headache index at the end of follow-up were reported in only one and two of nine studies, respectively. The present review demonstrated a lack of uniform end points used in recently published follow-up studies. Guidelines for presenting follow-up data on MOH are needed and we propose end points ...

  4. Criteria for use of composite end points for competing risks-a systematic survey of the literature with recommendations.

    Science.gov (United States)

    Manja, Veena; AlBashir, Siwar; Guyatt, Gordon

    2017-02-01

Composite end points are frequently used in reports of clinical trials. One rationale for the use of composite end points is to account for competing risks. In the presence of competing risks, the event rate of a specific event depends on the rates of other competing events. One proposed solution is to include all important competing events in one composite end point. Clinical trialists require guidance regarding when this approach is appropriate. Our aims were to identify publications describing criteria for use of composite end points for competing risks and to offer guidance regarding when a composite end point is appropriate on the basis of competing risks. We searched MEDLINE, CINAHL, EMBASE, the Cochrane Central and Systematic Review databases including the Health Technology Assessment database, and the Cochrane Methodology Register from inception to April 2015, as well as candidate textbooks, to identify all articles providing guidance on this issue. Eligible publications explicitly addressed the issue of a composite outcome to address competing risks. Two reviewers independently screened the titles and abstracts for full-text review, independently reviewed full-text publications, and abstracted the specific criteria authors offered for use of composite end points to address competing risks. Of 63,645 titles and abstracts, 166 proved potentially relevant, of which 43 publications were included in the final review. Most publications note competing risks as a reason for using composite end points without further elaboration. None of the articles or textbook chapters provide specific criteria for use of composite end points for competing risks. Some advocate using composite end points to avoid bias due to competing risks, and others suggest that composite end points seldom or never be used for this purpose. We recommend using composite end points for competing risks only if the competing risk is plausible and if it occurs with sufficiently high frequency to influence the interpretation

  5. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interface tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over recent years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data; a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM®, extending its standard VOF implementation, the interFoam solver.
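A toy version of the geometric idea, curvature estimated directly from a local cloud of interface points, can be written with an algebraic least-squares circle fit in 2-D (the paper's 3-D machinery and grid projection are omitted; all data below are synthetic):

```python
import numpy as np

def curvature_from_points(pts):
    """Estimate curvature of a 2-D interface patch from a point cloud via
    an algebraic (Kasa) circle fit: x^2 + y^2 + D x + E y + F = 0."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0          # fitted circle centre
    R = np.sqrt(cx**2 + cy**2 - F)       # fitted radius
    return 1.0 / R                       # curvature = 1 / R

# Sample an arc of a circle of radius 2 (exact curvature 0.5) with mild noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
pts += rng.normal(0, 1e-3, pts.shape)
kappa = curvature_from_points(pts)       # close to 0.5
```

The fit is linear in (D, E, F), so it is cheap enough to run at every cloud point, which is the property a point-cloud curvature scheme relies on.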

  6. Guidelines for time-to-event end point definitions in breast cancer trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Gourgou-Bourgade, S; Cameron, D; Poortmans, P; Asselain, B; Azria, D; Cardoso, F; A'Hern, R; Bliss, J; Bogaerts, J; Bonnefoi, H; Brain, E; Cardoso, M J; Chibaudel, B; Coleman, R; Cufer, T; Dal Lago, L; Dalenc, F; De Azambuja, E; Debled, M; Delaloge, S; Filleron, T; Gligorov, J; Gutowski, M; Jacot, W; Kirkove, C; MacGrogan, G; Michiels, S; Negreiros, I; Offersen, B V; Penault Llorca, F; Pruneri, G; Roche, H; Russell, N S; Schmitt, F; Servent, V; Thürlimann, B; Untch, M; van der Hage, J A; van Tienhoven, G; Wildiers, H; Yarnold, J; Bonnetain, F; Mathoulin-Pélissier, S; Bellera, C; Dabakuyo-Yonli, T S

    2015-05-01

Using surrogate end points for overall survival, such as disease-free survival, is increasingly common in randomized controlled trials. However, the definitions of several of these time-to-event (TTE) end points are imprecise, which limits interpretation and cross-trial comparisons. The estimation of treatment effects may be directly affected by the definitions of end points. The DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for randomized cancer clinical trials (RCTs) in breast cancer. A literature review was carried out to identify TTE end points (primary or secondary) reported in publications of randomized trials or guidelines. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points based on a validated consensus method that formalizes the degree of agreement among experts. Recommended guidelines for the definitions of TTE end points commonly used in RCTs for breast cancer are provided for non-metastatic and metastatic settings. The use of standardized definitions should facilitate comparisons of trial results and improve the quality of trial design and reporting. These guidelines could be of particular interest to those involved in the design, conduct, reporting, or assessment of RCTs. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study.

    Directory of Open Access Journals (Sweden)

    Laura K Erdman

Full Text Available Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5-98.8), 80.8% specificity (72.6-87.1), positive likelihood ratio 4.9 (3.4-7.1), negative likelihood ratio 0

  8. A polarizable QM/MM approach to the molecular dynamics of amide groups solvated in water

    Energy Technology Data Exchange (ETDEWEB)

    Schwörer, Magnus; Wichmann, Christoph; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians Universität München, Oettingenstr. 67, 80538 München (Germany)

    2016-03-21

    The infrared (IR) spectra of polypeptides are dominated by the so-called amide bands. Because they originate from the strongly polar and polarizable amide groups (AGs) making up the backbone, their spectral positions sensitively depend on the local electric fields. Aiming at accurate computations of these IR spectra by molecular dynamics (MD) simulations, which derive atomic forces from a hybrid quantum and molecular mechanics (QM/MM) Hamiltonian, here we consider the effects of solvation in bulk liquid water on the amide bands of the AG model compound N-methyl-acetamide (NMA). As QM approach to NMA we choose grid-based density functional theory (DFT). For the surrounding MM water, we develop, largely based on computations, a polarizable molecular mechanics (PMM) model potential called GP6P, which features six Gaussian electrostatic sources (one induced dipole, five static partial charge distributions) and, therefore, avoids spurious distortions of the DFT electron density in hybrid DFT/PMM simulations. Bulk liquid GP6P is shown to have favorable properties at the thermodynamic conditions of the parameterization and beyond. Lennard-Jones (LJ) parameters of the DFT fragment NMA are optimized by comparing radial distribution functions in the surrounding GP6P liquid with reference data obtained from a “first-principles” DFT-MD simulation. Finally, IR spectra of NMA in GP6P water are calculated from extended DFT/PMM-MD trajectories, in which the NMA is treated by three different DFT functionals (BP, BLYP, B3LYP). Method-specific frequency scaling factors are derived from DFT-MD simulations of isolated NMA. The DFT/PMM-MD simulations with GP6P and with the optimized LJ parameters then excellently predict the effects of aqueous solvation and deuteration observed in the IR spectra of NMA. As a result, the methods required to accurately compute such spectra by DFT/PMM-MD also for larger peptides in aqueous solution are now at hand.
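The appeal of Gaussian electrostatic sources, as used in the GP6P potential above, is that their potential stays finite at the origin while reducing to the point-charge potential at long range, which is what avoids spurious distortion of the nearby DFT electron density. A short sketch of that textbook result (charge, width, and distances are arbitrary illustrative values, in atomic units):

```python
import numpy as np
from scipy.special import erf

def gaussian_source_potential(q, sigma, r):
    """Electrostatic potential of a spherical Gaussian charge distribution
    of total charge q and width sigma at distance r: q * erf(r/(sqrt(2) sigma)) / r.
    Finite at the origin, approaches the point-charge q/r at large r."""
    return q * erf(r / (np.sqrt(2.0) * sigma)) / r

r = np.linspace(0.5, 10.0, 100)
phi = gaussian_source_potential(1.0, 0.4, r)   # smoothly decaying potential
```

A point charge would instead diverge as 1/r at short range, over-polarizing any quantum region that approaches it.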

  9. Protein structure refinement using a quantum mechanics-based chemical shielding predictor

    DEFF Research Database (Denmark)

    Bratholm, Lars Andersen; Jensen, Jan Halborg

    2017-01-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor...... of a protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic...

  10. Learning-based position control of a closed-kinematic chain robot end-effector

    Science.gov (United States)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1990-01-01

A trajectory control scheme whose design is based on learning theory, for a six-degree-of-freedom (DOF) robot end-effector built to study robotic assembly of NASA hardware in space, is presented. The control scheme consists of two control systems: the feedback control system and the learning control system. The feedback control system is designed using the concept of linearization about a selected operating point and the method of pole placement, so that the closed-loop linearized system is stabilized. The learning control system, consisting of PD-type learning controllers, provides additional inputs to improve the end-effector performance after each trial. Experimental studies performed on a 2-DOF end-effector built at CUA, for three tracking cases, show that actual trajectories approach desired trajectories as the number of trials increases. The tracking errors are substantially reduced after only five trials.

  11. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

Full Text Available Photovoltaic (PV) systems have great potential and are currently installed more widely than other renewable energy sources. However, a PV system cannot perform optimally because of its strong dependence on weather conditions; owing to this dependency, it does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these, the Perturb and Observe (P&O) method, is the most popular because of its simplicity, low cost, and fast tracking, but it deviates from the MPP under continuously changing weather conditions, especially rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), is proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
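The baseline P&O algorithm that TPPO improves upon is easy to sketch: perturb the operating voltage, observe the power, and reverse direction whenever power drops. The parabolic power curve, step size, and starting voltage below are arbitrary toy choices, not a real module model.

```python
# Toy PV power curve with a single maximum power point (hypothetical module):
# P(V) peaks at V_MPP = 17.0 V with 60 W.
V_MPP = 17.0

def pv_power(v):
    return max(0.0, 60.0 - 0.5 * (v - V_MPP) ** 2)

def perturb_and_observe(v0=10.0, dv=0.2, steps=200):
    """Classic P&O: keep perturbing in the direction that increased power."""
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(steps):
        v += direction * dv          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # observe: power dropped,
            direction = -direction   # so reverse the perturbation
        p_prev = p
    return v

v_final = perturb_and_observe()      # settles into oscillation near V_MPP
```

The steady-state oscillation around the MPP, and the loss of tracking when the curve itself shifts between two observations, are exactly the weaknesses the abstract attributes to P&O under fast-changing irradiance.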

  12. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    International Nuclear Information System (INIS)

    Zeng Xiancheng; Hu Hao; Hu Xiangqian; Yang Weitao

    2009-01-01

A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency in conformational sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential-of-mean-force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is carried out only with the QM subsystem fixed, which avoids 'on-the-fly' QM calculations and thereby overcomes the high computational cost of direct QM/MM MD sampling. In applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated by the direct QM/MM MD method. Two larger, biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.
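The kind of free energy difference such methods target can be illustrated with the simplest estimator, Zwanzig's exponential averaging, on a pair of harmonic toy potentials whose exact answer is known. This is a generic illustration of free energy estimation, not the QM/MM-MFEP algorithm itself; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
beta, k, d, c = 1.0, 1.0, 1.0, 0.5

# Two "states": U1 is U0 shifted by d and offset by c. The shift leaves the
# partition function unchanged, so the exact free energy difference is c.
def U0(x):
    return 0.5 * k * x**2

def U1(x):
    return 0.5 * k * (x - d)**2 + c

# Sample the U0 ensemble exactly (Boltzmann weight of a harmonic well
# is a Gaussian with variance 1/(beta*k)).
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k), 200_000)

# Zwanzig / exponential-averaging estimator:
#   Delta A = -(1/beta) * ln < exp(-beta * (U1 - U0)) >_0
dU = U1(x) - U0(x)
dA = -np.log(np.mean(np.exp(-beta * dU))) / beta   # should be close to c = 0.5
```

Real QM/MM applications replace the analytic sampling with MD in the MM environment and the toy potentials with QM(/MM) energies; the statistical estimator is the common core.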

  13. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  14. End-point impedance measurements across dominant and nondominant hands and robotic assistance with directional damping.

    Science.gov (United States)

    Erden, Mustafa Suphi; Billard, Aude

    2015-06-01

The goal of this paper is to perform end-point impedance measurements across dominant and nondominant hands while doing airbrush painting and to use the results for developing a robotic assistance scheme. We study airbrush painting because it resembles in many ways manual welding, a standard industrial task. The experiments are performed with the 7-degrees-of-freedom KUKA lightweight robot arm. The robot is controlled in admittance using a force sensor attached at the end-point, so as to act as a free mass and be passively guided by the human. For impedance measurements, a set of nine subjects perform 12 repetitions of airbrush painting, drawing a straight line on a cartoon placed horizontally on a table, while passively moving the airbrush mounted on the robot's end-point. We measure hand impedance during the painting task by generating sudden and brief external forces with the robot. The results show that on average the dominant hand displays larger impedance than the nondominant in the directions perpendicular to the painting line, with the most significant difference in the damping values in these directions. Based on this observation, we develop a "directional damping" scheme for robotic assistance and conduct a pilot study with 12 subjects to contrast airbrush painting with and without robotic assistance. Results show significant improvement in precision with both dominant and nondominant hands when using robotic assistance.
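The "directional damping" idea can be sketched as an anisotropic damping matrix built from the painting-line direction: low damping along the line (easy to move), high damping perpendicular to it (errors resisted). The damping gains below are invented for illustration; the paper's admittance-control details are omitted.

```python
import numpy as np

def directional_damping_force(velocity, line_direction, d_along=2.0, d_perp=25.0):
    """Assistive damping force: light damping along the painting line,
    heavy damping perpendicular to it (gains are illustrative)."""
    u = line_direction / np.linalg.norm(line_direction)
    P = np.outer(u, u)                               # projector onto the line
    D = d_along * P + d_perp * (np.eye(len(u)) - P)  # anisotropic damping matrix
    return -D @ velocity

# Equal speeds along and across a painting line oriented along x:
line = np.array([1.0, 0.0, 0.0])
f_along = directional_damping_force(np.array([0.1, 0.0, 0.0]), line)
f_perp = directional_damping_force(np.array([0.0, 0.1, 0.0]), line)
```

Because the perpendicular gain dominates, perpendicular motion is resisted roughly an order of magnitude more strongly, which is the mechanism behind the reported precision improvement.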

  15. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching ...
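The correlation measure underlying such matching methods is zero-mean normalized cross-correlation (NCC), which is invariant to affine intensity changes; MOCC's multiscale oriented-corner machinery is built on top of it and is omitted here. A minimal implementation on synthetic patches:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size patches.
    Returns a similarity score in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(5)
p = rng.random((11, 11))
score_same = ncc(p, p)               # identical patches correlate perfectly
score_bright = ncc(p, 2 * p + 10)    # unchanged under gain and offset
```

Plain NCC is not rotation or scale invariant; orienting and rescaling the patches before correlating, as MOCC does, is what restores those invariances.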

  16. Energy analysis of four dimensional extended hyperbolic Scarf I plus three dimensional separable trigonometric noncentral potentials using SUSY QM approach

    International Nuclear Information System (INIS)

    Suparmi, A.; Cari, C.; Deta, U. A.; Handhika, J.

    2016-01-01

The non-relativistic energies and wave functions of the extended hyperbolic Scarf I plus separable non-central shape-invariant potential in four dimensions are investigated using the Supersymmetric Quantum Mechanics (SUSY QM) approach. The three-dimensional separable non-central shape-invariant angular potential consists of trigonometric Scarf II, Manning-Rosen and Poschl-Teller potentials. The four-dimensional Schrodinger equation with separable shape-invariant non-central potential is reduced to four one-dimensional Schrodinger equations through the variable separation method. Using SUSY QM, the non-relativistic energies and radial wave functions are obtained from the radial Schrodinger equation, while the orbital quantum numbers and angular wave functions are obtained from the angular Schrodinger equations. The extended potential means there are perturbation terms in the potential, which cause a decrease in the energy spectra of the Scarf I potential. (paper)

  17. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, and a first multitude of conductive probe arms (101-101'''), each defining a proximal end and a distal end. The probe arms ... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector.

  18. Minimizing Barriers in Learning for On-Call Radiology Residents-End-to-End Web-Based Resident Feedback System.

    Science.gov (United States)

    Choi, Hailey H; Clark, Jennifer; Jay, Ann K; Filice, Ross W

    2018-02-01

    Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents' preliminary interpretation and the attendings' final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous "read-out" or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regards to cases where substantive discrepancies were identified.

  19. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    Science.gov (United States)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) is becoming increasingly important for a 3-D display in various applications including virtual reality. In the CGH, holographic fringe patterns are generated by numerically calculating them on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation based on the sparsity of CGH for 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using sparse FFT (sFFT). We observe the CGH of a layer of 3D objects is sparse so that dominant CGH is rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation.
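The per-layer field propagation at the core of layer-based CGH can be sketched with a standard Fresnel transfer-function step. This sketch uses a plain FFT rather than the paper's sparse FFT, and the grid size, wavelength, and propagation distance are arbitrary illustrative values.

```python
import numpy as np

def propagate_layer(u, wavelength, z, dx):
    """Fresnel transfer-function propagation of one object layer's complex
    field u (n x n, sample spacing dx) over distance z to the hologram plane.
    This per-layer step is the building block of layer-based CGH."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function (unimodular, so the step conserves energy).
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# One point source on a 256 x 256 layer, propagated 0.1 m at 633 nm.
u0 = np.zeros((256, 256), dtype=complex)
u0[128, 128] = 1.0
u1 = propagate_layer(u0, 633e-9, 0.1, 10e-6)
```

Summing such propagated fields over all depth layers gives the CGH fringe pattern; the sparsity exploited in the paper comes from each layer's spectrum being dominated by few significant coefficients.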

  20. The effect of adherence to statin therapy on cardiovascular mortality: quantification of unmeasured bias using falsification end-points

    Directory of Open Access Journals (Sweden)

    Maarten J. Bijlsma

    2016-04-01

Full Text Available Abstract Background To determine the clinical effectiveness of statins on cardiovascular mortality in practice, observational studies are needed. Control for confounding is essential in any observational study. Falsification end-points may be useful to determine if bias is present after adjustment has taken place. Methods We followed starters on statin therapy in the Netherlands aged 46 to 100 years over the period 1996 to 2012, from initiation of statin therapy until cardiovascular mortality or censoring. Within this group (n = 49,688; up to 16 years of follow-up), we estimated the effect of adherence to statin therapy (0 = completely non-adherent, 1 = fully adherent) on ischemic heart disease and cerebrovascular disease mortality (ICD10 codes I20-I25 and I60-I69), with respiratory and endocrine disease mortality (ICD10 codes J00-J99 and E00-E90) as falsification end points, controlling for demographic factors, socio-economic factors, birth cohort, adherence to other cardiovascular medications, and diabetes using time-varying Cox regression models. Results Falsification end-points indicated that a simpler model was less biased than a model with more controls. Adherence to statins appeared to be protective against cardiovascular mortality (HR: 0.70, 95% CI 0.61 to 0.81). Conclusions Falsification end-points helped detect overadjustment bias or bias due to competing risks, and thereby proved to be a useful technique in such a complex setting.

  1. QM/MM studies of cisplatin complexes with DNA dimer and octamer

    KAUST Repository

    Gkionis, Konstantinos

    2012-08-01

    Hybrid QM/MM calculations on adducts of cisplatin with DNA dimer and octamer are reported. Starting from the crystal structure of a cisplatin-DNA dimer complex and an NMR structure of a cisplatin-DNA octamer complex, several variants of the ONIOM approach are tested, all employing BHandH for the QM part and AMBER for MM. We demonstrate that a generic set of molecular mechanics parameters for description of Pt-coordination can be used within the subtractive ONIOM scheme without loss of accuracy, such that dedicated parameters for new platinum complexes may not be required. Comparison of optimised structures obtained with different strategies indicates that electrostatic embedding is vital for proper description of the complex, while inclusion of water molecules as explicit solvent further improves performance. The resulting DNA structural parameters are in good general agreement with the experimental structure obtained, particularly when the inherent variability in NMR-derived parameters is taken into account. © 2012 Elsevier B.V.
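The subtractive ONIOM combination used in this study follows the standard formula E(ONIOM) = E_MM(real) + E_QM(model) - E_MM(model), which is why generic MM parameters for the Pt coordination can cancel out. A trivial sketch with placeholder energy functions standing in for real QM (e.g. BHandH) and MM (e.g. AMBER) calls; the numeric values are invented:

```python
# Subtractive ONIOM(QM:MM) energy combination. The three stand-in functions
# below are hypothetical constants; a real calculation would dispatch to a
# QM code for the model region and an MM force field for both regions.
def e_qm_model(coords):
    """QM energy of the small model region (placeholder value)."""
    return -10.0

def e_mm_model(coords):
    """MM energy of the same model region (placeholder value)."""
    return -2.0

def e_mm_real(coords):
    """MM energy of the full (real) system (placeholder value)."""
    return -50.0

def oniom_energy(coords):
    """E(ONIOM) = E_MM(real) + E_QM(model) - E_MM(model)."""
    return e_mm_real(coords) + e_qm_model(coords) - e_mm_model(coords)

e = oniom_energy(None)   # -50 + (-10) - (-2) = -58.0
```

Because the model region's MM energy is both added (inside E_MM(real)) and subtracted (E_MM(model)), errors in MM parameters that affect only the model region largely cancel, which is the point made in the abstract.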

  2. Free-time and fixed end-point multi-target optimal control theory: Application to quantum computing

    International Nuclear Information System (INIS)

    Mishima, K.; Yamashita, K.

    2011-01-01

    Graphical abstract: The two-state Deutsch-Jozsa algorithm used to demonstrate the utility of free-time and fixed end-point multi-target optimal control theory. Research highlights: → Free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) was constructed. → The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. → The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. → The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. → The calculation examples show that our theory is useful for minor adjustment of the external fields. - Abstract: An extension of free-time and fixed end-point optimal control theory (FRFP-OCT) to monotonically convergent free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) is presented. The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. The calculation examples show that our theory is useful for minor adjustment of the external fields.

  3. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.
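A Function-Point-style complexity score of the kind proposed can be sketched as a weighted count of method elements. The element types and weights below are illustrative assumptions, not McLeod's actual representation model or calibration.

```python
# Hypothetical "Method Points" score in the spirit of Function Points:
# each kind of method element carries a weight, and the complexity score
# is the weighted sum of element counts. Weights are assumptions.

ELEMENT_WEIGHTS = {
    "deliverable": 4,   # models/documents the method produces
    "technique": 3,     # prescribed procedures
    "concept": 2,       # notational constructs the practitioner must learn
    "role": 1,          # participant roles defined by the method
}

def method_points(element_counts):
    """Weighted complexity score for a method description."""
    return sum(ELEMENT_WEIGHTS[kind] * n for kind, n in element_counts.items())

# toy comparison of two method fragments (counts are made up)
uml_subset = {"deliverable": 2, "technique": 3, "concept": 12, "role": 1}
ie_subset = {"deliverable": 2, "technique": 4, "concept": 6, "role": 2}

print(method_points(uml_subset))  # higher score -> heavier learning burden
print(method_points(ie_subset))
```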

  4. Centralized adjudication of cardiovascular end points in cardiovascular and noncardiovascular pharmacologic trials: a report from the Cardiac Safety Research Consortium.

    Science.gov (United States)

    Seltzer, Jonathan H; Turner, J Rick; Geiger, Mary Jane; Rosano, Giuseppe; Mahaffey, Kenneth W; White, William B; Sabol, Mary Beth; Stockbridge, Norman; Sager, Philip T

    2015-02-01

    This white paper provides a summary of presentations and discussions at a cardiovascular (CV) end point adjudication think tank cosponsored by the Cardiac Safety Research Consortium and the US Food and Drug Administration (FDA) that was convened at the FDA's White Oak headquarters on November 6, 2013. Attention was focused on the lack of clarity concerning the need for end point adjudication in both CV and non-CV trials: there is currently an absence of widely accepted academic or industry standards and a definitive regulatory policy on how best to structure and use clinical end point committees (CECs). This meeting therefore provided a forum for leaders in the fields of CV clinical trials and CV safety to develop a foundation of initial best practice recommendations for use in future CEC charters. Attendees included representatives from pharmaceutical companies, regulatory agencies, end point adjudication specialist groups, clinical research organizations, and active, academically based adjudicators. The manuscript presents recommendations from the think tank regarding when CV end point adjudication should be considered in trials conducted by cardiologists and by noncardiologists, as well as detailing key issues in the composition of a CEC and its charter. In addition, it presents several recommended best practices for the establishment and operation of CECs. The science underlying CV event adjudication is evolving, and additional research will be needed to continue to advance this science. This manuscript does not constitute regulatory guidance. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. BUILDING A RELATIONSHIP WITH THE CUSTOMER: A CRM VERSUS A QM PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Sandru Ioana Maria Diana

    2009-05-01

    Full Text Available Customer relationship management (CRM) and quality management (QM) both define the customer as being the focus of all business activities. The question arises on how these two concepts work together. In the change defined environment, where getting ahead

  6. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...

  7. Modified titrimetric determination of plutonium using photometric end-point detection

    International Nuclear Information System (INIS)

    Baughman, W.J.; Dahlby, J.W.

    1980-04-01

    A method used at LASL for the accurate and precise assay of plutonium metal was modified for the measurement of plutonium in plutonium oxides, nitrate solutions, and in other samples containing large quantities of plutonium in oxidized states higher than +3. In this modified method, the plutonium oxide or other sample is dissolved using the sealed-reflux dissolution method or other appropriate methods. Weighed aliquots, containing approximately 100 mg of plutonium, of the dissolved sample or plutonium nitrate solution are fumed to dryness with an HClO4-H2SO4 mixture. The dried residue is dissolved in dilute H2SO4, and the plutonium is reduced to plutonium(III) with zinc metal. The excess zinc metal is dissolved with HCl, and the solution is passed through a lead reductor column to ensure complete reduction of the plutonium to plutonium(III). The solution, with added ferroin indicator, is then titrated immediately with standardized ceric solution to a photometric end point. For the analysis of plutonium metal solutions, plutonium oxides, and nitrate solutions, the relative standard deviations are 0.06, 0.08, and 0.14%, respectively. Of the elements most likely to be found with the plutonium, only iron, neptunium, and uranium interfere. Small amounts of uranium and iron, which titrate quantitatively in the method, are determined by separate analytical methods, and suitable corrections are applied to the plutonium value. 4 tables, 4 figures
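The arithmetic behind such an assay can be illustrated under the usual one-electron stoichiometry Pu(III) + Ce(IV) → Pu(IV) + Ce(III). All volumes, concentrations, and the iron correction below are made-up numbers; a real assay applies further corrections (e.g. for neptunium and uranium) and titrant standardization.

```python
# Illustrative back-calculation for a cerimetric plutonium assay,
# assuming 1:1 consumption of Ce(IV) by Pu(III). Numbers are invented.

M_PU = 239.05  # g/mol, approximate atomic mass of Pu-239

def plutonium_mass_mg(v_ml, c_ce, n_fe_umol=0.0):
    """Pu mass (mg) from titrant volume, correcting for co-titrated Fe.

    v_ml      : ceric titrant volume to the photometric end point (mL)
    c_ce      : titrant concentration (mol/L)
    n_fe_umol : separately determined iron (umol), which also consumes Ce(IV)
    """
    n_total = v_ml * 1e-3 * c_ce          # mol Ce(IV) consumed
    n_pu = n_total - n_fe_umol * 1e-6     # subtract the interference
    return n_pu * M_PU * 1e3              # mg of plutonium

print(round(plutonium_mass_mg(4.20, 0.1000, n_fe_umol=2.0), 2))  # -> 99.92
```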

  8. Genome sequencing and transcriptome analysis of Trichoderma reesei QM9978 strain reveals a distal chromosome translocation to be responsible for loss of vib1 expression and loss of cellulase induction.

    Science.gov (United States)

    Ivanova, Christa; Ramoni, Jonas; Aouam, Thiziri; Frischmann, Alexa; Seiboth, Bernhard; Baker, Scott E; Le Crom, Stéphane; Lemoine, Sophie; Margeot, Antoine; Bidard, Frédérique

    2017-01-01

    vib1 expression is absent in QM9978. We propose that in T. reesei, as in Neurospora crassa, vib1 is involved in cellulase induction, although the exact mechanism remains to be elucidated. The data presented here show an example of a combined genome sequencing and transcriptomic approach to explain a specific trait, in this case the QM9978 cellulase-negative phenotype, and how it helps to better understand the mechanisms of cellulase gene regulation. When focusing on mutations at the single base-pair level, changes at the chromosome level can be easily overlooked, and through this work we provide an example that stresses the importance of the big picture of the genomic landscape during analysis of sequencing data.

  9. Determination of the Acidity of Oils Using Paraformaldehyde as a Thermometric End-Point Indicator

    Directory of Open Access Journals (Sweden)

    Carneiro Mário J. D.

    2002-01-01

    Full Text Available The determination of the acidity of oils by catalytic thermometric titrimetry using paraformaldehyde as the thermometric end-point indicator was investigated. The sample solvent was a 1:1 (v/v) mixture of toluene and 2-propanol and the titrant was 0.1 mol L-1 aqueous sodium hydroxide. Paraformaldehyde, being insoluble in the sample solvent, does not present the inconvenience of other indicators that change the properties of the solvent due to composition changes. The titration can therefore be done effectively in the same medium as the standard potentiometric and visual titration methods. The results of the application of the method to both non-refined and refined oils are presented herein. The proposed method has advantages in relation to the potentiometric method in terms of speed and simplicity.

  10. The End of Points

    Science.gov (United States)

    Feldman, Jo

    2018-01-01

    Have teachers become too dependent on points? This article explores educators' dependency on their points systems, and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…

  11. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations.

    Science.gov (United States)

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2018-01-01

    Free energy is the key quantity to describe the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots, the difficulties related to free energy calculation are ultimately related to the calculation of the entropy of the system, and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take solvent entropy into account implicitly in the solvation terms. More recently, outstanding advances in both implicit solvent models and entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We review briefly the basic theory and discuss the advancements in light of practical applications.
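The end-point decomposition discussed here, G ≈ ⟨U + W_solv⟩ − T·S, can be sketched in a few lines. The snapshot energies and the entropy value below are made-up numbers; in practice S is the hard-to-obtain configurational entropy that the review focuses on.

```python
# Minimal end-point free energy estimate: the enthalpy is the ensemble
# average of (potential energy + implicit-solvation term) over snapshots,
# and the entropic term -T*S must come from a separate estimator.
# All numbers below are illustrative, not simulation data.

def end_point_free_energy(potential_kcal, solvation_kcal, entropy_cal_per_K,
                          temperature=300.0):
    """Average enthalpy over snapshots minus T*S (kcal/mol)."""
    n = len(potential_kcal)
    enthalpy = sum(u + w for u, w in zip(potential_kcal, solvation_kcal)) / n
    return enthalpy - temperature * entropy_cal_per_K / 1000.0  # cal -> kcal

u = [-1052.1, -1048.7, -1050.3]   # snapshot potential energies (kcal/mol)
w = [-210.4, -212.9, -211.1]      # implicit-solvent solvation terms
print(round(end_point_free_energy(u, w, entropy_cal_per_K=95.0), 2))
```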

  12. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations

    Directory of Open Access Journals (Sweden)

    Federico Fogolari

    2018-02-01

    Full Text Available Free energy is the key quantity to describe the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots, the difficulties related to free energy calculation are ultimately related to the calculation of the entropy of the system, and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take solvent entropy into account implicitly in the solvation terms. More recently, outstanding advances in both implicit solvent models and entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We review briefly the basic theory and discuss the advancements in light of practical applications.

  13. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. II. Application of the local basis equation.

    Science.gov (United States)

    Ferenczy, György G

    2013-04-05

    The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable for deriving local basis nonorthogonal orbitals that minimize the energy of the system, and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent to those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave-function near the boundary is observed without bond charges, and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two-layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.

  14. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Full Text Available Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.
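The crop-and-reduce preprocessing idea can be sketched with plain lists: keep only the band of rows assumed to contain the road, then subsample to a CNN-friendly size. The crop bounds, target size, and nearest-neighbor subsampling are illustrative assumptions, not the paper's edge-detection pipeline.

```python
# Toy preprocessing for end-to-end control: crop a row band, downsample.

def crop_and_downsample(image, row_lo, row_hi, out_h, out_w):
    """Crop rows [row_lo, row_hi) and nearest-neighbor subsample."""
    cropped = image[row_lo:row_hi]
    h, w = len(cropped), len(cropped[0])
    return [[cropped[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

# 8x8 synthetic "frame": sky pixels (0) on top, road pixels (1) below
frame = [[0] * 8 for _ in range(4)] + [[1] * 8 for _ in range(4)]
small = crop_and_downsample(frame, row_lo=4, row_hi=8, out_h=2, out_w=4)
print(small)  # [[1, 1, 1, 1], [1, 1, 1, 1]]: only the road band survives
```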

  15. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are being incorporated in the multiobjective economic emission dispatch problem which minimizes wind-thermal electrical energy cost and emissions produced by fossil-fueled power plants, simultaneously. Large integration of wind energy sources necessitates an efficient model to cope with uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. 2m point estimated method handles the system uncertainties and renders the probability density function of desired variables efficiently. Moreover, a new population-based optimization algorithm called modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions are kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are represented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
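For independent Gaussian (zero-skewness) inputs, Hong's 2m point-estimate scheme evaluates the model at μ_k ± √m·σ_k, each point weighted 1/(2m), with the other variables held at their means. The cost function and numbers below are illustrative, not the paper's dispatch model.

```python
# Sketch of the 2m point-estimate method for the mean of f(X) with
# independent Gaussian inputs (skewness 0): two concentration points per
# variable at mu_k +/- sqrt(m)*sigma_k, weights 1/(2m).
import math

def pem_2m_mean(f, mus, sigmas):
    """Approximate E[f(X)] via Hong's 2m point-estimate method."""
    m = len(mus)
    xi = math.sqrt(m)
    total = 0.0
    for k in range(m):
        for sign in (+1.0, -1.0):
            x = list(mus)
            x[k] = mus[k] + sign * xi * sigmas[k]
            total += f(x) / (2 * m)
    return total

# toy "dispatch cost" in two uncertain wind powers (illustrative)
cost = lambda x: 3.0 * x[0] + 0.5 * x[1] ** 2
print(round(pem_2m_mean(cost, mus=[10.0, 4.0], sigmas=[2.0, 1.0]), 6))
```

For this quadratic cost the scheme is exact: E[cost] = 3·10 + 0.5·(4² + 1²) = 38.5.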

  16. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, the sliding behavior of the contact point between the specimen and supports was observed, the sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula to calculate the elastic contact deformation and the theoretical calculation of the sliding behavior of the contact point, a theoretical model to precisely describe the deflection and span length as a function of bending load was established. Moreover, a modular correction method of bending elastic modulus was proposed, via the comparison between the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus presented a monotonically decreasing tendency as the raw elastic modulus of materials increased. (technical note)
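For reference, the uncorrected modulus that the correction method starts from follows the standard three-point-bending relation E = F·L³/(48·δ·I), with I = b·h³/12 for a rectangular beam; contact-point sliding effectively perturbs both δ and L, which is what the proposed correction compensates for. The specimen numbers below are illustrative.

```python
# Raw (uncorrected) bending modulus from a three-point bending test.

def raw_bending_modulus(force_N, span_m, deflection_m, width_m, height_m):
    """E = F*L^3 / (48*delta*I) for a rectangular cross-section."""
    inertia = width_m * height_m ** 3 / 12.0   # second moment of area
    return force_N * span_m ** 3 / (48.0 * deflection_m * inertia)

# illustrative specimen: 100 N load, 40 mm span, 0.5 mm mid-span
# deflection, 10 mm x 2 mm cross-section
E = raw_bending_modulus(100.0, 0.04, 0.0005, 0.01, 0.002)
print(round(E / 1e9, 1))  # -> 40.0 (GPa)
```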

  17. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    International Nuclear Information System (INIS)

    Su Xiaoxing; Wang Yuesheng

    2010-01-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.
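The frequency-zooming idea behind the CZT can be illustrated by direct O(N·M) evaluation of X_k = Σ_n x[n]·z_k^{-n} on an arc of the unit circle between two chosen frequencies (production code would use Bluestein's fast algorithm). The signal and band edges below are made-up.

```python
# Direct chirp Z transform: zoom the spectrum of x onto m points in
# [f_lo, f_hi), i.e. sample the z-plane at z_k = A * W^{-k}.
import cmath, math

def czt(x, m, f_lo, f_hi, fs):
    """Return m zoomed spectrum samples of x between f_lo and f_hi."""
    a = cmath.exp(2j * math.pi * f_lo / fs)                  # arc start
    w = cmath.exp(-2j * math.pi * (f_hi - f_lo) / (fs * m))  # arc step
    out = []
    for k in range(m):
        zk = a * w ** -k                     # z_k on the unit circle
        out.append(sum(xn * zk ** -n for n, xn in enumerate(x)))
    return out

fs = 64.0
x = [math.cos(2 * math.pi * 10.0 * n / fs) for n in range(64)]  # 10 Hz tone
zoom = czt(x, m=16, f_lo=8.0, f_hi=12.0, fs=fs)   # 0.25 Hz zoom resolution
peak = max(range(16), key=lambda k: abs(zoom[k]))
print(8.0 + peak * (12.0 - 8.0) / 16)  # -> 10.0, the tone frequency
```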

  18. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-09-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  19. A proton point source produced by laser interaction with cone-top-end target

    International Nuclear Information System (INIS)

    Yu, Jinqing; Jin, Xiaolin; Zhou, Weimin; Zhao, Zongqing; Yan, Yonghong; Li, Bin; Hong, Wei; Gu, Yuqiu

    2012-01-01

    In this paper, we propose a proton point source produced by the interaction of a laser with a cone-top-end target and investigate it by two-dimensional particle-in-cell (2D-PIC) simulations, since proton point sources are well known for the higher spatial resolution they provide in proton radiography. Our results show that the relativistic electrons are guided to the rear of the cone-top-end target by the electrostatic charge-separation field and the self-generated magnetic field along the profile of the target. As a result, the peak magnitude of the sheath field at the rear surface of the cone-top-end target is higher compared to a common cone target. We test this scheme by 2D-PIC simulation and find that the resulting source has a diameter of 0.79λ0, an average energy of 9.1 MeV, and an energy spread of less than 35%.

  20. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
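The correlation family that MOCC builds on can be illustrated with zero-mean normalized cross-correlation on 1-D patches, which scores two equally sized patches in [-1, 1]; real MOCC adds multiscale oriented corner windows, which are not sketched here.

```python
# Zero-mean normalized cross-correlation (NCC) between two patches.
import math

def ncc(a, b):
    """Similarity in [-1, 1]; invariant to gain and offset changes."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

patch = [10, 20, 30, 40]
brighter = [60, 70, 80, 90]   # same structure, different illumination
inverted = [40, 30, 20, 10]
print(ncc(patch, brighter))   # 1.0: illumination change does not matter
print(ncc(patch, inverted))   # -1.0: anti-correlated structure
```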

  1. Information resources and the correlation of response patterns between biological end points

    Energy Technology Data Exchange (ETDEWEB)

    Malling, H.V. [National Institute of Environmental Health Sciences, Research Triangle Park, NC (United States); Wassom, J.S. [Oak Ridge National Laboratory, TN (United States)

    1990-12-31

    This paper focuses on the analysis of information for mutagenesis, a biological end point that is important in the overall process of assessing possible adverse health effects from chemical exposure. 17 refs.

  2. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    Directory of Open Access Journals (Sweden)

    Han Yih Lau

    2017-12-01

    Full Text Available Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, quickness, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care diagnostic methods for applications in plant disease detection. Polymerase chain reaction (PCR) is the most common DNA amplification technology used for detecting various plant and animal pathogens. However, subsequent to PCR-based assays, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, rapid detection, and suitability for field applications, such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probes. The principles behind these technologies have been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to detect plant pathogens by outlining the advantages and disadvantages of each technology in detail.

  3. Rapid Convergence of Energy and Free Energy Profiles with Quantum Mechanical Size in Quantum Mechanical-Molecular Mechanical Simulations of Proton Transfer in DNA.

    Science.gov (United States)

    Das, Susanta; Nam, Kwangho; Major, Dan Thomas

    2018-03-13

    In recent years, a number of quantum mechanical-molecular mechanical (QM/MM) enzyme studies have investigated the dependence of reaction energetics on the size of the QM region using energy and free energy calculations. In this study, we revisit the question of QM region size dependence in QM/MM simulations within the context of energy and free energy calculations using a proton transfer in a DNA base pair as a test case. In the simulations, the QM region was treated with a dispersion-corrected AM1/d-PhoT Hamiltonian, which was developed to accurately describe phosphoryl and proton transfer reactions, in conjunction with an electrostatic embedding scheme using the particle-mesh Ewald summation method. With this rigorous QM/MM potential, we performed rather extensive QM/MM sampling, and found that the free energy reaction profiles converge rapidly with respect to the QM region size within ca. ±1 kcal/mol. This finding suggests that the strategy of QM/MM simulations with reasonably sized and selected QM regions, which has been employed for over four decades, is a valid approach for modeling complex biomolecular systems. We point to possible causes for the sensitivity of the energy and free energy calculations to the size of the QM region, and potential implications.

  4. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    Full Text Available By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique is proposed to simplify the test point selection process. Simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in an overly conservative outcome. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated by using the ambiguity sets and faulty voltage distribution, determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution to minimize the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
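One common entropy formulation for this task scores a candidate test point by the expected residual fault ambiguity H = Σᵢ (nᵢ/N)·log₂ nᵢ over its ambiguity sets and picks the lowest-scoring point; this is a generic sketch rather than the paper's exact measure, and the fault partitions below are made up.

```python
# Entropy score for analog test-point selection: faults whose voltages at
# a test point are indistinguishable fall into one ambiguity set; lower
# residual entropy of the induced partition means better fault isolation.
import math

def residual_entropy(ambiguity_sets, n_faults):
    """Expected bits of fault ambiguity left after one measurement."""
    return sum((len(s) / n_faults) * math.log2(len(s))
               for s in ambiguity_sets)

tp_a = [{0, 1, 2, 3}, {4, 5}, {6, 7}]        # test point A: sets of 4/2/2
tp_b = [{0, 1, 2, 3, 4, 5}, {6}, {7}]        # test point B: sets of 6/1/1
print(round(residual_entropy(tp_a, 8), 3))   # 1.5
print(round(residual_entropy(tp_b, 8), 3))   # 1.939 -> prefer test point A
```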

  5. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
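The score-map step can be sketched as a weighted combination of the three information sources followed by a top-k selection of pilot-point cells; the grids, weights, and helper names below are illustrative assumptions, not the paper's implementation.

```python
# Toy pilot-point placement: combine per-cell facies uncertainty,
# flow-response sensitivity, and local data mismatch into one score map,
# then take the k highest-scoring cells as pilot points.

def score_map(uncertainty, sensitivity, mismatch, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three per-cell information sources."""
    return [w[0] * u + w[1] * s + w[2] * d
            for u, s, d in zip(uncertainty, sensitivity, mismatch)]

def pick_pilot_points(scores, k):
    """Indices of the k highest-scoring grid cells."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

u = [0.9, 0.1, 0.5, 0.7]   # facies probability spread per grid cell
s = [0.2, 0.8, 0.6, 0.1]   # model response sensitivity
d = [0.1, 0.1, 0.9, 0.2]   # local production-data mismatch
print(pick_pilot_points(score_map(u, s, d), k=2))  # -> [2, 0]
```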

  6. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  7. Direct hydride shift mechanism and stereoselectivity of P450nor confirmed by QM/MM calculations.

    Science.gov (United States)

    Krámos, Balázs; Menyhárd, Dóra K; Oláh, Julianna

    2012-01-19

    Nitric oxide reductase (P450(nor)) found in Fusarium oxysporum catalyzes the reduction of nitric oxide to N(2)O in a multistep process. The reducing agent, NADH, is bound in the distal pocket of the enzyme, and direct hydride transfer occurs from NADH to the nitric oxide bound heme enzyme, forming intermediate I. Here we studied the possibility of hydride transfer from NADH to both the nitrogen and the oxygen of the heme-bound nitric oxide, using quantum chemical and combined quantum mechanics/molecular mechanics (QM/MM) calculations on two different protein models representing the two possible stereochemistries, a syn- and an anti-NADH arrangement. All calculations clearly favor hydride transfer to the nitrogen of nitric oxide, and the QM-only barrier and kinetic isotope effects are in good agreement with the experimental values for the formation of intermediate I. We obtained higher barriers in the QM/MM calculations for both pathways, but hydride transfer to the nitrogen of nitric oxide is still clearly favored. The barriers obtained for the syn, Pro-R conformation of NADH are lower and show significantly less variation than those obtained for the anti conformation. The effects of the basis set and of a wide range of functionals on the results are also discussed.

  8. Recent applications of a QM/MM scheme at the CASPT2//CASSCF/AMBER (or CHARMM) level of theory in photochemistry and photobiology

    International Nuclear Information System (INIS)

    Sinicropi, A; Basosi, R; Olivucci, M

    2008-01-01

    The excited-state properties of chemically different chromophores embedded in diverse protein environments or in solution can nowadays be evaluated correctly by means of a hybrid quantum mechanics/molecular mechanics (QM/MM) computational strategy based on multiconfigurational perturbation theory and complete-active-space self-consistent-field geometry optimization. In particular, in this article we show how such a QM/MM strategy, recently developed in our laboratory, has been successfully applied to the investigation of the fluorescence of the green fluorescent protein (GFP), and how the same strategy (embedding the chromophores in methanol solution) has been combined with retrosynthetic analysis to design a prototype light-driven Z/E molecular switch featuring a single reactive double bond and the same electronic structure and photoisomerization mechanism as the chromophore of the visual pigment rhodopsin.

  9. Photodissociation dynamics of CH3C(O)SH in argon matrix: A QM/MM nonadiabatic dynamics simulation

    Science.gov (United States)

    Xia, Shu-Hua; Liu, Xiang-Yang; Fang, Qiu; Cui, Ganglong

    2015-11-01

    In this work, we have first employed the combined quantum mechanics/molecular mechanics (QM/MM) method to study the photodissociation mechanism of thioacetic acid CH3C(O)SH in the S1, T1, and S0 states in an argon matrix. CH3C(O)SH is treated quantum mechanically using the complete active space self-consistent field and complete active space second-order perturbation theory methods; the argon matrix is described classically using Lennard-Jones potentials. We find that the C-S bond fission is predominant owing to its small barriers of ca. 3.0 and 1.0 kcal/mol in the S1 and T1 states; it completely suppresses the nearby C-C bond fission. After the bond fission, the S1 radical pair of CH3CO and SH can decay to the S0 and T1 states via internal conversion and intersystem crossing, respectively. In the S0 state, the radical pair can either recombine to form CH3C(O)SH or proceed to form the molecular products CH2CO and H2S. We have further employed our recently developed QM/MM generalized trajectory-based surface-hopping method to simulate the photodissociation dynamics of CH3C(O)SH. In 1 ps dynamics simulations, 56% of the trajectories stay in the Franck-Condon region; the S1 C-S bond fission takes place in the remaining 44%. Among all nonadiabatic transitions, the S1 → S0 internal conversion is the major channel (55%), but the S1 → T1 intersystem crossing, which accounts for 28%, is still comparable and cannot be ignored. Finally, we have found a radical channel generating the molecular products CH2CO and H2S, which is complementary to the concerted molecular channel. The present work sets the stage for simulating the photodissociation dynamics of similar thiocarbonyl systems in matrices.

  10. Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Zero Point Energy Conservation in Mu + H2 → MuH + H.

    Science.gov (United States)

    Pérez de Tudela, Ricardo; Aoiz, F J; Suleimanov, Yury V; Manolopoulos, David E

    2012-02-16

    A fundamental issue in the field of reaction dynamics is the inclusion of quantum mechanical (QM) effects such as zero point energy (ZPE) and tunneling in molecular dynamics simulations, and in particular in the calculation of chemical reaction rates. In this work we study the chemical reaction between a muonium atom and a hydrogen molecule. The recently developed ring polymer molecular dynamics (RPMD) technique is used, and the results are compared with those of other methods. For this reaction, the thermal rate coefficients calculated with RPMD are found to be in excellent agreement with the results of an accurate QM calculation; the very minor discrepancies are within the convergence error even at very low temperatures. This exceptionally good agreement can be attributed to the dominant role of ZPE in the reaction, which is accounted for extremely well by RPMD. Tunneling plays only a minor role in the reaction.

  11. KINETIKA FERMENTASI SELULOSA MURNI OLEH Trichoderma reesei QM 9414 MENJADI GLUKOSA DAN PENERAPANNYA PADA JERAMI PADI BEBAS LIGNIN [Kinetics of Pure Cellulose Fermentation by Trichoderma reesei QM 9414 to Glucose and Its Application on Lignin-Free Rice Straw]

    Directory of Open Access Journals (Sweden)

    M Iyan Sofyan

    2004-12-01

    Full Text Available The objectives of this research were: (1) to determine the aeration rate and substrate concentration of pure cellulose that produce maximum glucose by Trichoderma reesei QM 9414 at 30 °C and 150 rpm agitation; (2) to study the kinetics of pure cellulose fermentation by Trichoderma reesei QM 9414 to glucose and its implication for fermentation of lignin-free rice straw. The experiment was arranged in a factorial randomized complete design with three replications. Treatments consisted of three levels of aeration (1.00, 1.50, and 2.00 vvm) and three levels of substrate concentration (0.75, 1.00, and 1.25% w/v). The results showed that at the exponential phase the average specific growth rate of Trichoderma reesei QM 9414 was 0.05374 hour-1, the maximum glucose product concentration from pure cellulose was 0.1644 g L-1, and the oxygen transfer rate was 0.0328 mg L-1 hour-1. According to a t-test, the kinetic model of pure cellulose fermentation was the same as that of lignin-free rice straw fermentation. The enzymes produced by Trichoderma reesei QM 9414 in the pure cellulose fermentation medium followed the Michaelis-Menten model. The enzyme kinetic parameters were a maximum growth rate of 37x10-3 hour-1 and a Michaelis-Menten constant (the condition giving half the maximum rate) of 17.5x10-3 hour-1. The volumetric oxygen transfer coefficient (KLa) using rice straw was 0.0337 mg hour-1. The value of KLa could be used for conversion from a laboratory-scale bioreactor to a commercial-scale design.
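The Michaelis-Menten (Monod) growth model fitted in the abstract has a one-line form: at a substrate level equal to the half-saturation constant, the rate is exactly half the maximum, which is how the abstract reports its constant. A minimal sketch; the parameter names are generic assumptions, not the authors' notation:

```python
def monod_rate(mu_max, ks, s):
    """Michaelis-Menten / Monod specific growth rate:
    mu = mu_max * S / (Ks + S).
    At S == Ks the returned rate is mu_max / 2."""
    return mu_max * s / (ks + s)
```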

  12. The Tūqmāq and the Ming China: The Tūqmāq and the Chinese Relations during the Ming Period (1394–1456)

    Directory of Open Access Journals (Sweden)

    Kenzheakhmet N.

    2017-12-01

    Full Text Available Objective: Little is known about diplomatic relations between the Jūchīd Ulūs and Ming China (1368–1644), even though some evidence of early tributary trade relations exists. The first extant Chinese account of the country of Salai (Saray) dates to around 1394, when accounts of diplomatic exchange between the Ming court and the Jūchīd Ulūs began to appear in the Ming shilu (The Veritable Records of the Ming). Research materials: This article analyzes the Ming shilu in order to understand the character of Chinese knowledge about the Jūchīd Ulūs during their years of contact between 1394 and 1456. Additional sources such as geographic accounts and maps help define the extent of Chinese knowledge about the khanate, clarify the kinds of information that the Chinese sought and the reasons why, and measure the influence of cross-cultural contact on Ming Chinese understanding of the Jūchīd Ulūs. Results and novelty of the research: The Ming shilu suggests that at least by the end of the fourteenth century and the early years of the fifteenth, Salai (Saray) became an integral, and possibly the most important, element in the name that the Ming court used for the country of the Jūchīd Ulūs. The Persian and Mongol historians used the terms Tūqmāq and Togmog to refer to the Jūchīd Ulūs, while Ming Chinese historians used the term Tuohema to refer to the Jūchīd Ulūs or the whole Dasht-i Qipchāq in post-Mongol Central Eurasia. The diplomatic contact between Ming China and the Tuohema occurred through the Chinese system of tribute trade during the mid-fifteenth century. Under the reigns of Yongle (1402–1424), Zhengtong (1435–1449), and Jingtai (1449–1457), the foundations for a flourishing relationship between Ming China and the Jūchīd Ulūs were established. At that time, the Chinese knew the Jūchīd Ulūs by the names Salai (Saray) and Tuohema (Tūqmāq). Despite the political turmoil that erupted after the fall of the Jūchīd Ulūs

  13. CSF Based Non-Ground Points Extraction from LIDAR Data

    Science.gov (United States)

    Shen, A.; Zhang, W.; Shi, H.

    2017-09-01

    Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which the selection of seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. The experiments have shown that this method obtains a better set of seed points than traditional methods. It is a new attempt at extracting seed points.

  14. Tracing the Base: A Topographic Test for Collusive Basing-Point Pricing

    NARCIS (Netherlands)

    Bos, Iwan; Schinkel, Maarten Pieter

    2009-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  15. Tracing the base: A topographic test for collusive basing-point pricing

    NARCIS (Netherlands)

    Bos, I.; Schinkel, M.P.

    2008-01-01

    Basing-point pricing is known to have been abused by geographically dispersed firms in order to eliminate competition on transportation costs. This paper develops a topographic test for collusive basing-point pricing. The method uses transaction data (prices, quantities) and customer project site

  16. A digital image-based method for determining of total acidity in red wines using acid-base titration without indicator.

    Science.gov (United States)

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes the use of a digital image-based method for the determination of total acidity in red wines by means of acid-base titration, without using an external indicator or any pre-treatment of the sample. Digital images capture the colour of the emergent radiation, which is complementary to the radiation absorbed by the anthocyanins present in wines. Anthocyanins change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration, the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized from second-derivative curves. The official method recommends potentiometric titration with a NaOH standard solution and sample dilution until the pH reaches 8.2-8.4. To illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. Results were compared with the reference method, and no statistically significant difference was observed between the results when applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method, owing to the trivariate nature (RGB) of the measurements associated with digital images. Copyright © 2011 Elsevier B.V. All rights reserved.
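The RGB-based end-point localization can be illustrated with a minimal sketch: collapse each RGB reading to a scalar and place the end point at the steepest colour change. The paper localizes the end point from second-derivative curves; using the maximum of the first derivative, as below, is a simplification, and all names are illustrative assumptions:

```python
import math

def endpoint_volume(volumes, rgb):
    """Estimate a titration end point from RGB readings taken after each
    titrant addition. The colour is collapsed to one scalar (the vector
    norm of the RGB triple) and the end point is placed at the midpoint
    of the interval where that scalar changes fastest."""
    s = [math.sqrt(r * r + g * g + b * b) for r, g, b in rgb]
    best_i, best_slope = 1, -1.0
    for i in range(1, len(s)):
        slope = abs(s[i] - s[i - 1]) / (volumes[i] - volumes[i - 1])
        if slope > best_slope:
            best_i, best_slope = i, slope
    # Midpoint of the steepest interval approximates the end point.
    return 0.5 * (volumes[best_i - 1] + volumes[best_i])
```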

  17. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model, and a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels keep the original morphology; on that basis, the voxels can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing techniques, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the isocline rule in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the results of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, contrasted against Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.

  18. Final report on AFRIMETS.QM-K27: Determination of ethanol in aqueous matrix

    Science.gov (United States)

    Archer, Marcellé; Fernandes-Whaley, Maria; Visser, Ria; de Vos, Jayne; Prins, Sara; Rosso, Adriana; Ruiz de Arechavaleta, Mariana; Tahoun, Ibrahim; Kakoulides, Elias; Luvonga, Caleb; Muriira, Geoffrey; Naujalis, Evaldas; Zakaria, Osman Bin; Buzoianu, Mirella; Bebic, Jelena; Achour Mounir, Ben; Thanh, Ngo Huy

    2013-01-01

    From within AFRIMETS, the Regional Metrology Organization (RMO) for Africa, the RMO Key Comparison AFRIMETS.QM-K27 was coordinated by the National Metrology Institute of South Africa (NMISA) in 2011. Ten metrology institutes participated: three from AFRIMETS, two from APMP, four from EURAMET and one from SIM. Participants were required to determine the forensic-level concentration of two aqueous ethanol solutions that were gravimetrically prepared by the NMISA. Concentrations were expected to lie in the range of 0.1 mg/g to 5.0 mg/g. The accurate determination of ethanol content in aqueous medium is critical for regulatory forensic and trade purposes. The CCQM Organic Analysis Working Group has carried out several key comparisons (the CCQM-K27 series) on the determination of ethanol in wine and aqueous matrices. Developing NMIs now had the opportunity to link to the earlier CCQM-K27 studies through the AFRIMETS.QM-K27 study. Gas chromatography coupled to flame ionisation or mass spectrometric detection was applied by eight of the participants, while three participants (including NMISA) applied titrimetry for the ethanol assay. The assigned reference value of the aqueous ethanol solutions was used to link AFRIMETS.QM-K27 to the CCQM-K27 key comparison reference value. The assigned reference values for AFRIMETS.QM-K27 Level 1 and Level 2 were (0.3249 ± 0.0021) mg/g (k = 2) and (4.6649 ± 0.0152) mg/g (k = 2), respectively. The reference values were determined using the purity-corrected gravimetric preparation values, while the standard uncertainty incorporated the gravimetric preparation and titrimetric homogeneity uncertainties. From previous CCQM-K27 studies, the expected spread (%CV) of higher-order measurements of ethanol in aqueous medium is about 0.85% relative. In this study the CV for Level 1 is about 12% (10% with two outliers removed) and for Level 2 about 4%. Three of the ten laboratories submitted results within 1.5% of the gravimetric reference value for

  19. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    Science.gov (United States)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    Transit route choice models are a key technology for public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can describe travelers' decision-making process under uncertain transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. Compared with the traditional method, the new method improves the quality of the CPT-based model by simulating travelers' route choice behavior more accurately, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance to sound transit planning and management, and to some extent remedies the defect that the reference point was previously obtained solely from qualitative analysis.
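The role of the reference point can be seen from the CPT value function itself: the same outcome registers as a gain or a loss depending on where the reference sits, which is why its calibration drives model precision. A sketch using the classic Tversky-Kahneman parameter values as defaults (the paper's transit-specific calibration is not reproduced here, and treating larger x as better is an illustrative convention):

```python
def cpt_value(x, ref, alpha=0.88, beta=0.88, lam=2.25):
    """Cumulative prospect theory value function relative to a reference
    point `ref`. Outcomes above the reference are valued as gains;
    outcomes below are losses amplified by the loss-aversion factor lam."""
    d = x - ref
    if d >= 0:
        return d ** alpha          # gain branch
    return -lam * ((-d) ** beta)   # loss branch, loss aversion applied
```

Shifting `ref` by even a small amount can flip an outcome between the gain and loss branches, changing its subjective value discontinuously in slope.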

  20. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is an important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
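The LSH idea for Hamming-space patterns can be sketched with bit sampling: patterns that agree on a random subset of positions share a bucket, so a target pattern is compared only against its own bucket rather than the whole pattern database. This illustrates the technique in general, not the LSHSIM code (which additionally uses RLE compression to speed up the Hamming computation; the plain version is shown below):

```python
import random

def lsh_buckets(patterns, n_bits, seed=0):
    """Group equal-length binary patterns by bit-sampling LSH: the key is
    the pattern's values at `n_bits` randomly chosen positions, so similar
    patterns tend to collide into the same bucket."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(patterns[0])), n_bits)
    buckets = {}
    for p in patterns:
        key = tuple(p[i] for i in positions)
        buckets.setdefault(key, []).append(p)
    return buckets

def hamming(a, b):
    # Number of disagreeing positions; LSHSIM accelerates this step
    # with run-length encoding, omitted here for brevity.
    return sum(x != y for x, y in zip(a, b))
```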

  1. Understanding the reaction between muonium atoms and hydrogen molecules: zero point energy, tunnelling, and vibrational adiabaticity

    Science.gov (United States)

    Aldegunde, J.; Jambrina, P. G.; García, E.; Herrero, V. J.; Sáez-Rábanos, V.; Aoiz, F. J.

    2013-11-01

    The advent of very precise measurements of rate coefficients in reactions of muonium (Mu), the lightest hydrogen isotope, with H2 in its ground and first vibrational states, and of kinetic isotope effects with respect to heavier isotopes, has triggered renewed interest in the field of muonic chemistry. The aim of the present article is to review the most recent results on the dynamics and mechanism of the Mu+H2 reaction, to shed light on the importance of quantum effects such as tunnelling, the preservation of zero point energy, and vibrational adiabaticity. In addition to accurate quantum mechanical (QM) calculations, quasiclassical trajectories (QCT) have been run in order to check the reliability of this method for this isotopic variant. It has been found that the reaction with H2(v=0) is dominated by the high zero point energy (ZPE) of the products and that tunnelling is largely irrelevant. Accordingly, both QCT calculations that preserve the products' ZPE and those based on the Ring Polymer Molecular Dynamics methodology can reproduce the QM rate coefficients. However, when the hydrogen molecule is vibrationally excited, QCT calculations fail completely to predict the huge vibrational enhancement of the reactivity. This failure is attributed to tunnelling, which plays a decisive role in breaking the vibrational adiabaticity when v=1. From the analysis of the results, it can be concluded that the tunnelling takes place through the ν1=1 collinear barrier. Somehow, the tunnelling that is missing in the Mu+H2(v=0) reaction is found in Mu+H2(v=1).

  2. Active Site Dynamics in Substrate Hydrolysis Catalyzed by DapE Enzyme and Its Mutants from Hybrid QM/MM-Molecular Dynamics Simulation.

    Science.gov (United States)

    Dutta, Debodyuti; Mishra, Sabyashachi

    2017-07-27

    The mechanism of the catalytic hydrolysis of N-succinyl diaminopimelic acid (SDAP) by the microbial enzyme DapE in its wild-type (wt) form as well as three of its mutants (E134D, H67A, and H349A) is investigated employing a hybrid quantum mechanics/molecular mechanics (QM/MM) method coupled with molecular dynamics (MD) simulations, wherein the time evolution of the atoms in the QM and MM regions is obtained from the forces acting on the individual atoms. The free-energy profiles along the reaction coordinates of this multistep hydrolysis process are explored using a combination of equilibrium and nonequilibrium (umbrella sampling) QM/MM-MD simulation techniques. In the enzyme-substrate complexes of wt-DapE and the E134D mutant, nucleophilic attack is found to be the rate-determining step, with barriers of 15.3 and 21.5 kcal/mol, respectively, which satisfactorily explains the free energy of activation obtained from kinetic experiments on wt-DapE-SDAP (15.2 kcal/mol) and the three-orders-of-magnitude decrease in catalytic activity upon E134D mutation. Catalysis is found to be quenched in the H67A and H349A mutants of DapE owing to a conformational rearrangement in the active site, induced by the absence of the active-site His residues, that prohibits activation of the catalytic water molecule.

  3. Cellulase production by two mutant strain of Trichoderma longibrachiatum Qm9414 and Rut C30; Produccion de celulasas a partir de dos cepas hiperproductoras de trichoderma longibrachiatum Qm9414 y Rut C30

    Energy Technology Data Exchange (ETDEWEB)

    Blanco, M.J.

    1991-12-31

    Native or pretreated biomass from Onopordum nervosum Boiss. has been examined as a candidate feedstock for cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30. Batch cultivation methods were evaluated and compared with previous experiments using ball-milled, crystalline cellulose (Solka floc). Batch cultivation of T. longibrachiatum Rut C30 on 55% (w/v) acid-pretreated O. nervosum biomass yielded enzyme productivities and activities comparable to those obtained on Solka floc. However, the overall enzyme production performance was lower than on Solka floc at comparable cellulose concentrations. This may be due to the accumulation of pretreatment by-products and lignin in the fermentor. (author)

  4. Cellulase production by two mutant strain of Trichoderma longibrachiatum Qm9414 and Rut C30. Produccion de celulasas a partir de dos cepas hiperproductoras de trichoderma longibrachiatum Qm9414 y Rut C30

    Energy Technology Data Exchange (ETDEWEB)

    Blanco, M.J.

    1991-01-01

    Native or pretreated biomass from Onopordum nervosum Boiss. has been examined as a candidate feedstock for cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30. Batch cultivation methods were evaluated and compared with previous experiments using ball-milled, crystalline cellulose (Solka floc). Batch cultivation of T. longibrachiatum Rut C30 on 55% (w/v) acid-pretreated O. nervosum biomass yielded enzyme productivities and activities comparable to those obtained on Solka floc. However, the overall enzyme production performance was lower than on Solka floc at comparable cellulose concentrations. This may be due to the accumulation of pretreatment by-products and lignin in the fermentor. (author)

  5. Photodissociation dynamics of CH3C(O)SH in argon matrix: A QM/MM nonadiabatic dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Shu-Hua; Liu, Xiang-Yang; Fang, Qiu; Cui, Ganglong, E-mail: ganglong.cui@bnu.edu.cn [Key Laboratory of Theoretical and Computational Photochemistry, Ministry of Education, College of Chemistry, Beijing Normal University, Beijing 100875 (China)

    2015-11-21

    In this work, we have first employed the combined quantum mechanics/molecular mechanics (QM/MM) method to study the photodissociation mechanism of thioacetic acid CH3C(O)SH in the S1, T1, and S0 states in an argon matrix. CH3C(O)SH is treated quantum mechanically using the complete active space self-consistent field and complete active space second-order perturbation theory methods; the argon matrix is described classically using Lennard-Jones potentials. We find that the C-S bond fission is predominant owing to its small barriers of ca. 3.0 and 1.0 kcal/mol in the S1 and T1 states; it completely suppresses the nearby C-C bond fission. After the bond fission, the S1 radical pair of CH3CO and SH can decay to the S0 and T1 states via internal conversion and intersystem crossing, respectively. In the S0 state, the radical pair can either recombine to form CH3C(O)SH or proceed to form the molecular products CH2CO and H2S. We have further employed our recently developed QM/MM generalized trajectory-based surface-hopping method to simulate the photodissociation dynamics of CH3C(O)SH. In 1 ps dynamics simulations, 56% of the trajectories stay in the Franck-Condon region; the S1 C-S bond fission takes place in the remaining 44%. Among all nonadiabatic transitions, the S1 → S0 internal conversion is the major channel (55%), but the S1 → T1 intersystem crossing, which accounts for 28%, is still comparable and cannot be ignored. Finally, we have found a radical channel generating the molecular products CH2CO and H2S, which is complementary to the concerted molecular channel. The present work sets the stage for simulating the photodissociation dynamics of similar thiocarbonyl systems in matrices.

  6. Combined quantum mechanical and molecular mechanical method for metal-organic frameworks: proton topologies of NU-1000.

    Science.gov (United States)

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-01-17

    Metal-organic frameworks (MOFs) are materials with applications in catalysis, gas separations, and storage. Quantum mechanical (QM) calculations can provide valuable guidance to understand and predict their properties. In order to make the calculations faster, rather than modeling these materials as periodic (infinite) systems, it is useful to construct finite models (called cluster models) and use subsystem methods such as fragment methods or combined quantum mechanical and molecular mechanical (QM/MM) methods. Here we employ a QM/MM methodology to study one particular MOF that has been of widespread interest because of its wide pores and good solvent and thermal stability, namely NU-1000, which contains hexanuclear zirconium nodes and 1,3,6,8-tetrakis(p-benzoic acid)pyrene (TBAPy4-) linkers. A modified version of the Bristow-Tiana-Walsh transferable force field has been developed to allow QM/MM calculations on NU-1000; we call the new parametrization the NU1T force field. We consider isomeric structures corresponding to various proton topologies of the [Zr6(μ3-O)8O8H16]8+ node of NU-1000, and we compute their relative energies using a QM/MM scheme designed for the present kind of problem. We compared the results to full quantum mechanical (QM) energy calculations and found that the QM/MM models can reproduce the full QM relative energetics (which span a range of 334 kJ mol-1) with a mean unsigned deviation (MUD) of only 2 kJ mol-1. Furthermore, we found that the structures optimized by QM/MM are nearly identical to their full QM optimized counterparts.
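The mean unsigned deviation quoted in the abstract (2 kJ/mol over a 334 kJ/mol range) is simply the average absolute difference between the QM/MM and full-QM relative energies:

```python
def mean_unsigned_deviation(qmmm_energies, full_qm_energies):
    """Mean unsigned deviation (MUD) between two sets of relative
    energies: the average of the absolute pairwise differences."""
    pairs = list(zip(qmmm_energies, full_qm_energies))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)
```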

  7. At-line determination of pharmaceuticals small molecule's blending end point using chemometric modeling combined with Fourier transform near infrared spectroscopy

    Science.gov (United States)

    Tewari, Jagdish; Strong, Richard; Boulas, Pierre

    2017-02-01

    This article summarizes the development and validation of a Fourier transform near-infrared spectroscopy (FT-NIR) method for the rapid at-line prediction of active pharmaceutical ingredient (API) content in a powder blend, to optimize small-molecule formulations. The method was used to determine the blend uniformity end point for a pharmaceutical solid dosage formulation containing a range of API concentrations. A set of calibration spectra from samples with concentrations ranging from 1% to 15% API (w/w) was collected at-line from 4000 to 12,500 cm-1. The ability of the FT-NIR method to predict API concentration in the blend samples was validated against a reference high performance liquid chromatography (HPLC) method. The prediction efficiency of four types of multivariate data modeling methods, partial least-squares 1 (PLS1), partial least-squares 2 (PLS2), principal component regression (PCR) and artificial neural networks (ANN), was compared using relevant multivariate figures of merit. The prediction ability of the regression models was cross-validated against results generated with the reference HPLC method. PLS1 and ANN showed prediction abilities superior to PLS2 and PCR. Based on these results, and because of its lower complexity compared to ANN, PLS1 was selected as the best chemometric method to predict blend uniformity at-line. The FT-NIR measurement and the associated chemometric analysis were implemented in the production environment for rapid at-line determination of the end point of the small-molecule blending operation.
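Once the chemometric model predicts %API at several sampling locations, a blend end point is typically declared when the spread of those predictions falls below a limit. A sketch of such a criterion; the 5% relative-standard-deviation default is a common industry convention assumed here, not a value taken from the article:

```python
import statistics

def blend_is_uniform(api_predictions, rsd_limit=5.0):
    """Declare the blend uniform when the relative standard deviation
    (sample stdev as a percentage of the mean) of the model-predicted
    %API values across sampling locations is at or below rsd_limit."""
    mean = statistics.mean(api_predictions)
    rsd = 100.0 * statistics.stdev(api_predictions) / mean
    return rsd <= rsd_limit
```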

  8. End point detection in ion milling processes by sputter-induced optical emission spectroscopy

    International Nuclear Information System (INIS)

    Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.

    1984-01-01

    The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhanced end-point detection is possible by using a dual-channel photodetection system operating in a ratio or difference mode. The installation of the optical detection system on an existing etching chamber has been greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The results demonstrate that sputter-induced optical emission spectroscopy offers many advantages over other techniques in detecting the etching end point of ion milling processes.
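
    The dual-channel ratio mode described above can be sketched as a simple threshold test on simulated emission traces; the line shapes, frame count, and threshold below are invented for illustration and are not the instrument's actual parameters:

```python
import numpy as np

def detect_end_point(layer_line, substrate_line, ratio_threshold=1.0):
    """Return the first frame where the substrate/layer emission-line
    intensity ratio exceeds the threshold, i.e. the etch end point."""
    ratio = substrate_line / layer_line
    hits = np.nonzero(ratio > ratio_threshold)[0]
    return int(hits[0]) if hits.size else None

# Simulated 200-frame etch: the layer line decays near frame 120 while
# the substrate line rises (hypothetical intensities, arbitrary units).
t = np.arange(200)
layer = 1.0 / (1.0 + np.exp((t - 120) / 3.0)) + 0.05      # + background
substrate = 1.0 - 1.0 / (1.0 + np.exp((t - 120) / 3.0)) + 0.05
end = detect_end_point(layer, substrate)
```

    Taking the ratio of the two channels cancels common-mode drifts (e.g. beam-current fluctuations), which is the motivation for the ratio mode.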

  9. Overview of the "epigenetic end points in toxicologic pathology and relevance to human health" session of the 2014 Society Of Toxicologic Pathology Annual Symposium.

    Science.gov (United States)

    Hoenerhoff, Mark J; Hartke, James

    2015-01-01

    The theme of the Society of Toxicologic Pathology 2014 Annual Symposium was "Translational Pathology: Relevance of Toxicologic Pathology to Human Health." The 5th session focused on epigenetic end points in biology, toxicity, and carcinogenicity, and how those end points are relevant to human exposures. This overview highlights the presentations in this session, discussing the integration of epigenetic end points in toxicologic pathology studies; the role of epigenetics in product safety assessment; epigenetic changes in cancers, methodologies to detect them, and potential therapies; chromatin remodeling in development and disease; and epigenomics and the microbiome. The purpose of this overview is to discuss the application of epigenetics to toxicologic pathology and its utility in preclinical or mechanism-based safety, efficacy, and carcinogenicity studies. © 2014 by The Author(s).

  10. Structural insight into RNA catalysis revealed by molecular dynamics simulations and QM/MM calculation

    Czech Academy of Sciences Publication Activity Database

    Banáš, P.; Walter, N.G.; Šponer, Jiří; Otyepka, Michal

    2009-01-01

    Roč. 26, č. 6 (2009), s. 816 ISSN 0739-1102. [The 17th Conversation . 16.06.2009-20.06.2009, Albany] Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords : QM/MM * RNA Subject RIV: BO - Biophysics

  11. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    Construction of a statistical point cloud model usually requires the calculation of corresponding points, and the resulting statistical model differs depending on the method used to calculate them. This article examines how different methods of calculating corresponding points affect statistical models of human organs. We validated the performance of the statistical models by registering an organ surface in a 3D medical image. Two methods of calculating corresponding points were compared. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistical analysis over a number of curved surfaces. Using these methods we constructed the statistical models and performed registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in the probability density of each point. (author)

  12. Appropriate xenon-inhalation speed in xenon-enhanced CT using the end-tidal gas-sampling method

    International Nuclear Information System (INIS)

    Suga, Sadao; Toya, Shigeo; Kawase, Takeshi; Koyama, Hideki; Shiga, Hayao

    1986-01-01

    This report describes some problems arising when end-tidal xenon gas is substituted for the arterial xenon concentration in xenon-enhanced CT. The authors used a newly developed xenon inhalator with a xenon-gas-concentration analyzer and performed xenon-enhanced CT by the ''arterio-venous shunt'' method and the ''end-tidal gas-sampling'' method simultaneously. In the former method, the arterial build-up rate (K) was obtained directly from CT slices of a blood circuit passing through the phantom. In the latter, it was calculated from the xenon concentration of end-tidal gas sampled from the mask. The speed of xenon supply was varied between 0.6 and 1.2 L/min in 11 patients with or without a cerebral lesion. The results revealed that rapid xenon inhalation caused a discrepancy in the arterial K between the ''shunt'' method and the ''end-tidal'' method. This discrepancy may arise from the mixing of inhaled gas and expired gas in respiratory dead space, such as the nasal cavity or the mask. Cerebral blood flow was underestimated because of the higher arterial K in the latter method. Too slow an inhalation, however, wasted time and increased body motion in the subanesthetic state. Therefore, an inhalation speed yielding an arterial K of up to 0.2 was ideal for the end-tidal xenon concentration to represent the arterial K in the ''end-tidal gas-sampling'' method. When attention is paid to this point, the method may offer a reliable absolute value in xenon-enhanced CT. (author)
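
    The arterial build-up rate K referred to above is commonly modeled as a saturating exponential; a hedged sketch of estimating K from sampled concentrations (synthetic, noise-free data and arbitrary units, not the authors' inhalator) is:

```python
import numpy as np

def fit_build_up_rate(t, conc, c_max):
    """Least-squares estimate of K in C(t) = c_max * (1 - exp(-K t)),
    via the linearisation log(1 - C/c_max) = -K t (slope through origin)."""
    y = np.log(1.0 - conc / c_max)
    return -(t @ y) / (t @ t)

t = np.linspace(0.5, 10.0, 20)            # sampling times (minutes)
K_true = 0.2                              # build-up rate (1/min)
conc = 1.0 * (1 - np.exp(-K_true * t))    # saturation curve, c_max = 1
K_est = fit_build_up_rate(t, conc, c_max=1.0)
```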

  13. Confinement in Melts of Chains with Junction Points, but No Ends

    Science.gov (United States)

    Foster, Mark; He, Qiming; Zhou, Yang; Zhang, Fan; Huang, Chongwen; Narayanan, Suresh

    Measurements of surface fluctuations of 4-arm star and ''8-shaped'' analogs of the same polystyrene (PS) chain show that elimination of chain ends is much more important in dictating the fragility in a thin film than is the introduction of a branch point in the molecule. Both the viscosities derived from surface fluctuations and rheological measurements for the 8-shaped PS manifest a lower value than for the 4-arm star PS analog, with the discrepancy increasing as the temperature approaches the bulk glass transition temperature, Tg,bulk. Comparison among different chain topologies shows the effect of the number of chain ends and junction points on the viscosity. The viscosity behavior of the 8-shaped PS is quite different from that of the star analog, but similar to that of the simple cycle analog. The fragility of the 8-shaped molecule in the thin film is reduced relative to that in the bulk, manifesting a nanoconfinement effect. This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

  14. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by growing problem size remain the major obstacles to applying continuous optimization to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables approaching their upper/lower limits and later releases the fixed ones as needed during the optimization process. It can be regarded as an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used "CUTEr" benchmark problems to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (the Economic Load Dispatching problem in electric power supply scheduling) are also described as a practical industrial application.
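
    The variable-fixing idea can be illustrated, although not with the authors' interior point algorithm itself: the toy below solves a box-constrained QP by plain projected gradient descent and reads off the estimated active set from the variables pressed against a bound. Problem data and names are invented:

```python
import numpy as np

def box_qp(Q, c, lo, hi, iters=5000):
    """Minimise 0.5 x'Qx + c'x subject to lo <= x <= hi by projected
    gradient descent; bound-touching variables form the estimated
    active set (a toy analogue of the fixing mechanism)."""
    x = np.clip(np.zeros_like(c), lo, hi)
    step = 1.0 / np.linalg.norm(Q, 2)           # 1/L step size
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), lo, hi)
    active = np.isclose(x, lo) | np.isclose(x, hi)
    return x, active

Q = np.diag([1.0, 2.0, 4.0])
c = np.array([-3.0, -1.0, 1.0])
x, active = box_qp(Q, c, lo=np.zeros(3), hi=np.ones(3))
# Separable problem: unconstrained minimisers -c/diag(Q) = (3, 0.5, -0.25),
# so the box-constrained solution is (1, 0.5, 0), variables 1 and 3 active.
```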

  15. Parametrization of Combined Quantum Mechanical and Molecular Mechanical Methods: Bond-Tuned Link Atoms.

    Science.gov (United States)

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-05-30

    Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM-MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations having covalent bonds cut by the QM-MM boundary, it has been proposed previously to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme in which the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM-MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.

  16. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. The approach can be performed in the industrial field without external expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  17. QM/MM Geometry Optimization on Extensive Free-Energy Surfaces for Examination of Enzymatic Reactions and Design of Novel Functional Properties of Proteins.

    Science.gov (United States)

    Hayashi, Shigehiko; Uchida, Yoshihiro; Hasegawa, Taisuke; Higashi, Masahiro; Kosugi, Takahiro; Kamiya, Motoshi

    2017-05-05

    Many remarkable molecular functions of proteins use their characteristic global and slow conformational dynamics through coupling of local chemical states in reaction centers with global conformational changes of proteins. To theoretically examine the functional processes of proteins in atomic detail, a methodology of quantum mechanical/molecular mechanical (QM/MM) free-energy geometry optimization is introduced. In the methodology, a geometry optimization of a local reaction center is performed with a quantum mechanical calculation on a free-energy surface constructed with conformational samples of the surrounding protein environment obtained by a molecular dynamics simulation with a molecular mechanics force field. Geometry optimizations on extensive free-energy surfaces by a QM/MM reweighting free-energy self-consistent field method designed to be variationally consistent and computationally efficient have enabled examinations of the multiscale molecular coupling of local chemical states with global protein conformational changes in functional processes and analysis and design of protein mutants with novel functional properties.

  18. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  19. Devoluming and stabilizing method for end piece

    International Nuclear Information System (INIS)

    Ikeda, Satoshi; Shiotsuki, Masao; Kawamura, Shigeyoshi; Komatsu, Fumiaki.

    1991-01-01

    In the first method, end pieces and other radioactive metal wastes are loaded as a mixture into a vessel and preliminarily compressed. The bulk density of the radioactive metal wastes is thereby reduced, and the gaps in the end pieces are filled with radioactive metal wastes. If heat treatment under high pressure is then applied to the contents together with the vessel, they are devolumed and stabilized without damaging the vessel. In the second method, metal powders are mixed and loaded into the vessel together with the end pieces. The gaps in the end pieces are filled with the metal powders, and heat treatment under high pressure applied together with the vessel likewise devolumes and stabilizes the end pieces without damaging the vessel. These methods can devolume and stabilize end pieces separated from nuclear fuel assemblies easily and safely. (T.M.)

  20. Adaptive switching of interaction potentials in the time domain: an extended Lagrangian approach tailored to transmute force field to QM/MM simulations and back.

    Science.gov (United States)

    Böckmann, Marcus; Doltsinis, Nikos L; Marx, Dominik

    2015-06-09

    An extended Lagrangian formalism that allows for a smooth transition between two different descriptions of interactions during a molecular dynamics simulation is presented. This time-adaptive method is particularly useful in the context of multiscale simulation as it provides a sound recipe to switch on demand between different hierarchical levels of theory, for instance between ab initio ("QM") and force field ("MM") descriptions of a given (sub)system in the course of a molecular dynamics simulation. The equations of motion can be integrated straightforwardly using the usual propagators, such as the Verlet algorithm. First test cases include a bath of harmonic oscillators, of which a subset is switched to a different force constant and/or equilibrium position, as well as an all-MM to QM/MM transition in a hydrogen-bonded water dimer. The method is then applied to a smectic 8AB8 liquid crystal and is shown to be able to switch dynamically a preselected 8AB8 molecule from an all-MM to a QM/MM description which involves partition boundaries through covalent bonds. These examples show that the extended Lagrangian approach is not only easy to implement into existing code but that it is also efficient and robust. The technique moreover provides easy access to a conserved energy quantity, also in cases when Nosé-Hoover chain thermostatting is used throughout dynamical switching. A simple quadratic driving potential proves to be sufficient to guarantee a smooth transition whose time scale can be easily tuned by varying the fictitious mass parameter associated with the auxiliary variable used to extend the Lagrangian. The method is general and can be applied to time-adaptive switching on demand between two different levels of theory within the framework of hybrid scale-bridging simulations.
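
    The paper's first test case, smoothly switching a harmonic oscillator between two force constants, can be sketched with velocity Verlet and a C1 switching function; the specific ramp, time scales, and parameter values below are assumptions, and the sketch omits the paper's auxiliary extended-Lagrangian variable:

```python
import numpy as np

def smoothstep(tau):
    """C1 switching function: s(0)=0, s(1)=1, s'(0)=s'(1)=0."""
    tau = np.clip(tau, 0.0, 1.0)
    return tau * tau * (3.0 - 2.0 * tau)

def switched_oscillator(k1, k2, t_switch=(20.0, 40.0), dt=0.01, t_end=60.0):
    """Velocity-Verlet trajectory of a unit-mass oscillator whose force
    constant is blended from k1 ('MM') to k2 ('QM/MM') over t_switch."""
    t0, t1 = t_switch
    x, v, t = 1.0, 0.0, 0.0

    def force(x, t):
        lam = smoothstep((t - t0) / (t1 - t0))
        k = (1.0 - lam) * k1 + lam * k2       # mixed interaction potential
        return -k * x

    traj = []
    f = force(x, t)
    while t < t_end:
        v += 0.5 * dt * f                     # half kick
        x += dt * v                           # drift
        t += dt
        f = force(x, t)
        v += 0.5 * dt * f                     # half kick
        traj.append(x)
    return np.array(traj)

traj = switched_oscillator(k1=1.0, k2=4.0)
```

    Because the ramp is slow compared with the oscillation period, the motion is nearly adiabatic and the amplitude shrinks as the spring stiffens, with no blow-up at the switching boundaries.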

  1. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  2. Cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30

    International Nuclear Information System (INIS)

    Blanco, M.J.

    1991-01-01

    Native or pretreated biomass from Onopordum nervosum Boiss. has been examined as a candidate feedstock for cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30. Batch cultivation methods were evaluated and compared with previous experiments using ball-milled, crystalline cellulose (Solka floc). Batch cultivation of T. longibrachiatum Rut C30 on 55% (W/V) acid-pretreated O. nervosum biomass yielded enzyme productivities and activities comparable to those obtained on Solka floc. However, the overall enzyme production performance was lower than on Solka floc at comparable cellulose concentrations. This may be due to the accumulation of pretreatment by-products and lignin in the fermentor. (author)

  3. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  4. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  5. Active Involvement of End Users When Developing Web-Based Mental Health Interventions

    Directory of Open Access Journals (Sweden)

    Derek de Beurs

    2017-05-01

    Full Text Available Background: Although many web-based mental health interventions are being released, their actual uptake by end users is limited. The marginal engagement of end users during the development of these interventions is recognized as an important cause of uptake problems. In this paper, we offer our perspective on how to improve user engagement, aiming to stimulate a discourse on user involvement within the field of online mental health interventions. Methods: We briefly describe three methods (the expert-driven method, intervention mapping, and scrum) currently used to develop web-based health interventions, focusing on the extent to which the end user was involved in the development phase and on the additional challenges. In the final paragraph, lessons learned are summarized and recommendations provided. Results: Every method has its trade-off: if end users are highly involved, the availability of end users and means becomes problematic; if end users are less actively involved, the product may be less appropriate for them. Other challenges to consider are the funding of the more active role of technological companies and the time it takes to process the results of shorter development cycles. Conclusion: With user-centered design in mind and careful planning, the involvement of end users should become standard in the field of web-based (mental) health interventions. When deciding on the level of user involvement, one should balance the need for input from users against the availability of resources such as time and funding.

  6. Fractal supersymmetric QM, Geometric Probability and the Riemann Hypothesis

    CERN Document Server

    Castro, C

    2004-01-01

    The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form $ s_n =1/2+i\lambda_n $. Earlier work on the RH based on supersymmetric QM, whose potential was related to the Gauss-Jacobi theta series, allows one to provide the proper framework for constructing a well-defined algorithm to compute the probability of finding a zero (an infinity of zeros) in the critical line. Geometric probability theory furnishes the answer to the very difficult question of whether the probability that the RH is true is indeed equal to unity or not. To test the validity of this geometric probabilistic framework for computing the probability that the RH is true, we apply it directly to the hyperbolic sine function $ \sinh (s) $, which obeys a trivial analog of the RH (the HSRH). Its zeros are equally spaced on the imaginary axis, $ s_n = 0 + i n \pi $. The geometric probability to find a zero (and an infinity of zeros) on the imaginary axis is exactly unity. We proceed with a fractal supersymme...

  7. Pre-discovery detections and progenitor candidate for SPIRITS17qm in NGC 1365

    Science.gov (United States)

    Jencson, J. E.; Bond, H. E.; Adams, S. M.; Kasliwal, M. M.

    2018-04-01

    We report the detection of a pre-discovery outburst of SPIRITS17qm, discovered as part of the ongoing Spitzer InfraRed Intensive Transients Survey (SPIRITS) using the 3.6 and 4.5 micron imaging channels ([3.6] and [4.5]) of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope (ATel #11575).

  8. Preface [EmQM15: 3. international symposium on emergent quantum mechanics

    International Nuclear Information System (INIS)

    2016-01-01

    These proceedings comprise the invited lectures of the third international symposium on Emergent Quantum Mechanics (EmQM15), which was held at the Vienna University of Technology in Vienna, Austria, 23-25 October 2015. The symposium convened at the Festsaal and the adjacent Boeckl-Saal of the Technical University, and was devoted to the open exploration of the quantum state as a reality. The resurgence of interest in ontological quantum theory, including both deterministic and indeterministic approaches, challenges long held assumptions and focuses on the following questions: Is the world local or nonlocal? What is the nature of quantum nonlocality? If nonlocal, i.e., superluminal, influences exist then why can't they be used for superluminal signaling and communication? How is the role of the scientific observer/agent to be accounted for in realistic approaches to quantum theory? How could recent developments in the field of space-time as an emergent phenomenon advance new insight at this research frontier? What new experiments might contribute to new understanding? These and related questions were addressed in the context also of a possible deeper level theory for quantum mechanics that interconnects three fields of knowledge: emergence, the quantum, and information. Could there appear a revised image of physical reality from recognizing new links between emergence, the quantum, and information? The symposium provided a forum for considering (i) current theoretical and conceptual obstacles which need to be overcome as well as (ii) promising developments and research opportunities on the way towards realistic quantum mechanics. Contributions were invited that present current advances in both standard as well as unconventional approaches. The EmQM15 symposium was co-organized by Gerhard Grössing (Austrian Institute for Nonlinear Studies (AINS), Vienna), and by Jan Walleczek (Fetzer Franklin Fund, USA, and Phenoscience Laboratories, Berlin). After two

  9. End-point detection in potentiometric titration by continuous wavelet transform.

    Science.gov (United States)

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of the analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The new algorithm was optimized using simulated curves, and experimental data were then considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. In the case of noisy or badly shaped curves, however, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
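
    The principle of wavelet-based end-point detection can be sketched with a generic derivative-of-Gaussian mother wavelet (not the authors' specially constructed wavelet): the extremum of the wavelet response marks the inflection of a sigmoidal titration curve. Curve shape, scale, and noise level below are assumptions:

```python
import numpy as np

def gauss_deriv_wavelet(scale, half_width=40):
    """First derivative of a Gaussian, an antisymmetric mother wavelet."""
    x = np.arange(-half_width, half_width + 1) / scale
    return -x * np.exp(-0.5 * x * x)

def end_point_index(signal, scale=5.0, half_width=40):
    """Locate the titration end point as the extremum of the wavelet
    response, i.e. the inflection of the sigmoidal titration curve."""
    psi = gauss_deriv_wavelet(scale, half_width)
    response = np.convolve(signal, psi[::-1], mode="same")  # correlation
    interior = np.abs(response[half_width:-half_width])     # skip edge artifacts
    return half_width + int(np.argmax(interior))

v = np.arange(300)                                  # titrant volume steps
rng = np.random.default_rng(1)
curve = 1.0 / (1.0 + np.exp(-(v - 150) / 6.0))      # S-shaped potential curve
curve += rng.normal(scale=0.01, size=v.size)        # mild measurement noise
idx = end_point_index(curve)
```

    Because the wavelet is antisymmetric, flat stretches of the curve give near-zero response while the inflection gives the strongest one, which is why modest noise and spikes barely shift the detected end point.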

  10. Point Based Emotion Classification Using SVM

    OpenAIRE

    Swinkels, Wout

    2016-01-01

    The detection of emotions is a hot topic in the area of computer vision. Emotions are based on subtle changes in the face that are intuitively detected and interpreted by humans. Detecting these subtle changes, based on mathematical models, is a great challenge in the area of computer vision. In this thesis a new method is proposed to achieve state-of-the-art emotion detection performance. This method is based on facial feature points to monitor subtle changes in the face. Therefore the c...

  11. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving product quality increases the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. The approach is demonstrated with applications to flatness, cylindricity, and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that when the number of control points decreases, the arithmetic mean of the estimate decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. Overall, it is established that the number of control points can be reduced several-fold while maintaining the required measurement accuracy.
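
    The reported trends (smaller mean and larger scatter of the estimate with fewer control points) can be reproduced in a toy Monte Carlo flatness simulation; the surface model, noise level, and trial counts below are assumptions, not the article's setup:

```python
import numpy as np

def flatness_trials(n_points, n_trials=2000, sigma=1.0, seed=0):
    """Monte Carlo flatness estimates: sample n_points deviations from a
    nominally flat surface and report the peak-to-valley per trial."""
    rng = np.random.default_rng(seed)
    dev = rng.normal(scale=sigma, size=(n_trials, n_points))
    return dev.max(axis=1) - dev.min(axis=1)   # flatness estimate per trial

few = flatness_trials(10)      # sparse sampling of control points
many = flatness_trials(100)    # dense sampling of control points
```

    With fewer points the sampled range systematically underestimates the true peak-to-valley (lower mean) and fluctuates more from trial to trial (higher standard deviation), matching the article's qualitative finding.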

  12. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. The method can be considered an extension of regular-grid-point-based matching and avoids its shortcomings; for example, it avoids areas of low or no texture, where errors frequently appear in cross-correlation matching. At the same time, it makes full use of mature techniques such as probability relaxation and image pyramids, which have already been used successfully in grid point matching. Application of the technique to DEM generation in different regions proved that it is more reasonable and reliable.
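
    The cross-correlation matching that underlies such schemes can be sketched as an exhaustive normalized cross-correlation (NCC) search for a template around a feature point; the image, template, and search radius below are synthetic, and the relaxation and pyramid stages are omitted:

```python
import numpy as np

def ncc(patch, window):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = patch - patch.mean()
    b = window - window.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(template, image, search=10):
    """Exhaustive NCC search; returns the top-left offset with the
    highest correlation coefficient within the search range."""
    h, w = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(min(search, image.shape[0] - h + 1)):
        for c in range(min(search, image.shape[1] - w + 1)):
            score = ncc(template, image[r:r + h, c:c + w])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(2)
image = rng.normal(size=(40, 40))
template = image[7:15, 3:11].copy()        # true offset is (7, 3)
loc, score = best_match(template, image)
```

    In textureless windows the denominator of the NCC is near zero and the score is uninformative, which is exactly the failure mode the feature-point-based method is designed to avoid.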

  13. Methods of a large prospective, randomised, open-label, blinded end-point study comparing morning versus evening dosing in hypertensive patients: the Treatment In Morning versus Evening (TIME) study.

    Science.gov (United States)

    Rorie, David A; Rogers, Amy; Mackenzie, Isla S; Ford, Ian; Webb, David J; Williams, Bryan; Brown, Morris; Poulter, Neil; Findlay, Evelyn; Saywood, Wendy; MacDonald, Thomas M

    2016-02-09

    Nocturnal blood pressure (BP) appears to be a better predictor of cardiovascular outcome than daytime BP. The BP lowering effects of most antihypertensive therapies are often greater in the first 12 h compared to the next 12 h. The Treatment In Morning versus Evening (TIME) study aims to establish whether evening dosing is more cardioprotective than morning dosing. The TIME study uses the prospective, randomised, open-label, blinded end-point (PROBE) design. TIME recruits participants by advertising in the community, from primary and secondary care, and from databases of consented patients in the UK. Participants must be aged over 18 years, prescribed at least one antihypertensive drug taken once a day, and have a valid email address. After the participants have self-enrolled and consented on the secure TIME website (http://www.timestudy.co.uk) they are randomised to take their antihypertensive medication in the morning or the evening. Participant follow-ups are conducted after 1 month and then every 3 months by automated email. The trial is expected to run for 5 years, randomising 10,269 participants, with average participant follow-up being 4 years. The primary end point is hospitalisation for the composite end point of non-fatal myocardial infarction (MI), non-fatal stroke (cerebrovascular accident; CVA) or any vascular death determined by record-linkage. Secondary end points are: each component of the primary end point, hospitalisation for non-fatal stroke, hospitalisation for non-fatal MI, cardiovascular death, all-cause mortality, hospitalisation or death from congestive heart failure. The primary outcome will be a comparison of time to first event comparing morning versus evening dosing using an intention-to-treat analysis. The sample size is calculated for a two-sided test to detect 20% superiority at 80% power. TIME has ethical approval in the UK, and results will be published in a peer-reviewed journal. UKCRN17071; Pre-results. Published by the BMJ
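
    The abstract does not spell out its power calculation; a standard Schoenfeld-type sketch of the number of primary-outcome events needed to detect a 20% relative risk reduction (hazard ratio 0.8, an assumed interpretation of "20% superiority") with 80% power at two-sided α = 0.05 and 1:1 allocation is:

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld approximation for the number of events needed in a
    1:1 randomised time-to-event comparison (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2

# Hazard ratio 0.8 corresponds to a 20% relative risk reduction.
events = required_events(0.8)
```

    The required number of participants then follows from the expected event rate over the average follow-up, which is how a trial of this size arrives at a five-figure recruitment target.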

  14. Magnet pole shape design for reduction of thrust ripple of slotless permanent magnet linear synchronous motor with arc-shaped magnets considering end-effect based on analytical method

    Directory of Open Access Journals (Sweden)

    Kyung-Hun Shin

    2017-05-01

    Full Text Available The shape of the magnet is essential to the performance of a slotless permanent magnet linear synchronous machine (PMLSM) because it is directly related to desirable machine performance. This paper presents a reduction in the thrust ripple of a PMLSM through the use of arc-shaped magnets based on electromagnetic field theory. The magnetic field solutions were obtained by considering the end effect using a magnetic vector potential and a two-dimensional Cartesian coordinate system. The analytical solution of each subdomain (PM, air-gap, coil, and end region) is derived, and the field solution is obtained by applying the boundary and interface conditions between the subdomains. In particular, an analytical method was derived for the instantaneous thrust and thrust ripple reduction of a PMLSM with arc-shaped magnets. In order to demonstrate the validity of the analytical results, the back electromotive force results of a finite element analysis and an experiment on the manufactured prototype model were compared. The optimal point for thrust ripple minimization is suggested.

  15. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  16. Merchandising at the point of sale: differential effect of end of aisle and islands

    Directory of Open Access Journals (Sweden)

    Álvaro Garrido-Morgado

    2015-01-01

    Full Text Available Merchandising at the point of sale comprises a set of techniques aimed at encouraging purchases at the point of sale. This paper analyzes the impact on sales of two of these techniques, widely used in non-specialized food stores yet rarely distinguished in academic research: (1) the presentation of the product at the ends of the aisles or main aisles, accessed from the side aisles, and (2) the presentation of the product on islands within the main aisles. This research combines cross-sectional and longitudinal data and analyzes specific information on sales and commercial stimuli for all references in two large product categories from a hypermarket over ten weeks. Results show that both the ends of aisles and the islands have a positive effect on sales, and that their relative importance depends on the nature of the category analyzed. There are also greater synergies between ends of aisles and price promotions. Finally, the results provide some evidence on the impact of the extension or termination of these merchandising stimuli.

  17. Integrated Systems-Based Approach for Reaching Acceptable End Points for Groundwater - 13629

    International Nuclear Information System (INIS)

    Lee, M. Hope; Wellman, Dawn; Truex, Mike; Freshley, Mark D.; Sorenson, Kent S. Jr.; Wymore, Ryan

    2013-01-01

    The sheer mass and nature of contaminated materials at DOE and DoD sites make it impractical to completely restore these sites to pre-disposal conditions. DOE faces long-term challenges, particularly with developing monitoring and end state approaches for clean-up that are protective of the environment, technically based and documented, sustainable, and most importantly cost effective. Integrated systems-based monitoring approaches (e.g., tools for characterization and monitoring, multi-component strategies, geophysical modeling) could provide novel approaches and a framework to (a) define risk-informed endpoints and/or conditions that constitute completion of cleanup and (b) provide the understanding for implementation of advanced scientific approaches to meet cleanup goals. Multi-component strategies, which combine site conceptual models, biological, chemical, and physical remediation strategies, as well as iterative review and optimization, have proven successful at several DOE sites. Novel tools such as enzyme probes and quantitative PCR for DNA and RNA, and innovative modeling approaches for complex subsurface environments, have been successful at facilitating the reduced operation or shutdown of pump and treat facilities and the transition of clean-up activities into monitored natural attenuation remedies. Integrating novel tools with site conceptual models and other lines of evidence to characterize, optimize, and monitor long term remedial approaches for complex contaminant plumes is critical for transitioning active remediation into cost effective, yet technically defensible endpoint strategies. (authors)

  18. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, named the revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its termination condition is expressed in terms of the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The algorithm analysis and example study show that proper choices of the safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
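
The linear ranking function at the heart of such methods can be illustrated with one common choice, the average of the four defining points of a trapezoidal fuzzy number. The abstract does not give the paper's exact ranking function, so the functions and names below are an illustrative sketch, not the authors' definition:

```python
def rank(tfn):
    """Linear ranking of a trapezoidal fuzzy number (a, b, c, d):
    the average of its four defining points (one common choice)."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

def fuzzy_leq(x, y):
    """Order two trapezoidal fuzzy numbers via the ranking function,
    as done when comparing fuzzy quantities in fuzzy LP."""
    return rank(x) <= rank(y)
```

With such a ranking, fuzzy comparisons reduce to crisp comparisons of real numbers, which is what lets an interior point iteration choose step sizes and test termination.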

  19. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. This step requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi image of the point cloud is generated. After that, the SIFT algorithm is applied to the quasi image and the real image. The SIFT algorithm finds the corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
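
The correspondence step between the quasi image and the real image can be sketched with Lowe-style ratio-test matching of feature descriptors (the kind SIFT produces). This is a numpy-only illustration under the assumption that descriptors are already available; a real pipeline would use an optimized feature library and matcher:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Ratio-test matching: keep a match only if the best candidate
    is clearly closer than the runner-up (Lowe's criterion)."""
    # pairwise Euclidean distances between all descriptor pairs
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:  # best match clearly beats runner-up
            matches.append((i, j1))
    return matches
```

The surviving matches are the corresponding points from which exterior orientation parameters can then be estimated.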

  20. New methods of subcooled water recognition in dew point hygrometers

    Science.gov (United States)

    Weremczuk, Jerzy; Jachowicz, Ryszard

    2001-08-01

    Two new methods of sub-cooled water recognition in dew point hygrometers are presented in this paper. The first, an impedance method, uses a new semiconductor mirror in which the dew point detector, the thermometer and the heaters are all integrated. The second, an optical method based on a multi-section optical detector, is discussed in the report. Experimental results of both methods are shown. New types of dew point hygrometers able to recognize sub-cooled water are proposed.

  1. The complexity of interior point methods for solving discounted turn-based stochastic games

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus

    2013-01-01

    for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... states and discount factor γ we get κ = Θ(n/(1−γ)²), −δ = Θ(n/√(1−γ)), and 1/θ = Θ(n/(1−γ)²) in the worst case. The lower bounds for κ, −δ, and 1/θ are all obtained using the same family of deterministic games.

  2. A portable low-cost 3D point cloud acquiring method based on structure light

    Science.gov (United States)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of lacking texture information and low efficiency when acquiring point cloud data with only a pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern; that is, a coding pattern is projected onto the target surface in order to form texture information, which is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global algorithm optimization and multi-kernel parallel development with the fusion of hardware and software, a fast point cloud acquisition system is accomplished. The evaluation of point cloud accuracy shows that point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.

  3. EVALUATION OF METHODS FOR COREGISTRATION AND FUSION OF RPAS-BASED 3D POINT CLOUDS AND THERMAL INFRARED IMAGES

    Directory of Open Access Journals (Sweden)

    L. Hoegner

    2016-06-01

    Full Text Available This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image; this method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
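
The ICP variant in strategy (iv) repeatedly solves, for the current point correspondences, the best-fit rigid transform between the two clouds. That inner step is the classical Kabsch/SVD solution, sketched here in numpy (an illustration of the standard algorithm, not the authors' implementation):

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t with dst ≈ R @ src + t,
    for two (N, 3) arrays of corresponding points (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Full ICP then alternates this solve with re-estimating correspondences (e.g., nearest neighbours, or matched segmented planes as in the adapted version).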

  4. Machine vision method for online surface inspection of easy open can ends

    Science.gov (United States)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

    The easy open can end manufacturing process in the food canning sector currently uses a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is a visual inspection made by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling), so an automatic, online inspection system based on machine vision has been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, provides interpretability so that operators can determine failure causes and reduce the mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.

  5. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r² singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering.

  6. Soft modes at the critical end point in the chiral effective models

    International Nuclear Information System (INIS)

    Fujii, Hirotsugu; Ohtani, Munehisa

    2004-01-01

    At the critical end point in QCD phase diagram, the scalar, vector and entropy susceptibilities are known to diverge. The dynamic origin of this divergence is identified within the chiral effective models as softening of a hydrodynamic mode of the particle-hole-type motion, which is a consequence of the conservation law of the baryon number and the energy. (author)

  7. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R&E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
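
The abstract does not spell out the weights used by the proximity-based weighting method (method 1), so a generic inverse-distance weighting scheme is sketched below to illustrate the idea of estimating a planar dose map from a handful of point measurements; the weighting exponent and function names are assumptions:

```python
import numpy as np

def idw_dose(points, doses, grid, power=2.0):
    """Estimate dose at (M, 2) grid locations from (N, 2) measurement
    points with doses (N,), via inverse-distance weighting."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    d = np.maximum(d, 1e-9)        # avoid division by zero at the probes
    w = 1.0 / d**power
    return (w @ doses) / w.sum(axis=1)
```

At a measurement location the estimate reproduces the measured value, and between probes it smoothly interpolates, which is the behaviour a proximity-weighted planar dose estimate needs.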

  8. Pointing Verification Method for Spaceborne Lidars

    Directory of Open Access Journals (Sweden)

    Axel Amediek

    2017-01-01

    Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model (DEM) and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used DEM can be obtained.
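
The comparison idea can be sketched as a toy grid search: slide the ground-spot trace laterally against the DEM and keep the shift that minimizes the elevation RMSE. The callable `dem(x, y)`, the argument names, and the brute-force search are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def best_lateral_shift(track_xy, lidar_z, dem, shifts):
    """Return the across-track shift (same units as track_xy) that best
    aligns lidar-derived ground elevations with a DEM callable dem(x, y)."""
    def rmse(s):
        z = dem(track_xy[:, 0] + s, track_xy[:, 1])
        return np.sqrt(np.mean((z - lidar_z) ** 2))
    errs = [rmse(s) for s in shifts]
    return shifts[int(np.argmin(errs))]
```

The recovered shift corresponds to the lateral pointing bias; the same idea applied along-track gives the longitudinal deviation.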

  9. A Thin Plate Spline-Based Feature-Preserving Method for Reducing Elevation Points Derived from LiDAR

    Directory of Open Access Journals (Sweden)

    Chuanfa Chen

    2015-09-01

    Full Text Available The light detection and ranging (LiDAR) technique is currently one of the most important tools for collecting elevation points at high density in the context of digital elevation model (DEM) construction. However, the high density data always leads to serious time and memory consumption problems in data processing. In this paper, we have developed a thin plate spline (TPS)-based feature-preserving (TPS-F) method for LiDAR-derived ground data reduction, which selects a certain amount of significant terrain points and extracts geomorphological features from the raw dataset to keep the accuracy of the constructed DEMs as high as possible while maximally preserving terrain features. We employed four study sites with different topographies (i.e., flat, undulating, hilly and mountainous terrains) to analyze the performance of TPS-F for LiDAR data reduction in the context of DEM construction. These results were compared with those of the TPS-based algorithm without features (TPS-W) and two classical data selection methods, maximum z-tolerance (Max-Z) and the random method. Results show that irrespective of terrain characteristics, the two versions of the TPS-based approach (i.e., TPS-F and TPS-W) are always more accurate than the classical methods in terms of error range and root mean square error. Moreover, in terms of streamline matching rate (SMR), TPS-F has a better ability to preserve geomorphological features, especially in mountainous terrain. For example, the average SMR of TPS-F is 89.2% in the mountainous area, while those of TPS-W, Max-Z and the random method are 56.6%, 34.7% and 35.3%, respectively.
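
The TPS surface underlying such selection methods interpolates scattered elevation samples with the radial basis φ(r) = r² log r plus an affine part. A minimal numpy-only fit/evaluate pair is sketched below (the standard TPS system, not the paper's reduction algorithm; points whose TPS residual is small are the ones a reduction scheme can afford to drop):

```python
import numpy as np

def _phi(d):
    """TPS kernel phi(r) = r^2 * log(r), with phi(0) = 0."""
    out = np.zeros_like(d)
    mask = d > 0
    out[mask] = d[mask] ** 2 * np.log(d[mask])
    return out

def tps_fit(xy, z):
    """Solve the interpolation system [[K, P], [P^T, 0]] [w; c] = [z; 0]
    for a 2-D thin plate spline through points (xy, z)."""
    n = len(xy)
    K = _phi(np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), xy])           # affine part 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef[:n], coef[n:]

def tps_eval(xy, w, c, q):
    """Evaluate the fitted spline at (M, 2) query points q."""
    phi = _phi(np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=-1))
    return phi @ w + c[0] + q @ c[1:]
```

Because of the affine term, the spline reproduces planar terrain exactly, so residuals measure genuine local relief.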

  10. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve. If the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM creates changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and then average smoothing is applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing method, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
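
The recipe in the abstract (3σ background noise, segment boundaries where adjacent samples differ by more than 3Nσ, half-segment windows, a couple of averaging iterations) can be sketched directly in numpy. Function and parameter names are illustrative; the paper's exact implementation may differ:

```python
import numpy as np

def assm_smooth(signal, background, n=3, iterations=2):
    """Adaptive segmentation smoothing (ASSM-style sketch)."""
    sigma = np.std(background)                 # noise level of the background
    # segment end points: jumps between adjacent samples larger than 3*N*sigma
    jumps = np.where(np.abs(np.diff(signal)) > 3 * n * sigma)[0] + 1
    bounds = np.concatenate(([0], jumps, [len(signal)]))
    out = np.asarray(signal, dtype=float).copy()
    for _ in range(iterations):                # iterate to reduce end-point bias
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            seg = out[lo:hi]
            win = max(1, len(seg) // 2)        # window = half the segment length
            kernel = np.ones(win) / win
            out[lo:hi] = np.convolve(seg, kernel, mode="same")
    return out
```

Because each segment is averaged separately, sharp transitions (e.g., cloud edges in a lidar echo) survive while in-segment noise is suppressed.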

  11. New supervised learning theory applied to cerebellar modeling for suppression of variability of saccade end points.

    Science.gov (United States)

    Fujita, Masahiko

    2013-06-01

    A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
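
The core claim, that a single hidden layer of threshold units with learned output weights can approximate a continuous input-output relationship, can be demonstrated with a tiny random-features regressor. The random coding of thresholds and the least-squares readout below are illustrative stand-ins for the Golgi-granule coding and Purkinje-weight learning described above, not the paper's construction:

```python
import numpy as np

def fit_threshold_net(x, y, n_hidden=1000, seed=0):
    """Fit y ≈ f(x) with one hidden layer of fixed random threshold
    (step) units; only the output weights are learned (least squares)."""
    rng = np.random.default_rng(seed)
    sgn = rng.choice([-1.0, 1.0], size=n_hidden)        # firing direction
    thr = rng.uniform(x.min(), x.max(), size=n_hidden)  # firing threshold
    hidden = lambda q: (sgn * (q[:, None] - thr) > 0).astype(float)
    beta, *_ = np.linalg.lstsq(hidden(x), y, rcond=None)
    return lambda q: hidden(q) @ beta
```

With enough random thresholds the step-function basis is dense enough to reproduce a smooth target on the training grid, which is the approximation property the theory relies on.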

  12. Cellulase production by two mutant strains of Trichoderma longibrachiatum QM 9414 and Rut C30

    International Nuclear Information System (INIS)

    Blanco, M. J.

    1991-01-01

    Native or pretreated biomass from Onopordum nervosum Boiss. has been examined as a candidate feedstock for cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30. Batch cultivation methods were evaluated and compared with previous experiments using ball-milled, crystalline cellulose (Solka floc). Batch cultivation of T. longibrachiatum Rut C30 on 5% (w/v) acid-pretreated O. nervosum biomass yielded enzyme productivities and activities comparable to those obtained on Solka floc. However, the overall enzyme production performance was lower than on Solka floc at comparable cellulose concentrations. This fact may be due to the accumulation of pretreatment by-products and lignin in the fermenter. (Author) 40 refs.

  13. Computation of Hydration Free Energies Using the Multiple Environment Single System Quantum Mechanical/Molecular Mechanical Method.

    Science.gov (United States)

    König, Gerhard; Mei, Ye; Pickard, Frank C; Simmonett, Andrew C; Miller, Benjamin T; Herbert, John M; Woodcock, H Lee; Brooks, Bernard R; Shao, Yihan

    2016-01-12

    A recently developed MESS-E-QM/MM method (multiple-environment single-system quantum mechanical/molecular mechanical calculations with a Roothaan-step extrapolation) is applied to the computation of hydration free energies for the blind SAMPL4 test set and for 12 small molecules. First, free energy simulations are performed with a classical molecular mechanics force field using fixed-geometry solute molecules and explicit TIP3P solvent, and then the non-Boltzmann-Bennett method is employed to compute the QM/MM correction (QM/MM-NBB) to the molecular mechanical hydration free energies. For the SAMPL4 set, MESS-E-QM/MM-NBB corrections to the hydration free energy can be obtained 2 or 3 orders of magnitude faster than fully converged QM/MM-NBB corrections, and, on average, the hydration free energies predicted with MESS-E-QM/MM-NBB fall within 0.10-0.20 kcal/mol of fully converged QM/MM-NBB results. Out of five density functionals (BLYP, B3LYP, PBE0, M06-2X, and ωB97X-D), the BLYP functional is found to be most compatible with the TIP3P solvent model and yields the most accurate hydration free energies against experimental values for the solute molecules included in this study.
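
The structure of such MM-to-QM/MM corrections can be illustrated with the simpler one-sided Zwanzig (free energy perturbation) formula, ΔA = −kT ln⟨exp(−(U_QM − U_MM)/kT)⟩_MM, evaluated over MM-sampled frames; the non-Boltzmann Bennett estimator used in the paper is a more robust two-sided generalization. Function name and the 298 K value of kT are illustrative:

```python
import numpy as np

def fep_correction(u_mm, u_qm, kT=0.593):
    """One-sided Zwanzig estimate of the MM -> QM free energy correction.
    u_mm, u_qm: potential energies (kcal/mol) of the same MM-sampled
    frames; kT defaults to ~0.593 kcal/mol (298 K)."""
    du = np.asarray(u_qm) - np.asarray(u_mm)
    shift = du.min()   # subtract the minimum for a stable exponential average
    return shift - kT * np.log(np.mean(np.exp(-(du - shift) / kT)))
```

A quick sanity check: if the QM energies differ from the MM energies by a constant, the correction is exactly that constant, independent of temperature.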

  14. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...

  15. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N; Chetty, I [Henry Ford Health System, Detroit, MI (United States); Snyder, K [Henry Ford Hospital System, Detroit, MI (United States); Scheib, S [Varian Medical System, Barton (Switzerland); Qin, Y; Li, H [Henry Ford Health System, Detroit, Michigan (United States)

    2016-06-15

    Purpose: To implement a novel image analysis technique, “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw, MLC and cone defined field sizes respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American
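
The center pixel measurement reduces to a small geometric calculation: the targeting error is the distance between the detected BB centre and the panel's central pixel corrected by the known imager projection offset. A sketch with illustrative argument names and units (pixel coordinates in, millimetres out):

```python
import numpy as np

def radial_deviation(bb_center_px, imager_offset_px, panel_shape, pixel_mm):
    """Center-pixel-style Winston-Lutz deviation in mm: distance between
    the BB centre and the offset-corrected central pixel of the panel."""
    central = np.array([(panel_shape[0] - 1) / 2.0,
                        (panel_shape[1] - 1) / 2.0])
    iso_px = central + np.asarray(imager_offset_px)  # projected isocentre
    return np.linalg.norm(np.asarray(bb_center_px) - iso_px) * pixel_mm
```

Unlike the classic method, no field-edge (FWHM) detection enters the measurement, which is one plausible reason for the smaller spread reported above.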

  16. Cellulase production by two mutant strains of Trichoderma longibrachiatum QM9414 and Rut C30; Produccion de celulasas a partir de dos cepas hiperproductoras de Trichoderma longibrachiatum QM9414 y Rut C30

    Energy Technology Data Exchange (ETDEWEB)

    Blanco, M J

    1991-07-01

    Native or pretreated biomass from Onopordum nervosum Boiss. has been examined as a candidate feedstock for cellulase production by two mutant strains of Trichoderma longibrachiatum, QM9414 and Rut C30. Batch cultivation methods were evaluated and compared with previous experiments using ball-milled, crystalline cellulose (Solka floc). Batch cultivation of T. longibrachiatum Rut C30 on 5% (w/v) acid-pretreated O. nervosum biomass yielded enzyme productivities and activities comparable to those obtained on Solka floc. However, the overall enzyme production performance was lower than on Solka floc at comparable cellulose concentrations. This fact may be due to the accumulation of pretreatment by-products and lignin in the fermenter. (Author) 40 refs.

  17. Testing of an End-Point Control Unit Designed to Enable Precision Control of Manipulator-Coupled Spacecraft

    Science.gov (United States)

    Montgomery, Raymond C.; Ghosh, Dave; Tobbe, Patrick A.; Weathers, John M.; Manouchehri, Davoud; Lindsay, Thomas S.

    1994-01-01

    This paper presents an end-point control concept designed to enable precision telerobotic control of manipulator-coupled spacecraft. The concept employs a hardware unit (end-point control unit, EPCU) that is positioned between the end-effector of the Space Shuttle Remote Manipulator System and the payload. Features of the unit are active compliance (control of the displacement between the end-effector and the payload), to allow precision control of payload motions, and inertial load relief, to prevent the transmission of loads between the end-effector and the payload. This paper presents the concept and studies the active compliance feature using a simulation and hardware. Results of the simulation show the effectiveness of the EPCU in smoothing the motion of the payload. Results are presented from initial, limited tests of a laboratory hardware unit on a robotic arm testbed at the Marshall Space Flight Center. Tracking performance of the arm in a constant speed automated retraction and extension maneuver of a heavy payload with and without the unit active is compared for the design speed and higher speeds. Simultaneous load reduction and tracking performance are demonstrated using the EPCU.

  18. NNAlign: A Web-Based Prediction Method Allowing Non-Expert End-User Discovery of Sequence Motifs in Quantitative Peptide Data

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Schafer-Nielsen, Claus; Lund, Ole

    2011-01-01

    Recent advances in high-throughput technologies have made it possible to generate both gene and protein sequence data at an unprecedented rate and scale thereby enabling entirely new "omics"-based approaches towards the analysis of complex biological processes. However, the amount and complexity...... to interpret large data sets. We have recently developed a method, NNAlign, which is generally applicable to any biological problem where quantitative peptide data is available. This method efficiently identifies underlying sequence patterns by simultaneously aligning peptide sequences and identifying motifs...... associated with quantitative readouts. Here, we provide a web-based implementation of NNAlign allowing non-expert end-users to submit their data (optionally adjusting method parameters), and in return receive a trained method (including a visual representation of the identified motif) that subsequently can...

  19. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Directory of Open Access Journals (Sweden)

    Yin Yanshu

    2017-12-01

    Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  20. End Point of the Ultraspinning Instability and Violation of Cosmic Censorship

    Science.gov (United States)

    Figueras, Pau; Kunesch, Markus; Lehner, Luis; Tunyasuvunakool, Saran

    2017-04-01

    We determine the end point of the axisymmetric ultraspinning instability of asymptotically flat Myers-Perry black holes in D =6 spacetime dimensions. In the nonlinear regime, this instability gives rise to a sequence of concentric rings connected by segments of black membrane on the rotation plane. The latter become thinner over time, resulting in the formation of a naked singularity in finite asymptotic time and hence a violation of the weak cosmic censorship conjecture in asymptotically flat higher-dimensional spaces.

  1. General base catalysis for cleavage by the active-site cytosine of the hepatitis delta virus ribozyme: QM/MM calculations establish chemical feasibility

    Czech Academy of Sciences Publication Activity Database

    Banáš, Pavel; Rulíšek, Lubomír; Hánošová, V.; Svozil, Daniel; Walter, N.G.; Šponer, Jiří; Otyepka, Michal

    2008-01-01

    Roč. 112, č. 35 (2008), s. 11177-11187 ISSN 1520-6106 R&D Projects: GA MŠk LC512; GA MŠk(CZ) LC06030; GA AV ČR(CZ) IAA400040802; GA AV ČR 1QS500040581 Grant - others:NIH(US) GM62357 Institutional research plan: CEZ:AV0Z40550506; CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords : HDV ribozyme * catalysis * QM/MM calculations Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.189, year: 2008

  2. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness

    Science.gov (United States)

    Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.

    2012-01-01

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
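The lag-1 autocorrelation indicator discussed above can be sketched in a few lines: compute the autocorrelation in a sliding window over the (detrended) record and look for a sustained rise as the tipping point is approached. The window length and AR(1) toy data below are illustrative; the paper additionally examines detrending choices, data aggregation and DFA-based variants.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a (mean-removed) window."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def csd_indicator(series, window):
    """Sliding-window lag-1 autocorrelation of a pre-detrended series.

    A sustained rise toward 1 is the critical-slowing-down signature
    used as an early warning of an approaching bifurcation.
    """
    return np.array([lag1_autocorrelation(series[i:i + window])
                     for i in range(len(series) - window + 1)])
```

On a synthetic AR(1) record whose memory increases over time, the indicator rises accordingly, mimicking the behavior expected before an abrupt transition.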

  3. MDEP Common Position CP-VICWG-01. Common Position: Establishment of Common QA/QM Criteria for the Multinational Vendor Inspection CP-VICWG-01

    International Nuclear Information System (INIS)

    2015-01-01

    This document provides a set of common positions for harmonizing inspection criteria called 'Common QA/QM Criteria' which will be used in Multinational Vendor Inspections. This document was prepared by the Vendor Inspection Co-operation Working Group (VICWG) of the Multinational Design Evaluation Program (MDEP). The 'Common QA/QM Criteria' provides the basic areas for consideration when performing Vendor Inspections. The criteria have been developed in conformity with International Codes and Standards such as IAEA, ISO, etc. that MDEP member countries have adopted

  4. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    Directory of Open Access Journals (Sweden)

    Zhanchao Li

    2013-01-01

    Full Text Available The diagnosis of abnormal crack behavior in concrete dams has long been a hot topic, and a difficult one, in the safety monitoring of hydraulic structures. Based on how such abnormality manifests itself in parametric and nonparametric statistical models, the internal relation between abnormal crack behavior and statistical change-point theory is analyzed in depth, both from the structural instability of the parametric model and from changes in the sequence distribution law of the nonparametric model. On this basis, through reduction of the change-point problem, establishment of a basic nonparametric change-point model, and asymptotic analysis of the test method for the basic change-point problem, a nonparametric change-point diagnosis method for abnormal crack behavior in concrete dams is developed, allowing for the fact that in practice a crack behavior sequence may contain more than one abnormality point. The method is applied to an actual project, demonstrating its effectiveness and scientific soundness. It has a complete theoretical basis and strong practicality, with broad application prospects in engineering practice.
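The abstract does not spell out its test statistic, so as an illustration here is Pettitt's classical nonparametric change-point test, a standard choice for detecting a single change in the distribution of a monitoring sequence; multiple abnormality points, as allowed for above, would require applying it recursively to sub-sequences. This is a generic substitute, not the authors' formulation.

```python
import numpy as np

def pettitt(x):
    """Pettitt's nonparametric change-point test.

    Returns the most probable change-point index (last index of the
    first segment, 1-based) and an approximate two-sided p-value for
    the null hypothesis of no change in distribution.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum over i<=t, j>t of sign(x_i - x_j)
    u = np.array([np.sign(x[:t, None] - x[None, t:]).sum()
                  for t in range(1, n)])
    k = np.abs(u).max()
    tau = int(np.abs(u).argmax()) + 1
    p = min(1.0, 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2)))
    return tau, p
```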

  5. Is Chronic Dialysis the Right Hard Renal End Point To Evaluate Renoprotective Drug Effects?

    NARCIS (Netherlands)

    Weldegiorgis, Misghina; de Zeeuw, Dick; Dwyer, Jamie P.; Mol, Peter; Heerspink, Hiddo J. L.

    2017-01-01

    Background and objectives: RRT and doubling of serum creatinine are considered the objective hard end points in nephrology intervention trials. Because both are assumed to reflect changes in the filtration capacity of the kidney, drug effects, if present, are attributed to kidney protection.

  6. Network based control point for UPnP QoS architecture

    DEFF Research Database (Denmark)

    Brewka, Lukasz Jerzy; Wessing, Henrik; Rossello Busquet, Ana

    2011-01-01

    Enabling the coexistence of non-UPnP Devices in a UPnP QoS Architecture is an important issue that might have a major impact on the deployment and usability of UPnP in future home networks. The work presented here shows potential issues of placing a non-UPnP Device in a network managed by UPnP QoS. We...... address this issue by extensions to the UPnP QoS Architecture that can prevent non-UPnP Devices from degrading the overall QoS level. The obtained results show that deploying the Network Based Control Point service with an efficient traffic classifier significantly improves the end-to-end packet delay...

  7. Apparatus and method for applying an end plug to a fuel rod tube end

    International Nuclear Information System (INIS)

    Rieben, S.L.; Wylie, M.E.

    1987-01-01

    An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described

  8. Omics Analyses of Trichoderma reesei CBS999.97 and QM6a Indicate the Relevance of Female Fertility to Carbohydrate-Active Enzyme and Transporter Levels.

    Science.gov (United States)

    Tisch, Doris; Pomraning, Kyle R; Collett, James R; Freitag, Michael; Baker, Scott E; Chen, Chia-Ling; Hsu, Paul Wei-Che; Chuang, Yu Chien; Schuster, Andre; Dattenböck, Christoph; Stappler, Eva; Sulyok, Michael; Böhmdorfer, Stefan; Oberlerchner, Josua; Wang, Ting-Fang; Schmoll, Monika

    2017-11-15

    The filamentous fungus Trichoderma reesei is found predominantly in the tropics but also in more temperate regions, such as Europe, and is widely known as a producer of large amounts of plant cell wall-degrading enzymes. We sequenced the genome of the sexually competent isolate CBS999.97, which is phenotypically different from the female sterile strain QM6a but can cross sexually with QM6a. Transcriptome data for growth on cellulose showed that entire carbohydrate-active enzyme (CAZyme) families are consistently differentially regulated between these strains. We evaluated backcrossed strains of both mating types, which acquired female fertility from CBS999.97 but maintained a mostly QM6a genetic background, and we could thereby distinguish between the effects of strain background and female fertility or mating type. We found clear regulatory differences associated with female fertility and female sterility, including regulation of CAZyme and transporter genes. Analysis of carbon source utilization, transcriptomes, and secondary metabolites in these strains revealed that only a few changes in gene regulation are consistently correlated with different mating types. Different strain backgrounds (QM6a versus CBS999.97) resulted in the most significant alterations in the transcriptomes and in carbon source utilization, with decreased growth of CBS999.97 on several amino acids (for example proline or alanine), which further correlated with the downregulation of genes involved in the respective pathways. In combination, our findings support a role of fertility-associated processes in physiology and gene regulation and are of high relevance for the use of sexual crossing in combining the characteristics of two compatible strains or quantitative trait locus (QTL) analysis. IMPORTANCE Trichoderma reesei is a filamentous fungus with a high potential for secretion of plant cell wall-degrading enzymes. We sequenced the genome of the fully fertile field isolate CBS999.97 and

  9. Potentiometric end point detection in the EDTA titrimetric determination of gallium

    International Nuclear Information System (INIS)

    Gopinath, N.; Renuka, M.; Aggarwal, S.K.

    2001-01-01

    Gallium is titrated with EDTA in the presence of a known amount of Fe(III) in HNO3 solution at pH 2 to 3. The end point is detected potentiometrically using a bright platinum wire - saturated calomel (SCE) reference electrode system, the redox couple being Fe(III)/Fe(II). Since the Fe(III) is also titrated by EDTA, its equivalent is subtracted from the titre value to obtain the EDTA equivalent to gallium alone. A precision and accuracy of 0.2 to 0.4% were obtained in the results for gallium in the range of 8 to 2 mg. (author)
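The subtraction logic amounts to simple 1:1 EDTA:metal stoichiometry. A sketch with illustrative titre values (not the paper's data):

```python
GA_MOLAR_MASS = 69.723  # g/mol

def gallium_mg(v_edta_ml, m_edta, fe_mmol):
    """Ga mass (mg) from a combined Ga + Fe(III) EDTA titration.

    EDTA complexes both metals 1:1, so the known amount of Fe(III)
    is subtracted from the total titre to leave the Ga equivalent.
    """
    total_mmol = v_edta_ml * m_edta   # mmol EDTA consumed in total
    ga_mmol = total_mmol - fe_mmol    # 1:1 EDTA:metal stoichiometry
    return ga_mmol * GA_MOLAR_MASS    # mmol x g/mol = mg
```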

  10. [Interactions of DNA bases with individual water molecules. Molecular mechanics and quantum mechanics computation results vs. experimental data].

    Science.gov (United States)

    Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I

    2013-01-01

    To elucidate details of DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one of the DNA bases and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio quantum mechanics (QM) method MP2/6-31G(d,p) were compared with one another and with experimental data. The calculations demonstrated qualitative agreement between the geometry characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base, forming hydrogen bonds with both. Nevertheless, the relative depth of some minima and the details of mutual water-base positions in these minima depend on the method used. The analysis revealed that some of the differences between the methods are insignificant for the description of DNA hydration, while others are important. The MM calculations reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with a set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in crystals of DNA duplex fragments, while some of these data cannot be rationalized by the QM calculations.

  11. Taylor's series method for solving the nonlinear point kinetics equations

    International Nuclear Information System (INIS)

    Nahla, Abdallah A.

    2011-01-01

    Highlights: → Taylor's series method is applied to the nonlinear point kinetics equations. → The general-order derivatives are derived for this system. → The stability of Taylor's series method is studied. → Taylor's series method is A-stable for negative reactivity. → Taylor's series method is an accurate computational technique. - Abstract: Taylor's series method for solving the point reactor kinetics equations with multiple groups of delayed neutrons, in the presence of Newtonian temperature feedback reactivity, is applied and programmed in FORTRAN. This system is a set of coupled stiff nonlinear ordinary differential equations. The numerical method is based on derivatives of successively higher order of the neutron density, the precursor concentrations of the i-th group of delayed neutrons, and the reactivity; the r-th order derivatives are derived. The stability of Taylor's series method is discussed. Three sets of applications are computed: step, ramp and temperature feedback reactivities. Taylor's series method is an accurate computational technique, is stable for negative step, negative ramp and temperature feedback reactivities, and is more useful than the traditional methods for solving the nonlinear point kinetics equations.
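For one delayed-neutron group and constant reactivity, the idea can be sketched directly: because the system is then linear in n and c, the (k+1)-th derivatives satisfy the same relations as the first, so a Taylor step of arbitrary order is a short recursion. The parameters below are illustrative, and the sketch omits the multi-group and feedback terms of the paper.

```python
def taylor_step(n, c, rho, beta, lam, Lam, dt, order=6):
    """One Taylor-series step of one-group point kinetics with constant rho:

        dn/dt = ((rho - beta)/Lam) n + lam c
        dc/dt = (beta/Lam) n - lam c

    Since the system is linear, n^(k+1), c^(k+1) follow by applying the
    same relations to n^(k), c^(k).
    """
    dn, dc = n, c              # k-th derivatives, starting at k = 0
    new_n = new_c = 0.0
    fact = 1.0                 # k!
    for k in range(order + 1):
        new_n += dn * dt**k / fact
        new_c += dc * dt**k / fact
        dn, dc = (((rho - beta) / Lam) * dn + lam * dc,
                  (beta / Lam) * dn - lam * dc)
        fact *= (k + 1)
    return new_n, new_c
```

With the precursor at its equilibrium level and rho = 0, the neutron density stays constant, which is a convenient sanity check on the recursion.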

  12. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorology stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for the rest of the months.
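IDW estimates the value at an unsampled point as a distance-weighted average of the station observations. A minimal sketch (the power parameter of 2 is the common default, not necessarily the paper's setting):

```python
import numpy as np

def idw(stations, values, query, power=2.0):
    """Inverse Distance Weighted estimate at a query point.

    stations: (n, 2) coordinates; values: (n,) observations.
    An exact hit on a station returns that station's value.
    """
    stations = np.asarray(stations, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(stations - np.asarray(query, dtype=float), axis=1)
    if np.any(d == 0.0):
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, values) / w.sum())
```

Because the weights sum to 1, the estimate always lies within the range of the observed values, which is one reason RBF variants can outperform IDW where extrapolation beyond the station range is needed.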

  13. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    OpenAIRE

    Lau, Han Yih; Botella, Jose R.

    2017-01-01

    Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, quickness, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care di...

  14. A multi points ultrasonic detection method for material flow of belt conveyor

    Science.gov (United States)

    Zhang, Li; He, Rongjun

    2018-03-01

    Because single-point ultrasonic ranging gives large detection errors in measuring the material flow of a belt conveyor when coal is unevenly distributed or large, a material flow detection method for belt conveyors is designed based on multi-point ultrasonic ranging. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. The test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large coal with uneven distribution.
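The computation described can be sketched as: convert each sensor's range reading to a material height, integrate the heights across the belt with the trapezoidal rule to approximate the cross-sectional area, and multiply by the belt speed. The sensor geometry below is illustrative, not the paper's hardware layout.

```python
import numpy as np

def material_flow(sensor_x, empty_belt_dist, measured_dist, belt_speed):
    """Volumetric flow (m^3/s) from multi-point ultrasonic ranging.

    Each sensor measures the distance down to the surface below it; the
    material height at that point is the empty-belt distance minus the
    measured distance. The cross-section is approximated with the
    trapezoidal rule, and flow is section area times belt speed.
    """
    x = np.asarray(sensor_x, dtype=float)
    height = np.clip(np.asarray(empty_belt_dist, dtype=float)
                     - np.asarray(measured_dist, dtype=float), 0.0, None)
    # trapezoidal rule across the belt width
    area = float(np.sum((height[1:] + height[:-1]) * np.diff(x)) / 2.0)
    return area * belt_speed
```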

  15. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement and low identification rates. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.

  16. An adaptive quantum mechanics/molecular mechanics method for the infrared spectrum of water: incorporation of the quantum effect between solute and solvent.

    Science.gov (United States)

    Watanabe, Hiroshi C; Banno, Misa; Sakurai, Minoru

    2016-03-14

    Quantum effects in solute-solvent interactions, such as the many-body effect and the dipole-induced dipole, are known to be critical factors influencing the infrared spectra of species in the liquid phase. For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations.

  17. Nutrition content of brisket point end part of Simental Ongole Crossbred meat boiled at various temperatures

    Science.gov (United States)

    Riyanto, J.; Sudibya; Cahyadi, M.; Aji, A. P.

    2018-01-01

    The aim of this study was to determine the nutritional content of the brisket point end of Simental Ongole Crossbred meat at various boiling treatments. Simental Ongole Crossbred cattle were fattened for 9 months and then slaughtered at a slaughterhouse, and the brisket point end part of the meat was prepared for analysis of its nutritional content using FoodScan. The samples were boiled at 100°C for 0 (TR), 15 (R15), and 30 (R30) minutes, respectively. The data were analysed using a Completely Randomized Design (CRD), and Duncan's multiple range test (DMRT) was conducted to differentiate among the three treatments. The results showed that the boiling treatments significantly affected the moisture and cholesterol contents of the beef (P<0.05), while the fat content was not significantly affected. Boiling decreased the beef water content from 72.77 to 70.84%, and increased the protein and cholesterol contents from 20.77 to 25.14% and from 47.55 to 50.45 mg/100 g sample, respectively. The conclusion of this study was that boiling beef at 100°C for 15 and 30 minutes decreases the water content and increases the protein and cholesterol contents of the brisket point end of Simental Ongole Crossbred beef.

  18. How Many Conformations Need To Be Sampled To Obtain Converged QM/MM Energies? The Curse of Exponential Averaging.

    Science.gov (United States)

    Ryde, Ulf

    2017-11-14

    Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to study enzymatic reactions. They are often based on a set of structures minimized from snapshots of a molecular dynamics simulation, to include some of the dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies over the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ itself serves as an excellent convergence criterion.
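The σ-dependence is easy to reproduce numerically: for Gaussian energies with mean μ and standard deviation σ, the exact exponential average is μ − σ²/(2kT), which is also what the second-order cumulant expression returns. A sketch of the two estimators (the kT value is an assumed room-temperature figure; column names and sample sizes are illustrative):

```python
import numpy as np

KT = 2.494  # kJ/mol at ~300 K

def exp_average(dE, kT=KT):
    """Exponential (Zwanzig) average, -kT ln<exp(-dE/kT)>.

    The minimum energy is factored out for numerical stability.
    """
    e = np.asarray(dE, dtype=float)
    e0 = e.min()
    return float(e0 - kT * np.log(np.mean(np.exp(-(e - e0) / kT))))

def cumulant2(dE, kT=KT):
    """Second-order cumulant approximation, <dE> - var(dE)/(2 kT)."""
    e = np.asarray(dE, dtype=float)
    return float(e.mean() - e.var() / (2.0 * kT))
```

Rerunning `exp_average` on many small samples drawn with a large σ (say 7 kJ/mol) shows the systematic bias and huge scatter that motivate the paper's snapshot-count estimates, while `cumulant2` converges far faster for near-Gaussian data.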

  19. Kick-Off Point (KOP) and End of Buildup (EOB) Data Analysis in Trajectory Design

    Directory of Open Access Journals (Sweden)

    Novrianti Novrianti

    2017-06-01

    Full Text Available Well X is a development well drilled directionally. Directional drilling was chosen because the coordinate target of Well X lies above a buffer zone. The directional track plan needs an accurate survey calculation in order to produce the right track for the directional drilling. There are many survey calculation methods in directional drilling, such as the tangential, underbalance, average angle, radius of curvature, and mercury methods. The minimum curvature method is used in this directional track calculation because it gives less error than the other methods. Kick-Off Point (KOP) and End of Buildup (EOB) analyses were done at depths of 200 ft, 400 ft, and 600 ft to determine the trajectory design and the optimal inclination; potential hole problems were also determined for each trajectory design. The optimal trajectory design was obtained at a KOP depth of 200 ft, where the maximum inclination of 18.87°, below the 35° limit, still reaches the target quite well at 1632.28 ft TVD and 408.16 AHD. Hole problems would occur if the trajectory were designed from 600 ft: stuck pipe, and casing or tubing that would not be able to bend.
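A minimum-curvature position step between two survey stations can be sketched as follows; the dogleg angle and ratio factor follow the standard formulas, and the station data used in the sanity checks are illustrative, not this well's actual surveys.

```python
import numpy as np

def minimum_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """Position increment (north, east, TVD) between two survey stations
    via the minimum curvature method. Depths in ft, angles in degrees.
    """
    i1, a1, i2, a2 = np.radians([inc1, azi1, inc2, azi2])
    dmd = md2 - md1
    # dogleg angle between the two station direction vectors
    cos_dl = np.cos(i2 - i1) - np.sin(i1) * np.sin(i2) * (1.0 - np.cos(a2 - a1))
    dl = np.arccos(np.clip(cos_dl, -1.0, 1.0))
    # ratio factor smooths the straight-line (balanced tangential) step
    rf = 1.0 if dl < 1e-12 else (2.0 / dl) * np.tan(dl / 2.0)
    half = 0.5 * dmd * rf
    dn = half * (np.sin(i1) * np.cos(a1) + np.sin(i2) * np.cos(a2))
    de = half * (np.sin(i1) * np.sin(a1) + np.sin(i2) * np.sin(a2))
    dtvd = half * (np.cos(i1) + np.cos(i2))
    return float(dn), float(de), float(dtvd)
```

For a vertical segment the step reduces to pure TVD, and a 0°-to-90° build over one station reproduces the circular-arc result exactly, which is the sense in which the method "gives less error" than the straight-line schemes.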

  20. Method for assembling dynamoelectric machine end shield parts

    International Nuclear Information System (INIS)

    Thomson, J.M.

    1984-01-01

    Methods, apparatus, and systems are provided for automatically assembling end shield assemblies or subassemblies for electric motors. In a preferred form, a system and methods are provided that utilize a non-palletized, non-synchronous concept to convey end shields through a number of assembly stations. At process stations situated along a conveyor, operations are performed on components. One method includes controlling the traffic of sub-assemblies by toggle-type escapements. A stop or latch of unique design stops end shield components in midstream, and ''lifts'' of unique design disengage parts from the conveyor and also support such parts during various operations. Photo-optic devices and proximity and reed switch mechanisms are utilized for control purposes. The work stations involved in one system include a unique assembly and pressing station involving oil well covers; a unique feed wick seating system; a unique lubricant adding operation; and unique ''building block'' mechanisms and methods

  1. GFR Decline as an Alternative End Point to Kidney Failure in Clinical Trials : A Meta-analysis of Treatment Effects From 37 Randomized Trials

    NARCIS (Netherlands)

    Inker, Lesley A.; Lambers Heerspink, Hiddo J.; Mondal, Hasi; Schmid, Christopher H.; Tighiouart, Hocine; Noubary, Farzad; Coresh, Josef; Greene, Tom; Levey, Andrew S.

    2014-01-01

    Background: There is increased interest in using alternative end points for trials of kidney disease progression. The currently established end points of end-stage renal disease and doubling of serum creatinine level, equivalent to a 57% decline in estimated glomerular filtration rate (eGFR), are

  2. Absolute proton hydration free energy, surface potential of water, and redox potential of the hydrogen electrode from first principles: QM/MM MD free-energy simulations of sodium and potassium hydration

    Science.gov (United States)

    Hofer, Thomas S.; Hünenberger, Philippe H.

    2018-06-01

    The absolute intrinsic hydration free energy G°(H+, wat) of the proton, the surface electric potential jump χ°(wat) upon entering bulk water, and the absolute redox potential V°(H+, wat) of the reference hydrogen electrode are cornerstone quantities for formulating single-ion thermodynamics on absolute scales. They can be easily calculated from each other but remain fundamentally elusive, i.e., they cannot be determined experimentally without invoking some extra-thermodynamic assumption (ETA). The Born model provides a natural framework to formulate such an assumption (Born ETA), as it automatically factors out the contribution of crossing the water surface from the hydration free energy. However, this model describes the short-range solvation inaccurately and relies on the choice of arbitrary ion-size parameters. In the present study, both shortcomings are alleviated by performing first-principle calculations of the hydration free energies of the sodium (Na+) and potassium (K+) ions. The calculations rely on thermodynamic integration based on quantum-mechanical molecular-mechanical (QM/MM) molecular dynamics (MD) simulations involving the ion and 2000 water molecules. The ion and its first hydration shell are described using a correlated ab initio method, namely resolution-of-identity second-order Møller-Plesset perturbation (RIMP2). The next hydration shells are described using the extended simple point charge water model (SPC/E). The hydration free energy is first calculated at the MM level and subsequently increased by a quantization term accounting for the transformation to a QM/MM description. It is also corrected for finite-size, approximate-electrostatics, and potential-summation errors, as well as standard-state definition. These computationally intensive simulations provide accurate first-principle estimates for G°(H+, wat), χ°(wat), and V°(H+, wat), reported with statistical errors based on a confidence interval of 99%. The values obtained

  3. Simulating chemical reactions in ionic liquids using QM/MM methodology.

    Science.gov (United States)

    Acevedo, Orlando

    2014-12-18

    The use of ionic liquids as a reaction medium for chemical reactions has dramatically increased in recent years due in large part to the numerous reported advances in catalysis and organic synthesis. In some extreme cases, ionic liquids have been shown to induce mechanistic changes relative to conventional solvents. Despite the large interest in these solvents, the molecular factors behind their chemical impact remain poorly understood. This feature article reviews our efforts developing and applying mixed quantum and molecular mechanical (QM/MM) methodology to elucidate the microscopic details of how these solvents operate to enhance rates and alter mechanisms for industrially and academically important reactions, e.g., Diels-Alder, Kemp eliminations, nucleophilic aromatic substitutions, and β-eliminations. Explicit solvent representation provided the medium dependence of the activation barriers and atomic-level characterization of the solute-solvent interactions responsible for the experimentally observed "ionic liquid effects". Technical advances are also discussed, including a linear-scaling pairwise electrostatic interaction alternative to Ewald sums, an efficient polynomial fitting method for modeling proton transfers, and the development of a custom ionic liquid OPLS-AA force field.

  4. Low dose response analysis through a cytogenetic end-point

    International Nuclear Information System (INIS)

    Bojtor, I.; Koeteles, G.J.

    1998-01-01

    The effects of low doses were studied on human lymphocytes of various individuals. The frequency of micronuclei in cytokinesis-blocked cultured lymphocytes was taken as the end-point. The probability distribution of the radiation-induced increment was shown statistically to be asymmetric when the blood samples had been irradiated with doses of 0.01-0.05 Gy of X-rays, similar to that in the unirradiated control population. In contrast, only at or above 1 Gy could the corresponding normal curve be accepted, reflecting an approximately symmetrical scatter of the increments about their mean value. It was found that the slope, as well as the closeness of correlation of the variables, changed considerably as lower and lower dose ranges were selected. Below approximately 0.2 Gy, the increment was even found to be unrelated to the absorbed dose.

  5. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to improve resolution along the slice axis so that it approaches the in-plane spatial resolution; improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, a non-linear interpolation technique in which CT-value images are converted to distance images, is also used. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are initially converted to binary images by thresholding, and sequences of 1-valued pixels are then traced in the vertical or horizontal direction. Each sequence of 1-valued pixels is defined as a line segment with a start point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of their start and end points, and binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
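
    The endpoint-coordinate idea can be sketched as follows (helper names are hypothetical; the paper's implementation details and segment-matching rules are not given in the abstract): runs of 1-valued pixels in a thresholded row become line segments, and the start/end coordinates of matched segments in adjacent slices are interpolated.

```python
def runs_of_ones(row):
    """Return (start, end) pixel-index pairs of consecutive 1-valued pixels."""
    runs, start = [], None
    for i, v in enumerate(row):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def interpolate_segment(seg_a, seg_b, t=0.5):
    """Interpolate the start and end points of two matched line segments."""
    (a0, a1), (b0, b1) = seg_a, seg_b
    return (round(a0 + t * (b0 - a0)), round(a1 + t * (b1 - a1)))

# A segment spanning pixels 2-6 in one slice and 4-10 in the next yields
# an intermediate segment spanning pixels 3-8:
mid = interpolate_segment((2, 6), (4, 10))
```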

  6. Treating electrostatics with Wolf summation in combined quantum mechanical and molecular mechanical simulations.

    Science.gov (United States)

    Ojeda-May, Pedro; Pu, Jingzhi

    2015-11-07

    The Wolf summation approach [D. Wolf et al., J. Chem. Phys. 110, 8254 (1999)], in the damped shifted force (DSF) formalism [C. J. Fennell and J. D. Gezelter, J. Chem. Phys. 124, 234104 (2006)], is extended for treating electrostatics in combined quantum mechanical and molecular mechanical (QM/MM) molecular dynamics simulations. In this development, we split the QM/MM electrostatic potential energy function into the conventional Coulomb r(-1) term and a term that contains the DSF contribution. The former is handled by the standard machinery of cutoff-based QM/MM simulations whereas the latter is incorporated into the QM/MM interaction Hamiltonian as a Fock matrix correction. We tested the resulting QM/MM-DSF method for two solution-phase reactions, i.e., the association of ammonium and chloride ions and a symmetric SN2 reaction in which a methyl group is exchanged between two chloride ions. The performance of the QM/MM-DSF method was assessed by comparing the potential of mean force (PMF) profiles with those from the QM/MM-Ewald and QM/MM-isotropic periodic sum (IPS) methods, both of which include long-range electrostatics explicitly. For ion association, the QM/MM-DSF method successfully eliminates the artificial free energy drift observed in the QM/MM-Cutoff simulations, in remarkable agreement with the two long-range-containing methods. For the SN2 reaction, the free energy of activation obtained by the QM/MM-DSF method agrees well with both the QM/MM-Ewald and QM/MM-IPS results. The latter, however, requires a greater cutoff distance than QM/MM-DSF for a proper convergence of the PMF. Avoiding time-consuming lattice summation, the QM/MM-DSF method yields a 55% reduction in computational cost compared with the QM/MM-Ewald method. These results suggest that, in addition to QM/MM-IPS, the QM/MM-DSF method may serve as another efficient and accurate alternative to QM/MM-Ewald for treating electrostatics in condensed-phase simulations of chemical reactions.
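
    The DSF pair potential underlying the method has a closed form (Fennell and Gezelter's damped shifted-force Coulomb term). A sketch in reduced units follows; it illustrates the classical pair energy only, not the paper's Fock-matrix incorporation into the QM/MM Hamiltonian.

```python
import math

def dsf_pair_energy(qi, qj, r, alpha, rc):
    """Damped shifted-force (DSF) Coulomb pair energy (Fennell-Gezelter form).

    Both the energy and the force go smoothly to zero at the cutoff rc,
    which is what lets a cutoff-based scheme stand in for lattice (Ewald)
    summation. Reduced units: physical prefactor constants are omitted.
    """
    if r > rc:
        return 0.0
    shift = math.erfc(alpha * rc) / rc
    force_shift = (math.erfc(alpha * rc) / rc ** 2
                   + 2.0 * alpha / math.sqrt(math.pi)
                   * math.exp(-(alpha * rc) ** 2) / rc)
    return qi * qj * (math.erfc(alpha * r) / r - shift + force_shift * (r - rc))

# Opposite charges inside the cutoff attract; the energy vanishes at rc:
e = dsf_pair_energy(1.0, -1.0, 3.0, alpha=0.2, rc=10.0)
```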

  7. Multiview point clouds denoising based on interference elimination

    Science.gov (United States)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise restricts these sensors from obtaining accurate results. We therefore propose a method for denoising registered multiview point clouds with high noise. The proposed method aims to fully use redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested qualitatively and quantitatively in experiments. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as Kinect.
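
    The per-iteration rule can be sketched as follows. This is a simplified, unweighted version with invented parameter names; the paper uses weighted average targets and distinguishes two cases for deleting versus moving points.

```python
import math

def denoise_iteration(points, radius=0.1, min_support=2):
    """One denoising pass over registered multiview points.

    Points with fewer than min_support neighbours within `radius` are
    treated as interference and deleted; the remaining points are moved
    to the plain average of their neighbourhood (a stand-in for the
    paper's weighted average targets).
    """
    kept = []
    for i, p in enumerate(points):
        nbrs = [q for j, q in enumerate(points)
                if j != i and math.dist(p, q) <= radius]
        if len(nbrs) >= min_support:
            cluster = nbrs + [p]
            kept.append(tuple(sum(c) / len(cluster) for c in zip(*cluster)))
    return kept

# Three mutually supporting points survive; the isolated outlier is dropped:
cleaned = denoise_iteration(
    [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0), (5.0, 5.0, 5.0)])
```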

  8. PREFACE: EmQM13: Emergent Quantum Mechanics 2013

    Science.gov (United States)

    2014-04-01

    These proceedings comprise the invited lectures of the second international symposium on Emergent Quantum Mechanics (EmQM13), which was held at the premises of the Austrian Academy of Sciences in Vienna, Austria, 3-6 October 2013. The symposium was held at the ''Theatersaal'' of the Academy of Sciences, and was devoted to the open exploration of emergent quantum mechanics, a possible ''deeper level theory'' that interconnects three fields of knowledge: emergence, the quantum, and information. Could a revised image of physical reality emerge from the recognition of new links between emergence, the quantum, and information? Could a novel synthesis pave the way towards a 21st century, ''superclassical'' physics? The symposium provided a forum for discussing (i) important obstacles which need to be overcome as well as (ii) promising developments and research opportunities on the way towards emergent quantum mechanics. Contributions were invited that presented current advances in both standard and unconventional approaches to quantum mechanics. The EmQM13 symposium was co-organized by Gerhard Grössing (Austrian Institute for Nonlinear Studies (AINS), Vienna), and by Jan Walleczek (Fetzer Franklin Fund, USA, and Phenoscience Laboratories, Berlin). After a very successful first conference on the same topic in 2011, the new partnership between AINS and the Fetzer Franklin Fund in producing the EmQM13 symposium was able to further expand interest in the promise of emergent quantum mechanics. The symposium consisted of two parts: an opening evening addressing the general public, and the scientific program of the conference proper. The opening evening took place at the Great Ceremonial Hall (Grosser Festsaal) of the Austrian Academy of Sciences, and it presented talks and a panel discussion on ''The Future of Quantum Mechanics'' with three distinguished speakers: Stephen Adler (Princeton), Gerard 't Hooft (Utrecht) and Masanao Ozawa (Nagoya).
The articles contained in

  9. Predicting the outcome of oral food challenges with hen's egg through skin test end-point titration.

    Science.gov (United States)

    Tripodi, S; Businco, A Di Rienzo; Alessandri, C; Panetta, V; Restani, P; Matricardi, P M

    2009-08-01

    Oral food challenge (OFC) is the diagnostic 'gold standard' for food allergies, but it is laborious and time consuming. Attempts to predict a positive OFC through specific IgE assays or conventional skin tests have so far given suboptimal results. The aim was to test whether skin tests with titration curves predict the outcome of an oral food challenge with sufficient confidence. Children (n=47; mean age 6.2 +/- 4.2 years) with suspected or diagnosed allergic reactions to hen's egg (HE) were examined through clinical history, physical examination, oral food challenge, conventional and end-point titrated skin tests with HE white extract, and determination of serum specific IgE against HE white. Predictive decision points for a positive outcome of food challenges were calculated through receiver operating characteristic (ROC) analysis for HE white using IgE concentration, weal size and end-point titration (EPT). OFC was positive (Sampson's score ≥3) in 20/47 children (42.5%). The area under the ROC curve obtained with the EPT method was significantly larger than that obtained by measuring IgE-specific antibodies (0.99 vs. 0.83, P<0.05) or weal size (0.99 vs. 0.88, P<0.05). The extract dilution that successfully discriminated a positive from a negative OFC (sensitivity 95%, specificity 100%) was 1:256, corresponding to concentrations of 5.9 µg/mL of ovotransferrin, 22.2 µg/mL of ovalbumin, and 1.4 µg/mL of lysozyme. EPT is a promising approach to optimize the use of skin prick tests and to predict the outcome of OFC with HE in children. Further studies are needed to test whether this encouraging finding can be extended to other populations and food allergens.
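
    The ROC comparison rests on a simple statistic: the area under the curve equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A sketch of that computation follows; the titre values are invented for illustration and are not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney interpretation:

    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative one (ties counted as 0.5).
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented end-point titres (dilution steps) for challenge-positive vs
# challenge-negative children:
auc = roc_auc([8, 9, 10, 10], [3, 4, 5, 8])
```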

  10. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimetres. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
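
    The core comparison can be sketched as follows (hypothetical point IDs and tolerance; the structural coordinate system and brick-centre extraction are omitted). Because a baseline's length is invariant to the scanner's pose, the two epochs need not be registered.

```python
import math

def changed_baselines(points_t0, points_t1, tol=0.005):
    """Flag baselines whose length changed between two epochs.

    points_t0 / points_t1 map feature-point IDs to 3-D coordinates, each
    in its own scan's coordinate system: baseline lengths are invariant
    to the scanner pose, so no registration is needed. tol is in the
    same unit as the coordinates (metres here).
    """
    ids = sorted(points_t0)
    flagged = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            d0 = math.dist(points_t0[a], points_t0[b])
            d1 = math.dist(points_t1[a], points_t1[b])
            if abs(d1 - d0) > tol:
                flagged.append((a, b, d1 - d0))
    return flagged

# Virtual point B5 moved by 3 cm between epochs; both baselines ending
# at B5 are flagged, the T1-T2 baseline is not:
epoch0 = {"B5": (0.0, 2.00, 0.0), "T1": (0.0, 0.0, 0.0), "T2": (1.0, 0.0, 0.0)}
epoch1 = {"B5": (0.0, 2.03, 0.0), "T1": (0.0, 0.0, 0.0), "T2": (1.0, 0.0, 0.0)}
moved = changed_baselines(epoch0, epoch1)
```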

  11. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimetres. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  12. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    International Nuclear Information System (INIS)

    Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge

    2016-01-01

    Highlights: • The integral staggered point-matching method for the design of polarizers on ECH systems is presented. • The validity of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: Reflective diffraction gratings are widely used in high-power electron cyclotron heating systems for polarization control. This paper presents a method, which we call “the integral staggered point-matching method”, for the design of reflective diffraction gratings. The method is based on the integral point-matching method but effectively removes its convergence problems and tedious calculations, making it easier for beginners to use. A code has been developed based on this method. The calculation results of the integral staggered point-matching method are compared with those of the integral point-matching method, the coordinate transformation method and low-power measurements. The comparison indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  13. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)

    2016-10-15

    Highlights: • The integral staggered point-matching method for the design of polarizers on ECH systems is presented. • The validity of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: Reflective diffraction gratings are widely used in high-power electron cyclotron heating systems for polarization control. This paper presents a method, which we call “the integral staggered point-matching method”, for the design of reflective diffraction gratings. The method is based on the integral point-matching method but effectively removes its convergence problems and tedious calculations, making it easier for beginners to use. A code has been developed based on this method. The calculation results of the integral staggered point-matching method are compared with those of the integral point-matching method, the coordinate transformation method and low-power measurements. The comparison indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  14. Estimated GFR Decline as a Surrogate End Point for Kidney Failure : A Post Hoc Analysis From the Reduction of End Points in Non-Insulin-Dependent Diabetes With the Angiotensin II Antagonist Losartan (RENAAL) Study and Irbesartan Diabetic Nephropathy Trial (IDNT)

    NARCIS (Netherlands)

    Lambers Heerspink, Hiddo; Weldegiorgis, Misghina; Inker, Lesley A.; Gansevoort, Ron; Parving, Hans-Henrik; Dwyer, Jamie P.; Mondal, Hasi; Coresh, Josef; Greene, Tom; Levey, Andrew S.; de Zeeuw, Dick

    Background: A doubling of serum creatinine value, corresponding to a 57% decline in estimated glomerular filtration rate (eGFR), is used frequently as a component of a composite kidney end point in clinical trials in type 2 diabetes. The aim of this study was to determine whether alternative end

  15. Multi-lane detection based on multiple vanishing points detection

    Science.gov (United States)

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on the detection of multiple vanishing points. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to the others on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and fit the lane model. Experimental results show that the proposed method can detect both parallel and non-parallel lanes.

  16. Source splitting via the point source method

    International Nuclear Information System (INIS)

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields u_j, j = 1, ..., n, of n ∈ N sound sources supported in different bounded domains G_1, ..., G_n in R^3 from measurements of the field on some microphone array; mathematically speaking, from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, n ∈ N, to construct u_l for l = 1, ..., n from u|_Λ in the form

        u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y),   l = 1, ..., n.   (1)

    We provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online

  17. Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Synchronization

    Directory of Open Access Journals (Sweden)

    Manuel Prian Rodríguez

    2017-12-01

    Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may later be employed as a setpoint signal for different control techniques. The proposed procedure is based on adaptive synchronization between an oscillator and a reference model driven by the oscillator's state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
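
    As a rough illustration of using auxiliary dynamics to locate equilibria (a simplified stand-in, not the paper's adaptive-synchronization scheme): integrate the auxiliary flow dy/dt = -k f(y), which settles where f vanishes, i.e. at an equilibrium point of the original system dx/dt = f(x).

```python
def seek_equilibrium(f, y0, k=1.0, dt=0.01, steps=5000):
    """Locate a zero of f by forward-Euler integration of dy/dt = -k*f(y).

    The auxiliary state settles where f vanishes, i.e. at an equilibrium
    point of the original system dx/dt = f(x).
    """
    y = y0
    for _ in range(steps):
        y -= dt * k * f(y)
    return y

# For f(y) = y*(y - 2) the flow started at y0 = 1 settles at y = 2:
x_eq = seek_equilibrium(lambda y: y * (y - 2.0), y0=1.0)
```

    Which equilibrium is reached depends on the starting point and on the stability of the auxiliary flow; the paper's synchronization-based formulation addresses this with an explicit stability analysis.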

  18. Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud

    Science.gov (United States)

    Yao, C.; Zhang, X.; Liu, H.

    2017-09-01

    The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters, and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking the palm tree as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. Firstly, tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius, and cross point, are estimated. Finally, with these parameters certain tree species can be identified. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached up to 90.65 %. The identification results demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables trees to be classified into different classes.
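
    The CHM step and the key-point extraction can be sketched on toy grids (a naive 8-neighbour local-maximum detector stands in for the paper's key-point procedure; grid values are invented):

```python
def crown_height_model(dsm, dtm):
    """Crown Height Model: cell-wise DSM minus DTM (canopy height above ground)."""
    return [[s - t for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

def local_maxima(chm, min_height=2.0):
    """Naive key-point detector: interior cells higher than all 8 neighbours."""
    tops = []
    for i in range(1, len(chm) - 1):
        for j in range(1, len(chm[0]) - 1):
            v = chm[i][j]
            nbrs = [chm[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            if v >= min_height and all(v > n for n in nbrs):
                tops.append((i, j))
    return tops

# Toy 3x3 grids: one tree crown in the centre cell
chm = crown_height_model([[3, 3, 3], [3, 8, 3], [3, 3, 3]],
                         [[1, 1, 1], [1, 3, 1], [1, 1, 1]])
tops = local_maxima(chm)
```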

  19. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    Science.gov (United States)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and, consequently, the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), causing the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, such as Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.

  20. LINE SEGMENTATION OF 2D LASER SCANNER POINT CLOUDS FOR INDOOR SLAM BASED ON A RANGE OF RESIDUALS

    Directory of Open Access Journals (Sweden)

    M. Peter

    2017-09-01

    Full Text Available Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and, consequently, the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), causing the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, such as Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
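
    The residual test can be sketched as follows. This is a simplified single-threshold version: the paper compares a range of residuals to a range of thresholds derived from the σ/√n expectation, while the sketch grows a segment until the mean residual of a fitted line exceeds k·σ/√n.

```python
import math

def line_residuals(points):
    """Perpendicular residuals of 2-D points w.r.t. their least-squares line."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    return [abs(y - (a * x + b)) / math.sqrt(1.0 + a * a) for x, y in points]

def segment_scanline(points, sigma, k=3.0):
    """Grow a segment while the mean residual stays below k * sigma / sqrt(n)."""
    segments, start = [], 0
    for end in range(2, len(points) + 1):
        window = points[start:end]
        if len(window) < 2:
            continue
        res = line_residuals(window)
        if sum(res) / len(res) > k * sigma / math.sqrt(len(res)):
            segments.append(points[start:end - 1])  # close the current segment
            start = end - 1                          # corner point starts the next
    segments.append(points[start:])
    return segments

# A scanline turning a corner splits into two linear segments:
corner = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 1), (5, 2), (6, 3)]
segments = segment_scanline(corner, sigma=0.01)
```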

  1. Guidelines for time-to-event end point definitions in sarcomas and gastrointestinal stromal tumors (GIST) trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Bellera, C A; Penel, N; Ouali, M; Bonvalot, S; Casali, P G; Nielsen, O S; Delannes, M; Litière, S; Bonnetain, F; Dabakuyo, T S; Benjamin, R S; Blay, J-Y; Bui, B N; Collin, F; Delaney, T F; Duffaud, F; Filleron, T; Fiore, M; Gelderblom, H; George, S; Grimer, R; Grosclaude, P; Gronchi, A; Haas, R; Hohenberger, P; Issels, R; Italiano, A; Jooste, V; Krarup-Hansen, A; Le Péchoux, C; Mussi, C; Oberlin, O; Patel, S; Piperno-Neumann, S; Raut, C; Ray-Coquard, I; Rutkowski, P; Schuetze, S; Sleijfer, S; Stoeckle, E; Van Glabbeke, M; Woll, P; Gourgou-Bourgade, S; Mathoulin-Pélissier, S

    2015-05-01

    The use of potential surrogate end points for overall survival, such as disease-free survival (DFS) or time-to-treatment failure (TTF), is increasingly common in randomized controlled trials (RCTs) in cancer. However, the definition of time-to-event (TTE) end points is rarely precise and lacks uniformity across trials. End point definition can impact trial results by affecting estimation of treatment effect and statistical power. The DATECAN initiative (Definition for the Assessment of Time-to-event End points in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for RCTs in sarcomas and gastrointestinal stromal tumors (GIST). We first carried out a literature review to identify TTE end points (primary or secondary) reported in publications of RCTs. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points. Recommendations were developed through a validated consensus method formalizing the degree of agreement among experts. Recommended guidelines for the definition of TTE end points commonly used in RCTs for sarcomas and GIST are provided for adjuvant and metastatic settings, including DFS, TTF, time to progression and others. Use of standardized definitions should facilitate comparison of trials' results and improve the quality of trial design and reporting. These guidelines could be of particular interest to research scientists involved in the design, conduct, reporting or assessment of RCTs, such as investigators, statisticians, reviewers, editors or regulatory authorities.

  2. New drugs and patient-centred end-points in old age: setting the wheels in motion.

    Science.gov (United States)

    Mangoni, Arduino A; Pilotto, Alberto

    2016-01-01

    Older patients with various degrees of frailty and disability, a key population target of pharmacological interventions in acute and chronic disease states, are virtually neglected in pre-marketing studies assessing the efficacy and safety of investigational drugs. Moreover, aggressively pursuing established therapeutic targets in old age, e.g. blood pressure, serum glucose or cholesterol concentrations, is not necessarily associated with the beneficial effects, and the acceptable safety, reported in younger patient cohorts. Measures of self-reported health and functional status might represent additional, more meaningful, therapeutic end-points in the older population, particularly in patients with significant frailty and relatively short life expectancy, e.g. in the presence of cancer and/or neurodegenerative disease conditions. Strategies enhancing early knowledge about key pharmacological characteristics of investigational drugs targeting older adults are discussed, together with the rationale for incorporating non-traditional, patient-centred, end-points in this ever-increasing group.

  3. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  4. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
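    The claim that adding points at the ends of the scanned range yields a smaller error estimate than adding them in the middle can be illustrated with the standard-error formula for an ordinary-least-squares slope; this is a generic sketch, not the letter's full error-propagation expression:

```python
import numpy as np

def slope_se(x, sigma=1.0):
    # Standard error of the OLS slope: sigma / sqrt(sum((x - mean(x))^2)).
    # For fixed measurement noise sigma it depends only on the x layout.
    x = np.asarray(x, dtype=float)
    return sigma / np.sqrt(((x - x.mean()) ** 2).sum())

# Same number of scan points over the same scanned range [0, 1]:
ends = [0.0, 0.0, 1.0, 1.0]       # extra points placed at the ends
middle = [0.0, 0.45, 0.55, 1.0]   # extra points placed in the middle
```

    Here `slope_se(ends)` is smaller than `slope_se(middle)`, because end points maximise the spread term in the denominator.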

  5. Neural Network Based Maximum Power Point Tracking Control with Quadratic Boost Converter for PMSG—Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Ramji Tiwari

    2018-02-01

    Full Text Available This paper proposes an artificial neural network (ANN based maximum power point tracking (MPPT control strategy for wind energy conversion system (WECS implemented with a DC/DC converter. The proposed topology utilizes a radial basis function network (RBFN based neural network control strategy to extract the maximum available power from the wind velocity. The results are compared with a classical Perturb and Observe (P&O method and Back propagation network (BPN method. In order to achieve a high voltage rating, the system is implemented with a quadratic boost converter and the performance of the converter is validated with a boost and single ended primary inductance converter (SEPIC. The performance of the MPPT technique along with a DC/DC converter is demonstrated using MATLAB/Simulink.

  6. The end point of the first-order phase transition of the SU(2) gauge-Higgs model on a four-dimensional isotropic lattice

    International Nuclear Information System (INIS)

    Aoki, Y.; Csikor, F.; Fodor, Z.; Ukawa, A.

    1999-01-01

    We report results of a study of the end point of the electroweak phase transition of the SU(2) gauge-Higgs model defined on a four-dimensional isotropic lattice with N t = 2. Finite-size scaling study of Lee-Yang zeros yields λ c = 0.00116(16) for the end point. Combined with a zero-temperature measurement of Higgs and W boson masses, this leads to M H,c = 68.2 ± 6.6 GeV for the critical Higgs boson mass. An independent analysis of Binder cumulant gives a consistent value λ c = 0.00102(3) for the end point

  7. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Li, Xuxu; Li, Xinyang; wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
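    A minimal sketch of combining a similarity function with a fast search, here the square difference function (SDF) paired with an orthogonal search (OS) on a synthetic Gaussian spot; the spot model, window size, and search span are illustrative assumptions:

```python
import numpy as np

def gaussian_spot(shape, cx, cy, sigma=1.5):
    # Synthetic point-source spot model for a single subaperture.
    y, x = np.indices(shape)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def sdf(img, ref, dx, dy):
    # Square difference function between the image and a shifted reference.
    shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
    return float(((img - shifted) ** 2).sum())

def orthogonal_search(img, ref, span=4):
    # Orthogonal search (OS): minimise along x with y fixed, then along y,
    # instead of scanning the full (2*span + 1)^2 displacement grid.
    best_dx = min(range(-span, span + 1), key=lambda d: sdf(img, ref, d, 0))
    best_dy = min(range(-span, span + 1), key=lambda d: sdf(img, ref, best_dx, d))
    return best_dx, best_dy

ref = gaussian_spot((16, 16), 8, 8)
img = gaussian_spot((16, 16), 10, 7)   # true displacement: (+2, -1)
shift = orthogonal_search(img, ref)
```

    The orthogonal search evaluates the similarity function along two axes only, which is the source of the drastic cost reduction reported above.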

  8. [10 Years of Quality Management: Perception and Importance from GPs' Point of View].

    Science.gov (United States)

    Kühlein, T; Madlo-Thiess, F; Wambach, V; Schaffer, S

    2018-03-01

    Quality management (QM) became mandatory for the ambulatory sector of the German health care system 10 years ago. The aim of this study was to find out how general practitioners (GPs) perceived the introduction of this measure, how they see it today and what they expect of the future concerning QM. In a qualitative study, semi-structured guideline-based interviews with GPs were conducted. Following transcription, the interviews were coded in triangulation, first inductively, then deductively, until saturation was reached. Main topics and code families were agreed on after discussion. There was consensus on the necessity of standardizing basic processes such as hygiene. However, the application of QM to an activity that emphasizes personal relationships and communication was seen as barely possible. GPs stated that they reduced QM to a tolerable and, for them, reasonable minimum. GPs mostly refused certification. The next 10 years were seen with pessimism in terms of more bureaucratic guidelines. The statutory introduction of QM was perceived as an attack on medical professionalism. Instead of passive resistance and reduction of QM to a minimum, engaged independent quality work might help to regain the trust of society we seem to have lost and restore the professional autonomy we need for our work.

  9. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function, which is also used as the proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  10. Modeling a calixarene-crown-6 and its alkali complexes by means of a hybrid quantum mechanical/molecular mechanical method

    International Nuclear Information System (INIS)

    Lamare, V.; Golebiowski, J.; Ruiz-Lopez, M.F.; Martins-Costa, M.; Millot, C.

    2000-01-01

    Calixarene-crown-6s in the 1,3-alternate conformation are compounds currently investigated for their ability to selectively extract traces of cesium from acidic or high-salinity aqueous solutions. Studies based on molecular modeling were undertaken on these systems to understand their behavior regarding cesium and other alkali cations, in particular sodium. In this work, a recently developed molecular modeling approach was used to investigate the calixarene BC6 and its alkali complexes. The whole calixarene ligand is treated by the semiempirical AM1 quantum method (QM), whereas the cation and solvent are treated by a conventional force field (MM). The total energy of the system is the sum of the QM and MM sub-system contributions plus the QM/MM interaction energy. The latter includes the electrostatic interaction between QM charges (nuclei + electrons) and MM sites, and the non-electrostatic QM/MM van der Waals term, usually expressed by a Lennard-Jones potential. In the QM/MM method, van der Waals interactions between the QM and MM sub-systems are described by empirical Lennard-Jones parameters which must be adapted to the hybrid potential considered. Parameters on oxygen atoms were optimized. For the cations, two sets of parameters were tested: Aqvist empirical parameters, derived to represent cation/water interactions in classical dynamics (set 2), and a new set of parameters which we calculated from dispersion coefficients available in the literature (set 1). The latter gave better results for the interactions with the crown. In the sodium complex, the cation interacts with only four oxygen atoms of the crown, whereas in the complex with cesium, the interaction involves six oxygen atoms. Distortion of the BC6 is therefore smaller with sodium and favors the corresponding complex by 4 kcal/mol. The cation/BC6 van der Waals energy is very weak for the two complexes. Hence the interaction between the cation and BC6 is primarily electrostatic.
The BC6 polarization energy due
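    The energy partitioning described above (electrostatic QM-charge/MM-site term plus a Lennard-Jones van der Waals term) can be sketched as follows; the charges, coordinates, and the single (epsilon, sigma) Lennard-Jones pair type are arbitrary placeholder values, not the parameter sets optimized in the study:

```python
import numpy as np

def qmmm_interaction(qm_charges, qm_xyz, mm_charges, mm_xyz,
                     epsilon=0.1, sigma=3.0):
    # QM/MM interaction energy as described above: a Coulomb term between
    # QM charge sites and MM point charges, plus a Lennard-Jones term for
    # the non-electrostatic van der Waals part (a single pair type here).
    e_elec = 0.0
    e_vdw = 0.0
    for qi, ri in zip(qm_charges, qm_xyz):
        for qj, rj in zip(mm_charges, mm_xyz):
            r = np.linalg.norm(np.subtract(ri, rj))
            e_elec += qi * qj / r
            e_vdw += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e_elec + e_vdw

# Toy system: a two-site QM dipole and one MM point charge.
e_int = qmmm_interaction([0.4, -0.4], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                         [1.0], [(4.0, 0.0, 0.0)])
```

    In a real QM/MM scheme the QM charge distribution comes from the wavefunction, and the Lennard-Jones parameters are per-atom-type, as the abstract emphasises.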

  11. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    Science.gov (United States)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, various illegal constructions have appeared in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
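    The building-cluster step can be illustrated with a naive region-growing pass over a small synthetic cloud; the radius threshold and the points are illustrative, and the paper's full pipeline (MST compression, multi-scale ground filtering) is not reproduced here:

```python
import numpy as np

def region_grow(points, radius=1.5):
    # Naive region growing: flood-fill points whose mutual distance stays
    # below `radius` into connected clusters (a stand-in for the paper's
    # building-cluster extraction on the ground-free cloud).
    pts = np.asarray(points, dtype=float)
    labels = np.full(len(pts), -1)
    current = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            near = np.flatnonzero(np.linalg.norm(pts - pts[i], axis=1) < radius)
            for j in near:
                if labels[j] == -1:
                    labels[j] = current
                    stack.append(int(j))
        current += 1
    return labels

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0),   # cluster 0
       (10.0, 10.0, 0.0), (10.5, 10.5, 0.0)]                # cluster 1
labels = region_grow(pts)
```

    Each connected group of nearby points receives one label; clusters with building-like extent can then be flagged for inspection.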

  12. Nambu-Goto string with the Gauss-Bonnet term and point-like masses at the ends

    Science.gov (United States)

    Hadasz, Leszek; Róg, Tomasz

    1996-02-01

    We investigate classical dynamics of the Nambu-Goto string with Gauss-Bonnet term in the action and point-like masses at the ends in the context of effective QCD string. The configuration of rigidly rotating string is studied and its application to phenomenological description of meson spectroscopy is discussed.

  13. A self-consistent MoD-QM/MM structural refinement method: characterization of hydrogen bonding in the Oxytricha nova G-quadruplex

    Energy Technology Data Exchange (ETDEWEB)

    Batista, Enrique R [Los Alamos National Laboratory]; Newcomer, Michael B [YALE UNIV]; Raggin, Christina M [YALE UNIV]; Gascon, Jose A [YALE UNIV]; Loria, J Patrick [YALE UNIV]; Batista, Victor S [YALE UNIV]

    2008-01-01

    This paper generalizes the MoD-QM/MM hybrid method, developed for ab initio computations of protein electrostatic potentials [Gascón, J.A.; Leung, S.S.F.; Batista, E.R.; Batista, V.S. J. Chem. Theory Comput. 2006, 2, 175-186], as a practical algorithm for structural refinement of extended systems. The computational protocol involves a space-domain decomposition scheme for the formal fragmentation of extended systems into smaller, partially overlapping, molecular domains and the iterative self-consistent energy minimization of the constituent domains by relaxation of their geometry and electronic structure. The method accounts for mutual polarization of the molecular domains, modeled as quantum-mechanical (QM) layers embedded in the otherwise classical molecular-mechanics (MM) environment according to QM/MM hybrid methods. The method is applied to the description of benchmark model systems that allow for direct comparisons with full QM calculations, and subsequently applied to the structural characterization of the DNA Oxytricha nova Guanine quadruplex (G4). The resulting MoD-QM/MM structural model of the DNA G4 is compared to recently reported high-resolution X-ray diffraction and NMR models, and partially validated by direct comparisons between ¹H NMR chemical shifts, which are highly sensitive to hydrogen-bonding and stacking interactions, and the corresponding theoretical values obtained at the density functional theory (DFT) QM/MM (BH&H/6-31G*:Amber) level in conjunction with the gauge-independent atomic orbital (GIAO) method for the ab initio self-consistent field (SCF) calculation of NMR chemical shifts.

  14. Managing distance and covariate information with point-based clustering

    Directory of Open Access Journals (Sweden)

    Peter A. Whigham

    2016-09-01

    Background: Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Methods: Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results: Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Conclusions: Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for
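    The core idea can be sketched with an unscaled Ripley's-K-style statistic and a Monte-Carlo null drawn from a finite set of candidate locations; this omits edge correction, the deprivation covariate, and the Minkowski distance model, and all data below are synthetic:

```python
import math
import random

def k_function(points, r):
    # Unscaled Ripley's-K-style statistic: the average number of other
    # events lying within distance r of an event (no edge correction).
    n = len(points)
    pairs = sum(1 for i in range(n) for j in range(n)
                if i != j and math.dist(points[i], points[j]) <= r)
    return pairs / n

def mc_envelope(candidates, n_events, r, sims=199, seed=0):
    # Monte-Carlo null: repeatedly draw n_events from the finite candidate
    # locations (e.g. residential parcels) and recompute the statistic.
    rng = random.Random(seed)
    stats = sorted(k_function(rng.sample(candidates, n_events), r)
                   for _ in range(sims))
    return stats[int(0.95 * sims)]   # upper ~95% envelope value

candidates = [(x, y) for x in range(20) for y in range(20)]
observed = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (2, 0)]  # tight cluster
k_obs = k_function(observed, 2.0)
k_null = mc_envelope(candidates, len(observed), 2.0)
```

    An observed statistic exceeding the simulated envelope at a given scale is the evidence-for-clustering criterion used in the study, here with the null constrained to the finite candidate set rather than a continuous region.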

  15. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    Science.gov (United States)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods as well as the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. Firstly, it searches for the maximum power point with the P&O algorithm and a quadratic interpolation method; then, it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, which is reduced by half compared with that of the P&O algorithm and the quadratic interpolation method. The experimental results demonstrate that the tracked maximum power is approximately equal to the real value using the proposed hybrid method; it deals with voltage fluctuation of the AETEG better than the P&O algorithm alone, and it resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
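    The quadratic-interpolation step of such a hybrid tracker can be sketched as fitting a parabola through three (voltage, power) samples and taking its vertex as the maximum-power-point estimate; the power curve below is a toy model, not AETEG data:

```python
def quadratic_peak(v1, p1, v2, p2, v3, p3):
    # Vertex of the parabola through three (voltage, power) samples:
    # the quadratic-interpolation step of a hybrid MPPT scheme.
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (p2 - p1) + v2 * (p1 - p3) + v1 * (p3 - p2)) / denom
    b = (v3 ** 2 * (p1 - p2) + v2 ** 2 * (p3 - p1) + v1 ** 2 * (p2 - p3)) / denom
    return -b / (2 * a)

def power(v):
    # Toy power-voltage curve with its maximum at v = 12.
    return -(v - 12.0) ** 2 + 100.0

v_mpp = quadratic_peak(8.0, power(8.0), 11.0, power(11.0), 15.0, power(15.0))
```

    The vertex estimate can then seed constant voltage tracking, which is how the hybrid scheme above avoids the perpetual oscillation of pure P&O.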

  16. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
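    As a simplified stand-in for the paper's k-nearest-neighbour search followed by projection onto the fitted thin-plate spline surface, the sketch below merely moves each point to the centroid of its k nearest neighbours; the function name, parameters, and data are illustrative:

```python
import numpy as np

def knn_smooth(points, k=5):
    # Simplified denoising pass: replace each point by the centroid of its
    # k nearest neighbours (a crude stand-in for projecting the point set
    # onto a fitted thin-plate spline surface).
    pts = np.asarray(points, dtype=float)
    out = np.empty_like(pts)
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        nearest = np.argsort(d)[1:k + 1]   # skip the point itself
        out[i] = pts[nearest].mean(axis=0)
    return out

# Noisy samples of the flat surface z = 0.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(50, 2))
z = rng.normal(0.0, 0.1, size=50)
noisy = np.column_stack([xy, z])
smoothed = knn_smooth(noisy, k=5)
```

    Averaging over neighbours shrinks the out-of-surface noise; the spline projection in the paper additionally preserves curvature, which plain centroid averaging does not.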

  17. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  18. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    An efficient method of feature image generation for point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. The proposed method then adopts contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D based on the generated image. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
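    The planar-projection step can be sketched as binning points into a 2D grid and storing a per-cell feature, here the height range, which already separates tall structures from near-flat terrain; the cell size and feature choice are illustrative assumptions, not the paper's weighting scheme:

```python
import numpy as np

def feature_image(points, cell=1.0):
    # Planar projection: bin points into a 2D grid and store, per cell,
    # the height range (max z - min z) as a simple feature separating
    # near-flat terrain cells from cells containing tall structures.
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    h, w = ij.max(axis=0) + 1
    zmin = np.full((h, w), np.inf)
    zmax = np.full((h, w), -np.inf)
    for (i, j), z in zip(ij, pts[:, 2]):
        zmin[i, j] = min(zmin[i, j], z)
        zmax[i, j] = max(zmax[i, j], z)
    return np.where(np.isfinite(zmin), zmax - zmin, 0.0)

pts = [(0.2, 0.3, 0.0), (0.4, 0.1, 5.0),   # a tall structure in cell (0, 0)
       (1.5, 0.5, 0.1), (1.7, 0.2, 0.2)]   # near-flat terrain in cell (1, 0)
img = feature_image(pts)
```

    Contour extraction and tracing then operate on this image rather than on the raw 3D points, which is where the method's efficiency comes from.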

  19. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    Science.gov (United States)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (persons who do tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any drop.
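    Given clock-synchronized header records from two observation points, per-node holding durations reduce to matching packets by an identifying header field and differencing the timestamps; a minimal sketch with a hypothetical record layout:

```python
def holding_durations(capture_in, capture_out):
    # Match packets observed before and after a network node by an
    # identifying header field, and difference the synchronized timestamps
    # to obtain per-packet holding durations inside the node.
    t_in = {pkt_id: ts for ts, pkt_id in capture_in}
    return {pkt_id: ts - t_in[pkt_id]
            for ts, pkt_id in capture_out if pkt_id in t_in}

# Hypothetical (timestamp_seconds, packet_id) records from two capture points.
before_node = [(0.000, "a"), (0.010, "b"), (0.020, "c")]
after_node = [(0.004, "a"), (0.021, "b")]   # packet "c" never re-appeared
delays = holding_durations(before_node, after_node)
```

    Packets present at the first point but absent at the second reveal drops, which is why the tool's header extraction must itself be loss-free.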

  20. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  1. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01


  2. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women's Hospital-Harvard Medical School, Boston, MA (United States)

    2010-09-21


  3. FPFH-based graph matching for 3D point cloud registration

    Science.gov (United States)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration, as it helps to obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
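    Once a set of correct correspondences is in hand, the initial rigid alignment follows in closed form from the standard SVD (Kabsch) solution; this sketch shows only that final standard step, not the paper's FPFH matching or simulated annealing:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    # Least-squares rigid transform (R, t) with dst ~= R @ src + t, via the
    # SVD (Kabsch) solution: the standard way to turn a set of matched
    # correspondences into an initial alignment.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Ground truth: a 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([2.0, 3.0, 4.0])
R, t = rigid_from_correspondences(src, dst)
```

    With noise-free correspondences the ground-truth rotation and translation are recovered exactly; with outliers, this is precisely why the preceding graph-matching stage must prune incorrect matches first.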

  4. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    properties define the end-point texture and enduring sensory perception of the food.

  5. Dysglycemia and Index60 as Prediagnostic End Points for Type 1 Diabetes Prevention Trials.

    Science.gov (United States)

    Nathan, Brandon M; Boulware, David; Geyer, Susan; Atkinson, Mark A; Colman, Peter; Goland, Robin; Russell, William; Wentworth, John M; Wilson, Darrell M; Evans-Molina, Carmella; Wherrett, Diane; Skyler, Jay S; Moran, Antoinette; Sosenko, Jay M

    2017-11-01

    We assessed dysglycemia and a T1D Diagnostic Index60 (Index60) ≥1.00 (on the basis of fasting C-peptide, 60-min glucose, and 60-min C-peptide levels) as prediagnostic end points for type 1 diabetes among Type 1 Diabetes TrialNet Pathway to Prevention Study participants. Two cohorts were analyzed: 1) baseline normoglycemic oral glucose tolerance tests (OGTTs) with an incident dysglycemic OGTT and 2) baseline Index60 <1.00 OGTTs with an incident Index60 ≥1.00 OGTT. Incident dysglycemic OGTTs were divided into those with (DYS/IND+) and without (DYS/IND-) concomitant Index60 ≥1.00. Incident Index60 ≥1.00 OGTTs were divided into those with (IND/DYS+) and without (IND/DYS-) concomitant dysglycemia. The cumulative incidence for type 1 diabetes was greater after IND/DYS- than after DYS/IND- (P < 0.01). Within the normoglycemic cohort, the cumulative incidence of type 1 diabetes was higher after DYS/IND+ than after DYS/IND- (P < 0.001), whereas within the Index60 <1.00 cohort, the cumulative incidence after IND/DYS+ and after IND/DYS- did not differ significantly. Among nonprogressors, type 1 diabetes risk at the last OGTT was greater for IND/DYS- than for DYS/IND- (P < 0.001). Hazard ratios (HRs) of DYS/IND- with age and 30- to 0-min C-peptide were positive (P < 0.001 for both), whereas HRs of type 1 diabetes with these variables were inverse (P < 0.001 for both). In contrast, HRs of IND/DYS- and type 1 diabetes with age and 30- to 0-min C-peptide were consistent (all inverse, P < 0.01 for all). The findings suggest that incident dysglycemia without Index60 ≥1.00 is a suboptimal prediagnostic end point for type 1 diabetes. Measures that include both glucose and C-peptide levels, such as Index60 ≥1.00, appear better suited as prediagnostic end points.

  6. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew


  7. UST-ID robotics: Wireless communication and minimum conductor technology, and end-point tracking technology surveys

    International Nuclear Information System (INIS)

    Holliday, M.A.

    1993-10-01

    This report is a technology review of the current state-of-the-art in two technologies applicable to the Underground Storage Tank (UST) program at the Hanford Nuclear Reservation. The first review is of wireless and minimal conductor technologies for in-tank communications. The second review is of advanced concepts for independent end-point tracking. This study addresses the need to provide wireless transmission media or minimum conductor technology for in-tank communications and robot control. At present, signals are conducted via contacting transmission media, i.e., cables. Replacing wires with radio frequencies or invisible light is commonplace in the communication industry. This technology will be evaluated for its applicability to the needs of robotics. Some of these options are radio signals, leaky coax, infrared, microwave, and optical fiber systems. Although optical fiber systems are contacting transmission media, they will be considered because of their ability to reduce the number of conductors. In this report we will identify, evaluate, and recommend the requirements for wireless and minimum conductor technology to replace the present cable system. The second section is a technology survey of concepts for independent end-point tracking (tracking the position of robot end effectors). The position of the end effector in current industrial robots is determined by computing that position from joint information, which is basically a problem of locating a point in three-dimensional space. Several approaches are presently being used in industrial robotics, including stereo-triangulation with a theodolite network and electrocamera system, photogrammetry, and multiple-length measurement with laser interferometry and wires. The techniques that will be evaluated in this survey are advanced applications of the aforementioned approaches. These include laser tracking (3-D and 5-D), ultrasonic tracking, vision-guided servoing, and adaptive robotic visual tracking.

  8. A new maximum power point method based on a sliding mode approach for solar energy harvesting

    International Nuclear Information System (INIS)

    Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad

    2017-01-01

    Highlights: • Creates a simple, easy-to-implement and accurate V_MPP estimator. • Stability analysis of the proposed system based on Lyapunov theory. • A comparative study versus P&O highlights the SMC's good performance. • Constructs a new PS-SMC algorithm to cover the partial-shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested in the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point, outperforming the well-known perturbation and observation (P&O) method. The sliding mode controller performance is evaluated during steady state and against load variations and panel partial-shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the sliding-mode-controller-based maximum power point tracker is performed on a dSPACE real-time digital control platform. The data acquisition and control system are built around the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.

  9. Challenges and promises of integrating knowledge engineering and qualitative methods

    Science.gov (United States)

    Lundberg, C. Gustav; Holm, Gunilla

    Our goal is to expose some of the close ties that exist between knowledge engineering (KE) and qualitative methodology (QM). Many key concepts of qualitative research, for example meaning, commonsense, understanding, and everyday life, overlap with central research concerns in artificial intelligence. These shared interests constitute a largely unexplored avenue for interdisciplinary cooperation. We compare and take some steps toward integrating two historically diverse methodologies by exploring the commonalities of KE and QM both from a substantive and a methodological/technical perspective. In the second part of this essay, we address knowledge acquisition problems and procedures. Knowledge acquisition within KE has been based primarily on cognitive psychology/science foundations, whereas knowledge acquisition within QM has a broader foundation in phenomenology, symbolic interactionism, and ethnomethodology. Our discussion and examples are interdisciplinary in nature. We do not suggest that there is a clash between the KE and QM frameworks, but rather that the lack of communication potentially may limit each framework's future development.

  10. The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces

    KAUST Repository

    Chen, Yujia

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for the Poisson's equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.
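    The closest-point representation that this abstract builds on can be illustrated with a toy example: functions defined on a surface are extended to the embedding space so that they are constant along surface normals. The unit circle and the function choices below are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def closest_point(x):
    """Closest point on the unit circle to a 2-D point x (x != 0)."""
    return x / np.linalg.norm(x)

def surface_function(p):
    """A function defined on the circle, parameterised by angle."""
    theta = np.arctan2(p[1], p[0])
    return np.cos(3 * theta)

def extension(x):
    """Closest-point extension: constant along normals to the surface."""
    return surface_function(closest_point(x))

p = np.array([0.6, 0.8])   # a point on the unit circle
q = 1.5 * p                # an off-surface point on the same normal
print(abs(extension(q) - surface_function(p)))  # ~0.0: constant along the normal
```

    Discretizing the embedding equation on a Cartesian grid then only ever evaluates such extended quantities, which is what lets standard finite differences work on curved surfaces.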

  11. Converging ligand-binding free energies obtained with free-energy perturbations at the quantum mechanical level.

    Science.gov (United States)

    Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf

    2016-06-30

    In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energies with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
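    The single-step exponential averaging that the abstract refers to is, at its core, Zwanzig's free-energy perturbation formula, and the cumulant expansion follows from it. A minimal numerical sketch on synthetic Gaussian energy differences (for which the second-order cumulant expansion is exact, making it a convenient self-check; the numbers are illustrative, not the paper's data) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0  # 1/kT in reduced units

# Toy energy differences dU = U_QM - U_MM sampled from the MM ensemble.
dU = rng.normal(loc=2.0, scale=0.5, size=200_000)

# Single-step exponential averaging (Zwanzig): dA = -kT ln <exp(-beta dU)>_MM
dA_exp = -np.log(np.mean(np.exp(-beta * dU))) / beta

# Second-order cumulant expansion: dA ~ <dU> - (beta/2) Var(dU)
dA_cum = np.mean(dU) - 0.5 * beta * np.var(dU)
```

    For non-Gaussian real-world distributions of dU the two estimates diverge, which is one practical way to monitor convergence of the MM-to-QM correction.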

  13. Material-Point-Method Analysis of Collapsing Slopes

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation material-point method. In this model, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during...

  14. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements, a heavy workload, and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal-vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
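    The inner step of each ICP iteration, finding the least-squares rigid transform for the current point correspondences, can be sketched with the standard Kabsch/SVD solution. This is a generic illustration of that inner step with known correspondences and synthetic data, not the paper's FPFH-based pipeline:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ P[i] + t ~ Q[i] (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))               # synthetic source cloud
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ Rz.T + t_true                       # transformed target cloud

R_est, t_est = best_rigid_transform(P, Q)
```

    Full ICP alternates this solve with re-finding nearest-neighbour correspondences until the alignment error stops decreasing.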

  15. A Multi-Point Method Considering the Maximum Power Point Tracking Dynamic Process for Aerodynamic Optimization of Variable-Speed Wind Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yang

    2016-05-01

    Full Text Available Due to the dynamic process of maximum power point tracking (MPPT caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs cannot maintain the optimal tip speed ratio (TSR from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
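    The multi-point idea above, weighting several design TSRs by how often the turbine actually operates at them instead of optimizing at a single TSR, can be sketched as follows. The Cp curve and the weights are illustrative toys, not NREL 1.5 MW data:

```python
import numpy as np

def cp(tsr):
    """Toy power-coefficient curve peaking at TSR = 8."""
    return 0.48 * np.exp(-((tsr - 8.0) / 3.0) ** 2)

# Design TSRs and weights taken from the (simulated) operational TSR distribution.
design_tsrs = np.array([6.0, 7.0, 8.0, 9.0])
weights = np.array([0.1, 0.3, 0.4, 0.2])

single_point = cp(8.0)                           # conventional single-TSR objective
multi_point = np.sum(weights * cp(design_tsrs))  # weighted multi-point objective
```

    The weighted objective is necessarily below the single-point optimum at the design TSR, but maximizing it rewards blades that hold their performance over the whole operational TSR range visited during MPPT transients.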

  16. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. With the changing of sun illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar-panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; a mathematical model is not required, and therefore this control method is easy to implement in a real control system.
In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the microchip's microcontroller unit control card and
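    The perturbation-and-observation hill-climbing loop described above can be sketched in a few lines on a toy power-voltage curve. The curve shape and step size are illustrative assumptions; a real tracker perturbs the converter duty cycle rather than the voltage directly:

```python
def pv_power(v):
    """Toy single-peak P-V curve with the maximum power point at v = 17.0 V."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)

def perturb_and_observe(v0, step=0.1, iterations=500):
    """Hill-climb: keep perturbing; reverse direction whenever power drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse perturbation direction
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(v0=12.0)
```

    The sketch also exposes the method's well-known weakness mentioned in the abstract: the operating point never settles, it oscillates around the maximum by one step, and a fixed step makes tracking slow under rapidly changing insolation.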

  17. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
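    The reweighting trick underlying the non-Boltzmann methods mentioned above, estimating target-ensemble averages from samples drawn on a cheaper surface, can be sketched on a 1-D toy. Both harmonic surfaces below are illustrative assumptions, not the QM-NBB implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

# Samples from the cheap (MM-like) surface U_mm = x^2 / 2, i.e. a unit Gaussian.
x = rng.normal(loc=0.0, scale=1.0, size=400_000)

def u_mm(x):
    return 0.5 * x ** 2

def u_qm(x):
    return 0.5 * (x - 0.5) ** 2   # shifted "target" (QM-like) surface

# Non-Boltzmann reweighting: weight each sample by exp(-beta * (U_qm - U_mm))
# so that averages refer to the target ensemble.
w = np.exp(-beta * (u_qm(x) - u_mm(x)))
x_mean_target = np.sum(w * x) / np.sum(w)   # <x> in the target ensemble (exact: 0.5)
```

    In QM-NBB the same bias energies enter Bennett's acceptance ratio, which combines samples from both end states instead of reweighting only one, improving the statistics when the two surfaces overlap poorly.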

  18. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    Directory of Open Access Journals (Sweden)

    Lujiang Liu

    2016-06-01

    Full Text Available Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching, processing the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the ground-truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating with point clouds directly and under large pose variations. A field testing experiment is also conducted, and the results show that the proposed method is effective.

  19. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Dong Zhou

    2016-01-01

    Full Text Available Lateral penumbra of multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With biobjective function of penumbra mean and variance introduced, genetic algorithm is carried out for approximating the Pareto frontier. Results show that for circular arc leaf end objective function is convex and convergence to optimal solution is guaranteed using gradient based iterative method. It is found that optimal leaf end in the shape of Bézier curve achieves minimal standard deviation, while using B-spline minimum of penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimator.

  2. Flexible End2End Workflow Automation of Hit-Discovery Research.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin

    2014-08-01

    The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. The combination of all relevant subprocesses, whether automated or performed manually, independently of the organizational unit, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, the connection of control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with modern methods of business process management (BPM). The approach is based on the new standardization of the process-modeling notation, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together, with manifold modern methods, technologies, and a wide range of automated instruments, for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.

  3. Analytical Solution of Dirac Equation for q-Deformed Hyperbolic Manning-Rosen Potential in D Dimensions using SUSY QM and its Thermodynamics Application

    International Nuclear Information System (INIS)

    Cari, C; Suparmi, A; Yunianto, M; Pratiwi, B N

    2016-01-01

    The Dirac equation for the q-deformed hyperbolic Manning-Rosen potential in D dimensions was solved using Supersymmetric Quantum Mechanics (SUSY QM). The D-dimensional relativistic energy spectra were obtained by using SUSY QM and shape-invariance properties, and the D-dimensional wave functions of the q-deformed hyperbolic Manning-Rosen potential were obtained by using the SUSY raising and lowering operators. In the nonrelativistic limit, the relativistic energy spectra for the exact spin symmetry case reduce to the nonrelativistic energy spectra, and likewise for the wave functions. In the classical regime, the partition function, the vibrational specific heat, and the vibrational mean energy of some diatomic molecules were calculated from the non-relativistic energy spectra with the help of the error function and the imaginary error function. (paper)

  4. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency. Its accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost on the ab initio calculation for the correction determines the efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
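    The correction strategy described above, learning the difference between a cheap and an expensive energy surface from a limited number of reference calculations, can be sketched with an ordinary least-squares polynomial fit standing in for the neural network. Both energy surfaces are illustrative toys, not the Behler-Parrinello representation:

```python
import numpy as np

def e_cheap(x):
    """Stand-in for a semiempirical QM/MM energy along a reaction coordinate."""
    return x ** 2

def e_expensive(x):
    """Stand-in for the ab initio QM/MM energy: cheap surface plus a correction."""
    return x ** 2 + 0.3 * np.sin(2 * x)

# Fit the *difference* (delta learning) on a handful of expensive reference points.
x_train = np.linspace(-2.0, 2.0, 15)
delta = e_expensive(x_train) - e_cheap(x_train)
coeffs = np.polyfit(x_train, delta, deg=9)

# Correct the cheap surface everywhere along the path using the fitted model.
x_test = np.linspace(-2.0, 2.0, 200)
e_corrected = e_cheap(x_test) + np.polyval(coeffs, x_test)
max_err = np.max(np.abs(e_corrected - e_expensive(x_test)))
```

    The efficiency argument in the abstract rests on the same structure: the expensive level is evaluated only on the training configurations, while the learned correction is applied to every frame of the cheap simulation.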

  5. Energetics and dynamics of the non-natural fluorescent 4AP:DAP base pair

    KAUST Repository

    Chawla, Mohit; Autiero, Ida; Oliva, Romina; Cavallo, Luigi

    2018-01-01

    To complement the experimental studies and rationalize the impact of the above non-natural bases on the structure, stability and dynamics of nucleic acid structures, we performed quantum mechanics (QM) calculations along with classical molecular dynamics (MD) simulations.

  6. Problem-based learning at the receiving end: a 'mixed methods' study of junior medical students' perspectives.

    Science.gov (United States)

    Maudsley, Gillian; Williams, Evelyn M I; Taylor, David C M

    2008-11-01

    Qualitative insights about students' personal experience of inconsistencies in implementation of problem-based learning (PBL) might help refocus expert discourse about good practice. This study explored how junior medical students conceptualize: PBL; good tutoring; and less effective sessions. Participants comprised junior medical students in Liverpool's 5-year problem-based, community-orientated curriculum. Data collection and analysis were mostly cross-sectional, using inductive analysis of qualitative data from four brief questionnaires and a 'mixed' qualitative/quantitative approach to data handling. The 1999 cohort (end-Year 1) explored PBL, generated 'good tutor' themes, and identified PBL (dis)advantages (end-Year 1 then mid-Year 3). The 2001 cohort (start-Year 1) described critical incidents, and subsequently (end-Year 1) factors in less effective sessions. These factors were coded using coding-frames generated from the answers about critical incidents and 'good tutoring'. Overall, 61.2% (137), 77.9% (159), 71.0% (201), and 71.0% (198) responded to the four surveys, respectively. Responders perceived PBL as essentially process-orientated, focused on small-groupwork/dynamics and testing understanding through discussion. They described 'good tutors' as knowing when and how to intervene without dominating (51.1%). In longitudinal data (end-Year 1 to mid-Year 3), the main perceived disadvantage remained lack of 'syllabus' (and related uncertainty). For less effective sessions (end-Year 1), tutor transgressions reflected unfulfilled expectations of good tutors, mostly intervening poorly (42.6% of responders). Student transgressions reflected the critical incident themes, mostly students' own lack of work/preparation (54.8%) and other students participating poorly (33.7%) or dominating/being self-centred (31.6%). Compelling individual accounts of uncomfortable PBL experiences should inform improvements in implementation.

  7. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A kind of point-array structure light is projected sequentially onto the reference plane and onto the object surface to be tested and thus forms a pair of point-array images. A mathematical model is established to formulize the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point on the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable for implementing 3D imaging of object surfaces with complex topology or large height discontinuities.

  8. Comparative Theoretical Study of the Ring-Opening Polymerization of Caprolactam vs Caprolactone Using QM/MM Methods

    Energy Technology Data Exchange (ETDEWEB)

    Elsasser, Brigitta M.; Schoenen, Iris; Fels, Gregor

    2013-06-07

    Candida antarctica lipase B (CALB) efficiently catalyzes the ring-opening polymerization of lactones to high molecular weight products in good yield. In contrast, an efficient enzymatic synthesis of polyamides has so far not been described in the literature. This obvious difference in enzyme catalysis is the subject of our comparative study of the initial steps of a CALB catalyzed ring-opening polymerization of ε-caprolactone and ε-caprolactam. We have applied docking tools to generate the reactant state complex and performed quantum mechanical/molecular mechanical (QM/MM) calculations at the density functional theory (DFT) PBE0 level of theory to simulate the acylation of Ser105 by the lactone and the lactam, respectively, via the corresponding first tetrahedral intermediates. We could identify a decisive difference in the accessibility of the two substrates in the ring-opening to the respective acyl enzyme complex, as the attack of ε-caprolactam is hindered because of an energetically disfavored proton transfer during this part of the catalytic reaction, while ε-caprolactone is perfectly processed along the widely accepted pathway using the catalytic triad of Ser105, His224, and Asp187. Since the generation of an acylated Ser105 species is the crucial step of the polymerization procedure, our results give an explanation for the unsatisfactory enzymatic polyamide formation and open up new possibilities for targeted rational catalyst redesign in hope of an experimentally useful CALB catalyzed polyamide synthesis.

  9. How does the long G·G* Watson-Crick DNA base mispair comprising keto and enol tautomers of the guanine tautomerise? The results of a QM/QTAIM investigation.

    Science.gov (United States)

    Brovarets', Ol'ha O; Hovorun, Dmytro M

    2014-08-14

    The double proton transfer (DPT) in the long G·G* Watson-Crick base mispair (|C6N1(G*)N1C6(G)| = 36.4°; C1 symmetry), involving keto and enol tautomers of the guanine (G) nucleobase, along two intermolecular neighboring O6H···O6 (8.39) and N1···HN1 (6.14 kcal mol(-1)) H-bonds that were established to be slightly anti-cooperative, leads to its transformation into the G*·G base mispair through a single transition state (|C6N1N1C6| = 37.1°; C1), namely to the interconversion into itself. It was shown that the G·G* ↔ G*·G tautomerisation via the DPT is assisted by the third specific contact, that sequentially switches along the intrinsic reaction coordinate (IRC) in an original way: (G)N2H···N2(G*) H-bond (-25.13 to -10.37) → N2···N2 van der Waals contact (-10.37 to -9.23) → (G)N2···HN2(G*) H-bond (-9.23 to 0.79) → (G*)N2···HN2(G) H-bond (0.79 to 7.35 Bohr). The DPT tautomerisation was found to proceed through the asynchronous concerted mechanism by employing the QM/QTAIM approach and the methodology of the scans of the geometric, electron-topological, energetic, polar and NBO properties along the IRC. Nine key points, that can be considered as part of the tautomerisation repertoire, have been established and analyzed in detail. Furthermore, it was shown that the G·G* or G*·G base mispair is a thermodynamically and dynamically stable structure with a lifetime of 8.22 × 10(-10) s and all 6 low-frequency intermolecular vibrations are able to develop during this time span. Lastly, our results highlight the importance of the G·G* ↔ G*·G DPT tautomerisation, which can have implications for biological and chemical sensing applications.

  10. Development and evaluation of spatial point process models for epidermal nerve fibers.

    Science.gov (United States)

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
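    The cluster variant of the second model can be sketched in a few lines. This is a hypothetical toy simulation (a Thomas-style parent-offspring process standing in for the fitted ENF model; all function names, parameter names, and values are invented for illustration): base points come from a homogeneous Poisson process, and each base spawns a Poisson number of Gaussian-displaced end points.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_enf_cluster_process(intensity_base, area, mean_fibers, spread, rng):
    """Toy cluster model: Poisson base (parent) points on a square window;
    each base spawns a Poisson number of end points at isotropic Gaussian
    offsets, and 'parent' records which base each end point belongs to."""
    side = np.sqrt(area)
    n_base = rng.poisson(intensity_base * area)
    bases = rng.uniform(0.0, side, size=(n_base, 2))
    ends, parent = [], []
    for i, b in enumerate(bases):
        n_fib = rng.poisson(mean_fibers)          # fibers per base
        offsets = rng.normal(0.0, spread, size=(n_fib, 2))
        ends.append(b + offsets)                  # end points near their base
        parent.extend([i] * n_fib)
    ends = np.vstack(ends) if ends else np.empty((0, 2))
    return bases, ends, np.array(parent)

bases, ends, parent = simulate_enf_cluster_process(
    intensity_base=50.0, area=1.0, mean_fibers=3.0, spread=0.02, rng=rng)
```

    From such a simulation one can estimate the observable quantities the paper studies, e.g. the number of fibers per base as `np.bincount(parent)`.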

  11. Alive and kicking–but will Quality Management be around tomorrow? A Swedish academia perspective

    Directory of Open Access Journals (Sweden)

    Bjarne Bergquist

    2012-12-01

    Full Text Available The purpose of this article is to describe how Quality Management (QM) is perceived today by scholars at three Swedish universities, and what QM is expected to develop into over the next twenty years. Data were collected through structured workshops using affinity diagrams with scholars teaching and performing research in the QM field. The results show that QM is currently perceived as consisting of a core set of principles, methods and tools. The future outlook includes three possible development directions for QM: [1] searching for a “discipline X” to which QM can contribute while keeping its toolbox, [2] focusing on a core based on the traditional quality technology toolbox of methods and tools, and [3] a risk that QM, as it is today, may cease to exist and be diffused into other disciplines.

  12. Use of Nonequilibrium Work Methods to Compute Free Energy Differences Between Molecular Mechanical and Quantum Mechanical Representations of Molecular Systems.

    Science.gov (United States)

    Hudson, Phillip S; Woodcock, H Lee; Boresch, Stefan

    2015-12-03

    Carrying out free energy simulations (FES) using quantum mechanical (QM) Hamiltonians remains an attractive, albeit elusive goal. Renewed efforts in this area have focused on using "indirect" thermodynamic cycles to connect "low level" simulation results to "high level" free energies. The main obstacle to computing converged free energy results between molecular mechanical (MM) and QM (ΔA(MM→QM)), as recently demonstrated by us and others, is differences in the so-called "stiff" degrees of freedom (e.g., bond stretching) between the respective energy surfaces. Herein, we demonstrate that this problem can be efficiently circumvented using nonequilibrium work (NEW) techniques, i.e., Jarzynski's and Crooks' equations. Initial applications of computing ΔA(NEW)(MM→QM), for blocked amino acids alanine and serine as well as to generate butane's potentials of mean force via the indirect QM/MM FES method, showed marked improvement over traditional FES approaches.
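    The nonequilibrium work estimator the abstract refers to can be illustrated with Jarzynski's equality, ΔA = -kT ln⟨exp(-W/kT)⟩, averaged over work values collected while switching between the MM and QM Hamiltonians. The sketch below is generic (synthetic Gaussian work values, not the paper's data; the function name is ours) and uses a log-sum-exp for numerical stability:

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Jarzynski estimator ΔA = -kT ln⟨exp(-W/kT)⟩ over nonequilibrium
    work values W, computed via log-sum-exp to avoid overflow."""
    w = np.asarray(work) / kT
    m = (-w).max()                                   # shift for stability
    return -kT * (m + np.log(np.mean(np.exp(-w - m))))

# For Gaussian work (mean 5, std 1, kT = 1), the exact answer in the
# large-sample limit is <W> - sigma^2 / (2 kT) = 4.5.
rng = np.random.default_rng(1)
work = rng.normal(5.0, 1.0, size=200000)
dA = jarzynski_free_energy(work, kT=1.0)
```

    The estimate also respects the second-law bound ΔA ≤ ⟨W⟩, which is a useful sanity check on real work distributions.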

  13. Comprehensive Evaluation of the Sustainable Development of Power Grid Enterprises Based on the Model of Fuzzy Group Ideal Point Method and Combination Weighting Method with Improved Group Order Relation Method and Entropy Weight Method

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2017-10-01

    Full Text Available As an important implementing body of the national energy strategy, grid enterprises bear the important responsibility of optimizing the allocation of energy resources and serving economic and social development, and their level of sustainable development has a direct impact on the national economy and social life. In this paper, a model combining the fuzzy group ideal point method with a combination weighting method based on an improved group order relation method and the entropy weight method is proposed to evaluate the sustainable development of power grid enterprises. Firstly, on the basis of consulting a large amount of literature, important criteria for the comprehensive evaluation of the sustainable development of power grid enterprises are preliminarily selected. Industry experts' opinions are consulted and fed back over many rounds through the Delphi method, the evaluation criteria system for the sustainable development of power grid enterprises is determined, and the evaluation criteria are then made consistent and nondimensionalized. After that, based on the basic order relation method, the weights of each expert judgment matrix are synthesized to construct compound matter elements, and the subjective weights of the criteria are obtained by matter element analysis. The entropy weight method is used to determine the objective weights of the preprocessed criteria. Then, combining the subjective and objective information with the combination weighting method based on subjective and objective weighted attribute value consistency, a more comprehensive, reasonable and accurate combination weight is calculated. Finally, based on the traditional TOPSIS method, triangular fuzzy numbers are introduced to handle data that are difficult to quantify, and the ranking index value of each object and the ranking result are obtained. 
A numerical example is taken to prove that the
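    Of the pieces described above, the entropy weight step is the most mechanical and can be sketched directly. This is the generic textbook formula applied to a hypothetical decision matrix, not the paper's grid-enterprise criteria: criteria whose scores vary more across alternatives carry more information (lower entropy) and therefore receive larger objective weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: objective criterion weights from an
    (alternatives x criteria) matrix of non-negative, preprocessed scores."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                    # share of each alternative per criterion
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)  # normalized entropy in [0, 1]
    d = 1.0 - E                              # divergence: higher = more informative
    return d / d.sum()                       # weights sum to 1

# Hypothetical 3-alternative, 3-criterion matrix; the third criterion is
# constant across alternatives, so it carries no information.
w = entropy_weights([[0.9, 0.2, 0.5],
                     [0.8, 0.9, 0.5],
                     [0.7, 0.1, 0.5]])
```

    Here the constant third criterion gets (essentially) zero weight, and the second criterion, whose scores spread the most, outweighs the first.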

  14. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate the radial point interpolation method (RPIM), a meshfree approach, using graphics hardware. RPIM is a meshfree partial differential equation solver that does not require a mesh structure for the analysis target. In the presented method, the computation process is divided into small processes suitable for the parallel architecture of graphics hardware, executed in a single-instruction multiple-data manner.

  15. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    Directory of Open Access Journals (Sweden)

    Junhui Huang

    2016-12-01

    Full Text Available Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.
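    The core solve, recovering transformation parameters from weighted corresponding point pairs (here, sphere centers), can be sketched with a weighted Kabsch/Umeyama estimate. This is a standard least-squares stand-in for the paper's optimization, and the function name and test data are ours:

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Weighted least-squares rigid transform (R, t) with dst ≈ R @ src + t.
    The weight vector w can down-weight noisy correspondences."""
    src, dst, w = np.asarray(src), np.asarray(dst), np.asarray(w, float)
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                      # weighted centroids
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)     # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation/translation from noiseless "sphere centers".
rng = np.random.default_rng(2)
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = rng.uniform(-1.0, 1.0, size=(5, 3))
dst = src @ R_true.T + t_true
R, t = weighted_rigid_transform(src, dst, np.ones(5))
```

    With real, noisy virtual overlap points, the weights would come from the paper's weight function rather than being uniform as here.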

  16. Developing points-based risk-scoring systems in the presence of competing risks.

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P

    2016-09-30

    Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
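    The points-based construction itself can be shown in miniature with the standard Sullivan-style conversion from regression coefficients to integer points. All coefficients and factors below are hypothetical, not from the paper's acute myocardial infarction model (which additionally requires competing-risks, i.e. subdistribution-hazard, coefficients):

```python
def to_points(beta, x, x_ref, B):
    """Points for one risk factor: its log-hazard contribution relative to a
    reference level, in units of a base constant B (the log-hazard increase
    chosen to equal one point), rounded to an integer."""
    return round(beta * (x - x_ref) / B)

# Hypothetical model; B = log-hazard of 5 years of age (beta_age = 0.05).
B = 5 * 0.05
score = (to_points(0.05, 70, 50, B)      # age 70 vs reference age 50
         + to_points(0.8, 1, 0, B)       # prior heart failure (binary)
         + to_points(0.4, 1, 0, B))      # diabetes (binary)
```

    The total score is then mapped back to a predicted event probability, in the competing-risks setting via the cumulative incidence function rather than the Kaplan-Meier survival curve.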

  17. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle to point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API. I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online.

  18. Automatic entry point planning for robotic post-mortem CT-based needle placement.

    Science.gov (United States)

    Ebert, Lars C; Fürst, Martin; Ptacek, Wolfgang; Ruder, Thomas D; Gascho, Dominic; Schweitzer, Wolf; Thali, Michael J; Flach, Patricia M

    2016-09-01

    Post-mortem computed tomography guided placement of co-axial introducer needles allows for the extraction of tissue and liquid samples for histological and toxicological analyses. Automation of this process can increase the accuracy and speed of the needle placement, thereby making it more feasible for routine examinations. To speed up the planning process and increase safety, we developed an algorithm that calculates an optimal entry point and end-effector orientation for a given target point, while taking constraints such as accessibility or bone collisions into account. The algorithm identifies the best entry point for needle trajectories in three steps. First, the source CT data is prepared and bone as well as surface data are extracted and optimized. All vertices of the generated surface polygon are considered to be potential entry points. Second, all surface points are tested for validity within the defined hard constraints (reachability, bone collision as well as collision with other needles) and removed if invalid. All remaining vertices are reachable entry points and are rated with respect to needle insertion angle. Third, the vertex with the highest rating is selected as the final entry point, and the best end-effector rotation is calculated to avoid collisions with the body and already set needles. In most cases, the algorithm is sufficiently fast with approximately 5-6 s per entry point. This is the case if there is no collision between the end-effector and the body. If the end-effector has to be rotated to avoid collision, calculation times can increase up to 24 s due to the inefficient collision detection used here. In conclusion, the algorithm allows for fast and facilitated trajectory planning in forensic imaging.
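    The three-step selection described above can be sketched as a scoring pass over candidate surface vertices. This is a simplified stand-in for the paper's algorithm: the hard constraints (reachability, bone and needle collisions) are collapsed into a precomputed boolean mask, and the rating is just the needle insertion angle; names and thresholds are ours.

```python
import numpy as np

def best_entry_point(vertices, normals, target, reachable, max_angle_deg=45.0):
    """Pick the entry vertex for a given target point: discard vertices
    failing the hard constraints ('reachable' mask), rate survivors by
    insertion angle, return the best. 'normals' are outward surface normals,
    so the entry is most perpendicular when the needle direction is
    antiparallel to the normal (angle 0 below)."""
    needle = target - vertices                       # entry -> target directions
    needle /= np.linalg.norm(needle, axis=1, keepdims=True)
    cosang = -np.einsum("ij,ij->i", needle, normals)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    score = np.where(reachable & (angle <= max_angle_deg), -angle, -np.inf)
    best = int(np.argmax(score))
    return best, angle[best]

# Two candidate vertices above a target at the origin; the first is
# directly overhead and should win with a 0-degree insertion angle.
vertices = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
reachable = np.array([True, True])
idx, ang = best_entry_point(vertices, normals, np.zeros(3), reachable)
```

    The expensive part the abstract mentions, end-effector collision checking, would run only on the winning candidate(s), which is why it dominates the 24 s worst case.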

  19. Self-Similarity Based Corresponding-Point Extraction from Weakly Textured Stereo Pairs

    Directory of Open Access Journals (Sweden)

    Min Mao

    2014-01-01

    Full Text Available For areas of low texture in image pairs, there are almost no points that can be detected by traditional methods, so the information in these areas is not extracted by classical interest-point detectors. In this paper, a novel weakly textured point detection method is presented. Points with weakly textured characteristics are detected using the symmetry concept. The proposed approach considers the gray-level variability of weakly textured local regions. The detection mechanism can be separated into three steps: region-similarity computation, candidate point searching, and refinement of the weakly textured point set. The mechanisms of radius scale selection and texture strength are used in the second and third steps, respectively. A matching algorithm based on sparse representation (SRM) is used to match the detected points across images. The results obtained on image sets with different objects show the high robustness of the method to background and intraclass variations as well as to different photometric and geometric transformations; the points detected by this method also complement the points detected by classical detectors from the literature. We also verify the efficacy of SRM by comparing it with classical algorithms under occlusion and corruption for matching the weakly textured points. Experiments demonstrate the effectiveness of the proposed weakly textured point detection algorithm.

  20. QoC-based Optimization of End-to-End M-Health Data Delivery Services

    NARCIS (Netherlands)

    Widya, I.A.; van Beijnum, Bernhard J.F.; Salden, Alfons

    2006-01-01

    This paper addresses how Quality of Context (QoC) can be used to optimize end-to-end mobile healthcare (m-health) data delivery services in the presence of alternative delivery paths, which is quite common in a pervasive computing and communication environment. We propose min-max-plus based

  1. FMIT test-end instrumentation development bases

    International Nuclear Information System (INIS)

    Fuller, J.L.

    1982-06-01

    FMIT test-end measurements proposed for deuteron beam control, target diagnostics, and irradiation sample dosimetry are listed. The test-end refers to the area inside the test cell, but includes measurements inside and outside the cell. Justification, categorization, and limits qualification are presented for each measurement. Methods are purposefully de-emphasized in order to clarify the measurement needs, not techniques. Some discussion of techniques currently under investigation is given in the last section of the report.

  2. Electron dynamics in complex environments with real-time time dependent density functional theory in a QM-MM framework

    International Nuclear Information System (INIS)

    Morzan, Uriel N.; Ramírez, Francisco F.; Scherlis, Damián A.; Oviedo, M. Belén; Sánchez, Cristián G.; Lebrero, Mariano C. González

    2014-01-01

    This article presents a time-dependent density functional theory (TDDFT) implementation to propagate the Kohn-Sham equations in real time, including the effects of a molecular environment through a Quantum-Mechanics Molecular-Mechanics (QM-MM) hamiltonian. The code delivers an all-electron description employing Gaussian basis functions, and incorporates the Amber force-field in the QM-MM treatment. The most expensive parts of the computation, comprising the commutators between the hamiltonian and the density matrix (required to propagate the electron dynamics) and the evaluation of the exchange-correlation energy, were migrated to the CUDA platform to run on graphics processing units, which remarkably accelerates the performance of the code. The method was validated by reproducing linear-response TDDFT results for the absorption spectra of several molecular species. Two different schemes were tested to propagate the quantum dynamics: (i) a leap-frog Verlet algorithm, and (ii) the Magnus expansion to first order. A comparison of the two approaches showed the Magnus scheme to be more efficient by a factor of six for small molecules. Interestingly, the presence of iron was found to severely limit the length of the integration time step, due to the high frequencies associated with the core electrons. This highlights the importance of pseudopotentials in alleviating the cost of propagating the inner states when heavy nuclei are present. Finally, the methodology was applied to investigate the shifts induced by the chemical environment on the most intense UV absorption bands of two model systems of general relevance: the formamide molecule in water solution, and the carboxy-heme group in Flavohemoglobin. In both cases, shifts of several nanometers are observed, consistent with the available experimental data.
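    For a time-independent Hamiltonian, the first-order Magnus step amounts to applying exp(-iHΔt/ħ) each step. The toy sketch below (hypothetical two-level Hamiltonian, with a diagonalization standing in for the production code's exponential) shows the scheme's key property: the step is exactly unitary, so the norm is conserved for any Δt; the step-size limit the abstract attributes to core electrons shows up as fast phase oscillation from large eigenvalues, not as norm loss.

```python
import numpy as np

def magnus1_step(H, psi, dt, hbar=1.0):
    """First-order Magnus step psi(t+dt) = exp(-i H dt / hbar) psi(t),
    with the matrix exponential built from the eigendecomposition of the
    Hermitian Hamiltonian H."""
    E, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * E * dt / hbar)) @ V.conj().T   # V diag(e^{-iE dt}) V†
    return U @ psi

# Hypothetical two-level Hamiltonian; propagate 1000 steps of dt = 0.1.
H = np.array([[0.0, 0.1],
              [0.1, 0.5]])
psi = np.array([1.0 + 0j, 0.0])
for _ in range(1000):
    psi = magnus1_step(H, psi, dt=0.1)
```

    A leap-frog Verlet propagator, by contrast, is only approximately norm-conserving, which is one reason the Magnus scheme tolerates larger steps.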

  3. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  4. A periodic point-based method for the analysis of Nash equilibria in 2 x 2 symmetric quantum games

    International Nuclear Information System (INIS)

    Schneider, David

    2011-01-01

    We present a novel method of looking at Nash equilibria in 2 x 2 quantum games. Our method is based on a mathematical connection between the problem of identifying Nash equilibria in game theory, and the topological problem of the periodic points in nonlinear maps. To adapt our method to the original protocol designed by Eisert et al (1999 Phys. Rev. Lett. 83 3077-80) to study quantum games, we are forced to extend the space of strategies from the initial proposal. We apply our method to the extended strategy space version of the quantum Prisoner's dilemma and find that a new set of Nash equilibria emerge in a natural way. Nash equilibria in this set are optimal as Eisert's solution of the quantum Prisoner's dilemma and include this solution as a limit case.
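    The connection the abstract draws, equilibria as periodic points of a map, has a simple classical analogue: a pure-strategy Nash equilibrium is exactly a fixed point (a period-1 point) of the composed best-response map. The sketch below is for classical 2 x 2 games only, not the quantized Eisert protocol, and breaks ties with argmax, so multi-valued best responses are ignored:

```python
import numpy as np
from itertools import product

def pure_nash_as_fixed_points(A, B):
    """Return pure-strategy Nash equilibria of a bimatrix game as the fixed
    points of the composed best-response map. A[i, j] and B[i, j] are the
    payoffs to players 1 and 2 for the strategy profile (i, j)."""
    br1 = lambda j: int(np.argmax(A[:, j]))   # player 1's best response to j
    br2 = lambda i: int(np.argmax(B[i, :]))   # player 2's best response to i
    return [(i, j)
            for i, j in product(range(A.shape[0]), range(A.shape[1]))
            if (br1(j), br2(i)) == (i, j)]    # fixed point of the map

# Prisoner's dilemma (symmetric, so B = A.T): 0 = cooperate, 1 = defect.
A = np.array([[3, 0],
              [5, 1]])
eqs = pure_nash_as_fixed_points(A, A.T)
```

    The paper's contribution is to carry this fixed-point view into the continuous quantum strategy space, where the relevant objects are periodic points of a nonlinear map rather than a finite profile search.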

  5. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  6. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Michelle M. [Southern Illinois Univ., Carbondale, IL (United States); Wu, Chase Q. [Univ. of Memphis, TN (United States)

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized mainly because their use typically requires significant domain knowledge and in many cases application users are even not aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  7. Evaluation of a Distributed Photovoltaic System in Grid-Connected and Standalone Applications by Different MPPT Algorithms

    Directory of Open Access Journals (Sweden)

    Ru-Min Chao

    2018-06-01

    Full Text Available Due to the shortage of fossil fuels and the environmental pollution problem, solar energy applications have drawn a lot of attention worldwide. This paper reports the use of the latest patented distributed photovoltaic (PV) power system design, including two possible maximum power point tracking (MPPT) algorithms, a power optimizer, and a PV power controller, in grid-connected and standalone applications. A distributed PV system with four amorphous silicon thin-film solar panels is used to evaluate both the quadratic maximization (QM) and the steepest descent (SD) MPPT algorithms. The system design differs depending on whether the QM or the SD MPPT algorithm is used. Test results for grid-connected silicon-based PV panels are also reported. With a power optimizer settling time of 20 ms, the tracking time of the QM method is close to 200 ms, faster than the SD method, whose tracking time is 500 ms. In addition, the QM method provides a more stable power output, since tracking is restricted to a local power optimizer rather than the global tracking used by the SD method. For a standalone PV application, a solar-powered boat design with 18 PV panels using a cascaded MPPT controller is introduced, which provides flexibility in system design and the effective use of photovoltaic energy.
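    The quadratic-maximization idea can be sketched independently of the patented hardware: fit a parabola through three recent (voltage, power) samples and command the vertex voltage as the next operating point. The function and the numbers below are illustrative, not the paper's controller:

```python
def quadratic_maximization_step(v, p):
    """One QM-style MPPT update: Lagrange-fit a parabola p(v) = a v^2 + b v + c
    through three (voltage, power) samples and return its vertex voltage."""
    (v1, v2, v3), (p1, p2, p3) = v, p
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (p2 - p1) + v2 * (p1 - p3) + v1 * (p3 - p2)) / denom
    b = (v3**2 * (p1 - p2) + v2**2 * (p3 - p1) + v1**2 * (p2 - p3)) / denom
    return -b / (2 * a)   # vertex of the fitted parabola

# Samples from a hypothetical P-V curve p(v) = 100 - (v - 17)^2, max at 17 V.
v_next = quadratic_maximization_step((10.0, 15.0, 20.0), (51.0, 96.0, 91.0))
```

    Because the P-V curve is nearly parabolic around the maximum power point, one or two such steps typically land very close to it, which is consistent with the faster tracking time reported for the QM method.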

  8. Fixed-point data-collection method of video signal

    International Nuclear Information System (INIS)

    Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi

    1997-01-01

    The author describes a fixed-point data-collection method for video signals. The method introduces the idea of fixed-point data collection and has been successfully applied in research on real-time radiography of dose fields, a project supported by the National Science Fund.

  9. Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)

    2016-07-07

    For problems involving a large material deformation rate, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems it is difficult to obtain a constitutive relation, and history dependence becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method that can bypass the need for a constitutive relation. In conclusion, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method. Molecular dynamics (MD) simulation is performed to calculate the stress. Since communication among material points is not necessary, the computation can be done in an embarrassingly parallel manner on a CPU-GPU platform.

  10. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  11. Analysis of Stress Updates in the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    The material-point method (MPM) is a new numerical method for the analysis of large-strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations are solved on a background computational grid. Several references state that one of the main advantages of the material-point method is the easy application of complicated material behaviour, as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...

  12. THE GROWTH POINTS OF STATISTICAL METHODS

    OpenAIRE

    Orlov A. I.

    2014-01-01

    On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods, we identify and discuss five topical areas in which modern applied statistics is developing, i.e. five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data.

  13. Activity-based protein profiling of secreted cellulolytic enzyme activity dynamics in Trichoderma reesei QM6a, NG14, and RUT-C30

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Lindsey N.; Culley, David E.; Hofstad, Beth A.; Chauvigne-Hines, Lacie M.; Zink, Erika M.; Purvine, Samuel O.; Smith, Richard D.; Callister, Stephen J.; Magnuson, Jon M.; Wright, Aaron T.

    2013-12-01

    Development of alternative, non-petroleum-based sources of bioenergy that can be applied in the short term finds great promise in the use of highly abundant and renewable lignocellulosic plant biomass [1]. This material, obtained from different feedstocks such as forest litter or agricultural residues, can yield liquid fuels and other chemical products through biorefinery processes [2]. Biofuels are obtained from lignocellulosic materials by chemical pretreatment of the biomass, followed by enzymatic decomposition of cellulosic and hemicellulosic compounds into soluble sugars that are converted to desired chemical products via microbial metabolism and fermentation [3, 4]. To release soluble sugars from polymeric cellulose, multiple enzymes are required, including endoglucanase, exoglucanase, and β-glucosidase [5, 6]. However, the enzymatic hydrolysis of cellulose into soluble sugars remains a significant limiting factor to the efficient and economically viable utilization of lignocellulosic biomass for transport fuels [7, 8]. The primary industrial source of cellulases and hemicellulases is the mesophilic soft-rot fungus Trichoderma reesei [9], which has widespread applications in the food, feed, textile, pulp, and paper industries [10]. The genome encodes 200 glycoside hydrolases, including 10 cellulolytic and 16 hemicellulolytic enzymes [11]. The hypercellulolytic, catabolite-derepressed strain RUT-C30 was obtained through a three-step UV and chemical mutagenesis of the original T. reesei strain QM6a [12, 13], in which strains M7 and NG14 were intermediate, having higher cellulolytic activity than the parent strain but less activity and higher catabolite repression than RUT-C30 [14]. Numerous methods have been employed to optimize the secreted enzyme cocktail of T. reesei, including cultivation conditions, operational parameters, and mutagenesis [3]. However, identifying an optimal and economical enzyme mixture for production-scale biofuel synthesis may take thousands of experiments.

  14. A periodic point-based method for the analysis of Nash equilibria in 2 x 2 symmetric quantum games

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, David, E-mail: schneide@tandar.cnea.gov.ar [Departamento de Fisica, Comision Nacional de EnergIa Atomica. Av. del Libertador 8250, 1429 Buenos Aires (Argentina)

    2011-03-04

    We present a novel method of looking at Nash equilibria in 2 x 2 quantum games. Our method is based on a mathematical connection between the problem of identifying Nash equilibria in game theory, and the topological problem of the periodic points in nonlinear maps. To adapt our method to the original protocol designed by Eisert et al (1999 Phys. Rev. Lett. 83 3077-80) to study quantum games, we are forced to extend the space of strategies from the initial proposal. We apply our method to the extended strategy space version of the quantum Prisoner's dilemma and find that a new set of Nash equilibria emerge in a natural way. Nash equilibria in this set are optimal as Eisert's solution of the quantum Prisoner's dilemma and include this solution as a limit case.

  15. Hepatic fat quantification using the two-point Dixon method and fat color maps based on non-alcoholic fatty liver disease activity score.

    Science.gov (United States)

    Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu

    2017-04-01

    The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI-fat fraction (MRI-FF) using the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and MR spectroscopy-fat fraction was used to estimate the corrected MRI-FF for the multiple spectral peaks of hepatic fat. A color FF map was then generated with the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on the corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and MR spectroscopy-fat fraction, and between corrected MRI-FF and histological steatosis, were strong (R² = 0.90 and R² = 0.88, respectively). Liver fat variability increased significantly with visual fat uniformity grade using both maps (ρ = 0.67–0.69; both P values were significant). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
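
    The signal arithmetic behind the two-point Dixon fat fraction is compact enough to sketch. A minimal illustration of the uncorrected separation (not the authors' multi-peak-corrected MRI-FF), assuming magnitude in-phase and opposed-phase signals:

```python
def dixon_fat_fraction(ip, op):
    """Two-point Dixon separation from in-phase (ip) and opposed-phase
    (op) signal magnitudes: water and fat signals add in ip and subtract
    in op, so water = (ip+op)/2, fat = (ip-op)/2, and
    FF = fat / (water + fat) = (ip - op) / (2 * ip)."""
    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    return fat / (water + fat)
```

    For example, an in-phase signal of 100 with an opposed-phase signal of 60 gives a fat fraction of 0.2.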

  16. Parametrization of Combined Quantum Mechanical and Molecular Mechanical Methods: Bond-Tuned Link Atoms

    Directory of Open Access Journals (Sweden)

    Xin-Ping Wu

    2018-05-01

    Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM−MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations with covalent bonds cut by the QM−MM boundary, it has previously been proposed to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme in which the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM−MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.

  17. Can't Count or Won't Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment.

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2016-06-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention, alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and more likely to appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group were exposed to the intricacies of method and their optimism about their own abilities was challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferable.

  18. Development of safe mechanism for surgical robots using equilibrium point control method.

    Science.gov (United States)

    Park, Shinsuk; Lim, Hokjin; Kim, Byeong-sang; Song, Jae-bok

    2006-01-01

    This paper introduces a novel mechanism for surgical robotic systems to generate human arm-like compliant motion. The mechanism is based on the idea of the equilibrium point control hypothesis which claims that multi-joint limb movements are achieved by shifting the limbs' equilibrium positions defined by neuromuscular activity. The equilibrium point control can be implemented on a robot manipulator by installing two actuators at each joint of the manipulator, one to control the joint position, and the other to control the joint stiffness. This double-actuator mechanism allows us to arbitrarily manipulate the stiffness (or impedance) of a robotic manipulator as well as its position. Also, the force at the end-effector can be estimated based on joint stiffness and joint angle changes without using force transducers. A two-link manipulator and a three-link manipulator with the double-actuator units have been developed, and experiments and simulation results show the potential of the proposed approach. By creating the human arm-like behavior, this mechanism can improve the performance of robot manipulators to execute stable and safe movement in surgical environments by using a simple control scheme.
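
    The sensorless force estimation described above (joint stiffness times joint-angle deviation, mapped through the manipulator Jacobian) can be sketched for a planar two-link arm. The link lengths, stiffness values, and angles below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def estimated_tip_force(q, q_eq, stiffness, lengths):
    """Estimate the force at the end-effector of a planar 2-link arm
    without a force transducer: joint torques follow from the joint
    stiffness and the deviation from the equilibrium angles,
    tau = K (q_eq - q), and map to a tip force via F = J(q)^-T tau."""
    q = np.asarray(q, float)
    q_eq = np.asarray(q_eq, float)
    l1, l2 = lengths
    tau = np.diag(stiffness) @ (q_eq - q)
    q1, q12 = q[0], q[0] + q[1]
    # geometric Jacobian of the planar 2-link arm
    J = np.array([[-l1 * np.sin(q1) - l2 * np.sin(q12), -l2 * np.sin(q12)],
                  [ l1 * np.cos(q1) + l2 * np.cos(q12),  l2 * np.cos(q12)]])
    return np.linalg.solve(J.T, tau)
```

    With unit links, the elbow at 90°, and a small shoulder equilibrium offset, the estimated force points along the second link, matching the statics of the configuration.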

  19. A direct method of natural frequency analysis on pipeline conveying fluid with both ends supported

    International Nuclear Information System (INIS)

    Huang Yimin; Ge Seng; Wu Wei; Jie He

    2012-01-01

    Highlights: ► A direct method derived from Ferrari's method was used to solve quartic equations. ► The frequency equations of a pipeline conveying fluid with both ends supported were studied. ► The natural frequency of each order can be obtained with the direct method. ► The first five critical flow velocities were obtained numerically. - Abstract: The natural frequency equations of fluid–structure interaction in a pipeline conveying fluid with both ends supported are investigated by a direct method derived from Ferrari's method, which is used to solve quartic equations. The dynamic equation of a pipeline conveying fluid, in two variables, is obtained from Hamilton's variational principle based on Euler–Bernoulli beam theory. By using separation of variables together with the method derived from Ferrari's method, the natural frequency equations and the critical flow velocity equations of a pipeline conveying fluid with both ends supported are obtained in a mathematically decoupled form. The natural frequencies and critical flow velocities of each order can then be obtained numerically. The first five dimensionless critical flow velocities are obtained, and the results indicate that the clamped–simply supported configuration is less stable than the clamped–clamped one and more stable than the simply–simply supported one. The conclusions can be applied to nuclear installations and other engineering fields concerned with vibration reduction.
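
    The core of the direct method is a closed-form quartic solve in the spirit of Ferrari's method. A self-contained sketch (the resolvent cubic is solved numerically with numpy for brevity rather than in closed form by Cardano's method):

```python
import cmath
import numpy as np

def ferrari_quartic(a, b, c, d, e):
    """All four roots of a*x^4 + b*x^3 + c*x^2 + d*x + e = 0
    via Ferrari's reduction to a resolvent cubic plus two quadratics."""
    b, c, d, e = b / a, c / a, d / a, e / a
    # depress the quartic: x = y - b/4  ->  y^4 + p*y^2 + q*y + r = 0
    p = c - 3 * b**2 / 8
    q = d - b * c / 2 + b**3 / 8
    r = e - b * d / 4 + b**2 * c / 16 - 3 * b**4 / 256
    shift = -b / 4
    if abs(q) < 1e-12:  # biquadratic: solve as a quadratic in y^2
        ys = []
        for z in np.roots([1, p, r]):
            ys += [cmath.sqrt(z), -cmath.sqrt(z)]
        return [y + shift for y in ys]
    # resolvent cubic: 8m^3 + 8p*m^2 + (2p^2 - 8r)m - q^2 = 0 has a
    # real root m > 0 whenever q != 0 (its value at m = 0 is -q^2 < 0)
    m = next(z.real for z in np.roots([8, 8 * p, 2 * p**2 - 8 * r, -q**2])
             if abs(z.imag) < 1e-9 and z.real > 0)
    s = cmath.sqrt(2 * m)
    roots = []
    for sign in (1, -1):
        # y^2 + p/2 + m = sign*(s*y - q/(2s))  ->  quadratic in y
        A = -sign * s
        B = p / 2 + m + sign * q / (2 * s)
        disc = cmath.sqrt(A * A - 4 * B)
        roots += [(-A + disc) / 2 + shift, (-A - disc) / 2 + shift]
    return roots
```

    For instance, the quartic (x−1)(x−2)(x−3)(x−5) = x⁴ − 11x³ + 41x² − 61x + 30 is recovered exactly, as is a biquadratic such as x⁴ − 5x² + 4.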

  20. Front end design of smartphone-based mobile health

    Science.gov (United States)

    Zhang, Changfan; He, Lingsong; Gao, Zhiqiang; Ling, Cong; Du, Jianhao

    2015-02-01

    Mobile health has become a new trend all over the world with the rapid development of intelligent terminals and the mobile internet. It can help patients monitor their health at home and makes it convenient for doctors to diagnose remotely. Smartphone-based mobile health has major advantages in cost and data sharing. Its front end design mainly focuses on two points: one is the implementation of medical sensors aimed at measuring various medical signals; the other is the acquisition of the medical signal from the sensors by the smartphone. In this paper, both aspects are discussed. First, medical sensor implementation based on mature measurement solutions is described, taking ECG (electrocardiograph) sensor design as an example; using integrated chips can simplify the design. Second, two typical data acquisition architectures for smartphones, namely Bluetooth-based and MIC (microphone)-based architectures, are compared. The Bluetooth architecture requires an acquisition card, whereas the MIC design uses the smartphone's sound card instead. Smartphone-based virtual instrument app designs corresponding to the above acquisition architectures are discussed. In the experiments, the Bluetooth and MIC architectures were used to acquire blood pressure and ECG data, respectively. The results show that the Bluetooth design can guarantee high accuracy during acquisition and transmission, and the MIC design is competitive because of its low cost and convenience.

  1. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Abstract Background The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with different detector designs. Methods We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D)-positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces the dependence of the spatial resolution on the position in the FOV.
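
    The NEMA-style FWHM determination used here can be sketched as linear interpolation at half maximum on a 1-D point-source profile; a simplified version that omits the parabolic peak fit of the full NEMA NU 2 procedure:

```python
import numpy as np

def fwhm(profile, pixel_mm):
    """FWHM of a 1-D point-source profile, in mm: find the samples
    straddling half maximum on each side of the peak and locate the
    crossings by linear interpolation."""
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    idx = np.where(y >= half)[0]
    l, r = idx[0], idx[-1]          # first and last sample above half max
    left = l - 1 + (half - y[l - 1]) / (y[l] - y[l - 1])
    right = r + (half - y[r]) / (y[r + 1] - y[r])
    return (right - left) * pixel_mm
```

    Applied to a sampled Gaussian of standard deviation σ, this recovers the analytic value 2√(2 ln 2) σ to within the interpolation error.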

  2. Effect of feedstock end boiling point on product sulphur during ultra deep diesel hydrodesulphurization

    Energy Technology Data Exchange (ETDEWEB)

    Stratiev, D.; Ivanov, A.; Jelyaskova, M. [Lukoil Neftochim Bourgas AD, Bourgas (Bulgaria)

    2004-12-01

    An investigation was carried out to test the feasibility of producing 50 and 10 ppm sulphur diesel in a conventional hydrotreating unit operating at low pressure by varying the feedstock end boiling point. Middle distillate fractions distilled from a mixture of Ural crude oil, reduced crude, vacuum gas oil, naphtha and low sulphur crude oils, with 95% vol. points of 274, 359, 343, 333, and 322 C (ASTM D-86 method) and sulphur contents of 0.36, 0.63, 0.99, 0.57, and 0.47%, respectively, were hydrotreated using the Akzo Nobel Stars family Co-Mo KF-757 catalyst in a trickle bed pilot plant at the following conditions: reactor inlet temperature range of 320-360 C; liquid hourly space velocity (LHSV) range of 1-2 h{sup -1}; total reactor pressure of 3.5 MPa; treating gas:feedstock ratio of 250 Nm{sup 3}/m{sup 3}. It was found that the determining factor for attaining ultra low sulphur levels during middle distillate hydrodesulphurization was not the total sulphur content of the feed but the content of material boiling above 340 C (according to TBP). For all LHSVs and reactor inlet temperatures studied, the dependence of product sulphur on the feed 340 C+ fraction content was approximated by a second-order power law. The specification of 50 ppm sulphur was achieved with all feedstocks studied. However, the 10 ppm sulphur limit could be met only with feedstocks having 95% vol. points below 333 C, which is accompanied by about a 10% reduction of the diesel potential. Hydrotreatment tests on a blend of 80% straight-run gas oil (ASTM D-86 95% vol. of 274 C) and 20% FCC LCO (ASTM D-86 95% vol. of 284 C) showed product sulphur levels no higher than those obtained by hydrotreatment of the straight-run gas oil alone, indicating that undercutting the FCC LCO gives the refiner the opportunity to increase the potential for production of 10 ppm sulphur diesel under the conditions of a conventional hydrotreating unit operating at low pressure.
The product cetane index was

  3. Point-to-point radio link variation at E-band and its effect on antenna design

    NARCIS (Netherlands)

    Al-Rawi, A.; Dubok, A.; Herben, M.H.A.J.; Smolders, A.B.

    2015-01-01

    Radio propagation will strongly influence the design of the antenna and front-end components of E-band point-to-point communication systems. Based on the ITU rain model, the rain attenuation is estimated in a statistical sense and it is concluded that for backhaul links of 1–10 km, antennas with a

  4. The effective QCD phase diagram and the critical end point

    Directory of Open Access Journals (Sweden)

    Alejandro Ayala

    2015-08-01

    We study the QCD phase diagram on the temperature T and quark chemical potential μ plane, modeling the strong interactions with the linear sigma model coupled to quarks. The phase transition line is found from the effective potential at finite T and μ, taking into account the plasma screening effects. We find the location of the critical end point (CEP) to be (μCEP/Tc, TCEP/Tc) ∼ (1.2, 0.8), where Tc is the (pseudo)critical temperature for the crossover phase transition at vanishing μ. This location lies within the region found by lattice-inspired calculations. The results show that in the linear sigma model, the CEP's location in the phase diagram is, as expected, determined solely through chiral symmetry breaking. The same is likely to be true for all other models which do not exhibit confinement, provided a proper treatment of the plasma infrared properties is implemented for the description of chiral symmetry restoration. Similarly, we also expect these corrections to be substantially relevant in the QCD phase diagram.

  5. Automatic detection of end-diastole and end-systole from echocardiography images using manifold learning

    International Nuclear Information System (INIS)

    Gifani, Parisa; Behnam, Hamid; Shalbaf, Ahmad; Sani, Zahra Alizadeh

    2010-01-01

    The automatic detection of end-diastole and end-systole frames in echocardiography images is the first step in calculating the ejection fraction, stroke volume and other features related to heart motion abnormalities. In this paper, a manifold learning algorithm is applied to 2D echocardiography images to find the relationship between the frames of one cycle of heart motion. In this approach the nonlinear information embedded in the sequential images is represented in a two-dimensional manifold by the LLE algorithm, and each image is depicted by a point on the reconstructed manifold. There are three dense regions on the manifold which correspond to three phases of the cardiac cycle ('isovolumetric contraction', 'isovolumetric relaxation', 'reduced filling'), wherein there is no prominent change in ventricular volume. Since the end-systolic and end-diastolic frames lie in the isovolumic phases of the cardiac cycle, the dense regions can be used to find these frames. By calculating the distance between consecutive points on the manifold, the isovolumic frames are mapped onto the three minima of the distance diagram, which are used to select the corresponding images. The minimum correlation between these images leads to the detection of the end-systole and end-diastole frames. The results on six healthy volunteers were validated by an experienced echocardiologist and demonstrate the usefulness of the presented method
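
    The distance-minima step of this approach is straightforward to sketch on an already-computed 2-D embedding (the LLE step itself, e.g. with scikit-learn's LocallyLinearEmbedding, is assumed to have been run beforehand):

```python
import numpy as np

def isovolumic_candidates(embedding):
    """Given an (n_frames, 2) manifold embedding of one cardiac cycle,
    return the frame indices where the step between consecutive points
    is a local minimum -- the dense, near-stationary regions that
    contain the end-systolic and end-diastolic frames."""
    d = np.linalg.norm(np.diff(embedding, axis=0), axis=1)
    # local minima of the consecutive-distance diagram
    return [i for i in range(1, len(d) - 1) if d[i] < d[i - 1] and d[i] < d[i + 1]]
```

    On a toy trajectory whose step lengths shrink and grow, the function picks out exactly the frames where the motion momentarily stalls.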

  6. Effect of material constants on power output in piezoelectric vibration-based generators.

    Science.gov (United States)

    Takeda, Hiroaki; Mihara, Kensuke; Yoshimura, Tomohiro; Hoshina, Takuya; Tsurumi, Takaaki

    2011-09-01

    A possible power output estimation based on material constants in piezoelectric vibration-based generators is proposed. A modified equivalent circuit model of the generator was built and validated against measurements of a generator fabricated using potassium sodium niobate-based and lead zirconate titanate (PZT) ceramics. Subsequently, generators with the same structure using other PZT-based and bismuth-layered-structure ferroelectric ceramics were fabricated and tested. The power outputs of these generators were expressed as linear functions of a term composed of the electromechanical coupling coefficient k_sys^2 and the mechanical quality factor Q*_m of the generator. The relationship between the device constants (k_sys^2 and Q*_m) and the material constants (k_31^2 and Q_m) was clarified. Estimation of the power output using material constants is demonstrated and an appropriate piezoelectric material for the generator is suggested.

  7. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    POS, integrated from GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has system error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for the random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  8. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated from GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has system error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for the random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
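
    The Monte Carlo evaluation of VZ can be sketched using the orthocenter property of orthogonal vanishing points (the principal point is the orthocenter of the vanishing-point triangle). The coordinates and covariances below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def third_vp(vx, vy, pp):
    """Third orthogonal vanishing point from VX, VY and the principal
    point pp: pp is the orthocenter of the VP triangle, so
    (pp - VX) . (VY - VZ) = 0 and (pp - VY) . (VX - VZ) = 0,
    a 2x2 linear system in VZ."""
    a = np.array([pp - vx, pp - vy])
    b = np.array([np.dot(pp - vx, vy), np.dot(pp - vy, vx)])
    return np.linalg.solve(a, b)

def mc_third_vp(vx, vy, pp, cov_x, cov_y, n=1000, seed=0):
    """Monte Carlo error distribution of VZ given the error ellipses
    (covariances) of VX and VY; camera distortion is ignored."""
    rng = np.random.default_rng(seed)
    samples = np.array([
        third_vp(rng.multivariate_normal(vx, cov_x),
                 rng.multivariate_normal(vy, cov_y), pp)
        for _ in range(n)])
    return samples.mean(axis=0), np.cov(samples.T)
```

    With noise-free inputs the solve reproduces the geometric construction exactly; sampling then propagates the VX/VY error ellipses into an empirical mean and covariance for VZ.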

  9. Mapping hospice patients' perception and verbal communication of end-of-life needs: an exploratory mixed methods inquiry

    Directory of Open Access Journals (Sweden)

    Arnold Bruce L

    2011-01-01

    Abstract Background Comprehensive "Total Pain" assessments of patients' end-of-life needs are critical for providing improved patient-clinician communication, assessing needs, and offering high quality palliative care. However, patients' needs-based research methodologies and findings remain highly diverse, and their lack of consensus prevents optimum needs assessments and care planning. Mixed methods is an underused yet robust "patient-based" approach for using reported lived experiences to map both the incidence and prevalence of what patients perceive as important end-of-life needs. Methods Findings often include methodological artifacts and their own selection bias. Moving beyond diverse findings therefore requires revisiting methodological choices. A mixed-methods cross-sectional research design is therefore used to reduce the limitations inherent in both qualitative and quantitative methodologies. Audio-taped phenomenological "thinking aloud" interviews of a purposive sample of 30 hospice patients are used to identify their vocabulary for communicating perceptions of end-of-life needs. Grounded theory procedures assisted by QSR-NVivo software are then used to discover the domains of needs embedded in the interview narratives. Summary findings are translated into quantified format for presentation and analytical purposes. Results Findings from this mixed-methods feasibility study indicate that patients' narratives represent 7 core domains of end-of-life needs: (1) time, (2) social, (3) physiological, (4) death and dying, (5) safety, (6) spirituality, (7) change & adaptation. The prevalence, rather than just the occurrence, of patients' reported needs provides further insight into their relative importance. Conclusion Patients' perceptions of end-of-life needs are multidimensional, often ambiguous and uncertain.
Mixed methodology appears to hold considerable promise for unpacking both the occurrence and prevalence of cognitive structures represented by

  10. Increasing Literacy in Quantitative Methods: The Key to the Future of Canadian Psychology

    Science.gov (United States)

    Counsell, Alyssa; Cribbie, Robert A.; Harlow, Lisa. L.

    2016-01-01

    Quantitative methods (QM) dominate empirical research in psychology. Unfortunately most researchers in psychology receive inadequate training in QM. This creates a challenge for researchers who require advanced statistical methods to appropriately analyze their data. Many of the recent concerns about research quality, replicability, and reporting practices are directly tied to the problematic use of QM. As such, improving quantitative literacy in psychology is an important step towards eliminating these concerns. The current paper will include two main sections that discuss quantitative challenges and opportunities. The first section discusses training and resources for students and presents descriptive results on the number of quantitative courses required and available to graduate students in Canadian psychology departments. In the second section, we discuss ways of improving quantitative literacy for faculty, researchers, and clinicians. This includes a strong focus on the importance of collaboration. The paper concludes with practical recommendations for improving quantitative skills and literacy for students and researchers in Canada. PMID:28042199

  11. Increasing Literacy in Quantitative Methods: The Key to the Future of Canadian Psychology.

    Science.gov (United States)

    Counsell, Alyssa; Cribbie, Robert A; Harlow, Lisa L

    2016-08-01

    Quantitative methods (QM) dominate empirical research in psychology. Unfortunately most researchers in psychology receive inadequate training in QM. This creates a challenge for researchers who require advanced statistical methods to appropriately analyze their data. Many of the recent concerns about research quality, replicability, and reporting practices are directly tied to the problematic use of QM. As such, improving quantitative literacy in psychology is an important step towards eliminating these concerns. The current paper will include two main sections that discuss quantitative challenges and opportunities. The first section discusses training and resources for students and presents descriptive results on the number of quantitative courses required and available to graduate students in Canadian psychology departments. In the second section, we discuss ways of improving quantitative literacy for faculty, researchers, and clinicians. This includes a strong focus on the importance of collaboration. The paper concludes with practical recommendations for improving quantitative skills and literacy for students and researchers in Canada.

  12. The business process management software for successful quality management and organization: A case study from the University of Split School of Medicine

    Directory of Open Access Journals (Sweden)

    Damir Sapunar

    2016-05-01

    Objective. Our aim was to describe a comprehensive model of internal quality management (QM) at a medical school founded on a business process analysis (BPA) software tool. Methods. The BPA software tool was used as the core element for the description of all working processes in our medical school, and the resulting system subsequently served as a comprehensive model of internal QM. Results. The quality management system at the University of Split School of Medicine included the documentation and analysis of all business processes within the School. The analysis revealed 80 weak points related to one or several business processes. Conclusion. A precise analysis of medical school business processes allows identification of unfinished, unclear and inadequate points in these processes, and subsequently the respective improvements, an increase of the QM level, and ultimately a rationalization of the institution's work. Our approach offers a potential reference model for the development of a common QM framework allowing continuous quality control, i.e. adjustment and adaptation to the contemporary educational needs of medical students.

  13. Dual reference point temperature interrogating method for distributed temperature sensor

    International Nuclear Information System (INIS)

    Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang

    2013-01-01

    A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method overcomes deficiencies due to the impact of DC offsets and the gain difference between the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented, and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
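
    The dual-reference idea can be sketched with a linearized measurement model: two points of known temperature along the fiber determine the channel gain and DC offset, which then drop out of the interrogation. The exponential ratio model and the ΔE/k value below are hypothetical placeholders for illustration, not the letter's actual formula:

```python
import math

DE_OVER_K = 600.0  # assumed effective energy gap over Boltzmann constant, in K (hypothetical)

def ideal_ratio(T):
    """Idealized anti-Stokes/Stokes-like ratio as a function of temperature (K)."""
    return math.exp(-DE_OVER_K / T)

def calibrate(r1, T1, r2, T2):
    """Channel gain g and DC offset o from two reference points of known
    temperature, assuming the measured ratio is r = g * ideal_ratio(T) + o."""
    g = (r1 - r2) / (ideal_ratio(T1) - ideal_ratio(T2))
    o = r1 - g * ideal_ratio(T1)
    return g, o

def interrogate(r, g, o):
    """Invert the measurement model to recover temperature at any point."""
    return -DE_OVER_K / math.log((r - o) / g)
```

    Calibrating against two references recovers the (unknown) gain and offset exactly in this noise-free model, so the interrogated temperature no longer depends on them.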

  14. Gender preference between traditional and PowerPoint methods of teaching gross anatomy.

    Science.gov (United States)

    Nuhu, Saleh; Adamu, Lawan Hassan; Buba, Mohammed Alhaji; Garba, Sani Hyedima; Dalori, Babagana Mohammed; Yusuf, Ashiru Hassan

    2018-01-01

    The teaching and learning process is increasingly metamorphosing from the traditional chalk and talk to the modern dynamism of information and communication technology. Medical education is no exception to this dynamism, especially in the teaching of gross anatomy, which serves as one of the bases for understanding human structure. This study was conducted to determine the gender preference of preclinical medical students for the traditional (chalk and talk) versus PowerPoint presentation in the teaching of gross anatomy. This was a cross-sectional, prospective study conducted among preclinical medical students at the University of Maiduguri, Nigeria. Using simple random techniques, a questionnaire was circulated among 280 medical students, of whom 247 filled in the questionnaire appropriately. The data obtained were analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to determine, among other things, the method preferred by the students. The majority of the preclinical medical students at the University of Maiduguri preferred the PowerPoint method for the teaching of gross anatomy over the conventional method. A Cronbach alpha value of 0.76 was obtained, which is an acceptable level of internal consistency. A statistically significant association was found between gender and preferred method of lecture delivery: on the clarity of lecture content, females preferred the conventional method of lecture delivery whereas males preferred the PowerPoint method; on the reproducibility of text and diagrams, females preferred the PowerPoint method of teaching gross anatomy while males preferred the conventional method. There are gender preferences with regard to clarity of lecture contents and reproducibility of text and diagrams. It was also revealed from this study that the majority of the preclinical medical students at the University of Maiduguri prefer PowerPoint presentation over the traditional chalk and talk method in most of the

  15. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    Science.gov (United States)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
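A toy illustration of how such hybrid measurements might be combined to pinpoint a cause of degradation (the metric names and thresholds below are invented for illustration, not taken from the paper):

```python
def diagnose(network, application, system):
    """Toy cause-pinpointing rule combining hybrid monitoring metrics.

    network:     active probing results, e.g. {'loss': 0.0, 'jitter': 5.0}
    application: passive RTCP-derived results, e.g. {'loss': 0.05}
    system:      end-system metrics, e.g. {'cpu': 0.5}
    All names and thresholds are illustrative assumptions.
    """
    if system["cpu"] > 0.9:
        return "end-system overload"          # passive system metric dominates
    if network["loss"] > 0.01 or network["jitter"] > 30.0:
        return "network congestion"           # active probes see a bad path
    if application["loss"] > 0.01:
        return "application-level bottleneck" # path is clean, app still losing
    return "healthy"
```

Comparing the active (network) and passive (application/system) views is what lets the last rule distinguish an application problem from a network one.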

  16. Casimir Energy of the Nambu-Goto String with Gauss-Bonnet Term and Point-Like Masses at the Ends

    Science.gov (United States)

    Hadasz, Leszek

    1999-09-01

    We calculate the Casimir energy of the rotating Nambu-Goto string with the Gauss-Bonnet term in the action and point-like masses at the ends. This energy turns out to be negative for all values of the parameters of the model.

  17. Casimir Energy of the Nambu-Goto String with Gauss-Bonnet Term and Point-Like Masses at the Ends

    International Nuclear Information System (INIS)

    Hadasz, L.

    1999-01-01

    We calculate the Casimir energy of the rotating Nambu-Goto string with the Gauss-Bonnet term in the action and point-like masses at the ends. This energy turns out to be negative for all values of the parameters of the model. (author)

  18. The Contact State Monitoring for Seal End Faces Based on Acoustic Emission Detection

    Directory of Open Access Journals (Sweden)

    Xiaohui Li

    2016-01-01

    Monitoring the contact state of seal end faces helps provide early warning of seal failure. In acoustic emission (AE) detection for mechanical seals, the main difficulty is to reduce the background noise and to classify the dispersed features. To solve these problems and achieve higher detection rates, a new approach based on a genetic particle filter with autoregression (AR-GPF) and a hypersphere support vector machine (HSSVM) is presented. First, an AR model is used to build the dynamic state space (DSS) of the AE signal, and GPF is used for signal filtering. Then, multiple features are extracted, and a classification model based on HSSVM is constructed for state recognition. In this approach, AR-GPF is an excellent time-domain method for noise reduction, and HSSVM has an advantage on dispersed features. Finally, experimental data show that the proposed method can effectively detect the contact state of the seal end faces and has higher accuracy rates than some other existing methods.
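The AR-plus-particle-filter idea can be sketched with a plain bootstrap particle filter on an AR(1) state-space model (the paper fits an AR model to the AE signal and uses a genetic particle filter; this minimal non-genetic version only illustrates the filtering step):

```python
import math
import random

def particle_filter_ar1(observations, phi=0.8, q=0.1, r=0.5, n=500, seed=1):
    """Minimal bootstrap particle filter for an AR(1) state-space model.

    State:       x_t = phi * x_{t-1} + N(0, q^2)   (AR(1) dynamics)
    Observation: y_t = x_t + N(0, r^2)             (noisy measurement)
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # propagate each particle through the AR(1) dynamics
        particles = [phi * x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        if total == 0.0:
            weights = [1.0 / n] * n  # degenerate weights: fall back to uniform
        else:
            weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates
```

A genetic particle filter would replace the plain resampling step with selection/crossover/mutation operators.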

  19. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  20. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  1. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    Science.gov (United States)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
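Since PNG compression is lossless DEFLATE over a pixel grid, the core of the scheme — quantize scanner ranges onto an image-like array and compress losslessly — can be emulated with zlib (the 1 mm quantization step and the packing format below are assumptions for illustration, not the paper's exact encoding):

```python
import struct
import zlib

def compress_ranges(ranges, step=0.001):
    """Quantize scanner range values (metres) and DEFLATE-compress them.

    zlib uses the same DEFLATE algorithm as PNG, so this stands in for
    encoding a scanline-ordered range image as a PNG. The 1 mm step is
    an illustrative assumption.
    """
    codes = [round(r / step) for r in ranges]
    raw = struct.pack("<%dI" % len(codes), *codes)  # 32-bit little-endian codes
    return zlib.compress(raw, 9)

def decompress_ranges(blob, n, step=0.001):
    """Invert compress_ranges: recover ranges to within step/2."""
    codes = struct.unpack("<%dI" % n, zlib.decompress(blob))
    return [c * step for c in codes]
```

Scanline-ordered ranges from a spinning laser are highly repetitive, which is exactly where DEFLATE-style compression pays off.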

  2. POINT-CLOUD COMPRESSION FOR VEHICLE-BASED MOBILE MAPPING SYSTEMS USING PORTABLE NETWORK GRAPHICS

    Directory of Open Access Journals (Sweden)

    K. Kohira

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.

  3. GPU-based implementation of an accelerated SR-NLUT based on N-point one-dimensional sub-principal fringe patterns in computer-generated holograms

    Directory of Open Access Journals (Sweden)

    Hee-Min Choi

    2015-06-01

    An accelerated spatial-redundancy-based novel-look-up-table (A-SR-NLUT) method based on a new concept of the N-point one-dimensional sub-principal fringe pattern (N-point 1-D sub-PFP) is implemented on a graphics processing unit (GPU) for fast calculation of computer-generated holograms (CGHs) of three-dimensional (3-D) objects. Since the proposed method can generate the N-point two-dimensional (2-D) PFPs for CGH calculation from the pre-stored N-point 1-D PFPs, the loading time of the N-point PFPs on the GPU can be dramatically reduced, which results in a great increase in the computational speed of the proposed method. Experimental results confirm that the average calculation time for one object point has been reduced by 49.6% and 55.4% compared to those of the conventional 2-D SR-NLUT methods for the 2-point and 3-point SR maps, respectively.

  4. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    Science.gov (United States)

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  5. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    Science.gov (United States)

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly and dangerous, and the quality and quantity of the data are sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates in a horizontal direction, based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  6. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    The ideal state of continuous one-piece flow may never be achieved. Still, the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified, resulting in a process-based framework for inventory classification that can be used for flow design.
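The eight decoupling-point types the abstract arrives at can be enumerated directly as the product of three binary axes; the axis names below are our reading of the abstract, not the paper's own labels:

```python
from itertools import product

# Three binary axes implied by the abstract: flow driver (demand/supply),
# factor origin (exogenous-independent / endogenous-dependent), and
# planning level (strategic/tactical). 2 x 2 x 2 = 8 types.
types = [
    f"{driver}-driven, {factor}, {level}"
    for driver, factor, level in product(
        ("demand", "supply"),
        ("independent (exogenous)", "dependent (endogenous)"),
        ("strategic", "tactical"),
    )
]
for t in types:
    print(t)
```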

  7. Molecular Dynamics Simulations with Quantum Mechanics/Molecular Mechanics and Adaptive Neural Networks.

    Science.gov (United States)

    Shen, Lin; Yang, Weitao

    2018-03-13

    Direct molecular dynamics (MD) simulation with ab initio quantum mechanical and molecular mechanical (QM/MM) methods is very powerful for studying the mechanism of chemical reactions in a complex environment but also very time-consuming. The computational cost of QM/MM calculations during MD simulations can be reduced significantly using semiempirical QM/MM methods with lower accuracy. To achieve higher accuracy at the ab initio QM/MM level, a correction on the existing semiempirical QM/MM model is an attractive idea. Recently, we reported a neural network (NN) method, QM/MM-NN, to predict the potential energy difference between semiempirical and ab initio QM/MM approaches. The high-level results can be obtained using a neural network based on semiempirical QM/MM MD simulations, but the lack of direct MD sampling at the ab initio QM/MM level is still a deficiency that limits the applications of QM/MM-NN. In the present paper, we developed a dynamic scheme of QM/MM-NN for direct MD simulations on the NN-predicted potential energy surface to approximate ab initio QM/MM MD. Since some configurations excluded from the database for NN training were encountered during simulations, which may cause difficulties in MD sampling, an adaptive procedure inspired by the selection scheme reported by Behler [ Behler Int. J. Quantum Chem. 2015 , 115 , 1032 ; Behler Angew. Chem., Int. Ed. 2017 , 56 , 12828 ] was employed with some adaptations to update the NN and carry out MD iteratively. We further applied the adaptive QM/MM-NN MD method to free energy calculation and transition path optimization on chemical reactions in water. The results at the ab initio QM/MM level can be well reproduced using this method after 2-4 iteration cycles. The saving in computational cost is about 2 orders of magnitude. It demonstrates that the QM/MM-NN with direct MD simulations has great potential not only for the calculation of thermodynamic properties but also for the characterization of
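The underlying Δ-learning idea — fit a cheap model to the difference between low- and high-level energies, then add the learned correction to the low-level result — can be sketched with a linear stand-in for the neural network (all data below are made up for illustration):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy "semiempirical" and "ab initio" energies along a 1-D coordinate.
coords = [0.0, 0.5, 1.0, 1.5, 2.0]
e_low  = [0.0, 0.20, 0.45, 0.80, 1.10]   # cheap-level energies (invented)
e_high = [0.1, 0.35, 0.65, 1.05, 1.40]   # expensive-level energies (invented)

# Learn the correction dE = E_high - E_low, then apply it to the cheap level.
a, b = fit_linear(coords, [h - l for h, l in zip(e_high, e_low)])
corrected = [l + a * x + b for x, l in zip(coords, e_low)]
```

The real method replaces the linear fit with a neural network and retrains adaptively as new configurations appear during the MD run.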

  8. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    International Nuclear Information System (INIS)

    Liu, Ying; Zhong, Ruofei

    2014-01-01

    A current problem in laser radar point data classification is building and urban terrain segmentation; this paper proposes a point cloud segmentation method based on the PCL library. PCL is a large cross-platform open-source C++ programming library that implements a large number of efficient point-cloud-related data structures and generic algorithms involving point cloud retrieval, filtering, segmentation, registration, feature extraction, curved surface reconstruction, visualization, etc. Because laser radar point clouds are characterized by large data volumes and asymmetric distribution, this paper proposes using a kd-tree data structure to organize the data; then a Voxel Grid filter is used for point cloud resampling, namely to reduce the amount of point cloud data while keeping the shape characteristics of the point cloud; finally, using the PCL segmentation module, the Euclidean Cluster Extraction class is applied for three-dimensional point cloud segmentation of buildings and ground. The experimental results show that this method avoids multiple copies of the existing data, saves program storage space through calls to PCL library methods and classes, shortens the program compile time and improves the running speed of the program.
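The downsample-then-cluster part of the pipeline can be sketched in plain Python (a toy analogue of PCL's VoxelGrid filter and EuclideanClusterExtraction, not the C++ API itself; a real implementation would use a kd-tree for the neighbour queries rather than brute force):

```python
from collections import defaultdict, deque

def voxel_downsample(points, size):
    """Keep one centroid per occupied voxel (VoxelGrid analogue)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // size) for c in p)
        cells[key].append(p)
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in cells.values()]

def euclidean_clusters(points, tol):
    """BFS clustering: points closer than `tol` join one cluster
    (a plain-Python analogue of Euclidean Cluster Extraction)."""
    unused = set(range(len(points)))
    clusters = []
    while unused:
        seed = unused.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unused
                    if sum((a - b) ** 2
                           for a, b in zip(points[i], points[j])) <= tol ** 2]
            for j in near:
                unused.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append([points[i] for i in cluster])
    return clusters
```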

  9. Can’t Count or Won’t Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2015-01-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferential. PMID:27330225

  10. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    To survive in the current shipbuilding industry, it is of vital importance for shipyards to have ship components' accuracy evaluated efficiently during most manufacturing steps. Evaluating components' accuracy by comparing each component's point cloud data, scanned by laser scanners, with the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data include irregular obstacles endogenously, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighbor domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for accuracy evaluation. Results show that the proposed methods support accuracy-evaluation-targeted point cloud data processing efficiently in practice.
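A minimal version of the ICP registration step, restricted to a pure 2-D translation with brute-force nearest neighbours (the paper additionally seeds the registration direction via principal component analysis and works on 3-D scans):

```python
def icp_translation(source, target, iters=20):
    """Minimal ICP estimating a pure translation between 2-D point sets.

    Alternates nearest-neighbour correspondence with the closed-form
    translation update (the mean of the residuals). A toy sketch of the
    registration step, not the paper's full 3-D pipeline.
    """
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in source]
        residuals = []
        for mx, my in moved:
            # brute-force nearest neighbour in the target set
            nx, ny = min(target, key=lambda t: (t[0] - mx) ** 2 + (t[1] - my) ** 2)
            residuals.append((nx - mx, ny - my))
        dx = sum(r[0] for r in residuals) / len(residuals)
        dy = sum(r[1] for r in residuals) / len(residuals)
        tx, ty = tx + dx, ty + dy
        if abs(dx) < 1e-9 and abs(dy) < 1e-9:
            break  # converged
    return tx, ty
```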

  11. Security Considerations around End-to-End Security in the IP-based Internet of Things

    NARCIS (Netherlands)

    Brachmann, M.; Garcia-Mochon, O.; Keoh, S.L.; Kumar, S.S.

    2012-01-01

    The IP-based Internet of Things refers to the interconnection of smart objects in a Low-power and Lossy Network (LLN) with the Internet by means of protocols such as 6LoWPAN or CoAP. The provisioning of an end-to-end security connection is the key to ensuring basic functionalities such as software

  12. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
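The negative-shape-function issue is easy to see numerically: quadratic Lagrange shape functions dip below zero between nodes, while the uniform cubic B-spline kernel is non-negative everywhere (these are standard textbook forms, not code from the paper):

```python
def quad_lagrange(xi):
    """Quadratic Lagrange shape functions on [-1, 1] (nodes at -1, 0, 1).
    The end-node functions go negative between nodes."""
    return (0.5 * xi * (xi - 1.0),   # node at xi = -1
            1.0 - xi * xi,           # node at xi = 0
            0.5 * xi * (xi + 1.0))   # node at xi = +1

def cubic_bspline(x):
    """Uniform cubic B-spline kernel: non-negative everywhere, C^2 smooth,
    and a partition of unity over integer shifts."""
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x ** 3
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0
```

At xi = 0.5, for example, the first quadratic Lagrange function evaluates to -0.125, which is what produces the inconsistencies the abstract describes when material points cross cell boundaries.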

  13. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into segments in 3-D that comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential, since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph, which connect the points with each other and with nodes representing feature clusters, hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that smoothness cost that only considers simple distance
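The objective being minimized by the min-cut has the usual data-plus-smoothness form; a sketch that only evaluates this energy for a given labeling (actually minimizing it requires a graph-cut solver, and the cost values below are invented):

```python
def labeling_energy(labels, data_cost, edges, smooth_cost):
    """Energy of a labeling: data terms plus pairwise smoothness terms.

    labels:      {point_id: cluster_id} assignment to evaluate
    data_cost:   {(point_id, cluster_id): cost} - fit in feature space
    edges:       spatial neighbour pairs (point_id, point_id)
    smooth_cost: penalty when two neighbours take different labels
    Graph-cut methods minimize exactly this objective; here we only
    evaluate it, as a toy illustration of the formulation.
    """
    e = sum(data_cost[(p, l)] for p, l in labels.items())
    e += sum(smooth_cost for i, j in edges if labels[i] != labels[j])
    return e
```

A labeling that fits the feature clusters well but fragments spatial neighbours pays the smoothness penalty, which is how the formulation trades spatial coherence against feature consistency.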

  14. A new design for SLAM front-end based on recursive SOM

    Science.gov (United States)

    Yang, Xuesi; Xia, Shengping

    2015-12-01

    Aiming at graph optimization-based monocular SLAM, a novel design for the front-end in single-camera SLAM is proposed, based on the recursive SOM. Pixel intensities are used directly to achieve image registration and motion estimation, which saves time compared with current appearance-based frameworks, which usually include feature extraction and matching. Once a key-frame is identified, a recursive SOM is used to perform loop-closure detection, resulting in a more precise location. An experiment on a public dataset validates our method on a computer, with a faster and effective result.

  15. Fixed point theorems in locally convex spaces—the Schauder mapping method

    Directory of Open Access Journals (Sweden)

    S. Cobzaş

    2006-03-01

    In the appendix to the book by F. F. Bonsall, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.

  16. No significant effect of angiotensin II receptor blockade on intermediate cardiovascular end points in hemodialysis patients

    DEFF Research Database (Denmark)

    Peters, Christian Daugaard; Kjaergaard, Krista D; Jensen, Jens D

    2014-01-01

    Agents blocking the renin-angiotensin-aldosterone system are frequently used in patients with end-stage renal disease, but whether they exert beneficial cardiovascular effects is unclear. Here the long-term effects of the angiotensin II receptor blocker irbesartan were studied in hemodialysis..., and residual renal function. Brachial blood pressure decreased significantly in both groups, but there was no significant difference between placebo and irbesartan. Use of additional antihypertensive medication, ultrafiltration volume, and dialysis dosage were not different. Intermediate cardiovascular end points such as central aortic blood pressure, carotid-femoral pulse wave velocity, left ventricular mass index, N-terminal brain natriuretic prohormone, heart rate variability, and plasma catecholamines were not significantly affected by irbesartan treatment. Changes in systolic blood pressure during

  17. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are
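The reported accuracy measures — mean and standard deviation of δQM = QM_clin − QM_pred, and the coefficient of determination R² — are straightforward to compute. A minimal helper, assuming paired lists of clinical and predicted quality-metric values (not code from the study):

```python
def qm_accuracy(clinical, predicted):
    """Mean/std of deltaQM = QM_clin - QM_pred, plus R^2 of the prediction."""
    n = len(clinical)
    d = [c - p for c, p in zip(clinical, predicted)]          # deltaQM values
    mean_d = sum(d) / n
    std_d = (sum((x - mean_d) ** 2 for x in d) / (n - 1)) ** 0.5
    mean_c = sum(clinical) / n
    ss_res = sum(x * x for x in d)                            # residual sum of squares
    ss_tot = sum((c - mean_c) ** 2 for c in clinical)         # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mean_d, std_d, r2
```

A model whose predictions match the clinical QMs exactly gives mean 0, standard deviation 0, and R² = 1; systematic bias shows up in the mean, scatter in the standard deviation.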

  18. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-01-01

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V 10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM clin − QM pred , and a coefficient of determination, R 2 . For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on

  19. Casimir energy of the Nambu-Goto string with Gauss-Bonnet term and point-like masses at the ends

    OpenAIRE

    Hadasz, Leszek

    1999-01-01

    We calculate (using zeta function regularization) the Casimir energy of the rotating Nambu-Goto string with the Gauss-Bonnet term in the action and point-like masses at the ends. The resulting value turns out to be negative for all values of the parameters of the model.

  20. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  1. Lack of Bystander Effects From High-LET Radiation For Early Cytogenetic End Points

    International Nuclear Information System (INIS)

    Groesser, Torsten; Cooper, Brian; Rydberg, Bjorn

    2008-01-01

    The aim of this work was to study radiation-induced bystander effects for early cytogenetic end points in various cell lines using the medium transfer technique after exposure to high- and low-LET radiation. Cells were exposed to 20 MeV/nucleon nitrogen ions, 968 MeV/nucleon iron ions, or 575 MeV/nucleon iron ions followed by transfer of the conditioned medium from the irradiated cells to unirradiated test cells. The effects studied included DNA double-strand break induction, γ-H2AX focus formation, induction of chromatid breaks in prematurely condensed chromosomes, and micronucleus formation using DNA repair-proficient and -deficient hamster and human cell lines (xrs6, V79, SW48, MO59K and MO59J). Cell survival was also measured in SW48 bystander cells using X rays. Although it was occasionally possible to detect an increase in chromatid break levels using nitrogen ions and to see a higher number of γ-H2AX foci using nitrogen and iron ions in xrs6 bystander cells in single experiments, the results were not reproducible. After we pooled all the data, we could not verify a significant bystander effect for any of these end points. Also, we did not detect a significant bystander effect for DSB induction or micronucleus formation in these cell lines or for clonogenic survival in SW48 cells. The data suggest that DNA damage and cytogenetic changes are not induced in bystander cells. In contrast, data in the literature show pronounced bystander effects in a variety of cell lines, including clonogenic survival in SW48 cells and induction of chromatid breaks and micronuclei in hamster cells. To reconcile these conflicting data, it is possible that the epigenetic status of the specific cell line or the precise culture conditions and medium supplements, such as serum, may be critical for inducing bystander effects.

  2. Feature Extraction from 3D Point Cloud Data Based on Discrete Curves

    Directory of Open Access Journals (Sweden)

    Yi An

    2013-01-01

    Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting the geometric features from 3D point cloud data based on discrete curves. We extract the discrete curves from 3D point cloud data and study the behaviors of chord lengths, angle variations, and principal curvatures at the geometric features in the discrete curves. Then, the corresponding similarity indicators are defined. Based on the similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of 3D point cloud data. The threshold values of the similarity indicators are taken from [0,1], which characterize the relative relationship and make the threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.
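The angle-variation idea can be made concrete. A minimal sketch in which a hypothetical [0, 1]-valued indicator is built from turning angles along a polyline (the paper combines several indicators, including chord lengths and principal curvatures):

```python
import math

def angle_indicator(points):
    """For each interior vertex of a 2-D polyline, return a value in
    [0, 1] proportional to the turning angle there (0 = straight ahead,
    1 = full reversal). Illustrative stand-in for the paper's
    similarity indicators."""
    out = []
    for i in range(1, len(points) - 1):
        ax, ay = points[i][0] - points[i-1][0], points[i][1] - points[i-1][1]
        bx, by = points[i+1][0] - points[i][0], points[i+1][1] - points[i][1]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        cosang = max(-1.0, min(1.0, cosang))   # guard rounding
        out.append(math.acos(cosang) / math.pi)
    return out

def feature_points(points, threshold=0.25):
    """Indices of vertices whose indicator exceeds a threshold in [0, 1]."""
    return [i + 1 for i, v in enumerate(angle_indicator(points))
            if v >= threshold]
```

A right-angle corner scores 0.5 on this indicator, so it survives any threshold up to 0.5, while collinear runs score near 0 and are discarded.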

  3. Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods

    Directory of Open Access Journals (Sweden)

    Aaron A. Velasco

    2016-06-01

    Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries) constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret differences in the substructure based on its density properties has proven efficient; however, the inherent non-uniqueness associated with most geophysical datasets makes this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed to characterize 2-dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models with varying complexity and illustrate the behavior of the algorithm using different initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Some of the advantages of using this approach for structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface) and the reduction of the solution space contingent on these boundary conditions.

  4. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

    Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy to approximate geographical surfaces and for terrain modelling. In this paper we frame the task of interactive image segmentation as a semi-supervised interpolation in which an interpolating function, learned from the user-provided seed points, is used to predict the labels of unlabeled pixels; the spline function used in the semi-supervised interpolation is MQ. This semi-supervised interpolation framework has a closed-form solution which, together with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on the standard datasets show that MQ outperforms other regression-based methods (GEBS, Ridge Regression and Logistic Regression) and popular methods such as Graph Cut, Random Walk and Random Forest.
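The closed-form solution mentioned above amounts to solving a dense linear system in the MQ basis: interpolation weights are found so the interpolant matches the seed labels, and unlabeled points are then classified by the sign of the interpolant. A hedged 1-D sketch (the shape parameter c and the tiny dense solver are illustrative, not the paper's implementation):

```python
import math

def mq(r, c=1.0):
    """Hardy's multiquadric basis: phi(r) = sqrt(r^2 + c^2)."""
    return math.sqrt(r * r + c * c)

def solve(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mq_interpolant(seeds, labels, c=1.0):
    """Closed-form MQ interpolation of labelled seed points (1-D here):
    returns f(x) = sum_j w_j * phi(|x - x_j|) with weights chosen so
    that f(x_i) = label_i at every seed."""
    A = [[mq(abs(xi - xj), c) for xj in seeds] for xi in seeds]
    w = solve(A, list(labels))
    return lambda x: sum(wj * mq(abs(x - xj), c) for wj, xj in zip(w, seeds))
```

With seed labels -1 (background) and +1 (foreground), the sign of the interpolant at an unlabeled location plays the role of the predicted segmentation label.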

  5. Mid-point for open-ended income category and the effect of equivalence scales on the income-health relationship.

    Science.gov (United States)

    Celeste, Roger Keller; Bastos, João Luiz

    2013-12-01

    To estimate the mid-point of an open-ended income category and to assess the impact of two equivalence scales on income-health associations. Data were obtained from the 2010 Brazilian Oral Health Survey (Pesquisa Nacional de Saúde Bucal--SBBrasil 2010). Income was converted from categorical to two continuous variables (per capita and equivalized) for each mid-point. The median mid-point was R$ 14,523.50 and the mean, R$ 24,507.10. When per capita income was applied, 53% of the population were below the poverty line, compared with 15% with equivalized income. The magnitude of income-health associations was similar for continuous income, but categorized equivalized income tended to decrease the strength of association.
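The two conversions described in the abstract (assigning a mid-point to each income bracket, including a rule for the open-ended top bracket, and adjusting by an equivalence scale) can be sketched as follows; the 1.5x top-bracket factor and the square-root scale are common illustrative choices, not necessarily those used in the paper:

```python
import math

def category_income(lower, upper, open_top_factor=1.5):
    """Mid-point of a closed income bracket, or a rule-of-thumb value
    (lower bound x factor) for the open-ended top bracket."""
    if upper is None:                      # open-ended category
        return lower * open_top_factor
    return (lower + upper) / 2.0

def per_capita(income, household_size):
    """Household income divided equally among members."""
    return income / household_size

def equivalized(income, household_size):
    """Square-root equivalence scale: one common choice reflecting
    economies of scale within the household."""
    return income / math.sqrt(household_size)
```

Because sqrt(n) < n for n > 1, equivalized income exceeds per-capita income for any multi-person household, which is consistent with the abstract's finding that far fewer people fall below the poverty line under the equivalence scale.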

  6. MSGD: Scalable back-end for indoor magnetic field-based GraphSLAM

    OpenAIRE

    Gao, C; Harle, Robert Keith

    2017-01-01

    Simultaneous Localisation and Mapping (SLAM) systems that recover the trajectory of a robot or mobile device are characterised by a front-end and back-end. The front-end uses sensor observations to identify loop closures; the back-end optimises the estimated trajectory to be consistent with these closures. The GraphSLAM framework formulates the back-end problem as a graph-based optimisation on a pose graph. This paper describes a back-end system optimised for very dense sequence-based lo...

  7. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration

    Directory of Open Access Journals (Sweden)

    Yanzi Miao

    2016-03-01

    Due to GPS restrictions, an inertial sensor is usually used to estimate the location of indoor mobile robots. However, it is difficult to achieve high-accuracy localization and control by inertial sensors alone. In this paper, a new method is proposed to estimate an indoor mobile robot pose with six degrees of freedom based on an improved 3D-Normal Distributions Transform algorithm (3D-NDT). First, point cloud data are captured by a Kinect sensor and segmented according to the distance to the robot. After the segmentation, the input point cloud data are processed by the Approximate Voxel Grid Filter algorithm in different sized voxel grids. Second, the initial registration and precise registration are performed respectively according to the distance to the sensor. The most distant point cloud data use the 3D-Normal Distributions Transform algorithm (3D-NDT) with large-sized voxel grids for initial registration, based on the transformation matrix from the odometry method. The closest point cloud data use the 3D-NDT algorithm with small-sized voxel grids for precise registration. After the registrations above, a final transformation matrix is obtained and coordinated. Based on this transformation matrix, the pose estimation problem of the indoor mobile robot is solved. Test results show that this method can obtain accurate robot pose estimation and has better robustness.

  8. A Case Study Application of the Aggregate Exposure Pathway (AEP) and Adverse Outcome Pathway (AOP) Frameworks to Facilitate the Integration of Human Health and Ecological End Points for Cumulative Risk Assessment (CRA)

    Science.gov (United States)

    Cumulative risk assessment (CRA) methods promote the use of a conceptual site model (CSM) to apportion exposures and integrate risk from multiple stressors. While CSMs may encompass multiple species, evaluating end points across taxa can be challenging due to data availability an...

  9. Collective mass and zero-point energy in the generator-coordinate method

    International Nuclear Information System (INIS)

    Fiolhais, C.

    1982-01-01

    The aim of the present thesis is the study of the collective mass parameters and the zero-point energies in the GCM framework, with special regard to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation, the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energy occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Finally, a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the basis of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI)

  10. Understanding the effects of electronic polarization and delocalization on charge-transport levels in oligoacene systems

    KAUST Repository

    Sutton, Christopher; Tummala, Naga Rajesh; Kemper, Travis; Aziz, Saadullah G.; Sears, John; Coropceanu, Veaceslav; Bredas, Jean-Luc

    2017-01-01

    Electronic polarization and charge delocalization are important aspects that affect the charge-transport levels in organic materials. Here, using a quantum mechanical/embedded-charge (QM/EC) approach based on a combination of the long-range corrected ωB97X-D exchange-correlation functional (QM) and the charge model 5 (CM5) point-charge model (EC), we evaluate the vertical detachment energies and polarization energies of various sizes of crystalline and amorphous anionic oligoacene clusters. Our results indicate that QM/EC calculations yield vertical detachment energies and polarization energies that compare well with the experimental values obtained from ultraviolet photoemission spectroscopy measurements. In order to understand the effect of charge delocalization on the transport levels, we considered crystalline naphthalene systems with QM regions including one or five molecules. The results for these systems show that the delocalization and polarization effects are additive; therefore, allowing for electron delocalization by increasing the size of the QM region leads to additional stabilization of the transport levels. Published by AIP Publishing.

  12. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials in-between the gamma-source and the detector, are considered the core of this (ET) method. The full-energy peak efficiency values calculated by the (NAM) are found to be in good agreement with the measured experimental data.
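The core of the efficiency-transfer idea can be sketched for the simplest geometry, an on-axis point source facing a bare circular detector of radius r, where the solid angle has a closed form. The attenuation terms in the crystal, end-cap and intervening absorbers, which the paper's effective solid angle includes, are omitted from this bare-geometry sketch:

```python
import math

def solid_angle(d, r):
    """Solid angle subtended by a circular detector face of radius r
    at an on-axis point source a distance d away:
    Omega = 2*pi*(1 - d / sqrt(d^2 + r^2))."""
    return 2.0 * math.pi * (1.0 - d / math.hypot(d, r))

def transfer_efficiency(eff_ref, d_ref, d_new, r):
    """Efficiency-transfer estimate: scale a reference full-energy peak
    efficiency, measured at distance d_ref, by the ratio of solid
    angles at the new and reference positions."""
    return eff_ref * solid_angle(d_new, r) / solid_angle(d_ref, r)
```

Moving the source farther away shrinks the solid angle, so the transferred efficiency drops below the reference value, as expected.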

  13. C-point and V-point singularity lattice formation and index sign conversion methods

    Science.gov (United States)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincare-Hopf index of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further there is no change in the lattice structure, and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly the positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an

  14. Sensitive detection of point mutation by electrochemiluminescence and DNA ligase-based assay

    Science.gov (United States)

    Zhou, Huijuan; Wu, Baoyan

    2008-12-01

    The technology of single-base mutation detection plays an increasingly important role in the diagnosis and prognosis of genetic-based diseases. Here we report a new method for the analysis of point mutations in genomic DNA through the integration of an allele-specific oligonucleotide ligation assay (OLA) with a magnetic beads-based electrochemiluminescence (ECL) detection scheme. In this assay the tris(bipyridine)ruthenium (TBR) labeled probe and the biotinylated probe are designed to be perfectly complementary to the mutant target, so that a ligation can be generated between those two probes by Taq DNA ligase in the presence of the mutant target. If there is an allele mismatch, the ligation does not take place. The ligation products are then captured onto streptavidin-coated paramagnetic beads and detected by measuring the ECL signal of the TBR label. Results showed that the new method has a detection limit down to 10 fmol and was successfully applied to the identification of point mutations from ASTC-α-1, PANC-1 and normal cell lines in codon 273 of the TP53 oncogene. In summary, this method provides a sensitive, cost-effective and easy-to-operate approach for point mutation detection.

  15. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.

  16. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    International Nuclear Information System (INIS)

    Shao Yiping; Yao Rutao; Ma Tianyu

    2008-01-01

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout. In
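The uniform-probability argument above suggests a simple calibration recipe: under flood irradiation the true DOI is uniform over the crystal length, so the empirical CDF of the measured light-sharing ratio maps each ratio directly to a depth. A hedged sketch of that mapping (an illustration of the principle, not the authors' implementation):

```python
import bisect

def doi_lookup(ratios, length):
    """Build a ratio -> DOI mapping from a uniform flood measurement.
    Because the true DOI is uniform over [0, length], the fraction of
    calibration events with ratio <= r estimates the fractional depth,
    giving DOI(r) = length * CDF(r)."""
    srt = sorted(ratios)
    n = len(srt)
    def doi(r):
        k = bisect.bisect_right(srt, r)   # events with ratio <= r
        return length * k / n
    return doi
```

No interaction positions need to be known: one flood run fixes the whole DOI function, and the same recipe can be repeated per crystal in an array.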

  17. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in richly textured regions, which not only slows feature description but also aggravates the burden of subsequent processing; if the threshold is set too high, feature points will be lacking in poorly textured areas. To solve these problems, this paper proposes a threshold auto-adjustment method for feature extraction based on a grid. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting the feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of subsequent work.
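The per-cell threshold loop described above can be sketched directly; the grid size, decrement step and target count below are illustrative parameters, and the corner-response scores stand in for whatever detector the pipeline uses:

```python
def grid_feature_points(scores, grid=2, target=1, step=0.1, floor=0.0):
    """Per-grid-cell threshold adaptation: start from the cell's highest
    response and lower the threshold until the cell yields at least
    `target` feature points (or the floor is reached). `scores` is a
    2-D list of corner-response values; returns (row, col) locations."""
    h, w = len(scores), len(scores[0])
    feats = []
    for gy in range(0, h, grid):
        for gx in range(0, w, grid):
            cell = [(y, x) for y in range(gy, min(gy + grid, h))
                           for x in range(gx, min(gx + grid, w))]
            thr = max(scores[y][x] for y, x in cell)
            while True:
                picked = [(y, x) for y, x in cell if scores[y][x] >= thr]
                if len(picked) >= target or thr <= floor:
                    break
                thr -= step   # relax the threshold for this cell only
            feats.extend(picked)
    return feats
```

Because each cell relaxes its own threshold independently, weakly textured cells still contribute points, while strongly textured cells keep a high bar, which is the uniformity effect the abstract reports.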

  18. Team-Based Models for End-of-Life Care: An Evidence-Based Analysis

    Science.gov (United States)

    2014-01-01

    Background End of life refers to the period when people are living with advanced illness that will not stabilize and from which they will not recover and will eventually die. It is not limited to the period immediately before death. Multiple services are required to support people and their families during this time period. The model of care used to deliver these services can affect the quality of the care they receive. Objectives Our objective was to determine whether an optimal team-based model of care exists for service delivery at end of life. In systematically reviewing such models, we considered their core components: team membership, services offered, modes of patient contact, and setting. Data Sources A literature search was performed on October 14, 2013, using Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, for studies published from January 1, 2000, to October 14, 2013. Review Methods Abstracts were reviewed by a single reviewer and full-text articles were obtained that met the inclusion criteria. Studies were included if they evaluated a team model of care compared with usual care in an end-of-life adult population. A team was defined as having at least 2 health care disciplines represented. Studies were limited to English publications. A meta-analysis was completed to obtain pooled effect estimates where data permitted. The GRADE quality of the evidence was evaluated. Results Our literature search located 10 randomized controlled trials which, among them, evaluated the following 6 team-based models of care: hospital, direct contact; home, direct contact; home, indirect contact; comprehensive, indirect contact; comprehensive, direct contact; and comprehensive, direct, and early contact. Direct contact is when team members see the patient; indirect contact is when they advise another health care practitioner (e.g., a family doctor) who sees

  19. A Molecular Dynamics (MD) and Quantum Mechanics/Molecular Mechanics (QM/MM) Study on Ornithine Cyclodeaminase (OCD): A Tale of Two Iminiums

    Science.gov (United States)

    Ion, Bogdan F.; Bushnell, Eric A. C.; De Luna, Phil; Gauld, James W.

    2012-01-01

    Ornithine cyclodeaminase (OCD) is an NAD+-dependent deaminase that is found in bacterial species such as Pseudomonas putida. Importantly, it catalyzes the direct conversion of the amino acid L-ornithine to L-proline. Using molecular dynamics (MD) and a hybrid quantum mechanics/molecular mechanics (QM/MM) method in the ONIOM formalism, the catalytic mechanism of OCD has been examined. The rate limiting step is calculated to be the initial step in the overall mechanism: hydride transfer from the L-ornithine’s Cα–H group to the NAD+ cofactor with concomitant formation of a Cα=NH2+ Schiff base with a barrier of 90.6 kJ mol−1. Importantly, no water is observed within the active site during the MD simulations suitably positioned to hydrolyze the Cα=NH2+ intermediate to form the corresponding carbonyl. Instead, the reaction proceeds via a non-hydrolytic mechanism involving direct nucleophilic attack of the δ-amine at the Cα-position. This is then followed by cleavage and loss of the α-NH2 group to give the Δ1-pyrroline-2-carboxylate that is subsequently reduced to L-proline. PMID:23202934

  1. Ratio-based estimators for a change point in persistence.

    Science.gov (United States)

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.
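A ratio-based break point estimator of this family can be sketched as follows. This is an illustrative reconstruction (demeaning within each subsample, 20% trimming, squared partial sums on each side of the candidate break), not the cited authors' exact statistic:

```python
def ratio_break_estimator(x, trim=0.2):
    """Sketch of a Kim-type ratio estimator for a change in persistence
    from I(0) to I(1): for each candidate break point k in the trimmed
    search interval, compare mean squared partial sums of the demeaned
    post-break sample against those of the pre-break sample, and take
    the maximizer of the ratio as the break date estimate. (The cited
    papers show such estimators are inconsistent once the mean is
    estimated, as it is here.)"""
    T = len(x)
    lo, hi = int(T * trim), int(T * (1 - trim))
    best_k, best_val = lo, float("-inf")
    for k in range(lo, hi):
        pre, post = x[:k], x[k:]
        m0, m1 = sum(pre) / len(pre), sum(post) / len(post)
        s = num = 0.0
        for v in post:               # partial sums of demeaned post-break data
            s += v - m1
            num += s * s
        num /= len(post) ** 2
        s = den = 0.0
        for v in pre:                # partial sums of demeaned pre-break data
            s += v - m0
            den += s * s
        den /= len(pre) ** 2
        if den > 0 and num / den > best_val:
            best_val, best_k = num / den, k
    return best_k
```

Partial sums of an I(1) segment grow like its level, so the ratio tends to be large once the post-break window is dominated by the unit-root regime; the demeaning step is exactly what the abstract identifies as the source of inconsistency.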

  2. AN AUTOMATED END-TO-END MULTI-AGENT QOS BASED ARCHITECTURE FOR SELECTION OF GEOSPATIAL WEB SERVICES

    Directory of Open Access Journals (Sweden)

    M. Shah

    2012-07-01

    With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution that provides the best-fit web service to the service requester based on QoS.
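    The core of any QoS-driven selection step is ranking candidate services by their non-functional attributes. A minimal sketch: normalize each QoS attribute across candidates, invert "cost-type" attributes where lower is better, and rank by a weighted sum. The attribute names and weights here are illustrative assumptions; the paper's multi-agent architecture adds service discovery and negotiation on top of such a scoring step.

```python
# Illustrative QoS-based ranking of candidate web services by weighted
# sum of min-max normalized attributes (cost-type attributes inverted).
def rank_services(services, weights, cost_attrs=frozenset({'latency', 'price'})):
    """services: dict name -> dict of QoS attribute values.
    Returns service names sorted best-first."""
    attrs = list(weights)
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}

    def norm(a, v):
        span = hi[a] - lo[a]
        x = (v - lo[a]) / span if span else 1.0
        return 1.0 - x if a in cost_attrs else x  # lower latency/price is better

    scores = {name: sum(weights[a] * norm(a, s[a]) for a in attrs)
              for name, s in services.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

    For example, a low-latency but less-available service can outrank a highly available but slow one depending on the weights chosen by the requester.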

  3. A new integral method for solving the point reactor neutron kinetics equations

    International Nuclear Information System (INIS)

    Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian

    2009-01-01

    A numerical integral method is described and investigated that efficiently solves the point kinetics equations by using a better basis function (BBF) to approximate the neutron density within each time-step integration. The approach is based on an exact analytic integration of the neutron density equation, in which the stiffness of the equations is overcome by a fully implicit formulation. The procedure is tested with a variety of reactivity functions, including step reactivity insertions, ramp inputs, and oscillatory reactivity changes. The solution of the better-basis-function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The method can be used for real-time forecasting in power reactors in order to prevent reactivity accidents.
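    The stiff system in question can be illustrated with a one-delayed-group point kinetics model advanced by a fully implicit (backward Euler) step, which is the generic idea behind the implicit treatment of stiffness; this sketch is not the paper's better-basis-function scheme, and the kinetics parameters below are typical illustrative values, not taken from the paper.

```python
import numpy as np

# One-delayed-group point kinetics:
#   dn/dt = ((rho - beta)/Lam) n + lam c
#   dc/dt = (beta/Lam) n - lam c
# advanced with backward Euler (a 2x2 linear solve per step), which
# remains stable despite the stiffness from the small generation time.
def point_kinetics_implicit(rho, beta=0.0065, Lam=1e-4, lam=0.08,
                            t_end=1.0, dt=1e-3):
    n, c = 1.0, beta / (Lam * lam)   # steady-state initial condition
    t = 0.0
    I = np.eye(2)
    while t < t_end - 1e-12:
        A = np.array([[(rho(t + dt) - beta) / Lam, lam],
                      [beta / Lam,                -lam]])
        n, c = np.linalg.solve(I - dt * A, np.array([n, c]))
        t += dt
    return n

n_critical = point_kinetics_implicit(lambda t: 0.0)    # zero reactivity
n_step = point_kinetics_implicit(lambda t: 0.001)      # step insertion
```

    With zero reactivity the steady state is preserved exactly, and a sub-prompt-critical step insertion produces the expected power rise, even with a time step far larger than the neutron generation time.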

  4. Strike Point Control on EAST Using an Isoflux Control Method

    International Nuclear Information System (INIS)

    Xing Zhe; Xiao Bingjia; Luo Zhengping; Walker, M. L.; Humphreys, D. A.

    2015-01-01

    For advanced tokamaks, particle deposition and thermal load on the divertor are a major challenge. By moving the strike points on the divertor target plates, the position of particle deposition and thermal load can be shifted. The poloidal field (PF) coil currents can be adjusted to achieve feedback control of the strike point position. Using an isoflux control method, the strike point position can be controlled by controlling the X-point position. On the basis of experimental data, we establish relational expressions between the X-point position and the strike point position. Benchmark experiments are carried out to validate the correctness and robustness of the control methods. The strike point position is successfully controlled following our command in EAST operation. (paper)

  5. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the ALS-recovered canopy height model (CHM) as a realization of a point process of circles. Unlike traditional marked point processes, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.

  6. Chemical reactivity and spectroscopy explored from QM/MM molecular dynamics simulations using the LIO code

    Science.gov (United States)

    Marcolongo, Juan P.; Zeida, Ari; Semelak, Jonathan A.; Foglia, Nicolás O.; Morzan, Uriel N.; Estrin, Dario A.; González Lebrero, Mariano C.; Scherlis, Damián A.

    2018-03-01

    In this work we present the current advances in the development and the applications of LIO, a lab-made code designed for density functional theory calculations in graphical processing units (GPU), that can be coupled with different classical molecular dynamics engines. This code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing for an exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and also for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.

  7. Mid-point for open-ended income category and the effect of equivalence scales on the income-health relationship

    Directory of Open Access Journals (Sweden)

    Roger Keller Celeste

    2013-12-01

    To estimate the mid-point of an open-ended income category and to assess the impact of two equivalence scales on income-health associations. Data were obtained from the 2010 Brazilian Oral Health Survey (Pesquisa Nacional de Saúde Bucal – SBBrasil 2010). Income was converted from categorical to two continuous variables (per capita and equivalized) for each mid-point. The median mid-point was R$ 14,523.50 and the mean, R$ 24,507.10. When per capita income was applied, 53% of the population were below the poverty line, compared with 15% with equivalized income. The magnitude of income-health associations was similar for continuous income, but categorized equivalized income tended to decrease the strength of association.
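    The two conversions the abstract mentions can be sketched in a few lines: closed income brackets get their arithmetic mid-point, the open-ended top bracket gets a Pareto-tail mean, and household income is equivalized by dividing by household size raised to an elasticity (theta = 1 reproduces per-capita income). The bracket boundaries, Pareto shape, and elasticity below are made-up illustrations, not the SBBrasil 2010 categories or the paper's estimates.

```python
# Sketch of mid-point assignment for categorical income, including an
# open-ended top bracket handled with a Pareto-tail mean.
def bracket_midpoints(bounds, pareto_alpha=2.0):
    """bounds: list of (lo, hi) pairs; hi=None marks the open-ended class.
    Closed classes get (lo + hi)/2; the open class gets the Pareto-tail
    mean lo * alpha / (alpha - 1)."""
    mids = []
    for lo, hi in bounds:
        if hi is None:
            mids.append(lo * pareto_alpha / (pareto_alpha - 1.0))
        else:
            mids.append((lo + hi) / 2.0)
    return mids

def equivalized(income, household_size, theta=0.5):
    """Equivalence-scale income: divide by size**theta
    (theta=1.0 gives per-capita income)."""
    return income / household_size ** theta

mids = bracket_midpoints([(0, 500), (500, 1500), (1500, 5000), (5000, None)])
```

    The gap between per-capita and equivalized poverty counts reported in the abstract comes precisely from this theta: dividing by 4 versus dividing by 2 for a four-person household.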

  8. Novel Ratio Subtraction and Isoabsorptive Point Methods for ...

    African Journals Online (AJOL)

    Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...

  9. Methods and considerations to determine sphere center from terrestrial laser scanner point cloud data

    International Nuclear Information System (INIS)

    Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine

    2017-01-01

    The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the use of sphere targets for this effort and methods to minimize errors in the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud and a study of some of the factors that influence the determination of sphere centers. (paper)
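    Once sphere points are segmented from the cloud, a common way to determine the center is an algebraic least-squares sphere fit, which reduces to a single linear solve. This is a generic illustration of that step, not NIST's specific procedure or error analysis.

```python
import numpy as np

# Algebraic least-squares sphere fit: writing |x - c|^2 = r^2 as
# 2 c·x + (r^2 - |c|^2) = |x|^2 gives a linear system in (c, d).
def fit_sphere(pts):
    """pts: (N, 3) array of points on (or near) a sphere surface.
    Returns (center, radius)."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

    On noise-free synthetic data the fit recovers the center and radius exactly; with real TLS data, factors such as partial sphere coverage and range noise degrade the estimate, which is what the paper studies.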

  10. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can help wind farm operators carry out prompt and cost-effective maintenance on yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to evaluate the power performance in depth. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered to be the yaw angle measurement error in our work. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms prove the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results can help wind farm operators make prompt adjustments when a large yaw angle measurement error exists.
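    The binning-and-indicator idea can be sketched as follows: bin samples by wind speed, compute each yaw angle's mean power normalized by the mean power of its wind-speed bin, average those normalized scores into one indicator per yaw angle, and take the yaw angle with the largest indicator as the estimated zero-point shift. The bin edges, discretized yaw values, and indicator form below are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

# Estimate the yaw zero-point shift from SCADA-like arrays by comparing
# speed-normalized mean power across discretized yaw angles.
def estimate_yaw_shift(wind, yaw, power, yaw_bins, speed_edges):
    indicators = []
    half_width = (yaw_bins[1] - yaw_bins[0]) / 2
    for y in yaw_bins:
        sel = np.abs(yaw - y) < half_width
        scores = []
        for lo, hi in zip(speed_edges[:-1], speed_edges[1:]):
            in_speed = (wind >= lo) & (wind < hi)
            if (sel & in_speed).sum() and in_speed.sum():
                # normalize within the speed bin to remove the speed effect
                scores.append(power[sel & in_speed].mean()
                              / power[in_speed].mean())
        indicators.append(np.mean(scores) if scores else -np.inf)
    return yaw_bins[int(np.argmax(indicators))]
```

    On synthetic data where power peaks at a 5-degree yaw offset (a cosine-cubed power response, a common modeling assumption), the estimator recovers that offset.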

  11. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the features and to take neighborhood context information into account. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed. The proposed method is validated on the benchmark of the ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large LiDAR datasets.

  12. Beginning from the End: Strategies of Composition in Lyrical Improvisation with End Rhyme

    Directory of Open Access Journals (Sweden)

    Venla Sykäri

    2017-03-01

    This essay examines the basic principles of constructing improvised verses with end rhyme in three contemporary cultures: _mandinadhes_, Mallorcan _gloses_, and Finnish freestyle rap. The study is based on ethnographic interviews in which improvisers analyze their methods of composition, complemented by a textual analysis of examples of performances in the given traditions. Sykäri shows that competent improvisers master complex cognitive methods when they create lines that end with the poetic device of end rhyme, in particular when they structure the discourse so that the strong arguments are situated at the end of the structural unit of composition. This “reversed” method reveals a tendency to use parallel phonic patterns in a way that is largely the opposite of those employed with semantic (or canonical) parallelism.

  13. Quantum-Mechanics Methodologies in Drug Discovery: Applications of Docking and Scoring in Lead Optimization.

    Science.gov (United States)

    Crespo, Alejandro; Rodriguez-Granillo, Agustina; Lim, Victoria T

    2017-01-01

    The development and application of quantum mechanics (QM) methodologies in computer-aided drug design have flourished in the last 10 years. Despite the natural advantage of QM methods in predicting binding affinities with a higher level of theory than methods based on molecular mechanics (MM), there are only a few examples where diverse sets of protein-ligand targets have been evaluated simultaneously. In this work, we review recent advances in QM docking and scoring for those cases in which a systematic analysis has been performed. In addition, we introduce and validate a simplified QM/MM expression to compute protein-ligand binding energies. Overall, QM-based scoring functions are generally better at predicting ligand affinities than those based on classical mechanics. However, the agreement between experimental activities and calculated binding energies is highly dependent on the specific chemical series considered. The advantage of more accurate QM methods is evident in cases where charge transfer and polarization effects are important, for example when metals are involved in the binding process or when dispersion forces play a significant role, as in the case of hydrophobic or stacking interactions.

  14. The Study of Fault Location for Front-End Electronics System

    International Nuclear Information System (INIS)

    Zhang Fan; Wang Dong; Huang Guangming; Zhou Daicui

    2009-01-01

    Some devices on the 250 recently produced ALICE/PHOS front-end electronics (FEE) cards had been partly or completely damaged during lead-free soldering. To alleviate the influence on the performance of the FEE system and to locate FPGA-related faults accurately, a fault-location method for the FEE system was developed based on a detailed study of the FPGA configuration scheme. The work emphasizes problems such as JTAG configuration of multiple devices, PS configuration based on EPC-series configuration devices, and automatic re-configuration of the FPGA. The results of testing and repairing a large number of FEE cards show that this location method can accurately and quickly pinpoint FPGA-related faults on the FEE cards. (authors)

  15. Calibration method for direct conversion receiver front-ends

    Directory of Open Access Journals (Sweden)

    R. Müller

    2008-05-01

    Process tolerances induced by technology cause analog circuit characteristics to deviate from specification. For direct conversion receiver front-ends, a system-level calibration method is presented. Device malfunctions are compensated by tuning dominant circuit parameters. To this end, optimization techniques are applied that use measured values and special evaluation functions.

  16. Natural Preconditioning and Iterative Methods for Saddle Point Systems

    KAUST Repository

    Pestana, Jennifer

    2015-01-01

    The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem, and their effectiveness, in terms of rapidity of convergence, is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix, on which iteration convergence depends.

  17. Genealogical series method. Hyperpolar points screen effect

    International Nuclear Information System (INIS)

    Gorbatov, A.M.

    1991-01-01

    The fundamental quantities of the genealogical series method, the genealogical integrals (sandwiches), have been investigated. A hyperpolar-point screen effect has been found. It allows one to calculate the sandwiches for fermion systems with a large number of particles and to ascertain the validity of the iterated-potential method as well. For the first time, the genealogical series method has been realized numerically for a central spin-independent potential.

  18. An approach of point cloud denoising based on improved bilateral filtering

    Science.gov (United States)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that the noise-removal effect is improved compared with standard bilateral filtering. Offline, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth image processing time and improves the quality of the resulting point cloud.
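    For reference, the baseline the paper improves on, a plain bilateral filter, weights each neighbor by both spatial distance and depth similarity, which smooths noise while preserving depth discontinuities. The sketch below is the standard filter on a depth image; the paper's LBF variant, whose exact windowing is not reproduced here, restructures this computation for speed.

```python
import numpy as np

# Standard bilateral filter on a depth image: each output pixel is a
# weighted mean of its neighborhood, with weights = spatial Gaussian
# times a range Gaussian on the depth difference.
def bilateral_filter(depth, radius=2, sigma_s=1.0, sigma_r=10.0):
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(depth.astype(float), radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - depth[i, j]) ** 2
                                 / (2 * sigma_r ** 2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

    A constant depth image passes through unchanged, and a sharp depth step survives filtering because the range kernel suppresses weights across the discontinuity.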

  19. Hand-eye calibration for rigid laparoscopes using an invariant point.

    Science.gov (United States)

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.

  20. Retinal biometrics based on Iterative Closest Point algorithm.

    Science.gov (United States)

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person because it rarely changes over time. Therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
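    The decision stage of the described pipeline reduces to a Jaccard similarity coefficient between the aligned binary vessel masks; the ICP alignment itself is omitted here, and the acceptance threshold is an assumption rather than the paper's tuned value.

```python
import numpy as np

# Jaccard similarity coefficient (JSC) of two binary masks:
# |intersection| / |union|.
def jaccard(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def same_person(mask_a, mask_b, threshold=0.5):
    """Accept the pair if the vessel-region overlap is high enough
    (threshold is an illustrative choice)."""
    return jaccard(mask_a, mask_b) >= threshold
```

    Identical masks score 1.0, while partially overlapping masks score between 0 and 1, so a single threshold separates correct from incorrect pairs.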

  1. The report of Task Group 100 of the AAPM: Application of risk analysis methods to radiation therapy quality management

    International Nuclear Information System (INIS)

    Huq, M. Saiful; Fraass, Benedick A.; Dunscombe, Peter B.; Gibbons, John P.; Ibbott, Geoffrey S.; Mundt, Arno J.; Mutic, Sasa; Palta, Jatinder R.; Rath, Frank; Thomadsen, Bruce R.; Williamson, Jeffrey F.; Yorke, Ellen D.

    2016-01-01

    The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, such as many of those included in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for “intensity modulated radiation therapy (IMRT)” as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. The development and implementation of risk-assessment techniques will make radiation

  4. Text analysis of open-ended survey responses : a complementary method to preference mapping

    NARCIS (Netherlands)

    ten Kleij, F; Musters, PAD

    The present study illustrates the use of computer-aided text analysis to evaluate the content of open-ended survey responses. During an in-hall test, different varieties of mayonnaise were evaluated by 165 respondents on a 10-point liking scale, with the option to freely comment on these

  5. System of end-to-end symmetric database encryption

    Science.gov (United States)

    Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.

    2018-05-01

    The article is devoted to the urgent problem of protecting databases from information leakage performed while bypassing access control mechanisms. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of the interaction of the information system components using a symmetric cryptographic algorithm. For this purpose, a key management method designed for use in a multi-user system has been developed and described; it is based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by transforming the user's password. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in memory after these transformations are completed. Algorithms for registering and authorizing a user, as well as for changing a password, are described, and methods for calculating the parts of the key during these operations are provided.
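    The split-key idea can be sketched as follows: one share is random and lives in the database, the other is derived from the user's password, and the working key is their combination, computed on demand and never stored. The KDF parameters and the XOR combination below are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import os

# Sketch of a distributed key representation: db_share is stored in the
# database, pwd_share is re-derived from the password via PBKDF2, and
# the working key is the XOR of the two shares.
def make_shares(password: str, salt: bytes):
    db_share = os.urandom(32)  # random share kept in the database
    pwd_share = hashlib.pbkdf2_hmac('sha256', password.encode(),
                                    salt, 100_000)
    return db_share, pwd_share

def derive_key(password: str, salt: bytes, db_share: bytes) -> bytes:
    """Recompute the working key just before encryption/decryption;
    it is never persisted."""
    pwd_share = hashlib.pbkdf2_hmac('sha256', password.encode(),
                                    salt, 100_000)
    return bytes(a ^ b for a, b in zip(db_share, pwd_share))
```

    Neither share alone reveals the key: a database dump exposes only db_share, and a wrong password yields a different key.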

  6. End points of planar reaching movements are disrupted by small force pulses: an evaluation of the hypothesis of equifinality.

    Science.gov (United States)

    Popescu, F C; Rymer, W Z

    2000-11-01

    A single force pulse was applied unexpectedly to the arms of five normal human subjects during nonvisually guided planar reaching movements of 10-cm amplitude. The pulse was applied by a powered manipulandum, in a direction perpendicular to the motion of the hand, which gripped the manipulandum via a handle, at the beginning, the middle, or toward the end of the movement. It was small and brief (10 N, 10 ms), so that it was barely perceptible. We found that the end points of the perturbed motions were systematically different from those of the unperturbed movements. This difference, dubbed "terminal error," averaged 14.4 +/- 9.8% (mean +/- SD) of the movement distance. The terminal error was not necessarily in the direction of the perturbation, although it was affected by it, and it did not decrease significantly with practice. For example, while perturbations involving elbow extension resulted in a statistically significant shift in mean end point and target-acquisition frequency, results for flexion perturbations were not clearly affected. We argue that this error distribution is inconsistent with the "equilibrium point hypothesis" (EPH), which predicts minimal terminal error determined primarily by the variance in the command signal itself, a property referred to as "equifinality." This property reputedly derives from the "spring-like" properties of muscle and is enhanced by reflexes. To ensure that terminal errors were not due to mid-course voluntary corrections, we only accepted trials in which the final position was already established before such a voluntary response to the perturbation could have begun, that is, in a time interval shorter than the minimum reaction time (RT) for that subject. This RT was estimated for each subject in supplementary experiments in which the subject was instructed to move to a new target if perturbed and to the old target if no perturbation was detected. These RT movements were found to either stop or slow greatly at the original

  7. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution of stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
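
    As a sketch of the MEM/FOH idea, the linearized one-delayed-group point kinetics system can be discretized with SciPy's matrix-exponential-based `cont2discrete` routine under a First-Order Hold. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.signal import cont2discrete

# One-delayed-group point kinetics, state x = [n, C]
# (parameter values are illustrative assumptions)
beta, lam, Lam = 0.0065, 0.08, 1.0e-4
rho = 0.001                                   # constant reactivity over the step
A = np.array([[(rho - beta) / Lam, lam],
              [beta / Lam,        -lam]])     # stiff: widely separated eigenvalues
B = np.array([[1.0], [0.0]])                  # external source drives the n-equation
C_out, D = np.eye(2), np.zeros((2, 1))

dt = 0.01                                     # comparatively large sampling period
# Sampled-data model via the matrix exponential under a First-Order Hold
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C_out, D), dt, method='foh')

x = np.array([[1.0], [beta / (lam * Lam)]])   # delayed-critical equilibrium start
u = np.array([[0.0]])                         # no external source
for _ in range(100):                          # march 1 s of reactor time
    x = Ad @ x + Bd @ u
# n(t) rises slowly on the delayed-supercritical period implied by rho < beta
```

    Because `Ad` is built from the exact matrix exponential, the recursion remains stable and accurate at sampling periods where naive explicit integration of the stiff continuous system would require far smaller steps.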

  8. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    Science.gov (United States)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile emails, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1) are dimensioned for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines cannot support the bandwidth (BW) required by next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution-Data Optimized (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors: because of the air interface, there can be periods when the user equipment (UE) is unable to communicate with the Node B (usually referred to as link errors), and these errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between the active users; user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness
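
    The lead/lag compensation idea can be illustrated with a toy slot scheduler. This is a simplified sketch, not the authors' algorithm; `link_ok` is a hypothetical channel model standing in for the bursty, location-dependent link errors the abstract describes:

```python
from collections import defaultdict

def schedule(flows, weights, link_ok, n_slots):
    """Toy wireless fair scheduler with lead/lag compensation.
    Illustrative sketch only -- not the paper's algorithm."""
    lag = defaultdict(int)                 # compensation slots owed to each flow
    served = defaultdict(int)
    order = sorted(flows, key=lambda f: -weights[f])
    for t in range(n_slots):
        clean = [f for f in order if link_ok(f, t)]
        if not clean:
            continue                       # every link errored: slot is lost
        # lagging flows are compensated first, then weight decides
        f = max(clean, key=lambda g: (lag[g], weights[g]))
        served[f] += 1
        if lag[f] > 0:
            lag[f] -= 1                    # this slot repays accrued lag
        for g in order:                    # errored flows fall behind (lag)
            if g != f and not link_ok(g, t):
                lag[g] += 1
    return dict(served), dict(lag)


# flow 'b' is errored in slots 0-3, then recovers and is compensated
served, lag = schedule(['a', 'b'], {'a': 1, 'b': 1},
                       link_ok=lambda f, t: not (f == 'b' and t < 4),
                       n_slots=10)
```

    While 'b' is errored, 'a' leads by taking its slots; once 'b' recovers, its accumulated lag gives it priority until the deficit is repaid, after which normal weighted service resumes.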

  9. Continuously deformation monitoring of subway tunnel based on terrestrial point clouds

    NARCIS (Netherlands)

    Kang, Z.; Tuo, L.; Zlatanova, S.

    2012-01-01

    The deformation monitoring of subway tunnels is critically necessary. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-stations registration is replaced by section-controlled registration, so that the

  10. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...

  11. Chemical Reactivity and Spectroscopy Explored From QM/MM Molecular Dynamics Simulations Using the LIO Code

    Directory of Open Access Journals (Sweden)

    Juan P. Marcolongo

    2018-03-01

    In this work we present the current advances in the development and applications of LIO, a lab-made code designed for density functional theory calculations on graphics processing units (GPUs) that can be coupled with different classical molecular dynamics engines. This code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing for an exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and also for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.

  12. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM computes maximum likelihood estimates of the mixture parameters, and with the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled as the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, the dataset provided by the ISPRS was adopted for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
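
    The EM separation step can be reduced to a minimal sketch: a two-component 1-D Gaussian mixture over point elevations, with each point assigned to the higher-posterior component. The paper works on full point clouds and additionally uses intensity; the percentile-based initialisation below is an assumption:

```python
import numpy as np

def em_ground_filter(z, n_iter=50):
    """Two-component 1-D Gaussian-mixture EM over point elevations z.
    Returns a boolean mask that is True for points assigned to the
    lower-mean (ground) component. Simplified illustrative sketch."""
    z = np.asarray(z, float)
    # crude initialisation (assumption): split at the elevation quartiles
    mu = np.array([np.percentile(z, 25), np.percentile(z, 75)])
    var = np.array([z.var(), z.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        pdf = np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood update of the mixture parameters
        nk = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(z)
    labels = r.argmax(axis=1)          # hard assignment per point
    ground = int(np.argmin(mu))        # lower-mean component = ground
    return labels == ground
```

    On synthetic data with ground near z = 0 and canopy near z = 5, the mixture parameters converge in a few iterations and the posterior labels recover the two populations.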

  13. A resilience perspective to water risk management: case-study application of the adaptation tipping point method

    Science.gov (United States)

    Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris

    2010-05-01

    In a context of high uncertainty about hydrological variables due to climate change and other factors, the development of updated risk management approaches is as important as—if not more important than—the provision of improved data and forecasts of the future. Traditional approaches to adaptation attempt to manage future water risks to cities with the use of the predict-then-adapt method. This method uses hydrological change projections as the starting point to identify adaptive strategies, which is followed by analysing the cause-effect chain based on some sort of Pressures-State-Impact-Response (PSIR) scheme. The predict-then-adapt method presumes that it is possible to define a singular (optimal) adaptive strategy according to a most likely or average projection of future change. A key shortcoming of the method is, however, that the planning of water management structures is typically decoupled from forecast uncertainties and is, as such, inherently inflexible. This means that there is an increased risk of under- or over-adaptation, resulting in either mal-functioning or unnecessary costs. Rather than taking a traditional approach, responsible water risk management requires an alternative approach to adaptation that recognises and cultivates resiliency for change. The concept of resiliency relates to the capability of complex socio-technical systems to make aspirational levels of functioning attainable despite the occurrence of possible changes. Focusing on resiliency does not attempt to reduce uncertainty associated with future change, but rather to develop better ways of managing it. This makes it a particularly relevant perspective for adaptation to long-term hydrological change. Although resiliency is becoming more refined as a theory, the application of the concept to water risk management is still in an initial phase. Different methods are used in practice to support the implementation of a resilience-focused approach. Typically these approaches

  14. Application of expert-notice dialogue (END) method to assess students’ science communication ability on biology

    Science.gov (United States)

    Sriyati, S.; Amelia, D. N.; Soniyana, G. T.

    2018-05-01

    Students’ science communication ability can be assessed with the Expert-Notice Dialogue (END) method, which focuses on verbal explanations using graphs or images as a tool. This study aims to apply the END method to assess students’ science communication ability. The study was conducted in two high schools, with a sample of one class at each school (A and B). The number of experts was 8 students in class A and 7 in class B; the number of notices was 24 students in class A and 30 in class B. The material chosen for explanation by the experts was ecosystems in class A and plant classification in class B. The research instruments were a rubric of science communication ability, an observation rubric, a notice concept test and a notice questionnaire. The implementation was recorded with a video camera and then transcribed based on the science communication ability rubric. The results showed that the average science communication ability in classes A and B was 60% and 61.8%, respectively, both in the 'enough' category. Mastery of the notice concept was in the 'good' category, with averages of 79.10 in class A and 94.64 in class B. The notice questionnaire showed that the END method generally helps notices understand the concept.

  15. Material-point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  16. DEF: an automated dead-end filling approach based on quasi-endosymbiosis.

    Science.gov (United States)

    Liu, Lili; Zhang, Zijun; Sheng, Taotao; Chen, Ming

    2017-02-01

    Gap filling for the reconstruction of metabolic networks aims to restore the connectivity of metabolites by finding high-confidence reactions that may be missing in the target organism. Current methods for gap filling either rely solely on network topology or have limited capability in finding missing reactions that are indirectly related to dead-end metabolites yet of biological importance to the target model. We present an automated dead-end filling (DEF) approach, inspired by endosymbiosis theory, which fills gaps by finding the most efficient dead-end utilization paths in a constructed quasi-endosymbiosis model. The recalls of reactions and of dead ends achieved by DEF reach around 73% and 86%, respectively. The method is capable of finding indirectly dead-end-related reactions of biological importance for the target organism and is applicable to any given metabolic model. In the E. coli iJR904 model, for instance, about 42% of the dead-end metabolites were fixed by the proposed method. DEF is publicly available at http://bis.zju.edu.cn/DEF/. mchen@zju.edu.cn. Supplementary data are available at Bioinformatics online.
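
    The notion of a dead-end metabolite that DEF targets can be made concrete with a small helper over a toy reaction encoding (a sketch under the simplifying assumption that all reactions are irreversible; this is not the paper's implementation):

```python
def find_dead_ends(reactions):
    """Identify dead-end metabolites in a draft metabolic model (sketch).

    reactions: dict mapping reaction id -> (substrates, products) as sets,
    treating every reaction as irreversible. A metabolite is a dead end if
    it is only ever produced or only ever consumed, so network connectivity
    breaks at that node -- the gaps that approaches like DEF try to fill.
    """
    consumed, produced = set(), set()
    for subs, prods in reactions.values():
        consumed |= set(subs)
        produced |= set(prods)
    # symmetric difference: metabolites that appear on only one side
    return consumed ^ produced


# toy network A -> B -> C: A is never produced, C is never consumed
toy = {'r1': ({'A'}, {'B'}), 'r2': ({'B'}, {'C'})}
gaps = find_dead_ends(toy)
```

    A gap-filling procedure would then search a reaction database for candidates that consume C or produce A, restoring connectivity at exactly these nodes.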

  17. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    Science.gov (United States)

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
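
    The curvature step can be sketched as follows: fit a paraboloid to a centred local neighbourhood by least squares and take the eigenvalues of the shape operator as the principal curvatures. This is a simplified stand-in for the paper's moving least-squares and curvature-tensor machinery:

```python
import numpy as np

def paraboloid_curvatures(pts):
    """Least-squares fit of z = ax^2 + bxy + cy^2 + dx + ey + f to a local
    neighbourhood (assumed centred on the query point), then the principal
    curvatures at the origin from the shape operator. Illustrative sketch."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(M, z, rcond=None)[0]
    # derivatives at (0,0): z_x = d, z_y = e, z_xx = 2a, z_xy = b, z_yy = 2c
    E, F, G = 1 + d * d, d * e, 1 + e * e          # first fundamental form
    w = np.sqrt(1 + d * d + e * e)
    L, Mm, N = 2 * a / w, b / w, 2 * c / w         # second fundamental form
    # shape operator S = I^{-1} II; its eigenvalues are k1, k2
    S = np.linalg.solve(np.array([[E, F], [F, G]]),
                        np.array([[L, Mm], [Mm, N]]))
    k1, k2 = np.linalg.eigvals(S)
    return sorted([k1.real, k2.real])
```

    On a saddle surface z = x^2 - y^2 the fit is exact and the routine returns curvatures of -2 and +2, whose signs distinguish valley- and ridge-like points in the neighbourhood.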

  18. BPP: a sequence-based algorithm for branch point prediction.

    Science.gov (United States)

    Zhang, Qing; Fan, Xiaodan; Wang, Yejun; Sun, Ming-An; Shao, Jianlin; Guo, Dianjing

    2017-10-15

    Although high-throughput sequencing methods have been proposed to identify splicing branch points in the human genome, these methods can detect only a small fraction of the branch points, limited by the sequencing depth, experimental cost and the expression level of the mRNA. An accurate computational model for branch point prediction is therefore an ongoing objective in human genome research. We here propose a novel branch point prediction algorithm that utilizes information on the branch point sequence and the polypyrimidine tract. Using experimentally validated data, we demonstrate that our proposed method outperforms existing methods. Availability and implementation: https://github.com/zhqingit/BPP. djguo@cuhk.edu.hk. Supplementary data are available at Bioinformatics online.
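
    To make the two signals concrete, here is a toy scorer that combines a branch-point consensus match with downstream pyrimidine content. The consensus, weights and window below are invented for illustration and are not BPP's trained model:

```python
def score_branch_points(seq, window=50):
    """Toy branch-point scorer combining the two cues BPP integrates:
    a branch-point sequence match and polypyrimidine-tract content.
    Consensus, weights and window are illustrative assumptions."""
    consensus = "CTRAC"                    # simplified consensus, R = A or G
    def motif_score(ctx):
        s = 0
        for base, pat in zip(ctx, consensus):
            s += (base in "AG") if pat == "R" else (base == pat)
        return s / len(consensus)
    hits = []
    for i in range(3, len(seq) - 1):
        if seq[i] != "A":                  # branch point is the bulged adenosine
            continue
        ctx = seq[i - 3:i + 2]             # 5-mer with the A at consensus position 4
        if len(ctx) < 5:
            continue
        down = seq[i + 1:i + 1 + window]   # downstream polypyrimidine tract
        ppt = sum(b in "CT" for b in down) / max(len(down), 1)
        hits.append((0.6 * motif_score(ctx) + 0.4 * ppt, i))
    return sorted(hits, reverse=True)      # best candidate first


# intron-like toy sequence: branch A inside CTAAC, followed by a pyrimidine tract
intron = "G" * 12 + "CTAAC" + "T" * 20 + "AG"
ranked = score_branch_points(intron)
```

    A real predictor such as BPP replaces the hand-written consensus with a model trained on experimentally validated branch points, but the scoring structure (motif term plus tract term) is the same shape.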

  19. Fragment-based quantum mechanical calculation of protein-protein binding affinities.

    Science.gov (United States)

    Wang, Yaqian; Liu, Jinfeng; Li, Jinjin; He, Xiao

    2018-04-29

    The electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method has been successfully utilized for efficient linear-scaling quantum mechanical (QM) calculation of protein energies. In this work, we applied the EE-GMFCC method for calculation of binding affinity of Endonuclease colicin-immunity protein complex. The binding free energy changes between the wild-type and mutants of the complex calculated by EE-GMFCC are in good agreement with experimental results. The correlation coefficient (R) between the predicted binding energy changes and experimental values is 0.906 at the B3LYP/6-31G*-D level, based on the snapshot whose binding affinity is closest to the average result from the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) calculation. The inclusion of the QM effects is important for accurate prediction of protein-protein binding affinities. Moreover, the self-consistent calculation of PB solvation energy is required for accurate calculations of protein-protein binding free energies. This study demonstrates that the EE-GMFCC method is capable of providing reliable prediction of relative binding affinities for protein-protein complexes. © 2018 Wiley Periodicals, Inc.

  20. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    Science.gov (United States)

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

    Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types: Fluorescence, colorimetric dyes, and bioluminescence; and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424