WorldWideScience

Sample records for clomp accurately characterizing

  1. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.

  2. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B R

    2008-11-10

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. We also show that CLOMP captures limitations for OpenMP parallelization on SMT and NUMA systems. Finally, CLOMPI, our MPI extension of CLOMP, demonstrates which aspects of OpenMP interact poorly with MPI when MPI helper threads cannot run on the NIC.
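
    As a rough illustration of the kind of overhead CLOMP targets (a minimal sketch, not the actual CLOMP benchmark; the constants PARTS and REPS and the helper touch() are hypothetical), one can time a tiny parallel-for region against its serial counterpart in C with OpenMP:

        /* Minimal sketch: estimating per-region OpenMP startup/teardown cost
         * by comparing a serial loop with a parallel loop over very little work. */
        #include <stdio.h>
        #include <omp.h>

        #define PARTS 64      /* hypothetical amount of work per region */
        #define REPS  10000   /* repetitions to average out timer noise */

        static double work[PARTS];
        static void touch(int i) { work[i] += 1.0e-6 * i; }

        int main(void)
        {
            double t0, t_serial, t_parallel;

            t0 = omp_get_wtime();
            for (int r = 0; r < REPS; r++)
                for (int i = 0; i < PARTS; i++)
                    touch(i);
            t_serial = omp_get_wtime() - t0;

            t0 = omp_get_wtime();
            for (int r = 0; r < REPS; r++) {
                #pragma omp parallel for
                for (int i = 0; i < PARTS; i++)
                    touch(i);
            }
            t_parallel = omp_get_wtime() - t0;

            /* With so little work per region, the difference is dominated by
             * the per-region overhead that CLOMP-style benchmarks characterize. */
            printf("serial:   %g us/region\n", 1e6 * t_serial / REPS);
            printf("parallel: %g us/region\n", 1e6 * t_parallel / REPS);
            printf("overhead: ~%g us/region on %d threads\n",
                   1e6 * (t_parallel - t_serial) / REPS, omp_get_max_threads());
            return 0;
        }

    Compiled with an OpenMP-enabled compiler (e.g. gcc -fopenmp), the printed per-region difference gives a crude lower bound on how much work each region must contain before parallelization pays off, which is essentially the quantity CLOMP reports in a more controlled way.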

  3. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  4. Does a pneumotach accurately characterize voice function?

    Science.gov (United States)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.
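
    For context, the flow estimate such a device produces rests on a simple pressure-drop relation: assuming the resistance element behaves approximately linearly (generic symbols, not taken from the study), the glottal volume flow is recovered as

        $$ Q(t) \approx \frac{\Delta P(t)}{R_{\rm mask}}, $$

    where ΔP(t) is the measured trans-mask pressure difference and R_mask is the known aerodynamic resistance. The study's concern is precisely that adding R_mask to the vocal system may itself alter the flow being measured.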

  5. Phase rainbow refractometry for accurate droplet variation characterization.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  6. Highly accurate spectral retardance characterization of a liquid crystal retarder including Fabry-Perot interference effects

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, Asticio [Departamento de Ciencias Físicas, Universidad de La Frontera, Temuco (Chile); Center for Optics and Photonics, Universidad de Concepción, Casilla 4016, Concepción (Chile); Mar Sánchez-López, María del [Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Elche (Spain); García-Martínez, Pascuala [Departament d'Òptica, Universitat de València, 45100 Burjassot (Spain); Arias, Julia; Moreno, Ignacio [Departamento de Ciencia de Materiales, Óptica y Tecnología Electrónica, Universidad Miguel Hernández, 03202 Elche (Spain)

    2014-01-21

    Multiple-beam Fabry-Perot (FP) interferences occur in liquid crystal retarders (LCR) devoid of an antireflective coating. In this work, a highly accurate method to obtain the spectral retardance of such devices is presented. On the basis of a simple model of the LCR that includes FP effects and by using a voltage transfer function, we show how the FP features in the transmission spectrum can be used to accurately retrieve the ordinary and extraordinary spectral phase delays, and the voltage dependence of the latter. As a consequence, the modulation characteristics of the device are fully determined with high accuracy by means of a few off-state physical parameters which are wavelength-dependent, and a single voltage transfer function that is valid within the spectral range of characterization.
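
    For orientation, the FP features exploited here follow the standard Airy transmission of an uncoated parallel-faced slab; for one polarization eigenmode with effective index n and thickness d (a simplified single-index picture with generic symbols, not the authors' notation),

        $$ T(\lambda) = \frac{1}{1 + F \sin^2(\delta/2)}, \qquad F = \frac{4R}{(1-R)^2}, \qquad \delta = \frac{4\pi n d}{\lambda}, $$

    so the spectral positions of the FP fringes encode the optical path n d, which is how the ordinary and extraordinary phase delays can be retrieved from the transmission spectrum.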

  7. Highly accurate apparatus for electrochemical characterization of the felt electrodes used in redox flow batteries

    Science.gov (United States)

    Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon

    2016-04-01

    Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.

  8. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Ali, Ashraf [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Biczysko, Malgorzata; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-09-10

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  9. Very Fast and Accurate Procedure for the Characterization of Photovoltaic Panels from Datasheet Information

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2014-01-01

    In recent years several numerical methods have been proposed to identify the five-parameter model of photovoltaic panels from manufacturer datasheets, also by introducing simplification or approximation techniques. In this paper we present a fast and accurate procedure for obtaining the parameters of the five-parameter model by starting from its reduced form. The procedure allows thousands of photovoltaic panels present in standard databases to be characterized in a few seconds. It introduces and takes advantage of further important mathematical considerations without any model simplifications or data approximations. In particular, the five parameters are divided into two groups, independent and dependent parameters, in order to reduce the dimensions of the search space. The partitioning of the parameters provides a strong advantage in terms of convergence, computational cost, and execution time of the present approach. Validations on thousands of photovoltaic panels are presented, showing that the extraction of the five parameters can be made easy and efficient without choosing a specific solver algorithm, simply by using any deterministic optimization/minimization technique.
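
    The "five-parameter model" referred to here is commonly the single-diode equation; in its usual form (generic symbols, not necessarily the notation of this paper) the five parameters are the photocurrent I_ph, the diode saturation current I_0, the ideality factor n, and the series and shunt resistances R_s and R_sh:

        $$ I = I_{ph} - I_0\left[\exp\!\left(\frac{V + I R_s}{n V_T}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}. $$

    Because the equation is implicit in I, splitting the parameters into independent and dependent groups, as the authors do, reduces the dimensionality of the numerical search.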

  10. Study on Factors for Accurate Open Circuit Voltage Characterizations in Mn-Type Li-Ion Batteries

    OpenAIRE

    Natthawuth Somakettarin; Tsuyoshi Funaki

    2017-01-01

    Open circuit voltage (OCV) of lithium batteries has been of interest since the battery management system (BMS) requires an accurate knowledge of the voltage characteristics of any Li-ion batteries. This article presents an OCV characteristic for lithium manganese oxide (LMO) batteries under several experimental operating conditions, and discusses factors for accurate OCV determination. A test system is developed for OCV characterization based on the OCV pulse test method. Various factors for ...

  11. In-depth glycoproteomic characterization of γ-conglutin by high-resolution accurate mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Silvia Schiarea

    The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography-multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl) (Fuc) GlcNAc2, Man3(Xyl) (Fuc) GlcNAc2, GlcNAcMan3(Xyl) (Fuc) GlcNAc2 and GlcNAc2Man3(Xyl) (Fuc) GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, that were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit.

  12. In-Depth Glycoproteomic Characterization of γ-Conglutin by High-Resolution Accurate Mass Spectrometry

    Science.gov (United States)

    Schiarea, Silvia; Arnoldi, Lolita; Fanelli, Roberto; De Combarieu, Eric; Chiabrando, Chiara

    2013-01-01

    The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography–multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl) (Fuc) GlcNAc2, Man3(Xyl) (Fuc) GlcNAc2, GlcNAcMan3(Xyl) (Fuc) GlcNAc2 and GlcNAc2Man3(Xyl) (Fuc) GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, that were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit. PMID:24069245

  13. "True" physical optics for the accurate characterization of antenna radomes and lenses

    NARCIS (Netherlands)

    Neto, A.

    2003-01-01

    With the introduction of the "True" Physical Optics, the source in the far field of the canonical structure is not assumed; instead, the exact dielectric slab Green's functions are used in order to obtain a more accurate approximation of the PO currents on both the external and the internal sides of the radome.

  14. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    Science.gov (United States)

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-02-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements.

  15. Collision-induced fragmentation accurate mass spectrometric analysis methods to rapidly characterize phytochemicals in plant extracts

    Science.gov (United States)

    The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. New methods a...

  16. Collision-induced fragmentation accurate mass spectrometric analysis methods to rapidly characterize plant extracts

    Science.gov (United States)

    The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. For phytochem...

  17. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    Science.gov (United States)

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request.

  18. Characterization of Thin Film Materials using SCAN meta-GGA, an Accurate Nonempirical Density Functional

    Science.gov (United States)

    Buda, I. G.; Lane, C.; Barbiellini, B.; Ruzsinszky, A.; Sun, J.; Bansil, A.

    2017-03-01

    We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of ’beyond graphene’ compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.

  19. Accurate characterization of weak neutron fields by using a Bayesian approach.

    Science.gov (United States)

    Medkour Ishak-Boushaki, G; Allab, M

    2017-04-01

    A Bayesian analysis of data derived from neutron spectrometric measurements provides the advantage of determining rigorously integral physical quantities characterizing the neutron field and their respective related uncertainties. The first and essential step in a Bayesian approach is the parameterization of the investigated neutron spectrum. The aim of this paper is to investigate the sensitivity of the Bayesian results, mainly the neutron dose H*(10) required for radiation protection purposes and its correlated uncertainty, to the selected neutron spectrum parameterization.
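
    For reference, the ambient dose equivalent discussed here is the fluence-weighted integral of tabulated conversion coefficients (generic notation), so its uncertainty follows directly from the uncertainty of the reconstructed spectrum:

        $$ H^{*}(10) = \int_0^{\infty} h^{*}_{\Phi}(10; E)\, \Phi_E(E)\, \mathrm{d}E, $$

    where Φ_E(E) is the spectral neutron fluence and h*_Φ(10; E) is the fluence-to-ambient-dose-equivalent conversion coefficient; the sensitivity study in the paper concerns how the chosen parameterization of Φ_E(E) propagates into H*(10).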

  20. Accurate spectroscopic characterization of ethyl mercaptan and dimethyl sulfide isotopologues: a route toward their astrophysical detection

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, C. [Dipartimento di Chimica, "Giacomo Ciamician," Università di Bologna, Via F. Selmi 2, I-40126 Bologna (Italy); Senent, M. L. [Departamento de Química y Física Teóricas, Instituto de Estructura de la Materia, IEM-C.S.I.C., Serrano 121, Madrid E-28006 (Spain); Domínguez-Gómez, R. [Doctora Vinculada IEM-CSIC, Departamento de Ingeniería Civil, Cátedra de Química, E.U.I.T. Obras Públicas, Universidad Politécnica de Madrid (Spain); Carvajal, M. [Departamento de Física Aplicada, Facultad de Ciencias Experimentales, Unidad Asociada IEM-CSIC-U.Huelva, Universidad de Huelva, E-21071 Huelva (Spain); Hochlaf, M. [Université Paris-Est, Laboratoire de Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 boulevard Descartes, F-77454 Marne-la-Vallée (France); Al-Mogren, M. Mogren, E-mail: cristina.puzzarini@unibo.it, E-mail: senent@iem.cfmac.csic.es, E-mail: rosa.dominguez@upm.es, E-mail: miguel.carvajal@dfa.uhu.es, E-mail: majdi.hochlaf@u-pem.fr, E-mail: mmogren@ksu.edu.sa [Chemistry Department, Faculty of Science, King Saud University, PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2014-11-20

    Using state-of-the-art computational methodologies, we predict a set of reliable rotational and torsional parameters for ethyl mercaptan and dimethyl sulfide monosubstituted isotopologues. This includes rotational, quartic, and sextic centrifugal-distortion constants, torsional levels, and torsional splittings. The accuracy of the present data was assessed from a comparison to the available experimental data. Generally, our computed parameters should help in the characterization and the identification of these organo-sulfur molecules in laboratory settings and in the interstellar medium.

  1. Accurate characterization and modeling of transmission lines for GaAs MMIC's

    Science.gov (United States)

    Finlay, Hugh J.; Jansen, Rolf H.; Jenkins, John A.; Eddison, Ian G.

    1988-06-01

    The authors discuss computer-aided design (CAD) tools together with high-accuracy microwave measurements to realize improved design data for GaAs monolithic microwave integrated circuits (MMICs). In particular, a combined theoretical and experimental approach to the generation of an accurate design database for transmission lines on GaAs MMICs is presented. The theoretical approach is based on an improved transmission-line theory which is part of the spectral-domain hybrid-mode computer program MCLINE. The benefit of this approach in the design of multidielectric-media transmission lines is described. The program was designed to include loss mechanisms in all dielectric layers and to include conductor and surface roughness loss contributions. As an example, using GaAs ring resonator techniques covering 2 to 24 GHz, accuracies in effective dielectric constant and loss of 1 percent and 15 percent respectively, are presented. By combining theoretical and experimental techniques, a generalized MMIC microstrip design database is outlined.

  2. Study on Factors for Accurate Open Circuit Voltage Characterizations in Mn-Type Li-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Natthawuth Somakettarin

    2017-03-01

    Open circuit voltage (OCV) of lithium batteries has been of interest since the battery management system (BMS) requires an accurate knowledge of the voltage characteristics of any Li-ion batteries. This article presents an OCV characteristic for lithium manganese oxide (LMO) batteries under several experimental operating conditions, and discusses factors for accurate OCV determination. A test system is developed for OCV characterization based on the OCV pulse test method. Various factors for the OCV behavior, such as resting period, step-size of the pulse test, testing current amplitude, hysteresis phenomena, and terminal voltage relationship, are investigated and evaluated. To this end, a general OCV model based on state of charge (SOC) tracking is developed and validated with satisfactory results.
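
    The SOC tracking mentioned at the end is typically based on coulomb counting between rest periods; a generic form (illustrative, not necessarily the exact model used by the authors, with discharge current taken as positive) is

        $$ \mathrm{SOC}(t) = \mathrm{SOC}(t_0) - \frac{1}{Q_{\rm nom}} \int_{t_0}^{t} I(\tau)\, \mathrm{d}\tau, \qquad \mathrm{OCV} = f(\mathrm{SOC}), $$

    where Q_nom is the nominal capacity and f is the OCV-SOC curve extracted from the pulse tests after a sufficiently long rest period.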

  3. An X-Band Waveguide Measurement Technique for the Accurate Characterization of Materials with Low Dielectric Loss Permittivity

    CERN Document Server

    Allen, Kenneth W; Reid, David R; Bean, Jeffrey A; Ellis, Jeremy D; Morris, Andrew P; Marsh, Jeramy M

    2016-01-01

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically-long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10⁻³ for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. ...

  4. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity

    Science.gov (United States)

    Allen, Kenneth W.; Scott, Mark M.; Reid, David R.; Bean, Jeffrey A.; Ellis, Jeremy D.; Morris, Andrew P.; Marsh, Jeramy M.

    2016-05-01

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10⁻³ for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimen over the entire X-band range. This technique could easily be extended to other frequency bands.
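
    Schematically, the extraction amounts to an inverse problem in which the genetic algorithm searches for the permittivity that best reproduces the measured transmission over the band (generic notation, not taken from the paper):

        $$ (\varepsilon_r', \tan\delta) = \arg\min_{\varepsilon_r',\, \tan\delta} \sum_{k} \left| S_{21}^{\rm sim}(f_k; \varepsilon_r', \tan\delta) - S_{21}^{\rm meas}(f_k) \right|^2, $$

    with the simulated S21 generated by the full-wave model of the partially filled WR90 guide. Since attenuation and phase accumulate with length, an electrically long specimen improves the sensitivity of the fit to small loss tangents.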

  5. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide

    Directory of Open Access Journals (Sweden)

    Charles W. Ross

    2016-06-01

    Ceramides are a central unit of all sphingolipids which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data of starting materials and products all in one acquisition as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. NMR data was able to differentiate straight chain versus branched chain alkyl groups not easily obtained from mass spectrometry.

  6. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide.

    Science.gov (United States)

    Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C

    2016-06-28

    Ceramides are a central unit of all sphingolipids which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data of starting materials and products all in one acquisition as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. NMR data was able to differentiate straight chain versus branched chain alkyl groups not easily obtained from mass spectrometry.

  7. Accurate Characterization of the Peptide Linkage in the Gas Phase: a Joint Quantum-Chemical and Rotational Spectroscopy Study of the Glycine Dipeptide Analogue

    Science.gov (United States)

    Puzzarini, Cristina; Biczysko, Malgorzata; Barone, Vincenzo; Largo, Laura; Peña, Isabel; Cabezas, Carlos; Alonso, José L.

    2014-06-01

    Accurate structures of amino acids in the gas phase have been obtained by joint microwave and quantum-chemical investigations. However, the structure and conformational behavior of α-amino acids once incorporated into peptide chains are completely different and have not yet been characterized with the same accuracy. To fill this gap, we present here an accurate characterization of the simplest dipeptide analogue (N-acetylglycinamide) involving peptidic bonds. State-of-the-art quantum-chemical computations are complemented by a comprehensive study of the rotational spectrum using a combination of Fourier transform microwave spectroscopy with laser ablation. The coexistence of the C_7 and C_5 conformers has been proved and characterized energetically as well as spectroscopically. This joint theoretical-experimental investigation demonstrates the feasibility of obtaining accurate structures for flexible small biomolecules, thus paving the way to the elucidation of the inherent behavior of peptides.

  8. Accurate particle speed prediction by improved particle speed measurement and 3-dimensional particle size and shape characterization technique

    DEFF Research Database (Denmark)

    Cernuschi, Federico; Rothleitner, Christian; Clausen, Sønnik

    2017-01-01

    Accurate particle mass and velocity measurement is needed for interpreting test results in erosion tests of materials and coatings. The impact and damage of a surface is influenced by the kinetic energy of a particle, i.e. particle mass and velocity. Particle mass is usually determined with optic...

  9. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    Science.gov (United States)

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy in the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, which leads to more reliable results. Furthermore, the results are compared to the conventional reconstruction method assuming a homogeneous matrix and activity distribution.
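
    In generic notation (not the paper's own), the fit compares the measured angular count-rate profile during drum rotation with the model prediction for an assumed point-source activity A and position r_s:

        $$ \chi^2(A, \vec{r}_s) = \sum_{i} \frac{\left[ C_{\rm meas}(\varphi_i) - C_{\rm calc}(\varphi_i; A, \vec{r}_s) \right]^2}{\sigma_i^2}, $$

    so a more faithful model of the collimated detector response C_calc translates directly into a less biased reconstructed activity, which is the improvement reported here.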

  10. Accurate spectroscopic characterization of oxirane: A valuable route to its identification in Titan's atmosphere and the assignment of unidentified infrared bands

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Biczysko, Malgorzata; Bloino, Julien; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-04-20

    In an effort to provide an accurate spectroscopic characterization of oxirane, state-of-the-art computational methods and approaches have been employed to determine highly accurate fundamental vibrational frequencies and rotational parameters. Available experimental data were used to assess the reliability of our computations, and an accuracy on average of 10 cm⁻¹ for fundamental transitions as well as overtones and combination bands has been pointed out. Moving to rotational spectroscopy, relative discrepancies of 0.1%, 2%-3%, and 3%-4% were observed for rotational, quartic, and sextic centrifugal-distortion constants, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for identification of oxirane in Titan's atmosphere and the assignment of unidentified infrared bands. Since oxirane was already observed in the interstellar medium and some astronomical objects are characterized by very high D/H ratios, we also considered the accurate determination of the spectroscopic parameters for the mono-deuterated species, oxirane-d1. For the latter, an empirical scaling procedure allowed us to improve our computed data and to provide predictions for rotational transitions with a relative accuracy of about 0.02% (i.e., an uncertainty of about 40 MHz for a transition lying at 200 GHz).

  11. Accurate characterization of the stellar and orbital parameters of the exoplanetary system WASP-33 b from orbital dynamics

    CERN Document Server

    Iorio, Lorenzo

    2016-01-01

    By using the most recently published Doppler tomography measurements and accurate theoretical modeling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast rotating star WASP-33. In particular, the measurements of the orbital inclination $i_{\rm p}$ to the plane of the sky and of the sky-projected spin-orbit misalignment $\lambda$ at two epochs six years apart allowed for the determination of the longitude of the ascending node $\Omega$ and of the orbital inclination $I$ to the apparent equatorial plane at the same epochs. As a consequence, average rates of change $\dot\Omega_{\rm exp},~\dot I_{\rm exp}$ of these two orbital elements, accurate to a $\approx 10^{-2}~\textrm{deg}~\textrm{yr}^{-1}$ level, were calculated as well. By comparing them to general theoretical expressions $\dot\Omega_{J_2},~\dot I_{J_2}$ for their precessions induced by an arbitrarily oriented quadrupole mass moment, we were able to dete...

  12. SAXS Combined with UV-vis Spectroscopy and QELS: Accurate Characterization of Silver Sols Synthesized in Polymer Matrices

    Science.gov (United States)

    Bulavin, Leonid; Kutsevol, Nataliya; Chumachenko, Vasyl; Soloviov, Dmytro; Kuklin, Alexander; Marynin, Andrii

    2016-01-01

    The present work demonstrates a validation of small-angle X-ray scattering (SAXS) combined with ultraviolet-visible (UV-vis) spectroscopy and quasi-elastic light scattering (QELS) analysis for the characterization of silver sols synthesized in polymer matrices. The internal structure and chemical nature of the polymer matrix effectively controlled the size characteristics of the sols. It was shown that for precise analysis of the nanoparticle size distribution these techniques should be used simultaneously. All applied methods were in good agreement for the characterization of the size distribution of small particles (less than 60 nm) in the sols. Some deviations of the theoretical curves from the experimental ones were observed. The most probable cause is that the nanoparticles were not entirely spherical in form.

  13. SAXS Combined with UV-vis Spectroscopy and QELS: Accurate Characterization of Silver Sols Synthesized in Polymer Matrices.

    Science.gov (United States)

    Bulavin, Leonid; Kutsevol, Nataliya; Chumachenko, Vasyl; Soloviov, Dmytro; Kuklin, Alexander; Marynin, Andrii

    2016-12-01

    The present work demonstrates a validation of small-angle X-ray scattering (SAXS) combined with ultraviolet-visible (UV-vis) spectroscopy and quasi-elastic light scattering (QELS) analysis for the characterization of silver sols synthesized in polymer matrices. The internal structure and chemical nature of the polymer matrix effectively controlled the size characteristics of the sols. It was shown that for precise analysis of the nanoparticle size distribution these techniques should be used simultaneously. All applied methods were in good agreement for the characterization of the size distribution of small particles (less than 60 nm) in the sols. Some deviations of the theoretical curves from the experimental ones were observed. The most probable cause is that the nanoparticles were not entirely spherical in form.

  14. MRI-aided tissues interface characterization: An accurate signal propagation time calculation method for UWB breast tumor imaging

    Science.gov (United States)

    Wang, Liang; Xiao, Xia; Kikkawa, Takamaro

    2016-12-01

    Radar-based ultrawideband (UWB) microwave imaging is expected to be a safe, low-cost tool for breast cancer detection. However, since the radar wave travels at different speeds in different tissues, the propagation time is difficult to estimate in a heterogeneous breast. An incorrectly estimated propagation time leads to an error in the tumor location in the resulting image, i.e., an imaging error. In this paper, we develop a magnetic resonance imaging-aided (MRI-aided) propagation time calculation technique which is independent of the radar imaging system but helps decrease the imaging error. The technique can eliminate the influence of the rough interface between the fat layer and the gland layer in the breast and obtain relatively accurate thicknesses of the two layers. The propagation time in each layer is calculated and summed. The summed propagation time is used in the Confocal imaging algorithm to increase the accuracy of the resulting image. 25 patients' breast models with glands of varying size are classified into four categories for imaging simulation tests. Imaging accuracy in terms of tumor location along the x-direction improved for 21 of the 25 cases, an overall improvement of around 50% compared to conventional UWB imaging.
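
    The layer-wise time calculation rests on the fact that, in a low-loss dielectric, the wave speed is approximately c/√ε_r; for a two-layer path with fat and gland thicknesses d_fat and d_gland (illustrative symbols, not the authors' notation) the one-way propagation time is roughly

        $$ t_{\rm prop} \approx \frac{d_{\rm fat}\sqrt{\varepsilon_{r,\rm fat}} + d_{\rm gland}\sqrt{\varepsilon_{r,\rm gland}}}{c}, $$

    which is why an MRI-derived estimate of the two thicknesses reduces the delay error, and hence the tumor localization error, in the confocal imaging step.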

  15. Accurately characterizing the importance of wave‐particle interactions in radiation belt dynamics: The pitfalls of statistical wave representations

    Science.gov (United States)

    Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.

    2016-01-01

    Wave‐particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, this is assessed during geomagnetic storms using statistically averaged empirical wave models as a function of geomagnetic activity in advanced radiation belt simulations. However, statistical averages poorly characterize extreme events such as geomagnetic storms in that storm‐time ultralow frequency wave power is typically larger than that derived over a solar cycle and Kp is a poor proxy for storm‐time wave power. PMID:27867798

  16. Accurately characterizing the importance of wave-particle interactions in radiation belt dynamics: The pitfalls of statistical wave representations.

    Science.gov (United States)

    Murphy, Kyle R; Mann, Ian R; Rae, I Jonathan; Sibeck, David G; Watt, Clare E J

    2016-08-01

    Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, this is assessed during geomagnetic storms using statistically averaged empirical wave models as a function of geomagnetic activity in advanced radiation belt simulations. However, statistical averages poorly characterize extreme events such as geomagnetic storms in that storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle and Kp is a poor proxy for storm-time wave power.

  17. Efficient floating diffuse functions for accurate characterization of the surface-bound excess electrons in water cluster anions.

    Science.gov (United States)

    Zhang, Changzhe; Bu, Yuxiang

    2017-01-25

    In this work, the effect of diffuse function types (atom-centered diffuse functions versus floating functions and s-type versus p-type diffuse functions) on the structures and properties of three representative water cluster anions featuring a surface-bound excess electron is studied, and we find that an effective combination of these two kinds of diffuse functions can not only reduce the computational cost but also, most importantly, considerably improve the accuracy of results and even avoid incorrect predictions of spectra and the shape of the excess electron. Our results indicate that (a) simple augmentation of atom-centered diffuse functions is beneficial for the vertical detachment energy convergence, but it leads to very poor descriptions of the singly occupied molecular orbital (SOMO) and lowest unoccupied molecular orbital (LUMO) distributions of the water cluster anions featuring a surface-bound excess electron and thus a significant ultraviolet spectrum redshift; (b) the ghost-atom-based floating diffuse functions can not only contribute to accurate electronic calculations of the ground state but also avoid poor and even incorrect descriptions of the SOMO and the LUMO induced by excessive augmentation of atom-centered diffuse functions; (c) the floating functions can be realized by ghost atoms and their positions can be determined through an optimization routine along the dipole moment vector direction. In addition, both s- and p-type floating functions need to be included in the basis set; they are responsible for the ground (s-type character) and excited (p-type character) states of the surface-bound excess electron, respectively. The exponents of the diffuse functions should also be chosen so that the diffuse functions cover the main region of the excess electron distribution. Note that excessive augmentation of such diffuse functions is redundant and can even lead to unreasonable LUMO characteristics.

  18. Full-waveform modeling of Zero-Offset Electromagnetic Induction for Accurate Characterization of Subsurface Electrical Properties

    Science.gov (United States)

    Moghadas, D.; André, F.; Vereecken, H.; Lambot, S.

    2009-04-01

    singularities. We tested the model in controlled laboratory conditions for EMI measurements at different heights above a copper sheet, playing the role of a perfect electrical conductor. Good agreement was obtained between the measurements and the model, especially for the resonance frequency of the loop antenna. The loop antenna height could be retrieved by inversion of the Green's function. For practical applications, the method is still limited by the low sensitivity of the antenna with respect to the dynamic range of the VNA. Once this is resolved, we believe that the proposed method should be very flexible and promising for accurate, multi-frequency EMI data inversion.

  19. Accurate Characterization of Winter Precipitation Using Multi-Angle Snowflake Camera, Visual Hull, Advanced Scattering Methods and Polarimetric Radar

    Directory of Open Access Journals (Sweden)

    Branislav M. Notaroš

    2016-06-01

    This article proposes and presents a novel approach to the characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced optical disdrometers for microphysical and geometrical measurements of ice and snow particles (in particular, a multi-angle snowflake camera—MASC), image processing methodology, advanced method-of-moments scattering computations, and state-of-the-art polarimetric radars. The article also describes the newly built and established MASCRAD (MASC + Radar) in-situ measurement site, under the umbrella of CSU-CHILL Radar, as well as the MASCRAD project and 2014/2015 winter campaign. We apply a visual hull method to reconstruct 3D shapes of ice particles based on high-resolution MASC images, and perform “particle-by-particle” scattering computations to obtain polarimetric radar observables. The article also presents and discusses selected illustrative observation data, results, and analyses for three cases with widely-differing meteorological settings that involve contrasting hydrometeor forms. Illustrative results of scattering calculations based on MASC images captured during these events, in comparison with radar data, as well as selected comparative studies of snow habits from MASC, 2D video-disdrometer, and CHILL radar data, are presented, along with the analysis of microphysical characteristics of particles. In the longer term, this work has potential to significantly improve the radar-based quantitative winter-precipitation estimation.

  20. Accurate Characterization of Winter Precipitation Using In-Situ Instrumentation, CSU-CHILL Radar, and Advanced Scattering Methods

    Science.gov (United States)

    Newman, A. J.; Notaros, B. M.; Bringi, V. N.; Kleinkort, C.; Huang, G. J.; Kennedy, P.; Thurai, M.

    2015-12-01

    We present a novel approach to remote sensing and characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced in-situ instrumentation for microphysical and geometrical measurements of ice and snow particles, image processing methodology to reconstruct complex particle three-dimensional (3D) shapes, computational electromagnetics to analyze realistic precipitation scattering, and state-of-the-art polarimetric radar. Our in-situ measurement site at the Easton Valley View Airport, La Salle, Colorado, shown in the figure, consists of two advanced optical imaging disdrometers within a 2/3-scaled double fence intercomparison reference wind shield, and also includes PLUVIO snow measuring gauge, VAISALA weather station, and collocated NCAR GPS advanced upper-air system sounding system. Our primary radar is the CSU-CHILL radar, with a dual-offset Gregorian antenna featuring very high polarization purity and excellent side-lobe performance in any plane, and the in-situ instrumentation site being very conveniently located at a range of 12.92 km from the radar. A multi-angle snowflake camera (MASC) is used to capture multiple different high-resolution views of an ice particle in free-fall, along with its fall speed. We apply a visual hull geometrical method for reconstruction of 3D shapes of particles based on the images collected by the MASC, and convert these shapes into models for computational electromagnetic scattering analysis, using a higher order method of moments. A two-dimensional video disdrometer (2DVD), collocated with the MASC, provides 2D contours of a hydrometeor, along with the fall speed and other important parameters. We use the fall speed from the MASC and the 2DVD, along with state parameters measured at the Easton site, to estimate the particle mass (Böhm's method), and then the dielectric constant of particles, based on a Maxwell-Garnett formula. By calculation of the "particle-by-particle" scattering

  1. Accurate characterization and understanding of interface trap density trends between atomic layer deposited dielectrics and AlGaN/GaN with bonding constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Ramanan, Narayanan; Lee, Bongmook; Misra, Veena, E-mail: vmisra@ncsu.edu [Department of Electrical and Computer Engineering, North Carolina State University, 2410 Campus Shore Drive, Raleigh, North Carolina 27695 (United States)

    2015-06-15

    Many dielectrics have been proposed for the gate stack or passivation of AlGaN/GaN based metal oxide semiconductor heterojunction field effect transistors, to reduce gate leakage and current collapse, both for power and RF applications. Atomic Layer Deposition (ALD) is preferred for dielectric deposition as it provides uniform, conformal, and high quality films with precise monolayer control of film thickness. Identification of the optimum ALD dielectric for the gate stack or passivation requires a critical investigation of traps created at the dielectric/AlGaN interface. In this work, a pulsed-IV traps characterization method has been used for accurate characterization of interface traps with a variety of ALD dielectrics. High-k dielectrics (HfO₂, HfAlO, and Al₂O₃) are found to host a high density of interface traps with AlGaN. In contrast, ALD SiO₂ shows the lowest interface trap density (<2 × 10¹² cm⁻²) after annealing above 600 °C in N₂ for 60 s. The trend in observed trap densities is subsequently explained with bonding constraint theory, which predicts a high density of interface traps due to a higher coordination state and bond strain in high-k dielectrics.

  2. Calibration method of the tomographic gamma scan techniques available for accurately characterizing ¹³⁷Cs from ¹¹⁰mAg interference

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Sung Yeop [GODO TECH Co. Ltd., Hanam (Korea, Republic of)

    2016-03-15

    The Tomographic Gamma Scan (TGS) technique partitions radioactive waste drums into 10 x 10 x 16 voxels and assays both the density and concentration of radioactivity for each voxel, thus providing improved accuracy compared to traditional Non-Destructive Assay (NDA) techniques. It can, however, decrease measurement precision, since there is a trade-off between spatial resolution and precision. This drawback is compensated by expanding the Region of Interest (ROI) that differentiates the full energy peaks, which, in turn, results in an optimized degree of precision. The enlarged ROI, however, increases the probability of interference among those nuclides that emit energies in adjacent parts of the spectrum. This study has identified the cause of such interference for the reference nuclide of the TGS technique, ¹³⁷Cs (661.66 keV, half-life 30.5 years), to be ¹¹⁰mAg (657.75 keV, half-life 249.76 days). A new calibration method of determining the optimized ROI was developed, and its effectiveness in accurately characterizing ¹³⁷Cs and eliminating the interference was further ascertained.

  3. Accurate Characterization of Rain Drop Size Distribution Using Meteorological Particle Spectrometer and 2D Video Disdrometer for Propagation and Remote Sensing Applications

    Science.gov (United States)

    Thurai, Merhala; Bringi, Viswanathan; Kennedy, Patrick; Notaros, Branislav; Gatlin, Patrick

    2017-01-01

    Accurate measurements of rain drop size distributions (DSD), with particular emphasis on small and tiny drops, are presented. Measurements were conducted in two very different climate regions, namely Northern Colorado and Northern Alabama. Both datasets reveal a combination of (i) a drizzle mode for drop diameters less than 0.7 mm and (ii) a precipitation mode for larger diameters. Scattering calculations using the DSDs are performed at S and X bands and compared with radar observations for the first location. Our accurate DSDs will improve radar-based rain rate estimates as well as propagation predictions.
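
    A common way to write such a two-mode DSD (purely illustrative here, not necessarily the authors' exact parameterization) is the sum of an exponential drizzle component and a gamma-shaped precipitation component,

        $$ N(D) = N_1\, e^{-\Lambda_1 D} + N_0\, D^{\mu}\, e^{-\Lambda_0 D}, $$

    where the first term dominates below roughly 0.7 mm and the second describes the larger drops; radar observables at S and X band are then obtained by integrating the drop scattering cross sections over N(D).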

  4. Screening and characterization of reactive compounds with in vitro peptide-trapping and liquid chromatography/high-resolution accurate mass spectrometry.

    Science.gov (United States)

    Wei, Cong; Chupak, Louis S; Philip, Thomas; Johnson, Benjamin M; Gentles, Robert; Drexler, Dieter M

    2014-02-01

    The present study describes a novel methodology for the detection of reactive compounds using in vitro peptide-trapping and liquid chromatography-high-resolution accurate mass spectrometry (LC-HRMS). Compounds that contain electrophilic groups can covalently bind to nucleophilic moieties in proteins and form adducts. Such adducts are thought to be associated with drug-mediated toxicity and therefore represent potential liabilities in drug discovery programs. In addition, reactive compounds identified in biological screening can be associated with data that can be misinterpreted if the reactive nature of the compound is not appreciated. In this work, to facilitate the triage of hits from high-throughput screening (HTS), a novel assay was developed to monitor the formation of covalent peptide adducts by compounds suspected to be chemically reactive. The assay consists of in vitro incubations of test compounds (under conditions of physiological pH) with synthetically prepared peptides presenting a variety of nucleophilic moieties such as cysteine, lysine, histidine, arginine, serine, and tyrosine. Reaction mixtures were analyzed using full-scan LC-HRMS, the data were interrogated using postacquisition data mining, and modified amino acids were identified by subsequent LC-HRMS/mass spectrometry. The study demonstrated that in vitro nucleophilic peptide trapping followed by LC-HRMS analysis is a useful approach for screening of intrinsically reactive compounds identified from HTS exercises, which are then removed from follow-up processes, thus obviating the generation of data from biochemical activity assays.

  5. Accurate Assessment of the Oxygen Reduction Electrocatalytic Activity of Mn/Polypyrrole Nanocomposites Based on Rotating Disk Electrode Measurements, Complemented with Multitechnique Structural Characterizations

    Directory of Open Access Journals (Sweden)

    Patrizia Bocchetta

    2016-01-01

    Full Text Available This paper reports on the quantitative assessment of the oxygen reduction reaction (ORR) electrocatalytic activity of electrodeposited Mn/polypyrrole (PPy) nanocomposites for alkaline aqueous solutions, based on the Rotating Disk Electrode (RDE) method and accompanied by structural characterizations relevant to the establishment of structure-function relationships. The characterization of Mn/PPy films addresses the following: (i) morphology, as assessed by Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscopy (AFM); (ii) local electrical conductivity, as measured by Scanning Probe Microscopy (SPM); and (iii) molecular structure, accessed by Raman Spectroscopy; these data provide the background against which the electrocatalytic activity can be rationalised. For comparison, the properties of Mn/PPy are gauged against those of graphite, PPy, and polycrystalline-Pt (poly-Pt). Because the literature lacks accepted protocols for precise catalytic activity measurement at a poly-Pt electrode in alkaline solution using the RDE methodology, we have also worked towards an intralaboratory benchmark by identifying some of the time-consuming parameters that drastically affect the reliability and repeatability of the measurement.

  6. Parasitic analysis and π-type Butterworth-Van Dyke model for complementary-metal-oxide-semiconductor Lamb wave resonator with accurate two-port Y-parameter characterizations.

    Science.gov (United States)

    Wang, Yong; Goh, Wang Ling; Chai, Kevin T-C; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu

    2016-04-01

    The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially those fabricated in silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above-mentioned parasitic effects commonly observed in Lamb-wave resonators. It combines the interdigital capacitance (both plate and fringe capacitance), the interdigital resistance, the Ohmic losses in the substrate, and the acoustic motional behavior of the typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, improving the characterization of both the magnitude and the phase of either Y11 or Y21. This accurate modelling of the two-port Y-parameters makes the PiBVD model beneficial for the characterization of Lamb-wave resonators, providing accurate simulation of Lamb-wave resonators and oscillators.
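
    For orientation, the sketch below evaluates the admittance of the standard Butterworth-Van Dyke core (a motional R-L-C branch in parallel with a static capacitance), which is the kind of one-port response the MBVD/PiBVD family of models describes. It does not reproduce the paper's additional PiBVD parasitic elements, and all component values are assumed purely for illustration.

        import numpy as np

        # Minimal sketch of a Butterworth-Van Dyke admittance (motional R-L-C branch
        # in parallel with a static capacitance C0). This is the standard MBVD core,
        # not the full PiBVD parasitic network of the paper; values are illustrative.
        Rm, Lm, Cm, C0 = 50.0, 1e-3, 2e-15, 1e-12   # ohm, H, F, F (assumed)

        f = np.linspace(100e6, 130e6, 2001)          # Hz
        w = 2 * np.pi * f
        Zm = Rm + 1j * w * Lm + 1.0 / (1j * w * Cm)  # motional branch impedance
        Y = 1.0 / Zm + 1j * w * C0                   # total one-port admittance

        fs = 1.0 / (2 * np.pi * np.sqrt(Lm * Cm))    # series (motional) resonance
        print(f"series resonance ~ {fs/1e6:.2f} MHz, "
              f"|Y| peak at {f[np.argmax(np.abs(Y))]/1e6:.2f} MHz")

    The parasitic elements added in the PiBVD model modify this ideal response, in particular the phase, which is why the full model fits measured two-port Y-parameters better.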

  7. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  8. Accurate and Accidental Empathy.

    Science.gov (United States)

    Chandler, Michael

    The author offers two controversial criticisms of what are rapidly becoming standard assessment procedures for the measurement of empathic skill. First, he asserts that assessment procedures which attend exclusively to the accuracy with which subjects are able to characterize other people's feelings provide little or no useful information about…

  9. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter 24%, or present as Pb sulfate 18%. Ad

  10. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  11. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  12. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  13. NNLOPS accurate associated HW production

    CERN Document Server

    Astill, William; Re, Emanuele; Zanderighi, Giulia

    2016-01-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  14. Efficient and accurate fragmentation methods.

    Science.gov (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S

    2014-09-16

    Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum

  15. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
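
    For context, the generic relation between directivity and a sampled far-field power pattern can be written as below. This is only the textbook definition with an unspecified quadrature rule, not the spherical-wave-expansion formula derived in the record.

        % Textbook relations only (not the record's spherical-wave-expansion result):
        % directivity from a power pattern sampled at N points on the far-field sphere.
        D = \frac{4\pi\, U_{\max}}{P_{\mathrm{rad}}},
        \qquad
        P_{\mathrm{rad}} = \oint_{4\pi} U(\theta,\phi)\,\mathrm{d}\Omega
        \approx \sum_{i=1}^{N} w_i\, U(\theta_i,\phi_i)

    Here U is the radiation intensity and the weights w_i depend on the chosen sampling scheme; the record's contribution is, in effect, a principled choice of samples and weights.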

  16. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  17. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  18. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  19. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  20. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
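
    The comparison described above can be reproduced in miniature: the sketch below estimates the frequency and amplitude of a cosine whose frequency does not fall on an FFT bin, first from the nearest FFT bin and then from an explicit rectangle-rule Fourier integral evaluated on a fine frequency grid. It only illustrates the two approaches; the record itself uses FFTW3 and its own explicit-integral implementation, and the signal parameters here are assumptions.

        import numpy as np

        # Minimal sketch contrasting FFT peak estimates with an explicit rectangle-rule
        # Fourier integral evaluated on a fine frequency grid.
        fs, T = 100.0, 10.0                       # sample rate (Hz), duration (s)
        t = np.arange(0, T, 1 / fs)
        f0, a0 = 3.37, 2.0                        # true frequency not on an FFT bin
        x = a0 * np.cos(2 * np.pi * f0 * t)

        # FFT estimate: nearest bin
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        k = np.argmax(np.abs(X[1:])) + 1
        fft_f, fft_a = freqs[k], 2 * np.abs(X[k]) / len(x)

        # Explicit integral (rectangle rule) on a fine frequency grid
        fgrid = np.arange(3.0, 4.0, 0.001)
        coeffs = np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) / len(x) for f in fgrid])
        j = np.argmax(np.abs(coeffs))
        ei_f, ei_a = fgrid[j], 2 * np.abs(coeffs[j])

        print(f"true  f={f0:.3f} Hz a={a0:.3f}")
        print(f"FFT   f={fft_f:.3f} Hz a={fft_a:.3f}")
        print(f"EI    f={ei_f:.3f} Hz a={ei_a:.3f}")

    The FFT estimate is limited to the bin spacing 1/T and suffers scalloping loss in amplitude, whereas the explicit integral can be evaluated arbitrarily close to the true peak, which is the kind of difference the paper quantifies.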

  1. Pendant bubble method for an accurate characterization of superhydrophobic surfaces.

    Science.gov (United States)

    Ling, William Yeong Liang; Ng, Tuck Wah; Neild, Adrian

    2011-12-06

    The commonly used sessile drop method for measuring contact angles and surface tension suffers from errors on superhydrophobic surfaces. This occurs from unavoidable experimental error in determining the vertical location of the liquid-solid-vapor interface due to a camera's finite pixel resolution, thereby necessitating the development and application of subpixel algorithms. We demonstrate here the advantage of a pendant bubble in decreasing the resulting error prior to the application of additional algorithms. For sessile drops to attain an equivalent accuracy, the pixel count would have to be increased by 2 orders of magnitude.
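
    To see why a single-pixel uncertainty in the baseline matters so much on superhydrophobic surfaces, the sketch below uses a crude spherical-profile approximation of a sessile drop and shifts the assumed baseline by one pixel. The profile radius is an arbitrary assumption and the geometry ignores gravity, so the numbers only illustrate the sensitivity, not the paper's error analysis.

        import numpy as np

        # Minimal sketch (spherical-profile approximation): sensitivity of an apparent
        # sessile-drop contact angle to a one-pixel error in the detected vertical
        # position of the liquid-solid-vapor baseline.
        R_px = 400.0                 # drop profile radius in pixels (assumed)

        def apparent_angle_deg(y_px):
            """Contact angle of a spherical profile cut by a baseline y_px above its bottom."""
            h = 2 * R_px - y_px                      # cap height above the baseline
            a = np.sqrt(2 * R_px * y_px - y_px**2)   # contact radius at the baseline
            return np.degrees(2 * np.arctan2(h, a))

        for y_true in (2.0, 10.0, 50.0):             # true baseline offsets in pixels
            err = apparent_angle_deg(y_true + 1) - apparent_angle_deg(y_true)
            print(f"baseline at {y_true:5.1f} px: angle {apparent_angle_deg(y_true):6.2f} deg, "
                  f"+1 px error shifts it by {err:+.2f} deg")

    The error grows rapidly as the apparent angle approaches 180 degrees, which is the regime where the pendant bubble configuration is argued to be advantageous.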

  2. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  3. Optimal strategies for throwing accurately

    CERN Document Server

    Venkadesan, Madhusudhan

    2010-01-01

    Accuracy of throwing in games and sports is governed by how errors at projectile release are propagated by flight dynamics. To address the question of what governs the choice of throwing strategy, we use a simple model of throwing with an arm modelled as a hinged bar of fixed length that can release a projectile at any angle and angular velocity. We show that the amplification of deviations in launch parameters from a one parameter family of solution curves is quantified by the largest singular value of an appropriate Jacobian. This allows us to predict a preferred throwing style in terms of this singular value, which itself depends on target location and the target shape. Our analysis also allows us to characterize the trade-off between speed and accuracy despite not including any effects of signal-dependent noise. Using nonlinear calculations for propagating finite input-noise, we find that an underarm throw to a target leads to an undershoot, but an overarm throw does not. Finally, we consider the limit of...
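
    A minimal numerical version of the amplification measure described above can be sketched as follows: a point projectile is released from the tip of a hinged bar, the landing range is differentiated numerically with respect to the release angle and angular speed, and the largest singular value of the resulting Jacobian is taken as the error amplification. The arm length, shoulder height and angular speed are assumptions, and the sketch omits everything the paper adds beyond this toy setup.

        import numpy as np

        # Toy hinged-bar throw: release at arm angle phi (from straight down) with
        # angular speed omega; amplification = largest singular value of the Jacobian
        # of the landing range with respect to (phi, omega).
        g, L, H = 9.81, 1.0, 1.5          # gravity, arm length, shoulder height (m)

        def landing_range(phi, omega):
            x0, y0 = L * np.sin(phi), H - L * np.cos(phi)        # release point
            vx, vy = omega * L * np.cos(phi), omega * L * np.sin(phi)  # release velocity
            t = (vy + np.sqrt(vy**2 + 2 * g * y0)) / g           # time to reach y = 0
            return x0 + vx * t

        def amplification(phi, omega, eps=1e-6):
            d_phi = (landing_range(phi + eps, omega) - landing_range(phi - eps, omega)) / (2 * eps)
            d_om  = (landing_range(phi, omega + eps) - landing_range(phi, omega - eps)) / (2 * eps)
            J = np.array([[d_phi, d_om]])
            return np.linalg.svd(J, compute_uv=False)[0]

        for phi_deg in (30, 60, 90, 120):
            phi = np.radians(phi_deg)
            print(f"release angle {phi_deg:3d} deg: amplification {amplification(phi, 5.0):.3f}")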

  4. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  6. Understanding the Code: keeping accurate records.

    Science.gov (United States)

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.

  7. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
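
    A toy version of composition-based classification, in the spirit of (but much simpler than) PhyloPythia, is sketched below: fragments are represented by normalized k-mer frequency vectors and assigned to the training clade with the nearest centroid. The sequences and clade names are placeholders; a real classifier is trained on complete genomes and uses a proper multi-class learner.

        from collections import Counter
        from itertools import product

        # Toy composition-based classifier: k-mer frequency vectors + nearest centroid.
        K = 4
        KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

        def composition(seq):
            counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
            total = sum(counts[k] for k in KMERS) or 1
            return [counts[k] / total for k in KMERS]

        def centroid(vectors):
            return [sum(col) / len(vectors) for col in zip(*vectors)]

        def classify(fragment, centroids):
            vec = composition(fragment)
            def dist(clade):
                return sum((a - b) ** 2 for a, b in zip(vec, centroids[clade]))
            return min(centroids, key=dist)

        training = {   # placeholder "genomes" for two hypothetical clades
            "clade_A": ["ATATATGCGCATATAT" * 20, "ATGCATATATATGCAT" * 20],
            "clade_B": ["GGCCGGCCGGCGGCCG" * 20, "GCGGCCGGCCGGGCCG" * 20],
        }
        centroids = {clade: centroid([composition(s) for s in seqs])
                     for clade, seqs in training.items()}
        print(classify("ATATATATGCATATATATAT" * 10, centroids))   # expected: clade_A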

  8. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The direct fabrication of accurate prototypes from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is achieved by introducing a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. A Zero Phase Error Tracking Controller (ZPETC) is used to eliminate the single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.

  9. Quantum-Accurate Molecular Dynamics Potential for Tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Mitchell; Thompson, Aidan P.

    2017-03-01

    The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.

  10. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  11. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    Abstract This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for the NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using non-overlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.

  12. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.

  13. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  14. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...

  15. Accurate colorimetric feedback for RGB LED clusters

    Science.gov (United States)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.

  16. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  17. Synthesizing Accurate Floating-Point Formulas

    OpenAIRE

    Ioualalen, Arnault; Martel, Matthieu

    2013-01-01

    International audience; Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...

  18. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  19. Accurate Control of Josephson Phase Qubits

    Science.gov (United States)

    2016-04-14

    Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang. Physical Review B 68, 224518 (2003). Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University.

  20. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  1. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. Electric analysis showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  2. Accurate integration of forced and damped oscillators

    OpenAIRE

    García Alonso, Fernando Luis; Cortés Molina, Mónica; Villacampa, Yolanda; Reyes Perales, José Antonio

    2016-01-01

    The new methods accurately integrate forced and damped oscillators. A family of analytical functions is introduced known as T-functions which are dependent on three parameters. The solution is expressed as a series of T-functions calculating their coefficients by means of recurrences which involve the perturbation function. In the T-functions series method the perturbation parameter is the factor in the local truncation error. Furthermore, this method is zero-stable and convergent. An applica...

  3. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
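
    The core numerical step, PCA of a positional correlation matrix computed over an ensemble of structures, can be sketched as below. The sketch uses plain mean-centering of random placeholder coordinates rather than the paper's maximum likelihood superposition, so it only illustrates the bookkeeping, not the statistical treatment.

        import numpy as np

        # Minimal sketch: principal components of inter-atomic positional correlations
        # from an ensemble of structures (random placeholders standing in for e.g.
        # NMR models or MD snapshots).
        rng = np.random.default_rng(0)
        n_models, n_atoms = 50, 30
        ensemble = rng.normal(size=(n_models, n_atoms, 3))       # (models, atoms, xyz)

        # Remove the ensemble-average structure (a crude stand-in for superposition).
        centered = ensemble - ensemble.mean(axis=0)

        # Covariance of the flattened 3N coordinates, converted to a correlation matrix.
        X = centered.reshape(n_models, n_atoms * 3)
        cov = X.T @ X / (n_models - 1)
        d = np.sqrt(np.diag(cov))
        corr = cov / np.outer(d, d)

        # Principal components: eigenvectors of the correlation matrix.
        evals, evecs = np.linalg.eigh(corr)
        order = np.argsort(evals)[::-1]
        print("fraction of correlation captured by the first 3 modes:",
              evals[order][:3].sum() / evals.sum())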

  4. Accurate finite element modeling of acoustic waves

    Science.gov (United States)

    Idesman, A.; Pham, D.

    2014-07-01

    In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with the time increments close to the stability limit. However, even in this case and the use of the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and the suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes the stage of basic computations and the filtering stage. This technique allows accurate convergent results at mesh refinement as well as significantly reduces the numerical anisotropy of solutions. We should mention that the approach suggested is very general and can be equally applied to any loading as well as for any space-discretization technique and any explicit or implicit time-integration method.

  5. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
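
    The simple first-order correction that the new thermometer is compared against can be written as T_fluid ≈ T_indicated + τ·dT_indicated/dt. The sketch below applies it to a synthetic first-order response; the time constant and temperatures are assumed values, and the paper's inverse space marching method is not reproduced here.

        import numpy as np

        # First-order inertia correction of a thermometer reading (illustrative values).
        tau, T_fluid = 8.0, 100.0                 # s, degC (assumptions)
        t = np.linspace(0.0, 60.0, 601)
        T_ind = 20.0 + (T_fluid - 20.0) * (1 - np.exp(-t / tau))   # ideal first-order response

        dTdt = np.gradient(T_ind, t)              # numerical derivative of the indication
        T_est = T_ind + tau * dTdt                 # corrected (estimated fluid) temperature

        print(f"indicated at t=10 s: {np.interp(10, t, T_ind):6.2f} degC")
        print(f"corrected at t=10 s: {np.interp(10, t, T_est):6.2f} degC (true {T_fluid} degC)")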

  6. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  7. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, the improving quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  8. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  9. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on the crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying to cases that demand high solution precision.

  10. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
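
    The linear fitting mentioned above amounts to extracting the leading exponent from the slope of ln χ versus ln(βc − β). The sketch below does this for synthetic power-law data with an assumed γ and βc; it ignores the subleading correction and contains nothing specific to the hierarchical model.

        import numpy as np

        # Generic power-law fit for a leading critical exponent (illustrative values).
        gamma_true, beta_c = 1.299, 1.179          # assumed, not the model's actual values
        beta = beta_c - np.logspace(-6, -2, 40)
        chi = 2.3 * (beta_c - beta) ** (-gamma_true)        # ideal leading behaviour

        slope, intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
        print(f"fitted gamma = {-slope:.6f}")                # recovers ~1.299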

  11. Accurate Stellar Parameters for Exoplanet Host Stars

    Science.gov (United States)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  12. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  13. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41±0.17 µm and a measurement of 1.58±0.08 µm from a scanning electron microscope image of a cross section.
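
    For a rectilinear pattern, the 0° and 90° Radon projections reduce to row and column sums, and the centroid of a projected feature locates its axis to a fraction of a pixel even in a noisy image. The sketch below demonstrates this on a synthetic image; it stands in for, but does not reproduce, the registration procedure used with the x-ray data.

        import numpy as np

        # Locate a vertical line feature to sub-pixel precision via a column-sum
        # (90-degree Radon) projection of a noisy synthetic image.
        rng = np.random.default_rng(1)
        img = rng.normal(0.0, 0.2, size=(256, 256))
        img[:, 100:108] += 1.0                 # vertical line feature at columns 100-107

        col_proj = img.sum(axis=0)             # 90-degree Radon projection
        col_proj -= np.median(col_proj)        # remove the background level
        mask = col_proj > 0.5 * col_proj.max() # keep only the feature peak

        cols = np.arange(img.shape[1])
        centroid = (cols[mask] * col_proj[mask]).sum() / col_proj[mask].sum()
        print(f"feature centroid at column {centroid:.2f} (true centre 103.5)")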

  14. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  15. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  16. Accurate taxonomic assignment of short pyrosequencing reads.

    Science.gov (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
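
    The assignment criterion can be illustrated without the suffix-array machinery: score every taxonomy node by the precision and recall of its leaf set against the leaves matching the read, and pick the node with the best combined score (an F-measure is used here). The toy taxonomy below is purely illustrative.

        # Toy version of the precision/recall node-assignment criterion.
        taxonomy = {                       # node -> set of leaf taxa below it
            "root":    {"A", "B", "C", "D"},
            "clade_1": {"A", "B"},
            "clade_2": {"C", "D"},
            "A": {"A"}, "B": {"B"}, "C": {"C"}, "D": {"D"},
        }

        def best_node(matching_leaves):
            best, best_f = None, -1.0
            for node, leaves in taxonomy.items():
                hits = leaves & matching_leaves
                if not hits:
                    continue
                precision = len(hits) / len(leaves)
                recall = len(hits) / len(matching_leaves)
                f = 2 * precision * recall / (precision + recall)
                if f > best_f:
                    best, best_f = node, f
            return best, best_f

        print(best_node({"A", "B"}))        # -> ('clade_1', 1.0), not the larger 'root' clade
        print(best_node({"A", "B", "C"}))   # -> 'root'; no smaller clade covers all three matches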

  17. An accurate potential energy curve for helium based on ab initio calculations

    Science.gov (United States)

    Janzen, A. R.; Aziz, R. A.

    1997-07-01

    Korona, Williams, Bukowski, Jeziorski, and Szalewicz [J. Chem. Phys. 106, 1 (1997)] constructed a completely ab initio potential for He2 by fitting their calculations using infinite order symmetry adapted perturbation theory at intermediate range, existing Green's function Monte Carlo calculations at short range and accurate dispersion coefficients at long range to a modified Tang-Toennies potential form. The potential with retardation added to the dipole-dipole dispersion is found to predict accurately a large set of microscopic and macroscopic experimental data. The potential with a significantly larger well depth than other recent potentials is judged to be the most accurate characterization of the helium interaction yet proposed.

  18. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on t

  19. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Science.gov (United States)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.

  20. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  1. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  2. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and the Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can in principle be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  3. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    Science.gov (United States)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  4. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer then provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which at some wavelengths contributes substantially to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
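
    The radiative-transfer step described above can be sketched as follows, assuming the per-layer zenith opacities have already been obtained from an absorption model such as Liebe's (not implemented here); the plane-parallel, Rayleigh-Jeans form used in the sketch is a standard simplification, not necessarily the exact formulation used at the GBT.

```python
import math

def zenith_opacity_and_brightness(layer_opacities, layer_temps):
    """Plane-parallel, zenith-looking radiative transfer (Rayleigh-Jeans form).

    layer_opacities: per-layer zenith opacities tau_i (nepers), ordered from
                     the ground upward, assumed supplied by an absorption model.
    layer_temps:     physical temperature of each layer (K).
    Returns (total opacity in nepers, atmospheric brightness temperature in K
    as seen from the ground).
    """
    total_tau = sum(layer_opacities)
    t_b = 0.0
    tau_below = 0.0  # opacity between the observer and the current layer
    for tau_i, temp_i in zip(layer_opacities, layer_temps):
        # Emission of this layer, attenuated by all layers below it.
        t_b += temp_i * (1.0 - math.exp(-tau_i)) * math.exp(-tau_below)
        tau_below += tau_i
    return total_tau, t_b

# Example with three coarse, made-up layers.
print(zenith_opacity_and_brightness([0.02, 0.01, 0.005], [280.0, 250.0, 220.0]))
```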

  5. Waste Characterization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vigil-Holterman, Luciana R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Naranjo, Felicia Danielle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-02

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.

  6. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information can have negative effects, especially when it is delayed. With accurate information, travelers all prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions; travelers therefore make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillations and the gap from the system equilibrium.
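
    The boundedly rational choice rule quoted above is simple enough to state directly in code; the sketch below, which assumes only two routes and externally supplied travel-time feedback, is illustrative and does not reproduce the full traffic simulation.

```python
import random

def choose_route(travel_time_a, travel_time_b, br_threshold):
    """Boundedly rational route choice between two routes.

    If the reported travel-time difference is within the threshold BR, the
    routes are treated as equivalent and chosen with equal probability;
    otherwise the traveler picks the faster-looking route.
    """
    if abs(travel_time_a - travel_time_b) <= br_threshold:
        return random.choice(["A", "B"])
    return "A" if travel_time_a < travel_time_b else "B"

counts = {"A": 0, "B": 0}
for _ in range(10000):
    counts[choose_route(20.0, 21.0, br_threshold=2.0)] += 1
print(counts)  # roughly 50/50, since |20 - 21| <= BR
```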

  7. Accurate lineshape spectroscopy and the Boltzmann constant.

    Science.gov (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history of delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown of the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.

  8. MEMS accelerometers in accurate mount positioning systems

    Science.gov (United States)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems, or via real-time astrometric solutions based on the acquired images. MEMS-based systems, by contrast, are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Out of the box, these sensors yield raw output accurate only to a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although the attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
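
    One common way to exploit the spherical constraint mentioned above is to fit per-axis bias and scale factors so that static readings have magnitude g. The sketch below, using scipy's least_squares on synthetic data, only illustrates that idea; the paper's actual calibration procedure (including cylindrical constraints and misalignment terms) is not reproduced, and all names and values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_accelerometer(raw_samples, g=9.81):
    """Fit per-axis bias and scale so static readings satisfy |a| = g.

    raw_samples: (N, 3) raw accelerometer readings taken in many static
    orientations. A minimal sketch of the spherical-constraint idea only.
    """
    raw = np.asarray(raw_samples, dtype=float)

    def residuals(params):
        bias, scale = params[:3], params[3:]
        corrected = (raw - bias) * scale
        return np.linalg.norm(corrected, axis=1) - g

    x0 = np.concatenate([np.zeros(3), np.ones(3)])
    fit = least_squares(residuals, x0)
    return fit.x[:3], fit.x[3:]

# Synthetic check: known bias/scale recovered from noisy static readings.
rng = np.random.default_rng(0)
true_bias, true_scale = np.array([0.3, -0.2, 0.1]), np.array([1.02, 0.98, 1.01])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = dirs * 9.81 / true_scale + true_bias + rng.normal(scale=0.01, size=(200, 3))
print(calibrate_accelerometer(raw))   # bias ~ (0.3, -0.2, 0.1), scale ~ (1.02, 0.98, 1.01)
```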

  9. Towards Accurate Modeling of Moving Contact Lines

    CERN Document Server

    Holmgren, Hanna

    2015-01-01

    A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

  10. Accurate upper body rehabilitation system using kinect.

    Science.gov (United States)

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results show a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.

  11. Fast and accurate exhaled breath ammonia measurement.

    Science.gov (United States)

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.

  12. Noninvasive hemoglobin monitoring: how accurate is enough?

    Science.gov (United States)

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  13. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
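
    Once a smooth path and the ensemble averages of ∂U/∂λ along it are available, the thermodynamic-integration step amounts to a one-dimensional quadrature. The sketch below assumes those averages are given and uses a plain trapezoidal rule; it is a generic TI illustration, not the dihedral-restraint path construction of the paper, and the example profile is invented.

```python
import numpy as np

def free_energy_difference(lambdas, mean_dU_dlambda):
    """Thermodynamic integration: dF = integral over [0, 1] of <dU/dlambda>."""
    lam = np.asarray(lambdas, dtype=float)
    dU = np.asarray(mean_dU_dlambda, dtype=float)
    # Trapezoidal quadrature of the sampled ensemble averages.
    return float(np.sum(0.5 * (dU[1:] + dU[:-1]) * np.diff(lam)))

# Example with synthetic averages on an 11-point lambda grid.
lam = np.linspace(0.0, 1.0, 11)
dU = 5.0 * lam**2 - 2.0          # placeholder <dU/dlambda> profile
print(free_energy_difference(lam, dU))  # ~ -0.33 (exact value is -1/3)
```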

  14. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power of the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  15. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
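
    For reference, the direct O(S)-per-pixel filter that the fast algorithm approximates can be written compactly as below; this is a plain textbook bilateral filter on a grayscale image in [0, 1], not the fast approximation proposed in the paper.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: Gaussian spatial kernel times Gaussian range
    kernel, O(S) per pixel with S = (2*radius + 1)**2."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_kernel = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * range_kernel
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Smooths noise while preserving the step edge in this toy image.
test = np.hstack([np.zeros((16, 16)), np.ones((16, 16))])
test += np.random.default_rng(1).normal(scale=0.05, size=test.shape)
print(bilateral_filter(test).round(2)[8, 14:18])  # stays near 0 then near 1 across the edge
```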

  16. Accurate thermoplasmonic simulation of metallic nanoparticles

    Science.gov (United States)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially when many incident excitations are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.

  17. Accurate Atom Counting in Mesoscopic Ensembles

    CERN Document Server

    Hume, D B; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K

    2013-01-01

    Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.

  18. Accurate Atom Counting in Mesoscopic Ensembles

    Science.gov (United States)

    Hume, D. B.; Stroescu, I.; Joos, M.; Muessel, W.; Strobel, H.; Oberthaler, M. K.

    2013-12-01

    Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.

  19. Accurate atom counting in mesoscopic ensembles.

    Science.gov (United States)

    Hume, D B; Stroescu, I; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K

    2013-12-20

    Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.

  20. Accurate measurement of liquid transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding use of any pressure and flow sensors or any theoretical estimations the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404

  1. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven

    2012-07-01

    Background: Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow a large number of gene products to be perturbed in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results: We imaged checkered CA that express two distinct fluorescent proteins and segmented the images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties. Conclusions: Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screening. We have demonstrated that a washing step after seeding enhances CA quality for HeLa cells but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  3. Important Nearby Galaxies without Accurate Distances

    Science.gov (United States)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  4. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and per dose delivered to the film.

  5. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; (Tim Cheng, Kwang-Ting

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in the two challenging while common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interests (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  6. Accurate simulation of optical properties in dyes.

    Science.gov (United States)

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them.

  7. Automatic classification and accurate size measurement of blank mask defects

    Science.gov (United States)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process an arduous task, in addition to being sensitive to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools. Use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  8. Data fusion for accurate microscopic rough surface metrology.

    Science.gov (United States)

    Chen, Yuhang

    2016-06-01

    Data fusion for rough surface measurement and evaluation was analyzed on simulated datasets, one with higher density (HD) but lower accuracy and the other with lower density (LD) but higher accuracy. Experimental verifications were then performed on laser scanning microscopy (LSM) and atomic force microscopy (AFM) characterizations of surface areal roughness artifacts. The results demonstrated that the fusion based on Gaussian process models is effective and robust under different measurement biases and noise strengths. All the amplitude, height distribution, and spatial characteristics of the original sample structure can be precisely recovered, with better metrological performance than any individual measurements. As for the influencing factors, the HD noise has a relatively weaker effect as compared with the LD noise. Furthermore, to enable an accurate fusion, the ratio of LD sampling interval to surface autocorrelation length should be smaller than a critical threshold. In general, data fusion is capable of enhancing the nanometrology of rough surfaces by combining efficient LSM measurement and down-sampled fast AFM scan. The accuracy, resolution, spatial coverage and efficiency can all be significantly improved. It is thus expected to have potential applications in development of hybrid microscopy and in surface metrology.
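
    A minimal illustration of the fusion idea, assuming one dense-but-noisy profile and one sparse-but-accurate profile of the same surface, is to fit a single Gaussian process with per-point noise levels, as sketched below with scikit-learn. The exact model, kernels and weighting of the cited work are not reproduced, and all variable names and values are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fuse_profiles(x_hd, z_hd, x_ld, z_ld, noise_hd=0.05, noise_ld=0.005):
    """Fuse a dense-but-noisy profile with a sparse-but-accurate one by
    fitting one Gaussian process whose per-sample noise reflects each source."""
    x = np.concatenate([x_hd, x_ld])[:, None]
    z = np.concatenate([z_hd, z_ld])
    alpha = np.concatenate([np.full(len(x_hd), noise_hd**2),
                            np.full(len(x_ld), noise_ld**2)])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=alpha,
                                  normalize_y=True)
    gp.fit(x, z)
    return gp

rng = np.random.default_rng(2)
surface = lambda x: np.sin(x) + 0.3 * np.sin(3.1 * x)   # synthetic "true" surface
x_hd = np.linspace(0, 10, 200)   # dense sampling, noisy heights
x_ld = np.linspace(0, 10, 20)    # sparse sampling, accurate heights
gp = fuse_profiles(x_hd, surface(x_hd) + rng.normal(scale=0.05, size=200),
                   x_ld, surface(x_ld) + rng.normal(scale=0.005, size=20))
print(gp.predict(np.array([[2.5], [7.5]])))   # fused height estimates
```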

  9. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    Science.gov (United States)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of the laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. The sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse and fine grained granites were prepared. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. Results of the measurements show that the reduction of aperture differs from part to part due to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
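
    The spectral-analysis step can be illustrated with a short FFT-based periodogram of a single roughness profile, as below; the 2.5 μm spacing matches the CLSM sampling quoted above, while the synthetic profile and the particular PSD normalization are only assumptions for the sketch.

```python
import numpy as np

def roughness_spectrum(profile, spacing=2.5e-6):
    """One-sided power spectral density of a surface roughness profile.

    profile: heights sampled along a scan line (m).
    spacing: sampling interval (m), e.g. the 2.5 um CLSM step.
    """
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.mean()          # remove the mean height
    n = len(profile)
    spectrum = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(n, d=spacing)       # spatial frequencies (1/m)
    psd = (np.abs(spectrum)**2) * spacing / n   # simple periodogram estimate
    return freqs, psd

# Synthetic rough line: long-wavelength waviness plus fine-scale noise.
x = np.arange(4096) * 2.5e-6
line = 1e-6 * np.sin(2 * np.pi * x / 5e-4)
line += 5e-8 * np.random.default_rng(3).normal(size=x.size)
freqs, psd = roughness_spectrum(line)
print(freqs[np.argmax(psd[1:]) + 1])   # ~2000 1/m: the dominant low frequency
```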

  10. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in engineering calculations of non-reciprocal optical fields. Nevertheless, only an approximate Jones matrix of practical Faraday rotators has been presented up to now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain exact results for how a practical Faraday rotator transforms polarized light, which paves the way for accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.

  11. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  12. Spherical near-field antenna measurements — The most accurate antenna measurement technique

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2016-01-01

    The spherical near-field antenna measurement technique combines several advantages and generally constitutes the most accurate technique for experimental characterization of radiation from antennas. This paper/presentation discusses these advantages, briefly reviews the early history and present status, and addresses future challenges for spherical near-field antenna measurements, in particular from the viewpoint of the DTU-ESA Spherical Near-Field Antenna Test Facility.

  13. Accurate source location from P waves scattered by surface topography

    Science.gov (United States)

    Wang, N.; Shen, Y.

    2015-12-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (> 100 m). In this study, we explore the use of P-coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example. The grid search method is combined with a 3D strain Green's tensor database method to improve the search efficiency as well as the quality of the hypocenter solution. The strain Green's tensor is calculated by a 3D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are then obtained based on the least-squares misfit between the observed and predicted P and P-coda waves. A 95% confidence interval of the solution is also provided as a posterior error estimate. We find that the scattered waves are mainly due to topography, in comparison with random velocity heterogeneity characterized by the von Kármán-type power spectral density function. When only P wave data are used, the best solution is offset from the real source location, mostly in the vertical direction. The incorporation of P coda significantly improves solution accuracy and reduces its uncertainty. The solution remains robust with a range of random noises in the data, un-modeled random velocity heterogeneities, and uncertainties in moment tensors that we tested.
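
    The grid-search core of such a location scheme reduces to evaluating a waveform misfit at every trial location and keeping the minimum, as in the hedged sketch below; the forward model here is a toy stand-in for a strain Green's tensor database, and the receiver geometry and names are invented for illustration.

```python
import numpy as np

def locate_source(candidates, observed, predict_waveform):
    """Grid-search location by least-squares misfit between observed and
    predicted P / P-coda traces. `predict_waveform` maps a trial location to
    synthetic traces of the same shape as `observed` (hypothetical here)."""
    best_loc, best_misfit = None, np.inf
    for loc in candidates:
        misfit = np.sum((observed - predict_waveform(loc)) ** 2)
        if misfit < best_misfit:
            best_loc, best_misfit = loc, misfit
    return best_loc, best_misfit

# Toy check: Gaussian pulses whose arrival times depend on distance to two
# receivers on the x-axis; the true source at (4, 3, 0) is recovered.
t = np.linspace(0.0, 10.0, 500)
receivers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])

def toy_forward(loc):
    dists = np.linalg.norm(receivers - loc, axis=1)
    return np.exp(-((t[None, :] - dists[:, None] / 3.0) ** 2) / 0.05)

obs = toy_forward(np.array([4.0, 3.0, 0.0]))
grid = [np.array([x, y, 0.0]) for x in range(8) for y in range(8)]
print(locate_source(grid, obs, toy_forward)[0])   # -> [4. 3. 0.]
```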

  14. Accurate absolute measurement of trapped Cs atoms in a MOT

    Energy Technology Data Exchange (ETDEWEB)

    Talavera O, M.; Lopez R, M.; Carlos L, E. de [Division de Tiempo y Frecuencia, Centro Nacional de Metrologia, CENAM, km 4.5 Carretera a los Cues, El Marques, 76241 Queretaro (Mexico); Jimenez S, S. [Centro de Investigacion y Estudios Avanzados del lPN, Unidad Queretaro, Libramiento Norponiente No. 2000, Fracc. Real de Juriquilla, 76230 Queretaro (Mexico)

    2007-07-01

    A Cs-133 Magneto-Optical Trap (MOT) has been developed at the Time and Frequency Division of the Centro Nacional de Metrologia, CENAM, in Mexico. This MOT is part of a primary frequency standard based on ultra-cold Cs atoms, called the CsF-1 clock, under development at CENAM. In this Cs MOT, we use the standard (σ+ - σ-) configuration of 4 horizontal and 2 vertical laser beams, 1.9 cm in diameter, with 5 mW each. We use an 852 nm, 5 mW DBR laser as master laser, which is stabilized by saturation spectroscopy. The emission linewidth of the master laser is 1 MHz. In order to amplify the light of the master laser, a 50 mW, 852 nm AlGaAs laser is used as slave laser. This slave laser is stabilized by the light injection technique. A 12 MHz red shift of the light is performed by two double passes through two Acousto-Optic Modulators (AOMs). The optical part of CENAM's MOT is very robust against mechanical vibration, acoustic noise and temperature changes in our laboratory, because none of our diode lasers uses an extended cavity to reduce the linewidth. In this paper, we report results of our MOT characterization as a function of several operation parameters, such as the intensity of the laser beams, the laser beam diameter, the red shift of the light, and the gradient of the magnetic field. We also report an accurate absolute measurement of the number of Cs atoms trapped in our Cs MOT. We found up to 6 x 10^7 Cs atoms trapped in our MOT, measured with an uncertainty no greater than 6.4%. (Author)

  15. An accurately delineated Permian-Triassic Boundary in continental successions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Permian-Triassic Boundary Stratigraphic Set (PTBST), characteristic of the GSSP section at Meishan and widespread in marine Permian-Triassic Boundary (PTB) sequences of South China, is used to trace and recognize the PTB in a continental sequence at Chahe (Beds 66f-68c). Diversified Permian plant fossils extend to the PTBST, and a few relicts survived above that level. Sporomorphs are dominated by fern spores of Permian nature below the PTBST, above which they are replaced by gymnosperm pollen of Triassic aspect. In the nearby Zhejue Section, the continental PTBST is characterized by the fungal 'spike' recorded in many places throughout the world. The boundary claybeds (66f and 68a,c) of the PTBST are composed of mixed illite-montmorillonite layers analogous with those at Meishan. They contain volcanogenic minerals such as β quartz and zircon. U/Pb dating of the upper claybed gives ages of 247.5 and 252.6 Ma for Beds 68a and 68c respectively, averaging 250 Ma. In contrast to the situation in Xinjiang and South Africa, the sediment sequence of the Permian-Triassic transition in the Chahe section (Beds 56-80) becomes finer upward. Shallowing and coarsening upward is not, therefore, characteristic of the Permian-Triassic transition everywhere. The occurrence of relicts of the Gigantopteris Flora in the Kayitou Fm. indicates that, unlike most marine biota, relicts of this paleophytic flora survived into the earliest Triassic. It is concluded that Bed 67 at Chahe corresponds to Bed 27 at Meishan, and that the PTB should be put within the 60-cm-thick Bed 67b④, tentatively placed here at its base. This is the most accurate correlation of the PTB in continental facies with that in the marine GSSP.

  16. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang

    2012-01-01

    In most photoacoustic (PA) measurements, variations in the speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses refracted ray paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources is accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...

  17. Use of optical coherence tomography for accurate characterization of atherosclerosis

    Directory of Open Access Journals (Sweden)

    John Coletta

    2010-02-01

    a signal source to provide vascular cross-sectional imaging with definition far superior to any other available modality. With a spatial resolution of up to 10 μm, OCT provides 20-fold higher resolution than intravascular ultrasound (IVUS), currently the most used modality for intra-coronary imaging. OCT has the capacity to provide invaluable insight into the various phases of atherosclerotic disease and the vascular response to therapeutics. Studies have shown the ability of OCT to detect arterial structures and assist in the determination of different histological constituents. Its capacity to distinguish different grades of atherosclerotic changes and the various types of plaques, as compared to histology, has recently been demonstrated with acceptable intra-observer and inter-observer correlations for these findings. OCT provides unrivaled real-time in vivo endovascular resolution, which has been exploited to assess vascular structures and the response to device deployment. While depth remains a limitation for OCT plaque characterization beyond 2 mm, near-histological resolution can be achieved within the first millimeter of the vessel wall, allowing unique assessment of fibrous cap characteristics and thickness. In addition, neointimal coverage, para-strut tissue patterns and stent apposition can now be scrutinized for individual struts on the micron scale, the so-called strut-level analysis. OCT has propelled intravascular imaging into micron-level in vivo vascular analysis and is expected to soon become a valuable and indispensable tool for cardiologists in both clinical and research applications.

  18. Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator

    DEFF Research Database (Denmark)

    Xin, Zhen; Yoon, Changwoo; Zhao, Rende

    2016-01-01

    -signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem, an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for sinusoidal signals, which gives the AMI-QSG a more accurate First-Order-System (FOS) magnitude characteristic than the SOGI-QSG. The parameter design process...

  19. Fabricating an Accurate Implant Master Cast: A Technique Report.

    Science.gov (United States)

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F

    2015-12-01

    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  20. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  1. Multi-objective optimization of inverse planning for accurate radiotherapy

    Institute of Scientific and Technical Information of China (English)

    曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴

    2011-01-01

    The multi-objective optimization of inverse planning based on the Pareto solution set, according to the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment pl

  2. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H → WW → l⁺l⁻ + missing p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  3. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  4. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    Science.gov (United States)

    This article has been reviewed by Thanai ... rhinitis known as hay fever is caused by pollen carried in the air during different times of ...

  5. Digital system accurately controls velocity of electromechanical drive

    Science.gov (United States)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  6. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  7. Requirements for accurate estimation of anisotropic material parameters by magnetic resonance elastography: A computational study.

    Science.gov (United States)

    Tweten, D J; Okamoto, R J; Bayly, P V

    2017-01-17

    To establish the essential requirements for characterization of a transversely isotropic material by magnetic resonance elastography (MRE). Three methods for characterizing nearly incompressible, transversely isotropic (ITI) materials were used to analyze data from closed-form expressions for traveling waves, finite-element (FE) simulations of waves in homogeneous ITI material, and FE simulations of waves in heterogeneous material. Key properties are the complex shear modulus μ2, shear anisotropy ϕ=μ1/μ2-1, and tensile anisotropy ζ=E1/E2-1. Each method provided good estimates of ITI parameters when both slow and fast shear waves with multiple propagation directions were present. No method gave accurate estimates when the displacement field contained only slow shear waves, only fast shear waves, or waves with only a single propagation direction. Methods based on directional filtering are robust to noise and include explicit checks of propagation and polarization. Curl-based methods led to more accurate estimates in low noise conditions. Parameter estimation in heterogeneous materials is challenging for all methods. Multiple shear waves, both slow and fast, with different propagation directions, must be present in the displacement field for accurate parameter estimates in ITI materials. Experimental design and data analysis can ensure that these requirements are met. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
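    The two anisotropy measures named in this record reduce to simple ratios of moduli. Below is a minimal Python sketch (illustrative only, with made-up numerical values; not code from the paper) that evaluates them:

    def shear_anisotropy(mu1, mu2):
        # phi = mu1/mu2 - 1, as defined in the record
        return mu1 / mu2 - 1.0

    def tensile_anisotropy(E1, E2):
        # zeta = E1/E2 - 1, as defined in the record
        return E1 / E2 - 1.0

    # Hypothetical moduli in Pa, chosen only to illustrate the formulas
    mu1, mu2 = 3.0e3, 2.0e3
    E1, E2 = 9.0e3, 6.0e3
    print("phi  =", shear_anisotropy(mu1, mu2))   # 0.5
    print("zeta =", tensile_anisotropy(E1, E2))   # 0.5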

  8. Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization

    Science.gov (United States)

    Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.

    2015-10-01

    The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.
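    As a rough illustration of the kind of automation described here, the following Python sketch loops over a set of potential identifiers, evaluates a property for each, and archives the results as JSON. The evaluation function is a hypothetical placeholder (the repository's actual scripts would drive an atomistic code), so this is a structural sketch only:

    import json

    def evaluate_stacking_fault_energy(potential_id: str, element: str) -> float:
        # Hypothetical placeholder: a real script would run an atomistic
        # simulation with the chosen potential and return the energy.
        raise NotImplementedError

    def run_portfolio(potentials, element, outfile="sfe_results.json"):
        results = {}
        for pid in potentials:
            try:
                results[pid] = evaluate_stacking_fault_energy(pid, element)
            except NotImplementedError:
                results[pid] = None  # record that the calculation is pending
        with open(outfile, "w") as f:
            json.dump({"element": element, "stacking_fault_energy": results}, f, indent=2)
        return results

    run_portfolio(["potential_A", "potential_B"], "Al")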

  9. Mine operating accurate stability control with optical fiber sensing and Bragg grating technology: the BRITE-EURAM STABILOS project

    Science.gov (United States)

    Ferdinand, Pierre; Ferragu, Olivier; Lechien, J. L.; Lescop, B.; Marty-DeWinter, Veronique; Rougeault, S.; Pierre, Guillaume; Renouf, C.; Jarret, Bertrand; Kotrotsios, Georges; Neuman, Victor; Depeursinge, Y.; Michel, J. B.; Van Uffelen, M.; Verbandt, Yves; Voet, Marc R. H.; Toscano, D.

    1994-09-01

    Recent developments of stability control in mines, essentially based on Ge-doped Fiber Bragg Gratings (FBG) are reported including results about the different aspects of the system: accurate characterizations of FBG, sensor network topology and multiplexing method, user interface design and sensor packaging.

  10. Accurate microfour-point probe sheet resistance measurements on small samples

    DEFF Research Database (Denmark)

    Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch Hjorth

    2009-01-01

    We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the “sweet spot,” where sufficiently accurate sheet resistances result, and show that even for very small samples it is feasible to do correction-free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 µm (50 µm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 µm pitch microfour-point probe and assuming a probe alignment accuracy of ±2.5 µm. ©2009 American Institute of Physics
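    For context, the ideal geometric factors for an equidistant collinear four-point probe on an infinite sheet are easy to state; the Python sketch below applies them to the two standard probe configurations. It is an illustration under that infinite-sheet assumption and does not reproduce the dual-configuration, position-error-correcting scheme the record describes:

    import math

    def sheet_resistance_config_A(V, I):
        # Config A: current through pins 1-4, voltage sensed on pins 2-3.
        # R_s = pi/ln(2) * V/I for an infinite sheet.
        return math.pi / math.log(2.0) * V / I

    def sheet_resistance_config_B(V, I):
        # Config B: current through pins 1-3, voltage sensed on pins 2-4.
        # R_s = 2*pi/ln(3) * V/I for an infinite sheet.
        return 2.0 * math.pi / math.log(3.0) * V / I

    # Hypothetical readings; both estimates agree (~45.3 ohm/sq) on a large sample.
    print(sheet_resistance_config_A(V=1.0e-3, I=100e-6))
    print(sheet_resistance_config_B(V=0.7925e-3, I=100e-6))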

  11. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations

    Science.gov (United States)

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-01

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  12. No galaxy left behind: accurate measurements with the faintest objects in the Dark Energy Survey

    CERN Document Server

    Suchyta, E; Aleksić, J; Melchior, P; Jouvel, S; MacCrann, N; Crocce, M; Gaztanaga, E; Honscheid, K; Leistedt, B; Peiris, H V; Ross, A J; Rykoff, E S; Sheldon, E; Abbott, T; Abdalla, F B; Allam, S; Banerji, M; Benoit-Lévy, A; Bertin, E; Brooks, D; Burke, D L; Rosell, A Carnero; Kind, M Carrasco; Carretero, J; Cunha, C E; D'Andrea, C B; da Costa, L N; DePoy, D L; Desai, S; Diehl, H T; Dietrich, J P; Doel, P; Eifler, T F; Estrada, J; Evrard, A E; Flaugher, B; Fosalba, P; Frieman, J; Gerdes, D W; Gruen, D; Gruendl, R A; James, D J; Jarvis, M; Kuehn, K; Kuropatkin, N; Lahav, O; Lima, M; Maia, M A G; March, M; Marshall, J L; Miller, C J; Miquel, R; Neilsen, E; Nichol, R C; Nord, B; Ogando, R; Percival, W J; Reil, K; Roodman, A; Sako, M; Sanchez, E; Scarpine, V; Sevilla-Noarbe, I; Smith, R C; Soares-Santos, M; Sobreira, F; Swanson, M E C; Tarle, G; Thaler, J; Thomas, D; Vikram, V; Walker, A R; Wechsler, R H; Zhang, Y

    2015-01-01

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of stars or galaxies detectable in an imaging survey. We have implemented our proposal in Balrog, a software package which embeds fake objects in real imaging in order to accurately characterize measurement biases. We demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a wide variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for th...

  13. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    Directory of Open Access Journals (Sweden)

    Jinying Jia

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user’s accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user’s accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.

  14. Nonexposure accurate location K-anonymity algorithm in LBS.

    Science.gov (United States)

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.
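    A toy Python sketch of the core idea (not the authors' algorithm): users report only grid-cell IDs, and the anonymizer grows a block of cells around the querying user's cell until at least K reported users are covered, returning that block as the ASR:

    from collections import Counter

    def cloak(grid_reports, my_cell, k, grid_size):
        """grid_reports: iterable of (col, row) cell IDs reported by all users."""
        counts = Counter(grid_reports)
        cx, cy = my_cell
        radius = 0
        while radius <= grid_size:
            cells = [(x, y)
                     for x in range(max(0, cx - radius), min(grid_size, cx + radius + 1))
                     for y in range(max(0, cy - radius), min(grid_size, cy + radius + 1))]
            if sum(counts[c] for c in cells) >= k:
                return cells  # the ASR, expressed purely in cell IDs
            radius += 1
        return None  # not enough users to satisfy K-anonymity

    # Example: 8x8 grid, K = 4, made-up reports
    reports = [(1, 1), (1, 2), (2, 1), (2, 2), (5, 5), (1, 1)]
    print(cloak(reports, my_cell=(1, 1), k=4, grid_size=8))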

  15. How accurate are the nonlinear chemical Fokker-Planck and chemical Langevin equations?

    Science.gov (United States)

    Grima, Ramon; Thomas, Philipp; Straube, Arthur V

    2011-08-28

    The chemical Fokker-Planck equation and the corresponding chemical Langevin equation are commonly used approximations of the chemical master equation. These equations are derived from an uncontrolled, second-order truncation of the Kramers-Moyal expansion of the chemical master equation and hence their accuracy remains to be clarified. We use the system-size expansion to show that chemical Fokker-Planck estimates of the mean concentrations and of the variance of the concentration fluctuations about the mean are accurate to order Ω^(-3/2) for reaction systems which do not obey detailed balance and at least accurate to order Ω^(-2) for systems obeying detailed balance, where Ω is the characteristic size of the system. Hence, the chemical Fokker-Planck equation turns out to be more accurate than the linear-noise approximation of the chemical master equation (the linear Fokker-Planck equation), which leads to mean concentration estimates accurate to order Ω^(-1/2) and variance estimates accurate to order Ω^(-3/2). This higher accuracy is particularly conspicuous for chemical systems realized in small volumes such as biochemical reactions inside cells. A formula is also obtained for the approximate size of the relative errors in the concentration and variance predictions of the chemical Fokker-Planck equation, where the relative error is defined as the difference between the predictions of the chemical Fokker-Planck equation and the master equation divided by the prediction of the master equation. For dimerization and enzyme-catalyzed reactions, the errors are typically less than a few percent even when the steady state is characterized by merely a few tens of molecules.

  16. Accurate level set method for simulations of liquid atomization☆

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impact on a liquid film are investigated. The results are in good agreement with experimental observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. For the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet, and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  17. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M

    2015-01-01

    We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a Magneto-Optical or Ion trap, the ratio of the Lande g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (-1.8 ± 0.4) × 10^-4 has been determined as a correction to the electron orbital g-factor, by using earlier measurements of the ratio gJ1/gJ2, made on the Indium 2P1/2 and 2P3/2 states.

  18. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
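    The quantity being predicted analytically is the distribution of shortest path lengths; for a small graph it can be computed directly by breadth-first search, as in the following standard-library Python sketch (an empirical reference, not the analytical method of the record):

    from collections import deque, Counter

    def shortest_path_length_distribution(adj):
        """adj: dict mapping node -> iterable of neighbours (undirected, unweighted)."""
        counts = Counter()
        for source in adj:
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            counts.update(d for node, d in dist.items() if node != source)
        total = sum(counts.values())
        return {d: c / total for d, c in sorted(counts.items())}

    # Small example graph (a path 0-1-2-3 plus a chord 0-2)
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(shortest_path_length_distribution(adj))   # {1: 0.667, 2: 0.333}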

  19. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    Science.gov (United States)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurements of different fibers are performed using the time-of-flight technique. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
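    The underlying conversion is simply length = c · t / n_g for a one-way transit time t and group index n_g. A back-of-the-envelope Python sketch follows (the group index value is a typical figure for standard single-mode fiber near 1550 nm, assumed here only for illustration; it is not the paper's calibrated value):

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def fiber_length(time_of_flight_s, group_index=1.4682):
        # One-way transit: L = c * t / n_g
        return C * time_of_flight_s / group_index

    # A ~49 us transit time corresponds to roughly 10 km of fiber.
    print(f"{fiber_length(48.97e-6):.1f} m")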

  20. Accurate nuclear radii and binding energies from a chiral interaction

    CERN Document Server

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W

    2015-01-01

    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  1. Accurate reconstruction of digital holography using frequency domain zero padding

    Science.gov (United States)

    Shin, Jun Geun; Kim, Ju Wan; Lee, Jae Hwi; Lee, Byeong Ha

    2017-04-01

    We propose an image reconstruction method for digital holography to obtain more accurate reconstructions. Digital holography provides both the light amplitude and the phase of a specimen through recording the interferogram. Since the Fresnel diffraction can be efficiently implemented by the Fourier transform, a zero padding technique can be applied to obtain more accurate information. In this work, we report the method of frequency domain zero padding (FDZP). Both in computer simulation and in experiments made with a USAF 1951 resolution chart and target, the FDZP gave more accurate reconstruction images. Although the FDZP requires more processing time, with the help of a graphics processing unit (GPU) it can find good applications in digital holography for 3-D profile imaging.

  2. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  3. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  4. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of metallic pads response using RLC theory. In order to calculate such response we take into account several factors including the mutual inductances, a precise formula for determining the capacitance and also the pads’ resistance considering...... the variation of permittivity due to small thicknesses. Even if complex, such a strategy gives accurate results and we believe that, after further refinement, it can be used to completely calculate a complex metallic structure placed on a substrate in a far faster manner than full simulation programs do....

  5. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN; Yuehai; ZHENG; Huijiang; BI; Qingzhen; WANG; Shuren

    2005-01-01

    The TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for popularizing and applying TI worm gearing. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial section profile of the grinding wheel that can accurately grind the TI worm is extracted. Simultaneously, the relation of position and motion between the TI worm and the grinding wheel is expounded. The method for precisely grinding the single enveloping TI worm is obtained.

  6. Robust and accurate transient light transport decomposition via convolutional sparse coding.

    Science.gov (United States)

    Hu, Xuemei; Deng, Yue; Lin, Xing; Suo, Jinli; Dai, Qionghai; Barsi, Christopher; Raskar, Ramesh

    2014-06-01

    Ultrafast sources and detectors have been used to record the time-resolved scattering of light propagating through macroscopic scenes. In the context of computational imaging, decomposition of this transient light transport (TLT) is useful for applications, such as characterizing materials, imaging through diffuser layers, and relighting scenes dynamically. Here, we demonstrate a method of convolutional sparse coding to decompose TLT into direct reflections, inter-reflections, and subsurface scattering. The method relies on the sparsity composition of the time-resolved kernel. We show that it is robust and accurate to noise during the acquisition process.

  7. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitudes close to 180° are obtained. Considering the trigonometric function modulation that results from the dependence of relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0 and 180°. A relative error of less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
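    Any such approximation is judged against the exact period, which involves the complete elliptic integral K(k). The Python sketch below evaluates the exact period with the arithmetic-geometric mean; it is a reference calculation and does not reproduce the paper's approximate formulae:

    import math

    def agm(a, b, tol=1e-15):
        # Arithmetic-geometric mean, used to evaluate K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
        while abs(a - b) > tol * a:
            a, b = (a + b) / 2.0, math.sqrt(a * b)
        return a

    def pendulum_period(theta0_deg, length=1.0, g=9.80665):
        """Exact period: T = (2/pi) * K(sin(theta0/2)) * T_small_angle."""
        theta0 = math.radians(theta0_deg)
        k = math.sin(theta0 / 2.0)
        K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
        T0 = 2.0 * math.pi * math.sqrt(length / g)
        return (2.0 / math.pi) * K * T0

    for amp in (10, 90, 170):   # amplitudes in degrees
        print(amp, round(pendulum_period(amp), 4))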

  8. On accurate boundary conditions for a shape sensitivity equation method

    Science.gov (United States)

    Duvigneau, R.; Pelletier, D.

    2006-01-01

    This paper studies the application of the continuous sensitivity equation method (CSEM) for the Navier-Stokes equations in the particular case of shape parameters. Boundary conditions for shape parameters involve flow derivatives at the boundary. Thus, accurate flow gradients are critical to the success of the CSEM. A new approach is presented to extract accurate flow derivatives at the boundary. High order Taylor series expansions are used on layered patches in conjunction with a constrained least-squares procedure to evaluate accurate first and second derivatives of the flow variables at the boundary, required for Dirichlet and Neumann sensitivity boundary conditions. The flow and sensitivity fields are solved using an adaptive finite-element method. The proposed methodology is first verified on a problem with a closed form solution obtained by the Method of Manufactured Solutions. The ability of the proposed method to provide accurate sensitivity fields for realistic problems is then demonstrated. The flow and sensitivity fields for a NACA 0012 airfoil are used for fast evaluation of the nearby flow over an airfoil of different thickness (NACA 0015).

  9. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  10. The value of accurate A/R information.

    Science.gov (United States)

    Freeman, G; Allcorn, S

    1985-01-01

    The understanding and management of an accounts receivable system in a medical group practice is particularly important to administrators in today's economy. As the authors explain, an accurate information system can provide the medical group with valuable information regarding its financial condition and cash flow collections status, as well as help plan for future funding needs.

  11. Technique to accurately quantify collagen content in hyperconfluent cell culture.

    Science.gov (United States)

    See, Eugene Yong-Shun; Toh, Siew Lok; Goh, James Cho Hong

    2008-12-01

    Tissue engineering aims to regenerate tissues that can successfully take over the functions of the native tissue when it is damaged or diseased. In most tissues, collagen makes up the bulk component of the extracellular matrix, thus, there is great emphasis on its accurate quantification in tissue engineering. It has already been reported that pepsin digestion is able to solubilize the collagen deposited within the cell layer for accurate quantification of collagen content in cultures, but this method has drawbacks when cultured cells are hyperconfluent. In this condition, Pepsin digestion will result in fragments of the cell layers that cannot be completely resolved. These fragments of the undigested cell sheet are visible to the naked eye, which can bias the final results. To the best of our knowledge, there has been no reported method to accurately quantify the collagen content in hyperconfluent cell sheet. Therefore, this study aims to illustrate that sonication is able to aid pepsin digestion of hyperconfluent cell layers of fibroblasts and bone marrow mesenchymal stem cells, to solubilize all the collagen for accurate quantification purposes.

  12. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  13. On a more accurate Hardy-Mulholland-type inequality

    Directory of Open Access Journals (Sweden)

    Bicheng Yang

    2016-03-01

    By using weight coefficients, the technique of real analysis, and Hermite-Hadamard’s inequality, we give a more accurate Hardy-Mulholland-type inequality with multiparameters and a best possible constant factor related to the beta function. The equivalent forms, the reverses, the operator expressions, and some particular cases are also considered.

  14. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    The traditional texture-based fingerprint recognition system known as FingerCode is improved in this work. Texture-based fingerprint recognition methods are generally more accurate than other methods, but at the disadvantage of increased storage...

  15. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of metallic pads response using RLC theory. In order to calculate such response we take into account several factors including the mutual inductances, precise formula for determining the capacitance and also the pads’ resistance considering the...

  16. $H_{2}^{+}$ ion in a strong magnetic field: an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V

    1997-01-01

    Using a unique trial function we perform an accurate calculation of the ground state $1\\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging $0-10^{13}$ G. We show that this trial function also makes it possible to study the negative parity ground state $1\\sigma_u$.

  17. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  18. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods...

  19. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  20. Fast, Accurate and Detailed NoC Simulations

    NARCIS (Netherlands)

    Wolkotte, P.T.; Hölzenspies, P.K.F.; Smit, G.J.M.; Kellenberger, P.

    2007-01-01

    Network-on-Chip (NoC) architectures have a wide variety of parameters that can be adapted to the designer's requirements. Fast exploration of this parameter space is only possible at a high-level and several methods have been proposed. Cycle and bit accurate simulation is necessary when the actual r

  1. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  2. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, Claudia

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  3. Accurate eye center location through invariant isocentric patterns

    NARCIS (Netherlands)

    Valenti, R.; Gevers, T.

    2012-01-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and

  4. Creating a Culture of Accurate and Precise Data.

    Science.gov (United States)

    Bergren, Martha Dewey; Maughan, Erin D; Johnson, Kathleen H; Wolfe, Linda C; Watts, H Estelle S; Cole, Marjorie

    2017-01-01

    There are many stakeholders for school health data. Each one has a stake in the quality and accuracy of the health data collected and reported in schools. The joint NASN and NASSNC national school nurse data set initiative, Step Up & Be Counted!, heightens the need to assure accurate and precise data. The use of a standardized terminology allows the data on school health care delivered in local schools to be aggregated for use at the local, state, and national levels. The use of uniform terminology demands that data elements be defined and that accurate and reliable data are entered into the database. Barriers to accurate data are misunderstanding of accurate data needs, student caseloads that exceed the national recommendations, lack of electronic student health records, and electronic student health records that do not collect the indicators using the standardized terminology or definitions. The quality of the data that school nurses report and share has an impact at the personal, district, state, and national levels and influences the confidence and quality of the decisions made using that data.

  5. Modeling Battery Behavior for Accurate State-of-Charge Indication

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Veld, op het J.H.G.; Regtien, P.P.L.; Danilov, D.; Notten, P.H.L.

    2006-01-01

    Li-ion is the most commonly used battery chemistry in portable applications nowadays. Accurate state-of-charge (SOC) and remaining run-time indication for portable devices is important for the user's convenience and to prolong the lifetime of batteries. A new SOC indication system, combining the ele

  6. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón

    2005-01-01

    (Engine Control Unit) as a table. This method uses a great deal of memory space and often requires on-line interpolation and thus a large amount of CPU time. In this paper a more compact, accurate and rapid method of dealing with the compressor modelling problem is presented and is applicable to all...

  7. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    NARCIS (Netherlands)

    Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.

    2012-01-01

    Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres

  8. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  9. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  10. Dynamic weighing for accurate fertilizer application and monitoring

    NARCIS (Netherlands)

    Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.

    2001-01-01

    The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application manual calibration of actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on

  11. A Self-Instructional Device for Conditioning Accurate Prosody.

    Science.gov (United States)

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  12. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  13. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  14. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, M.; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless

  15. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  16. Accurate reconstruction of temporal correlation for neuronal sources using the enhanced dual-core MEG beamformer.

    Science.gov (United States)

    Diwakar, Mithun; Tal, Omer; Liu, Thomas T; Harrington, Deborah L; Srinivasan, Ramesh; Muzzatti, Laura; Song, Tao; Theilmann, Rebecca J; Lee, Roland R; Huang, Ming-Xiong

    2011-06-15

    Beamformer spatial filters are commonly used to explore the active neuronal sources underlying magnetoencephalography (MEG) recordings at low signal-to-noise ratio (SNR). Conventional beamformer techniques are successful in localizing uncorrelated neuronal sources under poor SNR conditions. However, the spatial and temporal features from conventional beamformer reconstructions suffer when sources are correlated, which is a common and important property of real neuronal networks. Dual-beamformer techniques, originally developed by Brookes et al. to deal with this limitation, successfully localize highly-correlated sources and determine their orientations and weightings, but their performance degrades at low correlations. They also lack the capability to produce individual time courses and therefore cannot quantify source correlation. In this paper, we present an enhanced formulation of our earlier dual-core beamformer (DCBF) approach that reconstructs individual source time courses and their correlations. Through computer simulations, we show that the enhanced DCBF (eDCBF) consistently and accurately models dual-source activity regardless of the correlation strength. Simulations also show that a multi-core extension of eDCBF effectively handles the presence of additional correlated sources. In a human auditory task, we further demonstrate that eDCBF accurately reconstructs left and right auditory temporal responses and their correlations. Spatial resolution and source localization strategies corresponding to different measures within the eDCBF framework are also discussed. In summary, eDCBF accurately reconstructs source spatio-temporal behavior, providing a means for characterizing complex neuronal networks and their communication.

  17. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
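    Applied mechanically, the thresholds reported here give a simple decision rule; the Python sketch below illustrates it with a made-up top BLAST hit (it is not the authors' pipeline):

    def assign_rank(pident, hit_family, hit_genus):
        # Heuristic thresholds from the record: genus if PIdent > 95, family if PIdent >= 91
        if pident > 95:
            return ("genus", hit_genus)
        if pident >= 91:
            return ("family", hit_family)
        return ("unassigned", None)

    top_hit = (93.2, "Salticidae", "Habronattus")   # hypothetical (PIdent, family, genus) of best hit
    print(assign_rank(*top_hit))                    # -> ('family', 'Salticidae')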

  18. Directed sample interrogation utilizing an accurate mass exclusion-based data-dependent acquisition strategy (AMEx).

    Science.gov (United States)

    Rudomin, Emily L; Carr, Steven A; Jaffe, Jacob D

    2009-06-01

    The ability to perform thorough sampling is of critical importance when using mass spectrometry to characterize complex proteomic mixtures. A common approach is to reinterrogate a sample multiple times by LC-MS/MS. However, the conventional data-dependent acquisition methods that are typically used in proteomics studies will often redundantly sample high-intensity precursor ions while failing to sample low-intensity precursors entirely. We describe a method wherein the masses of successfully identified peptides are used to generate an accurate mass exclusion list such that those precursors are not selected for sequencing during subsequent analyses. We performed multiple concatenated analytical runs to sample a complex cell lysate, using either accurate mass exclusion-based data-dependent acquisition (AMEx) or standard data-dependent acquisition, and found that utilization of AMEx on an ESI-Orbitrap instrument significantly increases the total number of validated peptide identifications relative to a standard DDA approach. The additional identified peptides represent precursor ions that exhibit low signal intensity in the sample. Increasing the total number of peptide identifications augmented the number of proteins identified, as well as improved the sequence coverage of those proteins. Together, these data indicate that using AMEx is an effective strategy to improve the characterization of complex proteomic mixtures.
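    Conceptually, the strategy amounts to keeping an accurate-mass exclusion list and skipping any survey-scan precursor that falls within a ppm tolerance of an already-identified mass. The following Python sketch illustrates that selection logic with hypothetical m/z values; it is not the acquisition software used in the study:

    def within_ppm(mz, ref_mz, ppm=10.0):
        return abs(mz - ref_mz) / ref_mz * 1e6 <= ppm

    def select_precursors(candidates, exclusion_list, ppm=10.0, top_n=10):
        """candidates: list of (mz, intensity) pairs from a survey scan."""
        eligible = [(mz, inten) for mz, inten in candidates
                    if not any(within_ppm(mz, ex, ppm) for ex in exclusion_list)]
        eligible.sort(key=lambda x: x[1], reverse=True)   # intensity-ranked DDA
        return [mz for mz, _ in eligible[:top_n]]

    exclusion = [523.7754, 610.3312]                      # m/z already identified (made-up values)
    survey = [(523.7755, 9e6), (610.3310, 7e6), (451.2501, 4e5)]
    print(select_precursors(survey, exclusion, top_n=2))  # -> [451.2501]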

  19. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
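    The αβ-transformation step mentioned here is the standard (amplitude-invariant) Clarke transform; a minimal Python sketch of that step follows. The NLS/Newton-Raphson estimator itself is not reproduced:

    import math

    def clarke(va, vb, vc):
        """Amplitude-invariant Clarke transform of instantaneous phase values."""
        v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
        v_beta = (1.0 / math.sqrt(3.0)) * (vb - vc)
        return v_alpha, v_beta

    # Balanced 1-per-unit three-phase snapshot at a phase angle of 30 degrees:
    theta = math.radians(30.0)
    va = math.cos(theta)
    vb = math.cos(theta - 2.0 * math.pi / 3.0)
    vc = math.cos(theta + 2.0 * math.pi / 3.0)
    print(clarke(va, vb, vc))   # approximately (cos 30deg, sin 30deg)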

  20. Symmetric Uniformly Accurate Gauss-Runge-Kutta Method

    Directory of Open Access Journals (Sweden)

    Dauda G. YAKUBU

    2007-08-01

    Symmetric methods are particularly attractive for solving stiff ordinary differential equations. In this paper, by the selection of Gauss points for both interpolation and collocation, we derive a high order symmetric single-step Gauss-Runge-Kutta collocation method for the accurate solution of ordinary differential equations. The resulting symmetric method with continuous coefficients is evaluated within the proposed block method for the accurate solution of ordinary differential equations. More interestingly, the block method is self-starting, with an adequate absolute stability interval, and is capable of producing simultaneously a dense approximation to the solution of ordinary differential equations at a block of points. The use of this method leads to a maximal gain in efficiency as well as minimal function evaluations per step.

  1. Accurate measurement of the helical twisting power of chiral dopants

    Science.gov (United States)

    Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter

    2002-03-01

    We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.
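    For orientation, the textbook Cano-wedge relations connect the disclination-line spacing to the cholesteric pitch and hence to the HTP. The Python sketch below uses those standard relations with hypothetical numbers; it does not implement the fiber-spectrometer thickness measurement proposed in the record:

    import math

    def pitch_from_wedge(line_spacing_um, wedge_angle_deg):
        # Adjacent Grandjean-Cano disclination lines differ by half a pitch in thickness,
        # so P = 2 * spacing * tan(wedge angle).
        return 2.0 * line_spacing_um * math.tan(math.radians(wedge_angle_deg))

    def helical_twisting_power(pitch_um, weight_fraction):
        # HTP (beta) = 1 / (P * c), here in units of 1/um per weight fraction of dopant.
        return 1.0 / (pitch_um * weight_fraction)

    # Hypothetical numbers for illustration only
    P = pitch_from_wedge(line_spacing_um=150.0, wedge_angle_deg=1.0)
    print(round(P, 2), "um pitch;", round(helical_twisting_power(P, 0.01), 1), "1/um HTP")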

  2. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long range part of the potential, aiming to understand the atom-diatom collision dynamics, to describe conical intersections and important saddle points along the reactive path. An analysis of the main features of the potential energy surface is performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  3. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks as accurately as or better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
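
    The metric-induced robustness estimation referred to here can be illustrated with a small sketch (using networkx; the degree-based attack and the robustness measure R, the mean largest-component fraction over successive node removals, are one common choice and not the sub-quadratic algorithm proposed in the paper):

      import networkx as nx

      def robustness_degree_attack(G):
          """R = average fraction of nodes in the largest connected component
          as nodes are removed in order of (initial) degree, highest first."""
          n = G.number_of_nodes()
          order = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
          H = G.copy()
          total = 0.0
          for node, _ in order:
              H.remove_node(node)
              if H.number_of_nodes() > 0:
                  total += max(len(c) for c in nx.connected_components(H)) / n
          return total / n

      G = nx.erdos_renyi_graph(200, 0.05, seed=1)
      print("degree-attack robustness R =", robustness_degree_attack(G))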

  4. Accurate speed and slip measurement of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Ho, S.Y.S.; Langman, R. [Tasmania Univ., Hobart, TAS (Australia)]

    1996-03-01

    Two alternative hardware circuits, for the accurate measurement of low slip in cage induction motors, are discussed. Both circuits compare the periods of the fundamental of the supply frequency and of pulses from a shaft-connected toothed wheel. The better of the two achieves accuracy to 0.5 percent of slip over the range 0.1 to 0.005, or better than 0.001 percent of speed over the range. This method is considered useful for slip measurement of motors supplied either by constant-frequency mains or by variable-speed controllers with PWM waveforms. It is demonstrated that accurate slip measurement supports the conclusions of work previously done on the detection of broken rotor bars. (author). 1 tab., 6 figs., 13 refs.
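
    The underlying arithmetic is simple; a worked sketch with illustrative numbers (the motor data and measured tooth-pulse frequency are made up) shows how slip follows from the two period measurements:

      # Slip from period comparison (illustrative: 4-pole motor, 50 Hz supply,
      # 60-tooth wheel on the shaft; both frequencies obtained by timing periods)
      f_supply = 50.0                      # Hz, fundamental of the supply
      poles = 4
      n_sync = 120.0 * f_supply / poles    # synchronous speed, rpm -> 1500 rpm

      teeth = 60
      f_tooth = 1483.5                     # Hz, measured tooth-pulse frequency
      n_rotor = f_tooth / teeth * 60.0     # shaft speed, rpm -> 1483.5 rpm

      slip = (n_sync - n_rotor) / n_sync
      print(f"slip = {slip:.4f}")          # ~0.011, within the 0.005-0.1 range discussed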

  5. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  6. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for the quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method assumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. In addition, the cardiac central point is obtained using a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and proves that this technique is more accurate and faster than previous methods.

  7. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang

    2006-01-01

    This paper presents a theory for accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with an arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By representing the influence of higher-order evanescent modes at the junction of any two neighbouring steps as an equivalent susceptance under a modified admittance-matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between measurements and numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  8. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  9. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2014-01-01

    Full Text Available This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain.

  10. Library preparation for highly accurate population sequencing of RNA viruses

    Science.gov (United States)

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624

  11. FIXED-WING MICRO AERIAL VEHICLE FOR ACCURATE CORRIDOR MAPPING

    Directory of Open Access Journals (Sweden)

    M. Rehak

    2015-08-01

    Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enable accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient only for the block configuration; precise position and attitude control is required for corridor mapping.

  12. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  13. Accurate stochastic reconstruction of heterogeneous microstructures by limited x-ray tomographic projections.

    Science.gov (United States)

    Li, Hechao; Kaira, Shashank; Mertens, James; Chawla, Nikhilesh; Jiao, Yang

    2016-12-01

    An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for its performance prediction, prognosis and optimization. X-ray tomography has provided a nondestructive means for microstructure characterization in 3D and 4D (i.e. structural evolution over time), in which a material is typically reconstructed from a large number of tomographic projections using the filtered back-projection (FBP) method or algebraic reconstruction techniques (ART). Here, we present in detail a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of absorption-contrast x-ray tomographic projections. This discrete tomography reconstruction procedure is in contrast to the commonly used FBP and ART, which usually require thousands of projections for accurate microstructure rendition. The utility of our stochastic procedure is first demonstrated by reconstructing a wide class of two-phase heterogeneous materials, including sandstone and hard-particle packings, from simulated limited-angle projections in both cone-beam and parallel-beam projection geometry. It is then applied to reconstruct tailored Sn-sphere-clay-matrix systems from limited-angle cone-beam data obtained via a lab-scale tomography facility at Arizona State University and parallel-beam synchrotron data obtained at the Advanced Photon Source, Argonne National Laboratory. In addition, we examine the information content of tomography data by successively incorporating larger numbers of projections and quantifying the accuracy of the reconstructions. We show that only a small number of projections (e.g. 20-40, depending on the complexity of the microstructure of interest and the desired resolution) are necessary for accurate material reconstructions via our stochastic procedure, which indicates its high efficiency in using limited structural information. The ramifications of the stochastic reconstruction procedure in 4D materials science are also
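
    The flavour of such a stochastic reconstruction can be conveyed with a toy sketch (a simulated-annealing pixel-swap scheme matching two axis-aligned projections of a small binary image; the energy function, cooling schedule and the use of simple row/column sums instead of true tomographic projections are all illustrative assumptions, not the authors' procedure):

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy two-phase "microstructure": a 32x32 binary image to be recovered
      target = (rng.random((32, 32)) < 0.3).astype(int)

      def projections(img):
          # Row and column sums stand in for two tomographic projections
          return [img.sum(axis=0), img.sum(axis=1)]

      ref = projections(target)

      def energy(img):
          return sum(np.sum((p - q)**2) for p, q in zip(projections(img), ref))

      # Start from a random image with the correct phase (volume) fraction
      img = rng.permutation(target.ravel()).reshape(target.shape)
      E, T = energy(img), 5.0
      for step in range(20000):
          ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
          i = tuple(ones[rng.integers(len(ones))])      # propose a pixel swap that
          j = tuple(zeros[rng.integers(len(zeros))])    # preserves the volume fraction
          img[i], img[j] = 0, 1
          E_new = energy(img)
          if E_new <= E or rng.random() < np.exp((E - E_new) / T):
              E = E_new                                 # accept the swap
          else:
              img[i], img[j] = 1, 0                     # reject: undo the swap
          T *= 0.9997                                   # cooling schedule
      print("final projection mismatch:", E)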

  14. A robust and accurate formulation of molecular and colloidal electrostatics

    Science.gov (United States)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  15. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  16. Accurate and Simple Calibration of DLP Projector Systems

    OpenAIRE

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry pr...

  17. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Science.gov (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  18. The highly accurate anteriolateral portal for injecting the knee

    Directory of Open Access Journals (Sweden)

    Chavez-Chiang Colbert E

    2011-03-01

    Full Text Available Abstract Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device), 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm) 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months) were determined with the 10 cm Visual Analogue Pain Score (VAS). The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77), pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11), responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33), duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69), and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71). The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration ClinicalTrials.gov: NCT00651625

  19. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  20. Are Predictive Energy Expenditure Equations in Ventilated Surgery Patients Accurate?

    Science.gov (United States)

    Tignanelli, Christopher J; Andrews, Allan G; Sieloff, Kurt M; Pleva, Melissa R; Reichert, Heidi A; Wooley, Jennifer A; Napolitano, Lena M; Cherry-Bukowiec, Jill R

    2017-01-01

    While indirect calorimetry (IC) is the gold standard used to calculate specific calorie needs in the critically ill, predictive equations are frequently utilized at many institutions for various reasons. Prior studies suggest these equations frequently misjudge actual resting energy expenditure (REE) in medical and mixed intensive care unit (ICU) patients; however, their utility for surgical ICU (SICU) patients has not been fully evaluated. Therefore, the objective of this study was to compare the REE measured by IC with REE calculated using specific calorie goals or predictive equations for nutritional support in ventilated adult SICU patients. A retrospective review of prospectively collected data was performed on all adults (n = 419, 18-91 years) mechanically ventilated for >24 hours, with an Fio2 ≤ 60%, who met IC screening criteria. Caloric needs were estimated using Harris-Benedict equations (HBEs), and 20, 25, and 30 kcal/kg/d with actual (ABW), adjusted (ADJ), and ideal body (IBW) weights. The REE was measured using IC. The estimated REE was considered accurate when within ±10% of the measured REE by IC. The HBE, 20, 25, and 30 kcal/kg/d estimates of REE were found to be inaccurate regardless of age, gender, or weight. The HBE and 20 kcal/kg/d underestimated REE, while 25 and 30 kcal/kg/d overestimated REE. Of the methods studied, those found to most often accurately estimate REE were the HBE using ABW, which was accurate 35% of the time, and 25 kcal/kg/d ADJ, which was accurate 34% of the time. This difference was not statistically significant. Using HBE, 20, 25, or 30 kcal/kg/d to estimate daily caloric requirements in critically ill surgical patients is inaccurate compared to REE measured by IC. In SICU patients with nutrition requirements essential to recovery, IC measurement should be performed to guide clinicians in determining goal caloric requirements.
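
    The predictive estimates being compared are simple arithmetic; a brief sketch (using the commonly cited Harris-Benedict coefficients; the Devine ideal-body-weight formula, the 25%-of-excess adjustment convention, and the measured REE value are assumptions for illustration) shows how each estimate is judged accurate when it falls within ±10% of the indirect calorimetry value:

      def harris_benedict(sex, weight_kg, height_cm, age_yr):
          """Resting energy expenditure (kcal/d), classic Harris-Benedict coefficients."""
          if sex == "male":
              return 66.47 + 13.75*weight_kg + 5.003*height_cm - 6.755*age_yr
          return 655.1 + 9.563*weight_kg + 1.850*height_cm - 4.676*age_yr

      # Hypothetical ventilated SICU patient
      weight_actual, height, age = 95.0, 175.0, 60          # kg, cm, years
      weight_ideal = 50.0 + 0.91*(height - 152.4)           # Devine formula, men (assumption)
      weight_adj = weight_ideal + 0.25*(weight_actual - weight_ideal)

      estimates = {
          "HBE (actual weight)": harris_benedict("male", weight_actual, height, age),
          "20 kcal/kg/d (adjusted)": 20*weight_adj,
          "25 kcal/kg/d (adjusted)": 25*weight_adj,
          "30 kcal/kg/d (adjusted)": 30*weight_adj,
      }
      measured_ree = 1900.0                                 # kcal/d, e.g. from indirect calorimetry
      for name, ree in estimates.items():
          accurate = abs(ree - measured_ree) / measured_ree <= 0.10
          print(f"{name}: {ree:.0f} kcal/d, within +/-10% of IC: {accurate}")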

  1. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry

    OpenAIRE

    Fuchs, Franz G.; Hjelmervik, Jon M.

    2014-01-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  2. A multiple more accurate Hardy-Littlewood-Polya inequality

    Directory of Open Access Journals (Sweden)

    Qiliang Huang

    2012-11-01

    Full Text Available By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin’s summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.

  3. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate Fisher-type equations. Firstly, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Secondly, we solve the obtained system of differential equations using a third-order total variation diminishing Runge-Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
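
    A hedged sketch of the two ingredients on a periodic toy problem (the grid, time step, boundary treatment and the normalized form u_t = u_xx + u(1 - u) are illustrative assumptions, not the paper's exact setup): the sixth-order compact approximation of u_xx requires solving a banded system at every evaluation, and the Shu-Osher TVD-RK3 stages advance the solution in time.

      import numpy as np

      # Fisher's equation u_t = u_xx + u(1 - u), periodic domain, method of lines
      N, L = 128, 40.0
      h = L / N
      x = np.arange(N) * h
      dt = 0.2 * h**2                          # conservative explicit time step

      # Periodic CFD6 operator:
      # (2/11)u''_{i-1} + u''_i + (2/11)u''_{i+1}
      #   = (12/11)(u_{i+1}-2u_i+u_{i-1})/h^2 + (3/11)(u_{i+2}-2u_i+u_{i-2})/(4h^2)
      A = np.eye(N) + (2/11)*(np.eye(N, k=1) + np.eye(N, k=-1))
      A[0, -1] = A[-1, 0] = 2/11               # periodic wrap of the band

      def d2(u):
          rhs = (12/11)*(np.roll(u, -1) - 2*u + np.roll(u, 1))/h**2 \
              + (3/11)*(np.roll(u, -2) - 2*u + np.roll(u, 2))/(4*h**2)
          return np.linalg.solve(A, rhs)

      def f(u):
          return d2(u) + u*(1.0 - u)

      u = 0.5*(1.0 - np.tanh(x - L/4))         # smooth front as initial condition
      for _ in range(500):                     # TVD-RK3 (Shu-Osher) stages
          u1 = u + dt*f(u)
          u2 = 0.75*u + 0.25*(u1 + dt*f(u1))
          u = u/3.0 + (2.0/3.0)*(u2 + dt*f(u2))
      print("front position ~", x[np.argmin(np.abs(u - 0.5))])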

  4. Accurate and simple calibration of DLP projector systems

    Science.gov (United States)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus

    2014-03-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.

  5. Accurate modelling of unsteady flows in collapsible tubes.

    Science.gov (United States)

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to parameters.

  6. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    Science.gov (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  7. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared, to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
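
    The barycenter of pressures itself is just a pressure-weighted mean of the sensor coordinates; a small illustrative sketch (sensor positions and pressures are made up) shows the computation that is compared against the force-platform CoP:

      import numpy as np

      # Discrete insole sensors: (x, y) positions in mm and instantaneous pressures in kPa
      xy = np.array([[20, 10], [40, 15], [60, 40], [75, 90], [70, 140], [45, 180]], float)
      p = np.array([80, 120, 60, 30, 90, 150], float)

      # Barycenter of pressures (BoP): pressure-weighted mean of the sensor coordinates
      bop = (p[:, None] * xy).sum(axis=0) / p.sum()
      print("BoP (mm):", bop)     # compared against the force-platform CoP trajectory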

  8. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  9. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Abstract Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand) to foot BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We conclude that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  10. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the data-sets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based) even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
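
    The core of such a mixture-model estimate can be illustrated with a hedged EM sketch (the read-to-genome match matrix, the uniform within-genome read likelihood, and the genome-length correction convention are toy assumptions, not GRAMMy's exact model):

      import numpy as np

      # match[r, g] = 1 if read r maps to genome g (ambiguous reads map to several)
      match = np.array([[1, 1, 0],
                        [1, 0, 0],
                        [0, 1, 1],
                        [1, 1, 1],
                        [0, 0, 1]], dtype=float)
      genome_len = np.array([4.0e6, 2.0e6, 1.0e6])     # genome sizes (bp)

      pi = np.full(3, 1/3)                             # mixing proportions (share of reads)
      for _ in range(200):
          w = match * pi                               # E-step: read-to-genome responsibilities
          w /= w.sum(axis=1, keepdims=True)
          pi = w.sum(axis=0) / match.shape[0]          # M-step: update mixing proportions
      abundance = pi / genome_len                      # genome relative abundance,
      abundance /= abundance.sum()                     # corrected for genome size bias
      print("estimated genome relative abundances:", abundance)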

  11. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  12. [Spectroscopy technique and ruminant methane emissions accurate inspecting].

    Science.gov (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun

    2009-03-01

    The increase in atmospheric CH4 concentration directly causes climate change through the radiation process and, by altering many atmospheric chemical processes, also causes climate change indirectly. The rapid growth of atmospheric methane has gained the attention of governments and scientists. All countries now treat the reduction of greenhouse gas emissions as an important task in dealing with global climate change, but monitoring of methane concentration, and in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 gas emissions of different animal production systems have received extensive research. The methane emissions by ruminants reported in the literature are only estimates. This is because many factors affect methane production in ruminants, there are many variables associated with the techniques for measuring methane production, and the techniques currently developed are unable to accurately determine the dynamics of methane emission by ruminants; there is therefore an urgent need to develop an accurate method for this purpose. Currently, spectroscopy is a relatively accurate and reliable approach that has been used for this purpose. Various spectroscopic techniques, such as modified infrared spectroscopic methane measuring systems and laser and near-infrared sensing systems, are able to determine the dynamic methane emission of both domestic and grazing ruminants. Spectroscopy is therefore an important methane measuring technique, and contributes to proposing methods for methane reduction.

  13. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative (DSD) method is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy. The effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, which has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously. It is shown that the filtering method removes random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular images of the study region were acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth at 11 sites. The results show that, with the innovative filtering method, the new LAI inversion method is accurate and effective.

  14. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features.

  15. An accurate metric for the spacetime around rotating neutron stars

    Science.gov (United States)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parametrized metric, i.e. a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  16. Fourth order accurate compact scheme with group velocity control (GVC)

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For solving complex flow fields with multi-scale structures, higher-order accurate schemes are preferred. Among high-order schemes, compact schemes have higher resolving efficiency. When compact and upwind compact schemes are used to solve aerodynamic problems, numerical oscillations appear near shocks. These oscillations arise because of the non-uniform group velocity of wave packets in the numerical solution. To improve shock resolution, a parameter function is introduced into the compact scheme to control the group velocity. The newly developed method is simple, has higher accuracy and uses a smaller stencil of grid points.

  17. Rapid and Accurate Idea Transfer: Presenting Ideas with Concept Maps

    Science.gov (United States)

    2008-07-30


  18. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; 10.1214/11-AOS929

    2012-01-01

    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.

  19. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    Full Text Available The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects amounts of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (-) strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (-) strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR(R) Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean

  20. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Science.gov (United States)

    Robinson, David

    2014-12-09

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.

  1. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun

    2003-01-01

    An accurate method for measuring ultrasonic velocity by the pulse interference method, with elimination of the diffraction effect, has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens, the frequency dependences of their longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of the ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of the longitudinal velocities, the measurement results were in good agreement with the simulated ones in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.

  2. Accurate studies on dissociation energies of diatomic molecules

    Institute of Scientific and Technical Information of China (English)

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N2 molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies D_e^AM agree excellently with the experimental dissociation energies D_e^expt, and that the dissociation energy of an electronic state such as the 2^3Δ_g state of ^7Li_2, whose experimental value is not available, can be predicted using the new formula.

  3. A novel simple and accurate flatness measurement method

    CERN Document Server

    Thang, H L

    2011-01-01

    Flatness measurement of a surface plate is an old and intensively studied research topic. However, methods based on the ISO definition and other measurement methods are either awkward to carry out or complicated in data analysis. In particular, these methods do not deal clearly and directly with the inclination angle that is always present in any flatness measurement. In this report a novel, simple and accurate flatness measurement method is introduced to overcome this limitation of the available methods. The mathematical model for the method is also presented, making its underlying nature transparent. Worked examples show consistent results.
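
    A common way to separate the inclination from the flatness deviation is to fit and subtract a best-fit reference plane; a hedged sketch of that generic least-squares approach (not necessarily the author's specific method, and with made-up height data):

      import numpy as np

      # Measured heights z at grid points (x, y) on the surface plate (toy data)
      x, y = np.meshgrid(np.arange(5.0), np.arange(5.0))
      x, y = x.ravel(), y.ravel()
      rng = np.random.default_rng(1)
      z = 0.02*x - 0.015*y + rng.normal(0, 1e-3, x.size)   # tilt plus small form error

      # Least-squares best-fit plane z = a*x + b*y + c removes the inclination
      A = np.column_stack([x, y, np.ones_like(x)])
      (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
      residual = z - (a*x + b*y + c)

      # Flatness deviation: peak-to-valley of the residuals about the reference plane
      print("flatness (peak-to-valley):", residual.max() - residual.min())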

  4. Accurate analysis of multitone signals using a DFT

    Science.gov (United States)

    Burgess, John C.

    2004-07-01

    Optimum data windows make it possible to determine accurately the amplitude, phase, and frequency of one or more tones (sinusoidal components) in a signal. Procedures presented in this paper can be applied to noisy signals, signals having moderate nonstationarity, and tones close in frequency. They are relevant to many areas of acoustics where sounds are quasistationary. Among these are acoustic probes transmitted through media and natural sounds, such as animal vocalization, speech, and music. The paper includes criteria for multitone FFT block design and an example of application to sound transmission in the atmosphere.
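
    A hedged sketch of windowed-DFT tone estimation with parabolic interpolation of the log-magnitude peak (a generic technique chosen for illustration, not the paper's specific optimum-window procedure; the Hann window and signal parameters are assumptions):

      import numpy as np

      fs, N = 8000.0, 1024
      t = np.arange(N) / fs
      sig = 1.0*np.cos(2*np.pi*440.3*t + 0.7) + 0.4*np.cos(2*np.pi*1237.8*t - 1.1)

      w = np.hanning(N)                         # data window to control spectral leakage
      X = np.fft.rfft(sig * w)
      mag = np.abs(X)

      k = np.argmax(mag[1:-1]) + 1              # DFT bin of the strongest tone
      # Parabolic interpolation of the log-magnitude around the peak -> fractional bin
      a, b, c = np.log(mag[k-1]), np.log(mag[k]), np.log(mag[k+1])
      delta = 0.5*(a - c)/(a - 2*b + c)
      f_est = (k + delta) * fs / N
      print("estimated frequency of the dominant tone:", f_est)   # ~440.3 Hz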

  5. Virmid: accurate detection of somatic mutations with sample impurity inference.

    Science.gov (United States)

    Kim, Sangwoo; Jeong, Kyowon; Bhutani, Kunal; Lee, Jeong; Patel, Anand; Scott, Eric; Nam, Hojung; Lee, Hayan; Gleeson, Joseph G; Bafna, Vineet

    2013-08-29

    Detection of somatic variation using sequence from disease-control matched data sets is a critical first step. In many cases including cancer, however, it is hard to isolate pure disease tissue, and the impurity hinders accurate mutation analysis by disrupting overall allele frequencies. Here, we propose a new method, Virmid, that explicitly determines the level of impurity in the sample, and uses it for improved detection of somatic variation. Extensive tests on simulated and real sequencing data from breast cancer and hemimegalencephaly demonstrate the power of our model. A software implementation of our method is available at http://sourceforge.net/projects/virmid/.

  6. Accurate Programming: Thinking about programs in terms of properties

    Directory of Open Access Journals (Sweden)

    Walid Taha

    2011-09-01

    Full Text Available Accurate programming is a practical approach to producing high quality programs. It combines ideas from test-automation, test-driven development, agile programming, and other state of the art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.

  7. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  8. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  9. Surface Response-based Behavioral Modeling of Accurate Digitizers a Case Study on a Fast Digital Integrator at CERN

    CERN Document Server

    Arpaia, P; Spiezia, G; Tiso, S

    2007-01-01

    A statistical approach to behavioral modeling for assessing dynamic metrological performance during the concept design of accurate digitizers is proposed. A surface-response approach based on statistical experiment design is exploited to avoid the unrealistic hypothesis of linearity, optimize simulation, explore operating conditions systematically, and verify identification and validation uncertainty. An actual case study on the dynamic metrological characterization of a Fast Digital Integrator for high-performance magnetic measurements at the European Organization for Nuclear Research (CERN) is presented.

  10. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for measuring the accurate fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset.(author). 19 refs., 4 figs., 4 tabs.
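    The parameter-estimation stage can be pictured with a generic Prony fit, which models a sampled signal as a sum of complex exponentials (here, a decaying DC offset plus one damped oscillation); this is textbook Prony with made-up signal parameters, not the authors' exact formulation.

      # Generic Prony fit of a sampled signal as a sum of p complex exponentials,
      # as a stand-in for the parameter-estimation stage described above.
      import numpy as np

      def prony(x, p):
          """Return poles z_k and complex amplitudes c_k so x[n] ~ sum_k c_k * z_k**n."""
          n = len(x)
          # Linear prediction: solve for the coefficients of the characteristic polynomial.
          A = np.column_stack([x[p - 1 - j:n - 1 - j] for j in range(p)])
          b = x[p:n]
          a = np.linalg.lstsq(A, b, rcond=None)[0]
          z = np.roots(np.concatenate(([1.0], -a)))
          # Amplitudes from a Vandermonde least-squares fit.
          V = np.vander(z, N=n, increasing=True).T
          c = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
          return z, c

      # Example: decaying DC offset plus a damped oscillation (fault-current-like),
      # with illustrative sample rate, time constants and frequency.
      fs, n = 3840.0, 256
      t = np.arange(n) / fs
      x = 50 * np.exp(-t / 0.05) + 20 * np.exp(-t / 0.03) * np.cos(2 * np.pi * 600 * t)
      z, c = prony(x, p=3)
      print("estimated poles:", z)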

  11. Is Cancer Information Exchanged on Social Media Scientifically Accurate?

    Science.gov (United States)

    Gage-Bouchard, Elizabeth A; LaValley, Susan; Warunek, Molli; Beaupin, Lynda Kwon; Mollica, Michelle

    2017-07-19

    Cancer patients and their caregivers are increasingly using social media as a platform to share cancer experiences, connect with support, and exchange cancer-related information. Yet, little is known about the nature and scientific accuracy of cancer-related information exchanged on social media. We conducted a content analysis of 12 months of data from 18 publicly available Facebook Pages hosted by parents of children with acute lymphoblastic leukemia (N = 15,852 posts) and extracted all exchanges of medically-oriented cancer information. We systematically coded for themes in the nature of cancer-related information exchanged on personal Facebook Pages and two oncology experts independently evaluated the scientific accuracy of each post. Of the 15,852 total posts, 171 posts contained medically-oriented cancer information. The most frequent type of cancer information exchanged was information related to treatment protocols and health services use (35%) followed by information related to side effects and late effects (26%), medication (16%), medical caregiving strategies (13%), alternative and complementary therapies (8%), and other (2%). Overall, 67% of all cancer information exchanged was deemed medically/scientifically accurate, 19% was not medically/scientifically accurate, and 14% described unproven treatment modalities. These findings highlight the potential utility of social media as a cancer-related resource, but also indicate that providers should focus on recommending reliable, evidence-based sources to patients and caregivers.

  12. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    Science.gov (United States)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

    We demonstrate that the recently proposed master-slave interferometry method is able to provide true dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. Because the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. In contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.
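    The correlation step can be pictured as below: a measured channelled spectrum is compared against masks pre-recorded at known optical path differences, and the best-matching mask gives the displacement. The cosine masks, wavenumber axis, and noise level are all placeholders, not details of the actual master-slave implementation.

      # Sketch of the mask-correlation idea: pick the pre-recorded mask (known OPD)
      # that correlates most strongly with the measured channelled spectrum.
      # Axis ranges, mask shapes and noise level are illustrative only.
      import numpy as np

      def best_opd(spectrum, masks, opds):
          s = spectrum - spectrum.mean()
          scores = []
          for m in masks:
              mm = m - m.mean()
              scores.append(np.dot(s, mm) / (np.linalg.norm(s) * np.linalg.norm(mm)))
          return opds[int(np.argmax(scores))]

      k = np.linspace(6.0, 7.0, 1024)            # wavenumber axis (arbitrary units)
      opds = np.linspace(50.0, 150.0, 101)       # candidate OPDs used for the masks
      masks = [np.cos(k * d) for d in opds]
      measured = masks[49] + 0.1 * np.random.randn(k.size)   # true OPD = opds[49]
      print("estimated OPD:", best_opd(measured, masks, opds))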

  13. More-Accurate Model of Flows in Rocket Injectors

    Science.gov (United States)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  14. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)

  15. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, specifically to address the challenge of accuracy and diversity of CF algorithms. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the direction of user similarity is an important factor in improving personalized recommendation performance.
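    The role of similarity direction can be sketched with a simple degree-normalized overlap on a toy user-item matrix; normalizing the common-item count by the degree of the source user makes the similarity pointing from small-degree users toward large-degree users larger than the reverse, as described above. This is only a schematic, not the exact HDCF definition.

      # Schematic of direction-dependent user similarity: s[i, j] is the number of
      # items users i and j share, normalized by the degree of the *source* user i,
      # so s(i->j) != s(j->i) in general. Toy data; not the exact HDCF formula.
      import numpy as np

      A = np.array([[1, 1, 1, 1, 0],     # large-degree user
                    [1, 0, 0, 0, 0],     # small-degree user
                    [0, 1, 1, 0, 1]])    # rows: users, columns: items (hypothetical)

      overlap = A @ A.T                  # common items for each pair of users
      degree = A.sum(axis=1)
      s = overlap / degree[:, np.newaxis]
      np.fill_diagonal(s, 0.0)
      print(s)                           # e.g. s[1, 0] = 1.0 > s[0, 1] = 0.25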

  16. Accurate and precise zinc isotope ratio measurements in urban aerosols.

    Science.gov (United States)

    Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly

    2008-12-15

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to Zn sample content of approximately 50 ng due to the combined effect of blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial) ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).

  17. Accurate Runout Measurement for HDD Spinning Motors and Disks

    Science.gov (United States)

    Jiang, Quan; Bi, Chao; Lin, Song

    As hard disk drive (HDD) areal density increases, its track width becomes smaller and smaller, and so does its non-repeatable runout. The HDD industry needs more accurate, better-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system for precisely measuring the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial directions. In order to minimize the synchronization error between the rotor position and the displacement signal, a high resolution counter is adopted instead of the conventional phase-lock loop method. With a Laser Doppler Vibrometer and proper signal processing, the proposed runout system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It can provide an effective and accurate means to measure the runout of high areal density HDDs, in particular the next generation of HDDs, such as patterned-media HDDs and HAMR HDDs.

  18. Accurate Interatomic Force Fields via Machine Learning with Covariant Kernels

    CERN Document Server

    Glielmo, Aldo; De Vita, Alessandro

    2016-01-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian Process (GP) Regression. This is based on matrix-valued kernel functions, on which we impose the requirement that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such "covariant" GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni and Fe crystalline...

  19. Accurate interatomic force fields via machine learning with covariant kernels

    Science.gov (United States)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO (d ) for the relevant dimensionality d . Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
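    Schematically, the covariant kernels described in this and the preceding record can be written as a scalar base kernel integrated against rotations, so that the matrix-valued result transforms like a force; the notation below is a paraphrase for orientation, not the papers' exact expressions.

      \[
        \mathbf{K}(x, x') \;=\; \int_{SO(d)} \! d\mathcal{R}\; \mathcal{R}\, k_b\bigl(x, \mathcal{R} x'\bigr),
        \qquad
        \mathbf{K}(x, x') \;\approx\; \sum_{\mathcal{R} \in \mathcal{G}} \mathcal{R}\, k_b\bigl(x, \mathcal{R} x'\bigr)
        \;\;\text{for a finite point group } \mathcal{G},
      \]

      where k_b is a rotation-invariant scalar base kernel and x, x' denote local atomic configurations.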

  20. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes the traditional fabrication limitations by designing heterogeneous materials to enable the printed structures evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature pre-strained polystyrene films shrink along the XY plane. In our process silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and angle lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way by using silver ink printed on polystyrene films to 4D print self-folding structures for electrically induced sequential folding with angular control.

  1. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

    Britne eShabbott

    2013-11-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  2. Accurate measurement of streamwise vortices using dual-plane PIV

    Science.gov (United States)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominately out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we do a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  3. Simple and accurate optical height sensor for wafer inspection systems

    Science.gov (United States)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and for detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating the measurement error due to the surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected on the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the images of the slits in both the two directions in the captured image. Pattern-related measurement error was reduced by applying the coordinates averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and ±0.1 mm from the reference height in a simple configuration.

  4. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the accuracy of each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  5. Towards an accurate determination of the age of the Universe

    CERN Document Server

    Jiménez, R

    1998-01-01

    In the past 40 years a considerable effort has been focused on determining the age of the Universe at zero redshift using several stellar clocks. In this review I will describe the best theoretical methods to determine the age of the oldest Galactic Globular Clusters (GC). I will also argue that a more accurate age determination may come from passively evolving high-redshift ellipticals. In particular, I will review two new methods to determine the age of GCs. These two methods are more accurate than the classical isochrone fitting technique. The first method is based on the morphology of the horizontal branch and is independent of the distance modulus of the globular cluster. The second method uses a careful binning of the stellar luminosity function which determines simultaneously the distance and age of the GC. It is found that the oldest GCs have an age of $13.5 \pm 2$ Gyr. The absolute minimum age for the oldest GCs is 10.5 Gyr and the maximum is 16.0 Gyr (with 99% confidence). Therefore, an Einstein-De S...

  6. Accurate stone analysis: the impact on disease diagnosis and treatment.

    Science.gov (United States)

    Mandel, Neil S; Mandel, Ian C; Kolbach-Mandel, Ann M

    2017-02-01

    This manuscript reviews the requirements for acceptable compositional analysis of kidney stones using various biophysical methods. High-resolution X-ray powder diffraction crystallography and Fourier transform infrared spectroscopy (FTIR) are the only acceptable methods in our labs for kidney stone analysis. The use of well-constructed spectral reference libraries is the basis for accurate and complete stone analysis. The literature included in this manuscript identifies errors in most commercial laboratories and in some academic centers. We provide personal comments on why such errors are occurring at such high rates, and although the work load is rather large, it is very worthwhile in providing accurate stone compositions. We also provide the results of our almost 90,000 stone analyses and a breakdown of the number of components we have observed in the various stones. We also offer advice on determining the method used by the various FTIR equipment manufacturers who also provide a stone analysis library, so that FTIR users can feel comfortable in the accuracy of their reported results. Such an analysis of the accuracy of the individual reference libraries could positively influence the reduction in their respective error rates.

  7. Accurate prediction of secondary metabolite gene clusters in filamentous fungi.

    Science.gov (United States)

    Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H

    2013-01-02

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.

  8. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which involves the overhead of saving data and requires an accurate estimate of the target location under energy constraints. We do not have any mechanism that controls and maintains the data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a prediction- and adaptation-based algorithm, which reduces the number of sensor nodes used through an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  9. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for and important to realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of the urban transport system, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can predict traffic state trends over a certain period in the future. From the perspective of dynamic decision making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed. The input vectors comprised actual traffic volumes, and four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models for prediction.
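    A minimal multistep forecast in the same spirit can be sketched with support vector regression from scikit-learn, feeding each prediction back as an input for the next step; the lag window, kernel settings, and synthetic flow series are placeholders rather than the paper's SVM-HPT design.

      # Minimal recursive multistep traffic-flow forecast with support vector
      # regression. Lag window, kernel and synthetic data are illustrative only.
      import numpy as np
      from sklearn.svm import SVR

      def make_lagged(series, lags):
          X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
          y = series[lags:]
          return X, y

      # Synthetic "traffic volume" with a daily-like cycle plus noise.
      rng = np.random.default_rng(0)
      t = np.arange(600)
      flow = 500 + 200 * np.sin(2 * np.pi * t / 96) + 20 * rng.standard_normal(t.size)

      lags, horizon = 8, 4
      X, y = make_lagged(flow, lags)
      model = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X[:-horizon], y[:-horizon])

      # Recursive multistep prediction: feed each prediction back as an input.
      window = list(flow[-lags - horizon:-horizon])
      preds = []
      for _ in range(horizon):
          yhat = float(model.predict(np.array(window[-lags:]).reshape(1, -1))[0])
          preds.append(yhat)
          window.append(yhat)
      print("multistep forecast:", np.round(preds, 1))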

  10. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on MRI of the brain. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue or conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan, in which MR spectroscopy later elucidated the accurate diagnosis.

  11. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  12. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    Directory of Open Access Journals (Sweden)

    Usman Khan

    2014-04-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication.
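    For orientation, the modified Bessel functions enter through the steady-state, circularly symmetric heat balance that such membrane models build on; the form below is the generic textbook version with illustrative symbols (λ a thermal decay length, κ the membrane conductivity, t_m its thickness, q the heating power density), not the authors' full matrix formulation.

      \[
        \frac{1}{r}\frac{d}{dr}\!\left( r\,\frac{dT}{dr} \right) \;-\; \frac{T - T_{\mathrm{amb}}}{\lambda^{2}} \;=\; -\,\frac{q(r)}{\kappa\, t_m},
        \qquad
        T_{\mathrm{hom}}(r) \;=\; T_{\mathrm{amb}} \;+\; A\, I_0\!\left(\frac{r}{\lambda}\right) \;+\; B\, K_0\!\left(\frac{r}{\lambda}\right),
      \]

      where I_0 and K_0 are the modified Bessel functions of the first and second kind, and the constants A, B follow from the boundary and continuity conditions of each region.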

  13. Accurate diagnostics of ataxia-telangiectasia cellular phenotype by employing in vitro lymphocyte radiosensitivity testing

    Directory of Open Access Journals (Sweden)

    Vujić Dragana S.

    2013-01-01

    In this paper we present data from lymphocyte radiosensitivity testing used for characterization of the radiosensitive cellular phenotype and diagnosis of ataxia-telangiectasia disease. We point out the advantages of the lymphocyte micronucleus test (CBMN) over other cellular tests for assessment of radiosensitivity: the first advantage of CBMN is that primary patient cells are used (less than 1 ml); the second is that the results of testing are obtained within 3 days and there is no need for establishing a patient-derived cell line, which requires additional time and the application of more expensive methods. The third advantage of the CBMN method is that it gives information about the proliferative ability of cells, which can recognize a dysfunctional ataxia-telangiectasia mutated protein. The results are fast and accurate in the diagnosis of ataxia-telangiectasia disease.

  14. Accurate mass measurements on neutron-deficient krypton isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany)]. E-mail: rodriguez@lpccaen.in2p3.fr; Audi, G. [CSNSM-IN2P3-CNRS, 91405 Orsay-Campus(France); Aystoe, J. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Beck, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Blaum, K. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Bollen, G. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States); Herfurth, F. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Jokinen, A. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Kellerbauer, A. [CERN, Division EP, 1211 Geneva 23 (Switzerland); Kluge, H.-J. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); University of Heidelberg, 69120 Heidelberg (Germany); Kolhinen, V.S. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Oinonen, M. [Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki (Finland); Sauvan, E. [Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Schwarz, S. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2006-04-17

    The masses of ^{72-78,80,82,86}Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for ^{72-75}Kr outweighed previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  15. Stereotypes of age differences in personality traits: universal and accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E; De Bolle, Marleen; Costa, Paul T; Sutin, Angelina R; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Graf, Sylvie; Yik, Michelle; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-Kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R; Crawford, Jarret T; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Marušić, Iris; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A; Gheorghiu, Mirona; Smith, Peter B; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V; Pramila, V S; Terracciano, Antonio

    2012-12-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  16. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Science.gov (United States)

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  17. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has been recently extended in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and the finite element (FEM) technique. This avoids the meshing of the coils and the air regions, and avoids the artificial far field boundary conditions. The method is therefore specially suited for the accurate calculation of fields in the superconducting magnets in which the field is dominated by the coil. We will present the fringe field calculations in both 2d and 3d geometries to evaluate the effect of connections and the cryostat on the field quality and the flux density to which auxiliary bus-bars are exposed.

  18. Accurate Element of Compressive Bar considering the Effect of Displacement

    Directory of Open Access Journals (Sweden)

    Lifeng Tang

    2015-01-01

    By constructing the compressive bar element and developing the stiffness matrix, most issues about the compressive bar can be solved. In this paper, based on the second derivative of the equilibrium differential governing equations, the displacement shape functions are obtained. Then the finite element formula of the compressive bar element is developed by using the potential energy principle and the analytical shape function. Based on the total potential energy variation principle, the static and geometrical stiffness matrices are proposed, in which the large deformation of the compressive bar is considered. To verify the accuracy and validity of the analytical trial function element proposed in this paper, a number of numerical examples are presented. Comparisons show that the proposed element has high calculation efficiency and a rapid speed of convergence.

  19. Accurate determination of heteroclinic orbits in chaotic dynamical systems

    Science.gov (United States)

    Li, Jizhou; Tomsovic, Steven

    2017-03-01

    Accurate calculation of heteroclinic and homoclinic orbits can be of significant importance in some classes of dynamical system problems. Yet for very strongly chaotic systems initial deviations from a true orbit will be magnified by a large exponential rate making direct computational methods fail quickly. In this paper, a method is developed that avoids direct calculation of the orbit by making use of the well-known stability property of the invariant unstable and stable manifolds. Under an area-preserving map, this property assures that any initial deviation from the stable (unstable) manifold collapses onto them under inverse (forward) iterations of the map. Using a set of judiciously chosen auxiliary points on the manifolds, long orbit segments can be calculated using the stable and unstable manifold intersections of the heteroclinic (homoclinic) tangle. Detailed calculations using the example of the kicked rotor are provided along with verification of the relation between action differences and certain areas bounded by the manifolds.

  20. Accurate finite difference methods for time-harmonic wave propagation

    Science.gov (United States)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
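    As a concrete instance of the Padé-based (compact) idea for the one-dimensional model problem u'' + k²u = 0 on a uniform grid of spacing h, a weighted-average treatment of the undifferentiated term gives the classical fourth-order stencil below, compared with the standard pointwise second-order scheme; this is the textbook form, included only for orientation and not the paper's exact multidimensional schemes.

      \[
        \text{pointwise (2nd order):}\quad \frac{u_{j-1} - 2u_j + u_{j+1}}{h^{2}} + k^{2} u_j = 0,
        \qquad
        \text{compact (4th order):}\quad \frac{u_{j-1} - 2u_j + u_{j+1}}{h^{2}} + k^{2}\,\frac{u_{j-1} + 10\,u_j + u_{j+1}}{12} = 0.
      \]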

  1. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction ? was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7 followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.

  2. Redundancy-Free, Accurate Analytical Center Machine for Classification

    Institute of Scientific and Technical Information of China (English)

    ZHENG Fanzi; QIU Zhengding; Leng Yonggang; Yue Jianhai

    2005-01-01

    Analytical center machine (ACM) has remarkable generalization performance based on the analytical center of version space and outperforms SVM. From the analysis of the geometry of machine learning and the principle of ACM, it is shown that some training patterns are redundant to the definition of the version space. Redundant patterns push the ACM classifier away from the analytical center of the prime version space so that the generalization performance degrades; at the same time, redundant patterns slow down the classifier and reduce the efficiency of storage. Thus, an incremental algorithm is proposed to remove redundant patterns and is embedded into the framework of ACM, yielding a redundancy-free accurate analytical center machine (RFA-ACM) for classification. Experiments with the Heart, Thyroid, and Banana datasets demonstrate the validity of RFA-ACM.

  3. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    CERN Document Server

    Chmiela, Stefan; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert

    2016-01-01

    Using conservation of energy -- a fundamental property of closed classical and quantum mechanical systems -- we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations...

  4. Accurate derivative evaluation for any Grad–Shafranov solver

    Energy Technology Data Exchange (ETDEWEB)

    Ricketson, L.F. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Cerfon, A.J., E-mail: cerfon@cims.nyu.edu [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Rachh, M. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Freidberg, J.P. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  5. A fast and accurate FPGA based QRS detection system.

    Science.gov (United States)

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG Analysis system is described in this paper. The design, based on a popular software based QRS detection algorithm, calculates the threshold value for the next peak detection cycle, from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30 minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work and uses 76% resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has a higher detection accuracy as compared to our previous design and takes almost half the analysis time in comparison to software based approach.
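    The threshold rule described above, namely deriving the next detection threshold from the median of the eight most recently detected peaks, can be sketched in a few lines; the 0.5 scale factor, the 200 ms refractory period, and the synthetic signal are placeholders, and the FPGA implementation specifics are not reproduced.

      # Sketch of the median-of-eight threshold rule for beat detection. Scale
      # factor, refractory period and seed peak values are illustrative only.
      from collections import deque
      import numpy as np

      def detect_qrs(signal, fs):
          recent_peaks = deque([0.6] * 8, maxlen=8)   # seed values, arbitrary
          refractory = int(0.2 * fs)                  # 200 ms lockout, assumed
          last_beat = -refractory
          beats = []
          for i in range(1, len(signal) - 1):
              threshold = 0.5 * np.median(recent_peaks)
              is_local_max = signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
              if is_local_max and signal[i] > threshold and i - last_beat > refractory:
                  beats.append(i)
                  recent_peaks.append(signal[i])
                  last_beat = i
          return beats

      # Toy usage with a synthetic spiky "ECG": one spike per second.
      fs = 360
      t = np.arange(10 * fs) / fs
      ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)
      ecg[(np.arange(len(t)) % fs) == 0] += 1.0
      print("detected beats at samples:", detect_qrs(ecg, fs))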

  6. Accurate measure by weight of liquids in industry

    Energy Technology Data Exchange (ETDEWEB)

    Muller, M.R.

    1992-12-12

    This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.

  7. Accurate measure by weight of liquids in industry. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Muller, M.R.

    1992-12-12

    This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.

  8. GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity

    CERN Document Server

    Leonard, Adrienne; Starck, Jean-Luc

    2013-01-01

    We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line of sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...

  9. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    Institute of Scientific and Technical Information of China (English)

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel, and are well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom been found in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate approximation parallel algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.

  10. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, the frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink® environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  11. Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying

    CERN Document Server

    Tourki, Kamel; Alouni, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable and which takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive statistics based on the exact probability density function (PDF) of each hop. Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures.
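
    The closed-form expressions derived in the paper are not reproduced here; as a generic, hedged illustration of how such BER analysis is commonly validated, the sketch below compares a Monte Carlo estimate of the coherent BPSK bit-error rate on a single Rayleigh-faded hop with the textbook average-SNR formula. It uses NumPy and is not specific to the relaying scheme above.

    import numpy as np

    def bpsk_ber_rayleigh_theory(snr_db):
        """Textbook average BPSK BER over a flat Rayleigh-fading channel."""
        g = 10.0 ** (snr_db / 10.0)
        return 0.5 * (1.0 - np.sqrt(g / (1.0 + g)))

    def bpsk_ber_rayleigh_mc(snr_db, n_bits=200_000, seed=0):
        """Monte Carlo BER estimate for coherently detected BPSK over Rayleigh fading."""
        rng = np.random.default_rng(seed)
        g = 10.0 ** (snr_db / 10.0)
        bits = rng.integers(0, 2, n_bits)
        symbols = 1.0 - 2.0 * bits                                   # 0 -> +1, 1 -> -1
        h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2.0)
        noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2.0 * g)
        r = h * symbols + noise
        detected = (np.real(np.conj(h) * r) < 0).astype(int)         # coherent detection
        return float(np.mean(detected != bits))

    if __name__ == "__main__":
        for snr in (0, 5, 10, 15):
            print(snr, "dB  theory:", round(bpsk_ber_rayleigh_theory(snr), 5),
                  " simulated:", round(bpsk_ber_rayleigh_mc(snr), 5))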

  12. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  13. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claduio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  14. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D; Äystö, J; Beck, D

    2006-01-01

    The masses of $^{72–78,80,82,86}$Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for $^{72–75}$Kr are more precise than the previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  15. Fast and spectrally accurate summation of 2-periodic Stokes potentials

    CERN Document Server

    Lindbo, Dag

    2011-01-01

    We derive an Ewald decomposition for the Stokeslet in planar periodicity and a novel PME-type O(N log N) method for the fast evaluation of the resulting sums. The decomposition is the natural 2P counterpart to the classical 3P decomposition by Hasimoto, and is given in an explicit form not found in the literature. Truncation error estimates are provided to aid in selecting parameters. The fast, PME-type, method appears to be the first fast method for computing Stokeslet Ewald sums in planar periodicity, and has three attractive properties: it is spectrally accurate; it uses the minimal amount of memory that a gridded Ewald method can use; and it provides clarity regarding numerical errors and how to choose parameters. Analytical and numerical results are given to support this. We explore the practicalities of the proposed method, and survey the computational issues involved in applying it to 2-periodic boundary integral Stokes problems.

  16. Second-Order Accurate Projective Integrators for Multiscale Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S L; Gear, C W

    2005-05-27

    We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.
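
    As a minimal sketch of the projective-integration idea only (inner damping steps followed by an extrapolating outer step), not the second-order Runge-Kutta or Adams-Bashforth variants introduced in the paper, the following applies a first-order projective forward Euler method to a stiff linear test system; the step sizes and the test problem are arbitrary illustrative choices.

    import numpy as np

    def projective_forward_euler(f, y0, t_end, dt_inner, k_inner, dt_outer):
        """Projective forward Euler: take k_inner+1 small inner Euler steps to damp
        the fast modes, then extrapolate ('project') the last two inner iterates over
        a large outer step. First-order accurate in the outer step size."""
        t, y = 0.0, np.asarray(y0, dtype=float)
        while t < t_end:
            y_prev = y
            for _ in range(k_inner + 1):                       # inner integrator
                y_prev, y = y, y + dt_inner * f(t, y)
                t += dt_inner
            y = y + (dt_outer / dt_inner) * (y - y_prev)       # outer projective step
            t += dt_outer
        return t, y

    if __name__ == "__main__":
        # stiff linear test problem: fast mode (lambda = -100) and slow mode (lambda = -1)
        A = np.diag([-100.0, -1.0])
        t, y = projective_forward_euler(lambda t, y: A @ y, [1.0, 1.0], 1.0,
                                        dt_inner=1e-3, k_inner=5, dt_outer=0.05)
        print("t =", round(t, 3), " numerical:", y, " exact:", np.exp(np.diag(A) * t))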

  17. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    CERN Document Server

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\\mathrm{\\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as full Stokes vector for the simulation at various positions at the solar disk, and analyse the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  18. Accurate complex scaling of three dimensional numerical potentials.

    Science.gov (United States)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply the complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.

  19. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  20. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    Science.gov (United States)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
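
    As a simplified, hypothetical illustration of a time-sequence energy model (not the authors' field-calibrated model), the sketch below steps through hourly DC power values, applies derate factors, a load-dependent inverter efficiency curve and AC clipping, and sums annual AC energy; every coefficient and the stand-in power series are placeholders.

    import numpy as np

    def inverter_efficiency(p_dc, p_ac_rated, eta_max=0.97, p_self=0.005, loss_q=0.02):
        """Hypothetical load-dependent efficiency curve: a fixed self-consumption
        term plus a loss term that grows with loading. Clipped to [0, eta_max]."""
        load = np.clip(p_dc / p_ac_rated, 1e-6, None)
        return np.clip(eta_max - p_self / load - loss_q * load, 0.0, eta_max)

    def annual_ac_energy(p_dc_hourly, p_ac_rated, soiling_derate=0.98, wiring_derate=0.985):
        """Sum hourly AC energy (kWh) from hourly DC power (kW): apply derates,
        the inverter efficiency curve, and clipping at the inverter AC rating."""
        p_dc = np.asarray(p_dc_hourly, dtype=float) * soiling_derate * wiring_derate
        p_ac = np.minimum(p_dc * inverter_efficiency(p_dc, p_ac_rated), p_ac_rated)
        return float(np.sum(p_ac))          # 1-hour steps, so kW sums directly to kWh

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # stand-in for 8760 hourly DC power values from an irradiance/tracking model
        p_dc = np.clip(rng.normal(300.0, 180.0, 8760), 0.0, 600.0)
        print(f"predicted annual AC energy: {annual_ac_energy(p_dc, p_ac_rated=500.0):,.0f} kWh")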

  1. Accurate Enthalpies of Formation of Astromolecules: Energy, Stability and Abundance

    CERN Document Server

    Etim, Emmanuel E

    2016-01-01

    Accurate enthalpies of formation are reported for known and potential astromolecules using high level ab initio quantum chemical calculations. A total of 130 molecules, comprising 31 isomeric groups and 24 cyanide/isocyanide pairs with 3 to 12 atoms, have been considered. The results show an interesting, surprisingly not well explored, relationship between energy, stability and abundance (ESA) existing among these molecules. Among the isomeric species, isomers with lower enthalpies of formation are more easily observed in the interstellar medium compared to their counterparts with higher enthalpies of formation. Available data in the literature confirm the high abundance of the most stable isomer over other isomers in the different groups considered. Potential for interstellar hydrogen bonding accounts for the few exceptions observed. Thus, in general, it suffices to say that the interstellar abundances of related species are directly proportional to their stabilities. The immediate consequences of ...

  2. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a $z$-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
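
    As a minimal illustration of the quantity being maximized (not the spectral algorithm or the finite-size corrections discussed above), the following computes Newman-Girvan modularity for a given partition of an undirected, unweighted graph from its edge list; the toy graph and partition are arbitrary.

    from collections import defaultdict

    def modularity(edges, community_of):
        """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/2m)^2) for an undirected,
        unweighted graph given as an edge list and a node -> community mapping."""
        m = len(edges)                               # total number of edges
        intra = defaultdict(int)                     # edges internal to each community
        degree_sum = defaultdict(int)                # sum of node degrees per community
        for u, v in edges:
            degree_sum[community_of[u]] += 1
            degree_sum[community_of[v]] += 1
            if community_of[u] == community_of[v]:
                intra[community_of[u]] += 1
        return sum(intra[c] / m - (degree_sum[c] / (2.0 * m)) ** 2 for c in degree_sum)

    if __name__ == "__main__":
        # two triangles joined by a single bridge edge
        edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
        partition = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
        print(round(modularity(edges, partition), 4))    # ~0.3571 for this toy graph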

  3. Methods for Accurate Free Flight Measurement of Drag Coefficients

    CERN Document Server

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...
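
    As a hedged sketch of the first method (near and far velocities over a known distance), not the authors' full experimental procedure or their uncertainty analysis, the following recovers an average drag coefficient from the measured velocity loss, assuming level flight with drag as the only decelerating force and a roughly constant Cd over the interval; the projectile parameters are hypothetical.

    import math

    def drag_coefficient(v_near, v_far, distance, mass, diameter, rho_air=1.225):
        """Average drag coefficient from near/far velocity measurements.

        Assumes drag is the only force along the flight path and Cd is roughly
        constant, so v(x) = v_near * exp(-k x) with k = rho * Cd * A / (2 m);
        inverting for Cd gives the expression below."""
        area = math.pi * (diameter / 2.0) ** 2
        return 2.0 * mass * math.log(v_near / v_far) / (rho_air * area * distance)

    if __name__ == "__main__":
        # hypothetical projectile: 10.9 g, 7.82 mm diameter, 200 m flight path
        cd = drag_coefficient(v_near=850.0, v_far=760.0, distance=200.0,
                              mass=0.0109, diameter=0.00782)
        print(f"estimated average drag coefficient: {cd:.3f}")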

  4. Calculation of Accurate Hexagonal Discontinuity Factors for PARCS

    Energy Technology Data Exchange (ETDEWEB)

    Pounders, J.; Bandini, B. R.; Xu, Y.; Downar, T. J.

    2007-11-01

    In this study we derive a methodology for calculating discontinuity factors consistent with the Triangle-based Polynomial Expansion Nodal (TPEN) method implemented in PARCS for hexagonal reactor geometries. The accuracy of coarse-mesh nodal methods is greatly enhanced by permitting flux discontinuities at node boundaries, but the practice of calculating discontinuity factors from infinite-medium (zero-current) single bundle calculations may not be sufficiently accurate for more challenging problems in which there is a large amount of internodal neutron streaming. The authors therefore derive a TPEN-based method for calculating discontinuity factors that are exact with respect to generalized equivalence theory. The method is validated by reproducing the reference solution for a small hexagonal core.

  5. Accurate platelet counting in an insidious case of pseudothrombocytopenia.

    Science.gov (United States)

    Lombarts, A J; Zijlstra, J J; Peters, R H; Thomasson, C G; Franck, P F

    1999-01-01

    Anticoagulant-induced aggregation of platelets leads to pseudothrombocytopenia. Blood cell counters generally trigger alarms to alert the user. We describe an insidious case of pseudothrombocytopenia, where the complete absence of Coulter counter alarms both in ethylenediaminetetraacetic acid blood and in citrate or acid citrate dextrose blood samples was compounded by the fact that the massive aggregates were exclusively found at the edges of the blood smear. Non-recognition of pseudothrombocytopenia can have serious diagnostic and therapeutic consequences. While the anti-aggregant mixture citrate-theophylline-adenosine-dipyridamole completely failed in preventing pseudothrombocytopenia, addition of iloprost to anticoagulants only partially prevented the aggregation. Only the prior addition of gentamicin to any anticoagulant used resulted in complete prevention of pseudothrombocytopenia and enabled accurate platelet counting.

  6. A new accurate pill recognition system using imprint information

    Science.gov (United States)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth of pharmaceuticals that are currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  7. IVA: accurate de novo assembly of RNA virus genomes.

    Science.gov (United States)

    Hunt, Martin; Gall, Astrid; Ong, Swee Hoe; Brener, Jacqui; Ferns, Bridget; Goulder, Philip; Nastouli, Eleni; Keane, Jacqueline A; Kellam, Paul; Otto, Thomas D

    2015-07-15

    An accurate genome assembly from short read sequencing data is critical for downstream analysis, for example allowing investigation of variants within a sequenced population. However, assembling sequencing data from virus samples, especially RNA viruses, into a genome sequence is challenging due to the combination of viral population diversity and extremely uneven read depth caused by amplification bias in the inevitable reverse transcription and polymerase chain reaction amplification process of current methods. We developed a new de novo assembler called IVA (Iterative Virus Assembler) designed specifically for read pairs sequenced at highly variable depth from RNA virus samples. We tested IVA on datasets from 140 sequenced samples from human immunodeficiency virus-1 or influenza-virus-infected people and demonstrated that IVA outperforms all other virus de novo assemblers. The software runs under Linux, has the GPLv3 licence and is freely available from http://sanger-pathogens.github.io/iva © The Author 2015. Published by Oxford University Press.

  8. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use

  9. Accurate and efficient maximal ball algorithm for pore network extraction

    Science.gov (United States)

    Arand, Frederick; Hesser, Jürgen

    2017-04-01

    The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
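
    The full maximal-ball extraction of a pore/throat network is beyond a short example; as a sketch of its first ingredient only, the following computes the Euclidean distance field of a binary pore-space volume with SciPy and reads off local maxima as candidate maximal-ball centres. The toy volume and the 3x3x3 neighbourhood rule are illustrative choices, not the algorithm described in the paper.

    import numpy as np
    from scipy import ndimage

    def candidate_pore_centres(pore_mask):
        """Distance field of the pore space plus its local maxima, which serve as
        maximal-ball seeds (centre and inscribed-sphere radius per seed). This is
        only a starting point, not the full pore/throat network extraction."""
        dist = ndimage.distance_transform_edt(pore_mask)
        # a voxel is a seed if its radius is maximal within a 3x3x3 neighbourhood
        local_max = (dist == ndimage.maximum_filter(dist, size=3)) & pore_mask
        seeds = [(int(z), int(y), int(x), float(dist[z, y, x]))
                 for z, y, x in zip(*np.nonzero(local_max))]
        return dist, seeds

    if __name__ == "__main__":
        vol = np.zeros((20, 20, 20), dtype=bool)
        vol[2:9, 2:9, 2:9] = True           # toy pore body no. 1
        vol[11:18, 11:18, 11:18] = True     # toy pore body no. 2
        vol[8:12, 8:12, 8:12] = True        # narrow connecting throat
        dist, seeds = candidate_pore_centres(vol)
        print(len(seeds), "candidate centres; largest inscribed radius:",
              round(max(r for *_, r in seeds), 2))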

  10. Accurate Iterative Analysis of the K-V Equations

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, O.A.

    2005-05-09

    Those working with alternating-gradient (A-G) systems look for simple, accurate ways to analyze A-G performance for matched beams. The useful K-V equations are easily solved in the smooth approximation. This approximate solution becomes quite inaccurate for applications with large focusing fields and phase advances. Results of efforts to improve the accuracy have tended to be indirect or complex. The generalizations presented previously gave better accuracy in a simple explicit format. However, the method used to derive those results (expansion in powers of a small parameter) was complex and hard to follow; also, reference 7 only gave low-order correction formulas. The present paper uses a straightforward iteration method and obtains equations of higher order than shown in that previous paper.

  11. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Science.gov (United States)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.

  12. The accurate assessment of small-angle X-ray scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Thomas D. [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); Luft, Joseph R. [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); SUNY Buffalo, 700 Ellicott Street, Buffalo, NY 14203 (United States); Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne [Stanford Synchrotron Radiation Lightsource, 2575 Sand Hill Road, MS69, Menlo Park, CA 94025 (United States); Snell, Edward H., E-mail: esnell@hwi.buffalo.edu [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); SUNY Buffalo, 700 Ellicott Street, Buffalo, NY 14203 (United States)

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  13. Accurate Cure Modeling for Isothermal Processing of Fast Curing Epoxy Resins

    Directory of Open Access Journals (Sweden)

    Alexander Bernath

    2016-11-01

    In this work, a holistic approach for the characterization and mathematical modeling of the reaction kinetics of a fast epoxy resin is shown. Major composite manufacturing processes like resin transfer molding involve isothermal curing at temperatures far below the ultimate glass transition temperature. Hence, premature vitrification occurs during curing and consequently has to be taken into account by the kinetic model. In order to show the benefit of using a complex kinetic model, the Kamal-Malkin kinetic model is compared to the Grindling kinetic model in terms of prediction quality for isothermal processing. Of the selected models, only the Grindling kinetic model is capable of taking vitrification into account. Non-isothermal, isothermal and combined differential scanning calorimetry (DSC) measurements are conducted and processed for subsequent use for model parametrization. In order to demonstrate which DSC measurements are vital for proper cure modeling, both models are fitted to varying sets of measurements. Special attention is given to the evaluation of isothermal DSC measurements, which are subject to deviations arising from unrecorded cross-linking prior to the beginning of the measurement as well as from physical aging effects. It is found that isothermal measurements are vital for accurate modeling of isothermal cure and cannot be neglected. Accurate cure predictions are achieved using the Grindling kinetic model.
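
    As a hedged sketch of the kind of model being parametrized (the Kamal-Malkin autocatalytic form, without the diffusion/vitrification correction that the Grindling model adds), the following integrates d(alpha)/dt = (k1 + k2*alpha^m)(1 - alpha)^n at constant temperature; all Arrhenius parameters and exponents are placeholders, not the fitted values from the paper.

    import math

    def kamal_malkin_rate(alpha, temp_k, a1=1.0e5, e1=60e3, a2=5.0e5, e2=55e3,
                          m=0.7, n=1.6, r_gas=8.314):
        """Kamal-Malkin cure rate d(alpha)/dt with Arrhenius rate constants.
        All kinetic parameters here are placeholder values for illustration."""
        k1 = a1 * math.exp(-e1 / (r_gas * temp_k))
        k2 = a2 * math.exp(-e2 / (r_gas * temp_k))
        return (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n

    def isothermal_cure(temp_c, t_end_s, dt=0.1):
        """Explicit Euler integration of the degree of cure at a fixed temperature."""
        temp_k = temp_c + 273.15
        alpha, t = 0.0, 0.0
        while t < t_end_s:
            alpha = min(alpha + dt * kamal_malkin_rate(alpha, temp_k), 1.0)
            t += dt
        return alpha

    if __name__ == "__main__":
        for temp in (80.0, 100.0, 120.0):
            print(f"{temp:5.1f} C  degree of cure after 10 min: "
                  f"{isothermal_cure(temp, 600.0):.3f}")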

  14. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    Energy Technology Data Exchange (ETDEWEB)

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
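
    As a minimal, hypothetical sketch of the distance feature alone (the published method also uses comparative genomic measures and trains itself on each genome), the following scores adjacent same-strand gene pairs with a logistic function of intergenic distance, so that overlapping or closely spaced genes score as likely operon pairs; the midpoint and scale values are illustrative, not trained parameters.

    import math

    def operon_score(gene_a_end, gene_b_start, same_strand, midpoint=40.0, scale=25.0):
        """Probability-like score that two adjacent genes belong to the same operon,
        based only on their intergenic distance in base pairs. Genes on opposite
        strands score 0; midpoint and scale are illustrative, untrained values."""
        if not same_strand:
            return 0.0
        gap = gene_b_start - gene_a_end            # negative when the ORFs overlap
        return 1.0 / (1.0 + math.exp((gap - midpoint) / scale))

    if __name__ == "__main__":
        # (end of upstream gene, start of downstream gene, same strand?)
        adjacent_pairs = [(1200, 1195, True),      # overlapping ORFs -> very likely
                          (5400, 5420, True),      # 20 bp gap        -> likely
                          (9100, 9350, True),      # 250 bp gap       -> unlikely
                          (12000, 12010, False)]   # opposite strands -> not a pair
        for end_a, start_b, strand in adjacent_pairs:
            print(end_a, start_b, round(operon_score(end_a, start_b, strand), 3))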

  15. Is accurate mapping of EMG signals on kinematics needed for precise online myoelectric control?

    Science.gov (United States)

    Jiang, Ning; Vujaklija, Ivan; Rehbaum, Hubertus; Graimann, Bernhard; Farina, Dario

    2014-05-01

    In this paper, we present a systematic analysis of the relationship between the accuracy of the mapping between EMG and hand kinematics and the control performance in goal-oriented tasks of three simultaneous and proportional myoelectric control algorithms: nonnegative matrix factorization (NMF), linear regression (LR), and artificial neural networks (ANN). The purpose was to investigate the impact of the precision of the kinematics estimation by a myoelectric controller on the ability to accurately complete goal-directed tasks. Nine naïve subjects performed a series of goal-directed myoelectric control tasks using the three algorithms, and their online performance was characterized by 6 indexes. The results showed that, although the three algorithms' mapping accuracies were significantly different, their online performance was similar. Moreover, for LR and ANN, the offline performance was not correlated to any of the online performance indexes, and only a weak correlation was found with three of them for NMF. We conclude that for reliable simultaneous and proportional myoelectric control, it is not necessary to achieve high accuracy in the mapping between EMG and kinematics. Rather, good online myoelectric control is achieved by the continuous interaction and adaptation of the user with the myoelectric controller through feedback (visual in the current study). Control signals generated by EMG with rather poor association with kinematic variables can still be fully exploited by the user for precise control. This conclusion explains the possibility of accurate simultaneous and proportional control over multiple degrees of freedom when using unsupervised algorithms, such as NMF.

  16. Innovative Flow Cytometry Allows Accurate Identification of Rare Circulating Cells Involved in Endothelial Dysfunction

    Science.gov (United States)

    Boraldi, Federica; Bartolomeo, Angelica; De Biasi, Sara; Orlando, Stefania; Costa, Sonia; Cossarizza, Andrea; Quaglino, Daniela

    2016-01-01

    Introduction Although rare, circulating endothelial and progenitor cells could be considered as markers of endothelial damage and repair potential, possibly predicting the severity of cardiovascular manifestations. A number of studies highlighted the role of these cells in age-related diseases, including those characterized by ectopic calcification. Nevertheless, their use in clinical practice is still controversial, mainly due to difficulties in finding reproducible and accurate methods for their determination. Methods Circulating mature cells (CMC, CD45-, CD34+, CD133-) and circulating progenitor cells (CPC, CD45dim, CD34bright, CD133+) were investigated by polychromatic high-speed flow cytometry to detect the expression of endothelial (CD309+) or osteogenic (BAP+) differentiation markers in healthy subjects and in patients affected by peripheral vascular manifestations associated with ectopic calcification. Results This study shows that: 1) polychromatic flow cytometry represents a valuable tool to accurately identify rare cells; 2) the balance of CD309+ on CMC/CD309+ on CPC is altered in patients affected by peripheral vascular manifestations, suggesting the occurrence of vascular damage and low repair potential; 3) the increase of circulating cells exhibiting a shift towards an osteoblast-like phenotype (BAP+) is observed in the presence of ectopic calcification. Conclusion Differences between healthy subjects and patients with ectopic calcification indicate that this approach may be useful to better evaluate endothelial dysfunction in a clinical context. PMID:27560136

  17. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  18. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Science.gov (United States)

    Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  19. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    Science.gov (United States)

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  20. Fast and accurate inference of local ancestry in Latino populations

    Science.gov (United States)

    Baran, Yael; Pasaniuc, Bogdan; Sankararaman, Sriram; Torgerson, Dara G.; Gignoux, Christopher; Eng, Celeste; Rodriguez-Cintron, William; Chapela, Rocio; Ford, Jean G.; Avila, Pedro C.; Rodriguez-Santana, Jose; Burchard, Esteban Gonzàlez; Halperin, Eran

    2012-01-01

    Motivation: It is becoming increasingly evident that the analysis of genotype data from recently admixed populations is providing important insights into medical genetics and population history. Such analyses have been used to identify novel disease loci, to understand recombination rate variation and to detect recent selection events. The utility of such studies crucially depends on accurate and unbiased estimation of the ancestry at every genomic locus in recently admixed populations. Although various methods have been proposed and shown to be extremely accurate in two-way admixtures (e.g. African Americans), only a few approaches have been proposed and thoroughly benchmarked on multi-way admixtures (e.g. Latino populations of the Americas). Results: To address these challenges we introduce here methods for local ancestry inference which leverage the structure of linkage disequilibrium in the ancestral population (LAMP-LD), and incorporate the constraint of Mendelian segregation when inferring local ancestry in nuclear family trios (LAMP-HAP). Our algorithms uniquely combine hidden Markov models (HMMs) of haplotype diversity within a novel window-based framework to achieve superior accuracy as compared with published methods. Further, unlike previous methods, the structure of our HMM does not depend on the number of reference haplotypes but on a fixed constant, and it is thereby capable of utilizing large datasets while remaining highly efficient and robust to over-fitting. Through simulations and analysis of real data from 489 nuclear trio families from the mainland US, Puerto Rico and Mexico, we demonstrate that our methods achieve superior accuracy compared with published methods for local ancestry inference in Latinos. Availability: http://lamp.icsi.berkeley.edu/lamp/lampld/ Contact: bpasaniu@hsph.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22495753

  1. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    Science.gov (United States)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher quality information and thus reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem from the Canary Islands (Spain) was chosen, Teide National Park, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet 'à trous' and Weighted Wavelet 'à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied with pixel-based and object-based approaches, and an accuracy assessment of the different thematic maps obtained was performed. The highest classification accuracy was obtained by applying the Support Vector Machine classifier with the object-based approach to the image fused with the Weighted Wavelet 'à trous' through Fractal Dimension Maps technique. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and the small size of the species. It is therefore important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.
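
    As a hedged illustration of the component-substitution family to which several of the tested algorithms (e.g. GS, FIHS) belong, rather than a reimplementation of any of them, the sketch below performs a simple intensity-substitution pansharpening: form an intensity component from the resampled multispectral bands, match the panchromatic band to it, and inject the difference into every band. The equal band weights and the random stand-in imagery are assumptions for demonstration only.

    import numpy as np

    def simple_pansharpen(ms, pan, weights=None):
        """Basic component-substitution pansharpening.

        ms  : (bands, h, w) multispectral image already resampled to the pan grid
        pan : (h, w) panchromatic band
        The intensity component is a weighted sum of the MS bands; the difference
        between the (matched) pan band and that intensity is added to every band."""
        ms, pan = ms.astype(float), pan.astype(float)
        if weights is None:
            weights = np.full(ms.shape[0], 1.0 / ms.shape[0])
        intensity = np.tensordot(weights, ms, axes=1)                 # (h, w)
        # match pan to the intensity component to limit spectral distortion
        pan_matched = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) \
                      + intensity.mean()
        return ms + (pan_matched - intensity)[None, :, :]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ms = rng.random((4, 64, 64))      # stand-in for resampled multispectral bands
        pan = rng.random((64, 64))        # stand-in for the panchromatic band
        print(simple_pansharpen(ms, pan).shape)      # (4, 64, 64)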

  2. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.

  3. An accurate solver for forward and inverse transport

    Science.gov (United States)

    Monard, François; Bal, Guillaume

    2010-07-01

    This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.

  4. Accurate skin dose measurements using radiochromic film in clinical applications.

    Science.gov (United States)

    Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G

    2006-04-01

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  5. Sleep Deprivation Impairs the Accurate Recognition of Human Emotions

    Science.gov (United States)

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P.

    2010-01-01

    Study Objectives: Investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Design: Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated measures analysis. Setting: Experimental laboratory study. Participants: Thirty-seven healthy participants, (21 females) aged 18–25 y, were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Interventions: Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. Measurements and Results: In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Conclusions: Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat relevant (Anger) and reward relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues. Citation: van der Helm E; Gujar N; Walker MP. Sleep deprivation impairs the accurate recognition of human

  6. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile," i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  7. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S.; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high-intrinsic-efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position-sensitive long counter, which simultaneously counts thermalized neutrons with high-thermal-efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy: a dose error of less than 15% for bare {sup 252}Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy-dependent dose equivalent response and resultant energy-dependent dose equivalent error of the present dosimeter to commercially available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparing the simulated output resulting from applications of several known neutron sources and dose rates.

  8. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.

  9. Queuing theory accurately models the need for critical care resources.

    Science.gov (United States)

    McManus, Michael L; Long, Michael C; Cooper, Abbot; Litvak, Eugene

    2004-05-01

    Allocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods. The authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to observed performance of the unit, and explored the sensitivity of the model to changes in unit size. The queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4+/-2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21% vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays. The stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds.
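
    The abstract does not spell out the exact queuing formulation, but a common starting point for a no-waiting unit such as an ICU is the M/M/c/c (Erlang loss) model, in which patients who arrive when all beds are occupied are turned away. The following sketch (Python; the arrival rate, length of stay and bed counts are hypothetical placeholders, not the study's data) illustrates how such a model reproduces the sharp rise in turn-away rates once utilization exceeds roughly 80-85%.

      # Minimal sketch: Erlang-B (M/M/c/c) turn-away probability for an ICU-like unit.
      # Illustrative only; rates and bed counts below are hypothetical.
      def erlang_b(beds: int, offered_load: float) -> float:
          """Blocking (turn-away) probability for an M/M/c/c queue.

          offered_load = arrival_rate * mean_length_of_stay, in consistent units.
          Uses the standard numerically stable recurrence.
          """
          b = 1.0
          for k in range(1, beds + 1):
              b = offered_load * b / (k + offered_load * b)
          return b

      arrivals_per_day = 3.2        # hypothetical ICU admissions per day
      mean_stay_days = 4.0          # hypothetical mean length of stay
      load = arrivals_per_day * mean_stay_days
      for beds in (14, 16, 18, 20):
          p_turnaway = erlang_b(beds, load)
          utilization = load * (1.0 - p_turnaway) / beds
          print(f"{beds} beds: turn-away {p_turnaway:5.1%}, utilization {utilization:5.1%}")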

  10. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  11. Characterization techniques for surface-micromachined devices

    Energy Technology Data Exchange (ETDEWEB)

    Eaton, W.P.; Smith, N.F.; Irwin, L.; Tanner, D.M.

    1998-08-01

    Using a microengine as the primary test vehicle, the authors have examined several aspects of characterization. Parametric measurements provide fabrication process information. Drive signal optimization is necessary for increased microengine performance. Finally, electrical characterization of resonant frequency and quality factor can be more accurate than visual techniques.

  12. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Science.gov (United States)

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are shrinking to sub-50nm while offsets ranging from 5nm to 20nm are often encountered. While TEM is well recognized as the most accurate CD metrology, direct comparison between the TEM data and in-line CD data can sometimes be misleading due to different statistical sampling and interferences from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. Being a member of scanning profiling metrology, CD-AFM has the advantages of avoiding e-beam damage and minimal sample-damage-induced CD changes, in addition to the capability of more statistical sampling than typical cross-section metrologies. While AFM has already gained its reputation on the accuracy of depth measurement, not much data has been reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor-related materials and patterns. In addition to the typical precision check, we spent an intensive effort on examining the bias performance of this CD metrology, which is defined as the difference between CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low-k material. Based on the LER characteristics of each patterning, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement of less than 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparations and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low k trenches are also

  13. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can render an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D-models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values. That is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image of similar objects. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps equals 30,000 single images; the program inspects the first 20 images, saves the sharpest, and moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then, MeshLab has been used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates, if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
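
    As an illustration of the frame-selection step described above (the original used a MATLAB script; this is a hypothetical Python/NumPy re-sketch, with the interval size and the random stand-in frames as placeholders):

      # Score each frame by the summed magnitude of its spatial derivatives and keep
      # the sharpest frame from every fixed-size interval, as described in the abstract.
      import numpy as np

      def sharpness(frame):
          """Sum of absolute spatial derivatives; higher means sharper."""
          gy, gx = np.gradient(frame.astype(float))
          return float(np.abs(gx).sum() + np.abs(gy).sum())

      def select_sharpest(frames, interval=20):
          """Return the index of the sharpest frame within each consecutive interval."""
          scores = [sharpness(f) for f in frames]
          selected = []
          for start in range(0, len(scores), interval):
              chunk = scores[start:start + interval]
              selected.append(start + int(np.argmax(chunk)))
          return selected

      # Usage sketch with random stand-in frames (real input: grayscale video frames).
      frames = [np.random.rand(480, 640) for _ in range(200)]
      print(select_sharpest(frames, interval=20))   # one index per 20-frame block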

  14. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  15. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect effect on enhancing global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been

  16. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
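
    For orientation, a minimal sketch of the kind of model described, i.e. a "standard" within-host model extended with a second, not-yet-producing (eclipse) infected-cell class. All parameter values below are illustrative placeholders, not the values fitted in the study.

      # Target cells T, eclipse-phase infected cells E, producing cells I, free virus V.
      # Parameters are placeholders chosen only to give growing infection dynamics.
      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, lam, d, beta, k, delta, p, c):
          T, E, I, V = y
          dT = lam - d * T - beta * T * V        # target-cell supply, death, infection
          dE = beta * T * V - k * E              # newly infected cells, not yet producing
          dI = k * E - delta * I                 # transition to production, cell death
          dV = p * I - c * V                     # virion production and clearance
          return [dT, dE, dI, dV]

      params = dict(lam=1e4, d=0.01, beta=1e-7, k=1.0, delta=0.5, p=100.0, c=10.0)
      y0 = [1e6, 0.0, 0.0, 1.0]                  # small initial inoculum
      sol = solve_ivp(rhs, (0.0, 30.0), y0, args=tuple(params.values()),
                      dense_output=True, rtol=1e-8)
      t = np.linspace(0.0, 30.0, 7)
      print(np.log10(np.maximum(sol.sol(t)[3], 1e-12)))   # log10 viral load over time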

  17. Final Technical Report: Characterizing Emerging Technologies.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Riley, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gonzalez, Sigifredo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The Characterizing Emerging Technologies project focuses on developing, improving and validating characterization methods for PV modules, inverters and embedded power electronics. Characterization methods and associated analysis techniques are at the heart of technology assessments and accurate component and system modeling. Outputs of the project include measurement and analysis procedures that industry can use to accurately model performance of PV system components, in order to better distinguish and understand the performance differences between competing products (module and inverters) and new component designs and technologies (e.g., new PV cell designs, inverter topologies, etc.).

  18. How accurate are our assumptions about our students' background knowledge?

    Science.gov (United States)

    Rovick, A A; Michael, J A; Modell, H I; Bruce, D S; Horwitz, B; Adamson, T; Richardson, D R; Silverthorn, D U; Whitescarver, S A

    1999-06-01

    Teachers establish prerequisites that students must meet before they are permitted to enter their courses. It is expected that having these prerequisites will provide students with the knowledge and skills they will need to successfully learn the course content. Also, the material that the students are expected to have previously learned need not be included in a course. We wanted to determine how accurate instructors' understanding of their students' background knowledge actually was. To do this, we wrote a set of multiple-choice questions that could be used to test students' knowledge of concepts deemed to be essential for learning respiratory physiology. Instructors then selected 10 of these questions to be used as a prerequisite knowledge test. The instructors also predicted the performance they expected from the students on each of the questions they had selected. The resulting tests were administered in the first week of each of seven courses. The results of this study demonstrate that instructors are poor judges of what beginning students know. Instructors tended to both underestimate and overestimate students' knowledge by large margins on individual questions. Although on average they tended to underestimate students' factual knowledge, they overestimated the students' abilities to apply this knowledge. Hence, the validity of decisions that instructors make, predicated on their students having the prerequisite knowledge that they expect, is open to question.

  19. Accurate triage of lower gastrointestinal bleed (LGIB) - A cohort study.

    Science.gov (United States)

    Chong, Vincent; Hill, Andrew G; MacCormick, Andrew D

    2016-01-01

    Acute lower gastrointestinal bleeding (LGIB) is a common acute presenting complaint to hospital. Unlike upper gastrointestinal bleeding, the diagnostic and therapeutic approach is not well standardised. Intensive monitoring and urgent interventions are essential for patients with severe LGIB. The aim of this study was to investigate factors that predict severe LGIB and to develop a clinical predictor tool to accurately triage LGIB in the emergency department of a busy metropolitan teaching hospital. We retrospectively identified all adult patients who presented to Middlemore Hospital Emergency Department with LGIB over a one-year period. We recorded demographic variables, Charlson Co-morbidity Index, use of anticoagulation, examination findings, vital signs on arrival, laboratory test results, treatment plans and further investigation results. We then identified a subgroup of patients who suffered severe LGIB. A total of 668 patients presented with an initial triage diagnosis of LGIB, and 83 of these patients (20%) developed severe LGIB. Binary logistic regression analysis identified four independent risk factors for severe LGIB: use of aspirin, history of collapse, haemoglobin on presentation of less than 100 mg/dl and albumin of less than 38 g/l. We have developed a clinical prediction tool for severe LGIB in our population with a negative predictive value (NPV) of 88% and a positive predictive value (PPV) of 44%. We aim to validate the clinical prediction tool in a further cohort to ensure stability of the multivariate model. Copyright © 2015. Published by Elsevier Ltd.

  20. How complete and accurate is meningococcal disease notification?

    Science.gov (United States)

    Breen, E; Ghebrehewet, S; Regan, M; Thomson, A P J

    2004-12-01

    Effective public health control of meningococcal disease (meningococcal meningitis and septicaemia) is dependent on complete, accurate and speedy notification. Using capture-recapture techniques this study assesses the completeness, accuracy and timeliness of meningococcal notification in a health authority. The completeness of meningococcal disease notification was 94.8% (95% confidence interval 93.2% to 96.2%); 91.2% of cases in 2001 were notified within 24 hours of diagnosis, but 28.0% of notifications in 2001 were false positives. Clinical staff need to be aware of the public health implications of a notification of meningococcal disease, and of failure of, or delay in notification. Incomplete or delayed notification not only leads to inaccurate data collection but also means that important public health measures may not be taken. A clinical diagnosis of meningococcal disease should be carefully considered between the clinician and the consultant in communicable disease control (CCDC). Otherwise, prophylaxis may be given unnecessarily, disease incidence inflated, and the benefits of control measures underestimated. Consultants in communicable disease control (CCDCs), in conjunction with clinical staff, should de-notify meningococcal disease if the diagnosis changes.
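
    For readers unfamiliar with the technique, a minimal sketch of two-source capture-recapture as used to assess notification completeness follows; the Chapman estimator below is a standard textbook form, and the case counts are hypothetical placeholders, not the study's data.

      # Estimate the true number of cases from two overlapping case lists, then express
      # completeness of one list against that estimate. Counts are invented examples.
      def chapman_estimate(n_source_a, n_source_b, n_both):
          """Estimated total number of true cases from two overlapping sources."""
          return (n_source_a + 1) * (n_source_b + 1) / (n_both + 1) - 1

      notified = 180          # cases formally notified (hypothetical)
      lab_confirmed = 165     # cases found via laboratory records (hypothetical)
      in_both = 158           # cases appearing in both sources (hypothetical)

      total = chapman_estimate(notified, lab_confirmed, in_both)
      print(f"estimated true cases: {total:.0f}")
      print(f"notification completeness: {notified / total:.1%}")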

  1. In pursuit of accurate timekeeping: Liverpool and Victorian electrical horology.

    Science.gov (United States)

    Ishibashi, Yuto

    2014-10-01

    This paper explores how nineteenth-century Liverpool became such an advanced city with regard to public timekeeping, and the wider impact of this on the standardisation of time. From the mid-1840s, local scientists and municipal bodies in the port city were engaged in improving the ways in which accurate time was communicated to ships and the general public. As a result, Liverpool was the first British city to witness the formation of a synchronised clock system, based on an invention by Robert Jones. His method gained a considerable reputation in the scientific and engineering communities, which led to its subsequent replication at a number of astronomical observatories such as Greenwich and Edinburgh. As a further key example of developments in time-signalling techniques, this paper also focuses on the time ball established in Liverpool by the Electric Telegraph Company in collaboration with George Biddell Airy, the Astronomer Royal. This is a particularly significant development because, as the present paper illustrates, one of the most important technologies in measuring the accuracy of the Greenwich time signal took shape in the experimental operation of the time ball. The inventions and knowledge which emerged from the context of Liverpool were vital to the transformation of public timekeeping in Victorian Britain.

  2. Accurate measurement of RF exposure from emerging wireless communication systems

    Science.gov (United States)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  3. Accurate light-time correction due to a gravitating mass

    CERN Document Server

    Ashby, Neil

    2009-01-01

    This work arose as an aftermath of Cassini's 2002 experiment \cite{bblipt03}, in which the PPN parameter $\gamma$ was measured with an accuracy $\sigma_\gamma = 2.3\times 10^{-5}$ and found consistent with the prediction $\gamma = 1$ of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression for the gravitational delay which differs from the standard formula; this difference is of second order in powers of $m$ -- the sun's gravitational radius -- but in Cassini's case it was much larger than the expected order of magnitude $m^2/b$, where $b$ is the ray's closest approach distance. Since the ODP does not account for any other second-order terms, it is necessary, also in view of future more accurate experiments, to systematically evaluate higher order corrections and to determine which terms are significant. Light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat...
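
    For orientation only (this formula is added here and is not part of the abstract): the "standard formula" referred to is the first-order gravitational (Shapiro) delay, usually written as

      \Delta t \simeq (1+\gamma)\,\frac{m}{c}\,\ln\!\left(\frac{r_1 + r_2 + R}{r_1 + r_2 - R}\right), \qquad R = |\mathbf{r}_1 - \mathbf{r}_2|,

    where $r_1$ and $r_2$ are the heliocentric distances of the two endpoints and $m = GM_\odot/c^2$ is the sun's gravitational radius. The paper's subject is precisely the second-order corrections that this expression omits.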

  4. Accurate determination of segmented X-ray detector geometry.

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A; Chapman, Henry N; Barty, Anton

    2015-11-02

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments.

  5. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Directory of Open Access Journals (Sweden)

    Kirsten Sinclair

    2016-03-01

    Full Text Available Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communications between people with different areas of expertise, making it harder to achieve accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
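
    For readers unfamiliar with Petri Nets, the analysis referred to rests on a simple token-firing rule: a transition may fire only when every input place holds enough tokens, consuming those tokens and producing tokens in its output places. A toy sketch of that rule follows (not the case-study design from the paper; all names are invented):

      # Minimal Petri Net firing rule: places hold tokens, transitions consume from
      # input places and produce into output places when enabled. Toy example only.
      marking = {"design_draft": 1, "analysis_slot": 1, "checked_design": 0}

      transitions = {
          "run_analysis": ({"design_draft": 1, "analysis_slot": 1},    # tokens consumed
                           {"checked_design": 1, "analysis_slot": 1}), # tokens produced
      }

      def enabled(name):
          inputs, _ = transitions[name]
          return all(marking[p] >= n for p, n in inputs.items())

      def fire(name):
          inputs, outputs = transitions[name]
          if not enabled(name):
              raise RuntimeError(f"{name} is not enabled")
          for p, n in inputs.items():
              marking[p] -= n
          for p, n in outputs.items():
              marking[p] += n

      fire("run_analysis")
      print(marking)   # {'design_draft': 0, 'analysis_slot': 1, 'checked_design': 1}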

  6. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    Energy Technology Data Exchange (ETDEWEB)

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D [1] has over 300,000 lines of code for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been increased emphasis on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions [2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.

  7. An accurate {delta}f method for neoclassical transport calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

    A {delta}f method for neoclassical transport calculation, solving the drift kinetic equation, is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed without using an assumed g in the weight equation for advancing particle weights, unlike previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of the {delta}f method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined for the finite banana case as well as the zero banana limit. A solution with zero particle and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvement in both the like-particle collision scheme and the weighting scheme, the {delta}f simulation shows a significantly upgraded performance for neoclassical transport studies. (author)

  8. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  9. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Full Text Available Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δ v∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
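
    A small sketch of the estimation problem discussed above: compute the third-order moment of increments from a long record and attach a rough standard error based on the number of correlation lengths the record spans. The synthetic AR(1) signal, lag, and error model are simplified illustrations, not the paper's procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      phi = 0.999                                  # AR(1) stand-in for a correlated field
      x = np.zeros(n)
      eps = rng.standard_normal(n)
      for i in range(1, n):
          x[i] = phi * x[i - 1] + eps[i]

      lag = 50
      dv = x[lag:] - x[:-lag]                      # increments at a fixed lag
      m3 = np.mean(dv ** 3)                        # third-order moment estimate

      corr_len = 1.0 / (1.0 - phi)                 # e-folding scale of the AR(1) process
      n_eff = n / corr_len                         # ~ number of correlation lengths spanned
      sigma_m3 = np.std(dv ** 3) / np.sqrt(n_eff)  # crude standard error using n_eff
      print(f"third moment = {m3:.3e} +/- {sigma_m3:.3e} (n_eff ~ {n_eff:.0f})")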

  10. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    Directory of Open Access Journals (Sweden)

    Miao Liu

    2015-10-01

    Full Text Available In structured-light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the achievable precision and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems.
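
    As a sketch of what a polynomial distortion representation can look like in practice: fit, by linear least squares, a 2D polynomial that maps ideal pixel coordinates to their distorted positions. The polynomial degree and the synthetic distortion below are illustrative choices, not the paper's model.

      import numpy as np

      def design_matrix(x, y, degree=3):
          """Columns are the monomials x**i * y**j with i + j <= degree."""
          cols = [x**i * y**j for i in range(degree + 1)
                              for j in range(degree + 1 - i)]
          return np.stack(cols, axis=1)

      rng = np.random.default_rng(0)
      x, y = rng.uniform(-1, 1, (2, 500))          # normalized ideal coordinates
      r2 = x**2 + y**2
      xd = x + 0.05 * x * r2 + 0.001 * rng.standard_normal(500)   # synthetic radial
      yd = y + 0.05 * y * r2 + 0.001 * rng.standard_normal(500)   # distortion + noise

      A = design_matrix(x, y, degree=3)
      coef_x, *_ = np.linalg.lstsq(A, xd, rcond=None)   # distorted x as a polynomial
      coef_y, *_ = np.linalg.lstsq(A, yd, rcond=None)   # distorted y as a polynomial

      residual = np.hypot(A @ coef_x - xd, A @ coef_y - yd)
      print(f"RMS residual after polynomial fit: {np.sqrt(np.mean(residual**2)):.2e}")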

  11. Accurate de novo design of hyperstable constrained peptides.

    Science.gov (United States)

    Bhardwaj, Gaurav; Mulligan, Vikram Khipple; Bahl, Christopher D; Gilmore, Jason M; Harvey, Peta J; Cheneval, Olivier; Buchko, Garry W; Pulavarti, Surya V S R K; Kaas, Quentin; Eletsky, Alexander; Huang, Po-Ssu; Johnsen, William A; Greisen, Per Jr; Rocklin, Gabriel J; Song, Yifan; Linsky, Thomas W; Watkins, Andrew; Rettie, Stephen A; Xu, Xianzhong; Carter, Lauren P; Bonneau, Richard; Olson, James M; Coutsias, Evangelos; Correnti, Colin E; Szyperski, Thomas; Craik, David J; Baker, David

    2016-10-20

    Naturally occurring, pharmacologically active peptides constrained with covalent crosslinks generally have shapes that have evolved to fit precisely into binding pockets on their targets. Such peptides can have excellent pharmaceutical properties, combining the stability and tissue penetration of small-molecule drugs with the specificity of much larger protein therapeutics. The ability to design constrained peptides with precisely specified tertiary structures would enable the design of shape-complementary inhibitors of arbitrary targets. Here we describe the development of computational methods for accurate de novo design of conformationally restricted peptides, and the use of these methods to design 18-47 residue, disulfide-crosslinked peptides, a subset of which are heterochiral and/or N-C backbone-cyclized. Both genetically encodable and non-canonical peptides are exceptionally stable to thermal and chemical denaturation, and 12 experimentally determined X-ray and NMR structures are nearly identical to the computational design models. The computational design methods and stable scaffolds presented here provide the basis for development of a new generation of peptide-based drugs.

  12. An accurate method for measuring triploidy of larval fish spawns

    Science.gov (United States)

    Jenkins, Jill A.; Draugelis-Dale, Rassa O.; Glennon, Robert; Kelly, Anita; Brown, Bonnie L.; Morrison, John

    2017-01-01

    A standard flow cytometric protocol was developed for estimating triploid induction in batches of larval fish. Polyploid induction treatments are not guaranteed to be 100% efficient, thus the ability to quantify the proportion of triploid larvae generated by a particular treatment helps managers to stock high-percentage spawns and researchers to select treatments for efficient triploid induction. At 3 d posthatch, individual Grass Carp Ctenopharyngodon idella were mechanically dissociated into single-cell suspensions; nuclear DNA was stained with propidium iodide and then analyzed by flow cytometry. Following ploidy identification of individuals, aliquots of diploid and triploid cell suspensions were mixed to generate 15 levels (0–100%) of known triploidy (n = 10). Using either 20 or 50 larvae per level, the observed triploid percentages were lower than the known, actual values. Using nonlinear regression analyses, quadratic equations were solved for the triploid proportions in mixed samples, and corresponding estimation reference plots allowed triploidy to be predicted. Thus, an accurate prediction of the proportion of triploids in a spawn can be made by following a standard larval processing and analysis protocol with either 20 or 50 larvae from a single spawn, coupled with applying the quadratic equations or reference plots to the observed flow cytometry results. Because triploid DNA content is universally 1.5 times the diploid level and because triploid fish consist of fewer cells than diploids, this method should be applicable to other produced triploid fish species, and it may be adapted for use with bivalves or other species where batch analysis is appropriate.
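
    A minimal sketch of the calibration idea described above: fit a quadratic relating the observed flow-cytometry triploid percentage to the known percentage, then invert the fit to predict the true triploid proportion from a new observation. The calibration numbers below are invented placeholders, not the paper's data.

      import numpy as np

      known = np.linspace(0, 100, 15)                     # known % triploid in mixtures
      observed = (0.9 * known - 0.0015 * known**2
                  + np.random.default_rng(1).normal(0, 1.5, known.size))  # reads low

      a, b, c = np.polyfit(known, observed, deg=2)        # observed = a*k^2 + b*k + c

      def predict_true(obs_pct):
          """Invert the quadratic to estimate the true % triploid from an observation."""
          roots = np.roots([a, b, c - obs_pct])
          real = roots[np.isreal(roots)].real
          in_range = real[(real >= 0) & (real <= 100)]
          return float(in_range[0]) if in_range.size else float("nan")

      print(round(predict_true(40.0), 1))   # estimated true % triploid for a 40% reading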

  13. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    Science.gov (United States)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical task to perform a favorable preprocessing. The skull-stripping depends on different factors including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge-based (to allow for interaction with the algorithm in case the expected outcome is not being obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open-source brain extraction tool (OSBET), composed of four steps that use simple, well-known operations (optimal thresholding, binary morphology, labeling and geometrical analysis) and that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques against a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in an unhealthy population, to generalize its usage for clinical purposes.
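
    A simplified sketch of that kind of threshold-morphology-labeling pipeline, using scipy.ndimage; the threshold rule, structuring element and "keep the largest component" heuristic are illustrative placeholders, not OSBET's actual steps or parameters.

      import numpy as np
      from scipy import ndimage

      def rough_brain_mask(volume):
          """Return a boolean mask approximating the largest bright structure."""
          thresh = np.percentile(volume, 75)          # stand-in for optimal thresholding
          mask = volume > thresh
          mask = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)), iterations=2)
          mask = ndimage.binary_fill_holes(mask)      # binary morphology steps
          labels, n = ndimage.label(mask)             # labeling
          if n == 0:
              return mask
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          largest = 1 + int(np.argmax(sizes))         # geometrical analysis: biggest blob
          return labels == largest

      # Usage with a synthetic volume (a real call would pass a T1-weighted image array).
      vol = np.random.rand(64, 64, 64)
      vol[16:48, 16:48, 16:48] += 2.0                 # bright blob standing in for the brain
      print(rough_brain_mask(vol).sum())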

  14. Accurate methodology for channel bow impact on CPR

    Energy Technology Data Exchange (ETDEWEB)

    Bergmann, U.C. [Westinghouse Electric Sweden AB, Vaesteraas (Sweden)

    2012-11-01

    An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse, in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an enhanced CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach, a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to that of all other uncertainties affecting CPR. The enhanced CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from its introduction at KKL. (orig.)

  15. Accurate ionization potential of semiconductors from efficient density functional calculations

    Science.gov (United States)

    Ye, Lin-Hui

    2016-07-01

    Despite its huge successes in total-energy-related applications, the Kohn-Sham scheme of density functional theory cannot get reliable single-particle excitation energies for solids. In particular, it has not been able to calculate the ionization potential (IP), one of the most important material parameters, for semiconductors. We illustrate that an approximate exact-exchange optimized effective potential (EXX-OEP), the Becke-Johnson exchange, can be used to largely solve this long-standing problem. For a group of 17 semiconductors, we have obtained the IPs to an accuracy similar to that of the much more sophisticated GW approximation (GWA), at only the computational cost of the local-density approximation/generalized gradient approximation. The EXX-OEP, therefore, is likely as useful for solids as for finite systems. For solid surfaces, the asymptotic behavior of the exchange-correlation potential v_xc has effects similar to those in finite systems which, when neglected, typically cause the semiconductor IPs to be underestimated. This may partially explain why the standard GWA systematically underestimates the IPs and why the same GWA procedures have not been able to get an accurate IP and band gap at the same time.

  16. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076

  17. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    Science.gov (United States)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  18. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect the assembling reliability and the quality of products. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and then the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm, and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or any tools and can realize on-line measurement.

  19. Faster and More Accurate Sequence Alignment with SNAP

    CERN Document Server

    Zaharia, Matei; Curtis, Kristal; Fox, Armando; Patterson, David; Shenker, Scott; Stoica, Ion; Karp, Richard M; Sittler, Taylor

    2011-01-01

    We present the Scalable Nucleotide Alignment Program (SNAP), a new short and long read aligner that is both more accurate (i.e., aligns more reads with fewer errors) and 10-100x faster than state-of-the-art tools such as BWA. Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a simple hash index of short seed sequences from the genome, similar to BLAST's. However, SNAP greatly reduces the number and cost of local alignment checks performed through several measures: it uses longer seeds to reduce the false positive locations considered, leverages larger memory capacities to speed index lookup, and excludes most candidate locations without fully computing their edit distance to the read. The result is an algorithm that scales well for reads from one hundred to thousands of bases long and provides a rich error model that can match classes of mutations (e.g., longer indels) that today's fast aligners ignore. We calculate that SNAP can align a dataset with 30x coverage of a human genome in le...
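
    A toy sketch of the "hash index of short seed sequences" idea described above: index every k-mer of the reference, then look up seeds from a read to collect candidate mapping locations, which a real aligner would verify with an edit-distance check. The reference, read and seed length are invented; SNAP's actual index layout and filtering are far more elaborate.

      from collections import defaultdict

      def build_seed_index(reference, k=8):
          index = defaultdict(list)
          for pos in range(len(reference) - k + 1):
              index[reference[pos:pos + k]].append(pos)
          return index

      def candidate_locations(read, index, k=8):
          """Vote for reference start positions implied by each seed hit."""
          votes = defaultdict(int)
          for offset in range(0, len(read) - k + 1, k):      # non-overlapping seeds
              for pos in index.get(read[offset:offset + k], []):
                  votes[pos - offset] += 1
          return dict(sorted(votes.items(), key=lambda kv: -kv[1]))

      reference = "ACGTACGTTTGACCAGTACGGTACGTACGTTTGACCA"
      index = build_seed_index(reference, k=8)
      print(candidate_locations("ACGTTTGACCAGTACG", index, k=8))  # best starts first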

  20. Accurate Measurement Method for Tube's Endpoints Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    LIU Shaoli; JIN Peng; LIU Jianhua; WANG Xiao; SUN Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect the assembling reliability and the quality of products. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and then the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm, and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or any tools and can realize on-line measurement.

  1. How Many Grid Points Are Required for Time Accurate Simulations?

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann; Mundis, Nathan; Sankaran, Venkateswaran

    2015-11-01

    Grid resolution is a key element in a numerical discretization scheme's ability to accurately capture complex fluid dynamics phenomena encountered in LES and DNS calculations. The fundamental question to be asked concerns the minimum number of points required to represent relevant flow phenomena such as vortex and acoustic wave propagation. The answer is naturally dependent upon the choice of numerical scheme, but it is also influenced by the modal content of the fluid dynamics. Specifically, this study looks at high-order and optimized spatial stencils and their associated dispersion and dissipation characteristics coupled with several time integration schemes. Scheme stabilization is also addressed with respect to artificial dissipation and filtering techniques. The theoretical investigations based on von Neumann analysis are substantiated by calculations of pure mode and multiple mode wave propagation problems, isentropic vortex propagation and the DNS of Taylor Green vortex transition, all of which are used to establish the accuracy properties of the schemes. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PM: Drs. F. Fahroo and Chiping Li).
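
    The points-per-wavelength question raised above can be made concrete with a short von Neumann-style calculation: compare the modified wavenumber of a difference stencil with the exact one and ask where the relative phase error stays below a tolerance. The 1% tolerance and the two stencils below are illustrative choices, not the paper's analysis.

      import numpy as np

      def modified_wavenumber(kdx, order):
          """Effective k*dx produced by a central-difference first derivative."""
          if order == 2:
              return np.sin(kdx)
          if order == 4:
              return (8.0 * np.sin(kdx) - np.sin(2.0 * kdx)) / 6.0
          raise ValueError("only 2nd and 4th order implemented")

      kdx = np.linspace(1e-3, np.pi, 2000)
      for order in (2, 4):
          err = np.abs(modified_wavenumber(kdx, order) - kdx) / kdx   # phase error
          resolved = kdx[err <= 0.01]                                  # 1% tolerance
          ppw = 2.0 * np.pi / resolved.max()                           # points per wavelength
          print(f"order {order}: ~{ppw:.1f} points per wavelength for 1% phase error")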

  2. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    Science.gov (United States)

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

    We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflicting reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA, on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
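
    The Smith-Waterman stage mentioned above, used for reads the fast suffix-array pass cannot place confidently, is a classical local-alignment dynamic program; a minimal sketch is shown below. The scoring values (+2 match, -1 mismatch, -2 gap) are illustrative defaults, not the parameters used by HPG Aligner SA:

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2):
    """Local alignment: best score and the cell where the best local alignment ends (no traceback)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return best, best_pos

score, end = smith_waterman("ACACACTA", "AGCACACA")
print(score, end)   # best local alignment score and where it ends
```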

  3. Accurate multipixel phase measurement with classical-light interferometry

    Science.gov (United States)

    Singh, Mandeep; Khare, Kedar; Jha, Anand Kumar; Prabhakar, Shashi; Singh, R. P.

    2015-02-01

    We demonstrate accurate phase measurement from experimental low photon level interferograms using a constrained optimization method that takes into account the expected redundancy in the unknown phase function. This approach is shown to have significant noise advantage over traditional methods, such as balanced homodyning or phase shifting, that treat individual pixels in the interference data as independent of each other. Our interference experiments comparing the optimization method with the traditional phase-shifting method show that when the same photon resources are used, the optimization method provides phase recoveries with tighter error bars. In particular, rms phase error performance of the optimization method for low photon number data (10 photons per pixel) shows a >5 × noise gain over the phase-shifting method. In our experiments where a laser light source is used for illumination, the results imply phase measurement with an accuracy better than the conventional single-pixel-based shot-noise limit that assumes independent phases at individual pixels. The constrained optimization approach presented here is independent of the nature of the light source and may further enhance the accuracy of phase detection when a nonclassical-light source is used.

  4. Progress in Fast, Accurate Multi-scale Climate Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D [Lawrence Berkeley National Laboratory (LBNL); Johansen, Hans [Lawrence Berkeley National Laboratory (LBNL); Evans, Katherine J [ORNL; Woodward, Carol S. [Lawrence Livermore National Laboratory (LLNL); Caldwell, Peter [Lawrence Livermore National Laboratory (LLNL)

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  5. Accurate determination of segmented X-ray detector geometry

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117

  6. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2016-08-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect the assembling reliability and the quality of products. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes are processed to remove the reflected light and the endpoint positions of the tubes are then measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm, and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or any tools and can realize online measurement.

  7. Subvoxel accurate graph search using non-Euclidean graph space.

    Directory of Open Access Journals (Sweden)

    Michael D Abràmoff

    Full Text Available Graph search is attractive for the quantitative analysis of volumetric medical images, and especially for layered tissues, because it allows globally optimal solutions in low-order polynomial time. However, because nodes of graphs typically encode evenly distributed voxels of the volume with arcs connecting orthogonally sampled voxels in Euclidean space, segmentation cannot achieve greater precision than a single unit, i.e. the distance between two adjoining nodes, and partial volume effects are ignored. We generalize the graph to non-Euclidean space by allowing non-equidistant spacing between nodes, so that subvoxel accurate segmentation is achievable. Because the number of nodes and edges in the graph remains the same, running time and memory use are similar, while all the advantages of graph search, including global optimality and computational efficiency, are retained. A deformation field calculated from the volume data adaptively changes regional node density so that node density varies with the inverse of the expected cost. We validated our approach using optical coherence tomography (OCT) images of the retina and 3-D MR images of the arterial wall, and achieved statistically significant increases in accuracy. Our approach allows improved accuracy with volume data acquired on the same hardware, and also preserves accuracy with lower-resolution, more cost-effective image acquisition equipment. The method is not limited to any specific imaging modality and is readily extensible to higher dimensions.

  8. Accurate Completion of Medical Report on Diagnosing Death.

    Science.gov (United States)

    Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana

    2015-01-01

    Diagnosing death and issuing a Death Diagnosing Form (DDF) represents an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the scrutiny of the general public. Diagnosing death is necessary in order to confirm true death, to exclude apparent death and consequently to avoid burying a person alive, i.e. apparently dead. These expert-methodological guidelines, based on the most up-to-date and medically based evidence, have the goal of helping the physicians of the EMS to accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative or when the person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have indeed been the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their conscience and risk professional, and even criminal, sanctions.

  9. Control Strategies for Accurate Force Generation and Relaxation.

    Science.gov (United States)

    Ohtaka, Chiaki; Fujiwara, Motoko

    2016-10-01

    Characteristics and motor strategies for force generation and force relaxation were examined using graded tasks during isometric force control. Ten female college students (M age = 20.2 yr., SD = 1.1) were instructed to accurately control the force of isometric elbow flexion using their right arm to match a target force level as quickly as possible. They performed: (1) a generation task, wherein they increased their force from 0% maximum voluntary force to 20% maximum voluntary force (0%-20%), 40% maximum voluntary force (0%-40%), or 60% maximum voluntary force (0%-60%); and (2) a relaxation task, in which they decreased their force from 60% maximum voluntary force to 40% maximum voluntary force (60%-40%), 20% maximum voluntary force (60%-20%), or 0% maximum voluntary force (60%-0%). The produced force was analyzed in terms of accuracy (force level, error), quickness (reaction time, adjustment time, rate of force development), and strategy (force waveform, rate of force development). Errors of force relaxation were all greater, and reaction times shorter, than those of force generation. Adjustment time depended on the magnitude of force, and peak rates of force development and force relaxation differed. Controlled relaxation of force is more difficult at a low magnitude of force.

  10. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor [Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico); Bernès, Sylvain, E-mail: sylvain-bernes@hotmail.com [DEP Facultad de Ciencias Químicas, UANL, Guerrero y Progreso S/N, Col. Treviño, 64570 Monterrey, NL (Mexico); Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico)

    2008-04-01

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  11. Accurate mass measurements of very short-lived nuclei

    CERN Document Server

    Herfurth, F; Ames, F; Audi, G; Beck, D; Blaum, K; Bollen, G; Engels, O; Kluge, H J; Lunney, M D; Moores, R B; Oinonen, M; Sauvan, E; Bolle, C A; Scheidenberger, C; Schwarz, S; Sikler, G; Weber, C

    2002-01-01

    Mass measurements of ^34Ar, ^73-78Kr, and ^74,76Rb were performed with the Penning-trap mass spectrometer ISOLTRAP. Very accurate Q_EC-values are needed for the investigations of the Ft-value of 0^+ to 0^+ nuclear beta-decays used to test the standard model predictions for weak interactions. The necessary accuracy on the Q_EC-value requires the mass of mother and daughter nuclei to be measured with delta m/m

  12. Accurate estimators of correlation functions in Fourier space

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
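
    A minimal illustration of the Cloud-In-Cell (CIC) assignment discussed above: each particle's mass is shared linearly between the two nearest grid points on a periodic 1-D grid. The higher-order kernels and the interlaced second grid recommended in the abstract are omitted for brevity, and the numbers are arbitrary:

```python
import numpy as np

def cic_assign_1d(positions, box_size, n_grid):
    """Cloud-In-Cell density on a periodic 1-D grid: linear weights to the two nearest cells."""
    delta = box_size / n_grid
    grid = np.zeros(n_grid)
    x = np.asarray(positions) / delta          # position in cell units
    left = np.floor(x - 0.5).astype(int)       # index of the cell centre to the left
    frac = x - 0.5 - left                      # distance past that centre, in cell units
    np.add.at(grid, left % n_grid, 1.0 - frac)
    np.add.at(grid, (left + 1) % n_grid, frac)
    return grid / delta                        # number density per unit length

rng = np.random.default_rng(0)
density = cic_assign_1d(rng.uniform(0.0, 100.0, size=10000), box_size=100.0, n_grid=64)
print(density.mean())   # close to 10000 / 100 = 100 particles per unit length
```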

  13. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    Full Text Available This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in a direction opposite to the gradient direction, and the weighting of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using a Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  14. Accurate mass and velocity functions of dark matter haloes

    Science.gov (United States)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙ with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix w.r.t. halo mass and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, bias and halo mass function. We obtain a very accurate model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z occupation distribution using Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.

  15. Accurate and efficient waveforms for compact binaries on eccentric orbits

    CERN Document Server

    Huerta, E A; McWilliams, Sean T; O'Shaughnessy, Richard; Yunes, Nicolas

    2014-01-01

    Compact binaries that emit gravitational waves in the sensitivity band of ground-based detectors can have non-negligible eccentricities just prior to merger, depending on the formation scenario. We develop a purely analytic, frequency-domain model for gravitational waves emitted by compact binaries on orbits with small eccentricity, which reduces to the quasi-circular post-Newtonian approximant TaylorF2 at zero eccentricity and to the post-circular approximation of Yunes et al. (2009) at small eccentricity. Our model uses a spectral approximation to the (post-Newtonian) Kepler problem to model the orbital phase as a function of frequency, accounting for eccentricity effects up to $\mathcal{O}(e^8)$ at each post-Newtonian order. Our approach accurately reproduces an alternative time-domain eccentric waveform model for eccentricities $e \in [0, 0.4]$ and binaries with total mass less than 12 solar masses. As an application, we evaluate the signal amplitude that eccentric binaries produce in different networks of e...

  16. Superpixel-based graph cuts for accurate stereo matching

    Science.gov (United States)

    Feng, Liting; Qin, Kaihuai

    2017-06-01

    Estimating the surface normal vector and disparity of a pixel simultaneously, also known as three-dimensional label method, has been widely used in recent continuous stereo matching problem to achieve sub-pixel accuracy. However, due to the infinite label space, it’s extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating patchmatch with graph cuts, to approach this critical computational problem. Besides, to get robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality which is easy to perform in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion move; local propagation and randomization can easily generate the initial solution without using external methods. Middlebury experiments show that our method can get higher accuracy than other MRF-based algorithms.

  17. Accurate line intensities of methane from first-principles calculations

    Science.gov (United States)

    Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.

    2017-10-01

    In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm-1. The advantage of the described method is that this offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1 including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru) that contains ab initio born line lists and provides a user-friendly graphical interface for a fast simulation of the absorption cross-sections and radiance.

  18. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressures is subject to many sources of error, such as the precision of the pressure transducers, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures involved in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
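
    The basic hydrostatic relations behind the dip-tube measurement can be written down directly: the density differential pressure gives the solution density, and the level differential pressure then gives the liquid level, from which volume follows through the tank calibration. The sketch below assumes a hypothetical uniform-cross-section tank and invented numbers, and it omits the excess-pressure corrections that the abstract is actually about:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa: float, tube_separation_m: float) -> float:
    """Density from the differential pressure between the two submerged dip-tubes."""
    return dp_density_pa / (G * tube_separation_m)

def solution_level(dp_level_pa: float, density_kg_m3: float) -> float:
    """Liquid level above the reference dip-tube outlet from the level differential pressure."""
    return dp_level_pa / (G * density_kg_m3)

def solution_volume(level_m: float, tank_area_m2: float) -> float:
    """Hypothetical uniform-cross-section tank: volume is simply area times level."""
    return tank_area_m2 * level_m

# Illustrative numbers only, not actual accountancy-tank values.
rho = solution_density(dp_density_pa=3200.0, tube_separation_m=0.25)    # ~1305 kg/m^3
level = solution_level(dp_level_pa=15000.0, density_kg_m3=rho)          # ~1.17 m
print(rho, level, solution_volume(level, tank_area_m2=0.50))
```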

  19. Accurate reading with sequential presentation of single letters

    Directory of Open Access Journals (Sweden)

    Nicholas Seow Chiang Price

    2012-10-01

    Full Text Available Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically-controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 words/minute and accuracies of over 90% with the single letter reading (SLR) method, and naive participants achieved average reading rates of over 30 wpm with >90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses.

  20. Accurate energy model for WSN node and its optimal design

    Institute of Scientific and Technical Information of China (English)

    Kan Baoqiang; Cai Li; Zhu Hongsong; Xu Yongjun

    2008-01-01

    With the development of CMOS and MEMS technologies, it has become possible to implement large numbers of wireless distributed micro-sensors that can be easily and rapidly deployed to form highly redundant, self-configuring, ad hoc sensor networks. To facilitate ease of deployment, these sensors operate on battery power for extended periods of time. A particular challenge in maintaining extended battery lifetime lies in achieving communications with low power. For a better understanding of the design tradeoffs of wireless sensor networks (WSN), a more accurate energy model for the wireless sensor node is proposed, and an optimal design method for an energy-efficient wireless sensor node is described as well. Unlike previously published power models, which assume that the power cost of each component in a WSN node is constant, the new model takes into account the energy dissipation of the circuits in the practical physical layer. It shows that there are some parameters, such as data rate, carrier frequency, bandwidth, Tsw, etc., which have a significant effect on the WSN node's energy consumption per useful bit (EPUB). For a given quality specification, it is shown how energy consumption can be reduced by adjusting one or more of these parameters.
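
    The abstract does not reproduce the model's equations, but a generic energy-per-useful-bit (EPUB) calculation of this kind, including the start-up (Tsw) and circuit terms that constant-power models neglect, might look like the following sketch; all parameter values are invented for illustration and are not taken from the paper:

```python
def energy_per_useful_bit(payload_bits, overhead_bits, data_rate_bps,
                          p_circuit_w, p_amplifier_w, p_startup_w, t_startup_s):
    """Generic radio energy model: start-up + circuit + power-amplifier energy per payload bit."""
    on_air_time = (payload_bits + overhead_bits) / data_rate_bps    # transmission time, s
    packet_energy = (p_startup_w * t_startup_s                      # transient start-up cost (Tsw)
                     + (p_circuit_w + p_amplifier_w) * on_air_time) # steady transmit cost
    return packet_energy / payload_bits                             # joules per useful bit

# Illustrative parameters for a short-range low-power radio (not measured values).
epub = energy_per_useful_bit(payload_bits=1024, overhead_bits=256,
                             data_rate_bps=250_000,
                             p_circuit_w=15e-3, p_amplifier_w=10e-3,
                             p_startup_w=10e-3, t_startup_s=0.5e-3)
print(f"{epub * 1e9:.1f} nJ per useful bit")
```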

  1. Express method of construction of accurate inverse pole figures

    Science.gov (United States)

    Perlovich, Yu; Isaenkova, M.; Fesenko, V.

    2016-04-01

    For metallic materials with FCC and BCC crystal lattices, a new method for constructing X-ray texture inverse pole figures (IPFs) from the tilt curves of a spinning sample is proposed; it is characterized by high accuracy and rapidity (hence 'express'). In contrast to the currently widespread approach of constructing an IPF from an orientation distribution function (ODF) synthesized from several partial direct pole figures, the proposed method is based on a simple geometrical interpretation of the measurement procedure and requires minimal operating time on the X-ray diffractometer.

  2. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
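
    The exact model is not reproduced in this record, but the general shape of such a performance model, with time expressed as a sum of compute, memory-traffic and interconnect terms driven by machine parameters, can be sketched as follows; the terms and parameter values are illustrative, not those of the SuperLU_DIST study:

```python
def predicted_step_time(flops, bytes_moved, messages, comm_volume_bytes,
                        flop_rate, mem_bandwidth, latency, net_bandwidth):
    """Simple additive performance model: compute + memory traffic + interconnect cost."""
    compute = flops / flop_rate
    memory = bytes_moved / mem_bandwidth
    network = messages * latency + comm_volume_bytes / net_bandwidth
    return compute + memory + network

# Illustrative machine parameters, invented for the example.
t = predicted_step_time(flops=2.0e9, bytes_moved=6.0e9, messages=4000,
                        comm_volume_bytes=8.0e8,
                        flop_rate=1.5e9, mem_bandwidth=1.0e9,
                        latency=20e-6, net_bandwidth=3.0e8)
print(f"predicted step time: {t:.2f} s")
```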

  3. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements

    Science.gov (United States)

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained.
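
    The factorization described, a non-negative data matrix written as the product of a signature matrix and a contribution matrix, is the standard non-negative matrix factorization (NMF) problem, so a minimal sketch with scikit-learn conveys the idea. The synthetic data and the choice of three components below are placeholders for the actual multi-dimensional NMR distributions of a well log:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic stand-in for a well log: 500 depth samples, each a flattened 2-D NMR distribution
# built as a non-negative mixture of 3 hidden "fluid" signatures plus a little noise.
true_signatures = rng.random((3, 400))
mixing = rng.random((500, 3))
data = mixing @ true_signatures + 0.01 * rng.random((500, 400))

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
contributions = model.fit_transform(data)   # per-sample weights of each signature
signatures = model.components_              # the recovered fluid signatures

print(contributions.shape, signatures.shape)            # (500, 3) (3, 400)
print(np.linalg.norm(data - contributions @ signatures) / np.linalg.norm(data))
```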

  4. Network Characterization Service (NCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Guojun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, George [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Crowley, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2001-06-06

    Distributed applications require information to effectively utilize the network. Some of the information they require is the current and maximum bandwidth, current and minimum latency, bottlenecks, burst frequency, and congestion extent. This type of information allows applications to determine parameters like optimal TCP buffer size. In this paper, we present a cooperative information-gathering tool called the network characterization service (NCS). NCS runs in user space and is used to acquire network information. Its protocol is designed for scalable and distributed deployment, similar to DNS. Its algorithms provide efficient, speedy and accurate detection of bottlenecks, especially dynamic bottlenecks. On current and future networks, dynamic bottlenecks do and will affect network performance dramatically.

  5. Accurate MTF measurement in digital radiography using noise response

    Energy Technology Data Exchange (ETDEWEB)

    Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel R.; Hoffmann, Kenneth R.; Rudin, Stephen [Toshiba Stroke Research Center, University at Buffalo, State University of New York, Biomedical Research Building, Room 445, 3435 Main Street, Buffalo, New York 14214 (United States)

    2010-02-15

    .4%. Above 0.5 f{sub N}, differences increased to an average of 20%. Deviations of the experimental results largely followed the trend seen in the simulation results, suggesting that differences between the two methods could be explained as resulting from the inherent inaccuracies of the edge-response measurement technique used in this study. Aliasing of the correlated noise component was shown to have a minimal effect on the measured MTF for the three detectors studied. Systems with significant aliasing of the correlated noise component (e.g., a-Se based detectors) would likely require a more sophisticated fitting scheme to provide accurate results. Conclusions: Results indicate that the noise-response method, a simple technique, can be used to accurately measure the MTF of digital x-ray detectors, while alleviating the problems and inaccuracies associated with use of precision test objects, such as a slit or an edge.

  6. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    Science.gov (United States)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  7. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity instead of quality of processors, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  8. Cephalometric analysis for accurately determining the vertical dimension (case report

    Directory of Open Access Journals (Sweden)

    Wahipa Wiro

    2017-04-01

    Full Text Available Objective: Determination of the vertical dimension of occlusion (DVO) tends to change throughout life. The vertical dimension is determined by the interocclusal contact point of the upper and lower teeth, so its application is limited when the natural teeth are missing. As a result, many functional and aesthetic changes occur in the whole orofacial region and the stomatognathic system. Determining the DVO is one of the difficult stages in prosthodontic treatment. Most of the techniques to determine the DVO in edentulous patients are based on soft tissue references, which can yield different measurements. Cephalometric analysis allows the evaluation of bone growth changes and can be used as a diagnostic tool in prosthodontics to evaluate the results of prosthodontic rehabilitation. Methods: The purpose of this case report was to present the results of measuring the vertical dimension of occlusion in the maxillomandibular relation by using a cephalometric image in a patient who had lost her teeth long ago and had never worn a denture. Results: A 50-year-old female patient was partially edentulous in the upper and lower jaw, with the remaining teeth being 12 (residual root), 11, 21, 23, 33 and 43. The remaining teeth were endodontically treated prior to the complete denture procedure. A cephalometric image was taken after the bite rims were made; the upper and lower bite rims were given metal markers, the image was traced, and the distance between the markers was measured to obtain the vertical dimension of occlusion. Conclusion: Measurement of the vertical dimension of occlusion using a cephalometric image when making a full denture was more accurate, and thus could improve and restore masticatory function, aesthetic function and phonetics.

  9. How accurate are the weather forecasts for Bierun (southern Poland)?

    Science.gov (United States)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been taken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research the automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why

  10. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    Science.gov (United States)

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping when only biological techniques are used. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years. It assembles the two haplotypes for a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover a given SNP site are partitioned into two groups according to the alleles of the corresponding SNP site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms by using the haplotypes on chromosome 1 of 60 individuals in the CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher than those of the Fast Hare and DGS algorithms, and the running time of the FAHR algorithm is shorter than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm is efficient even for the reconstruction of long haplotypes and is very practical for realistic applications.
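
    The per-site idea described above, splitting the fragments that cover a site into two groups and letting the larger group decide the allele, can be shown in a highly simplified form. The sketch below assumes the fragments have already been partitioned into the two haplotype groups and only performs the per-site majority vote; it is a toy illustration, not the published FAHR algorithm:

```python
from collections import Counter

def reconstruct_haplotype(fragments, n_sites):
    """Majority vote at each SNP site over the fragments assigned to one haplotype copy.

    Each fragment is a dict mapping SNP site index to allele ('0' or '1'); sites a fragment
    does not cover are simply absent. Uncovered sites are reported as '-'.
    """
    haplotype = []
    for site in range(n_sites):
        votes = Counter(f[site] for f in fragments if site in f)
        haplotype.append(votes.most_common(1)[0][0] if votes else "-")
    return "".join(haplotype)

# Toy fragments already partitioned into the two groups (one per haplotype copy),
# with one sequencing error in the first fragment of the second group at site 2.
group_a = [{0: "0", 1: "1", 2: "0"}, {1: "1", 2: "0", 3: "1"}]
group_b = [{0: "1", 1: "0", 2: "0"}, {1: "0", 2: "1", 3: "0"}, {2: "1", 3: "0"}]
print(reconstruct_haplotype(group_a, 4))   # 0101
print(reconstruct_haplotype(group_b, 4))   # 1010 (the error at site 2 is outvoted)
```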

  11. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km(2) , and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  12. Accurate diagnosis of CHD by Paediatricians with Expertise in Cardiology.

    Science.gov (United States)

    Jacob, Hannah C; Massey, Hannah; Yates, Robert W M; Kelsall, A Wilfred

    2017-08-01

    Introduction Paediatricians with Expertise in Cardiology assess children with a full history, examination, and often perform an echocardiogram. A minority are then referred to an outreach clinic run jointly with a visiting paediatric cardiologist. The accuracy of the echocardiography diagnosis made by the Paediatrician with Expertise in Cardiology is unknown. Materials and methods We conducted a retrospective review of clinic letters for children seen in the outreach clinic for the first time between March, 2004 and March, 2011. Children with CHD diagnosed antenatally or elsewhere were excluded. We recorded the echocardiography diagnosis made by the paediatric cardiologist and previously by the Paediatrician with Expertise in Cardiology. The Paediatrician with Expertise in Cardiology referred 317/3145 (10%) children seen in the local cardiac clinics to the outreach clinic over this period, and among them 296 were eligible for inclusion. Their median age was 1.5 years (range 1 month-15.1 years). For 244 (82%) children, there was complete diagnostic agreement between the Paediatrician with Expertise in Cardiology and the paediatric cardiologist. For 29 (10%) children, the main diagnosis was identical with additional findings made by the paediatric cardiologist. The abnormality had resolved in 17 (6%) cases by the time of clinic attendance. In six (2%) patients, the paediatric cardiologist made a different diagnosis. In total, 138 (47%) patients underwent a surgical or catheter intervention. Discussion Paediatricians with Expertise in Cardiology can make accurate diagnoses of CHD in children referred to their clinics. This can allow effective triage of children attending the outreach clinic, making best use of limited specialist resources.

  13. Accurate Classification of RNA Structures Using Topological Fingerprints

    Science.gov (United States)

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
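
    One simple way to compare such topological fingerprints, treating each structure as the set of subgraphs it contains and scoring their overlap, is a Jaccard similarity, sketched below. The subgraph labels are invented placeholders and this is not the XIOS fingerprint code itself:

```python
def jaccard_similarity(fingerprint_a: set, fingerprint_b: set) -> float:
    """Overlap of two fingerprints expressed as sets of canonical subgraph labels."""
    if not fingerprint_a and not fingerprint_b:
        return 1.0
    return len(fingerprint_a & fingerprint_b) / len(fingerprint_a | fingerprint_b)

# Hypothetical fingerprints: each label stands for one canonical stem-topology subgraph.
rna_1 = {"O", "I", "OI", "OII", "P", "OP"}       # contains pseudoknot-type subgraphs (P)
rna_2 = {"O", "I", "OI", "OII", "P"}             # same family, one stem missing
rna_3 = {"O", "I", "II", "III"}                  # nested-only structure

print(jaccard_similarity(rna_1, rna_2))   # high: same topological family
print(jaccard_similarity(rna_1, rna_3))   # low: different arrangement of stems
```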

  14. Accurate source location from waves scattered by surface topography

    Science.gov (United States)

    Wang, Nian; Shen, Yang; Flinders, Ashton; Zhang, Wei

    2016-06-01

    Accurate source locations of earthquakes and other seismic events are fundamental in seismology. The location accuracy is limited by several factors, including velocity models, which are often poorly known. In contrast, surface topography, the largest velocity contrast in the Earth, is often precisely mapped at the seismic wavelength (>100 m). In this study, we explore the use of P coda waves generated by scattering at surface topography to obtain high-resolution locations of near-surface seismic events. The Pacific Northwest region is chosen as an example to provide realistic topography. A grid search algorithm is combined with the 3-D strain Green's tensor database to improve search efficiency as well as the quality of hypocenter solutions. The strain Green's tensor is calculated using a 3-D collocated-grid finite difference method on curvilinear grids. Solutions in the search volume are obtained based on the least squares misfit between the "observed" and predicted P and P coda waves. The 95% confidence interval of the solution is provided as an a posteriori error estimation. For shallow events tested in the study, scattering is mainly due to topography in comparison with stochastic lateral velocity heterogeneity. The incorporation of P coda significantly improves solution accuracy and reduces solution uncertainty. The solution remains robust with wide ranges of random noises in data, unmodeled random velocity heterogeneities, and uncertainties in moment tensors. The method can be extended to locate pairs of sources in close proximity by differential waveforms using source-receiver reciprocity, further reducing errors caused by unmodeled velocity structures.
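
    The grid-search core of such a location scheme, scanning candidate source positions and keeping the one whose predicted waveforms best match the observations in a least-squares sense, can be sketched generically. The toy "predicted waveform" function below stands in for the strain Green's tensor database used in the study, and all numbers are invented:

```python
import numpy as np

def locate_by_grid_search(observed, candidate_points, predict):
    """Return the candidate source location minimizing the least-squares waveform misfit."""
    best_point, best_misfit = None, np.inf
    for point in candidate_points:
        predicted = predict(point)                   # stack of traces for this candidate
        misfit = np.sum((observed - predicted) ** 2)
        if misfit < best_misfit:
            best_point, best_misfit = point, misfit
    return best_point, best_misfit

# Toy stand-in for Green's-function-based predictions: travel-time-shifted Gaussian pulses
# recorded at three stations for a source at horizontal position x (1-D for simplicity).
stations = np.array([0.0, 10.0, 20.0])
t = np.linspace(0.0, 10.0, 500)
velocity = 3.0

def predict_waveforms(x_src):
    arrivals = np.abs(stations - x_src) / velocity
    return np.exp(-((t[None, :] - 1.0 - arrivals[:, None]) ** 2) / 0.1)

observed = predict_waveforms(7.5) + 0.05 * np.random.default_rng(0).normal(size=(3, 500))
grid = np.linspace(0.0, 20.0, 201)
print(locate_by_grid_search(observed, grid, predict_waveforms))   # recovers a point near 7.5
```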

  15. SNPdetector: a software tool for sensitive and accurate SNP detection.

    Directory of Open Access Journals (Sweden)

    Jinghui Zhang

    2005-10-01

    Full Text Available Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).

  16. Accurate deterministic solutions for the classic Boltzmann shock profile

    Science.gov (United States)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  18. Accurate Sound Velocity Measurement in Ocean Near-Surface Layer

    Science.gov (United States)

    Lizarralde, D.; Xu, B. L.

    2015-12-01

    Accurate sound velocity measurement is essential in oceanography because sound is the only wave that can propagate in sea water. Due to its measuring difficulties, sound velocity is often not measured directly but instead calculated from water temperature, salinity, and depth, which are much easier to obtain. This research develops a new method to directly measure the sound velocity in the ocean's near-surface layer using multi-channel seismic (MCS) hydrophones. This system consists of a device to make a sound pulse and a long cable with hundreds of hydrophones to record the sound. The distance between the source and each receiver is the offset. The time it takes the pulse to arrive at each receiver is the travel time. Errors in measuring offset and travel time would affect the accuracy of the sound velocity if it were calculated from just one offset and one travel time. However, by analyzing the direct arrival signal from hundreds of receivers, the velocity can be determined from the slope of a straight line in the travel time-offset graph. Errors in distance and time measurement result only in an up or down shift of the line and do not affect the slope. This research uses MCS data of survey MGL1408 obtained from the Marine Geoscience Data System and processed with Seismic Unix. The sound velocity can be directly measured to an accuracy of better than 1 m/s. The included graph shows the directly measured velocity versus the calculated velocity along 100 km across the Mid-Atlantic continental margin. The directly measured velocity shows good coherence with the velocity computed from temperature and salinity. In addition, fine variations in the sound velocity can be observed, which are hardly visible in the calculated velocity. Using this methodology, both large-area acquisition and fine resolution can be achieved. This directly measured sound velocity will be a new and powerful tool in oceanography.
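
    The slope-based estimate described above is simple to reproduce; a minimal Python sketch with made-up offsets and direct-arrival travel times (purely illustrative, not the MGL1408 data) might look like this:

        import numpy as np

        # Hypothetical direct-arrival picks: source-receiver offsets (m) and travel times (s).
        offsets = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
        travel_times = np.array([0.135, 0.268, 0.401, 0.535, 0.668])

        # Fit t = offset/v + t0.  A constant offset or timing error only shifts the
        # intercept t0; the slope, and hence the velocity, is unaffected.
        slope, intercept = np.polyfit(offsets, travel_times, 1)
        velocity = 1.0 / slope
        print(f"near-surface sound velocity ~ {velocity:.0f} m/s")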

  19. Why Accurate Knowledge of Zygosity is Important to Twins.

    Science.gov (United States)

    Cutler, Tessa L; Murphy, Kate; Hopper, John L; Keogh, Louise A; Dai, Yun; Craig, Jeffrey M

    2015-06-01

    All same-sex dizygotic (DZ) twins and approximately one-third of monozygotic (MZ) twin pairs have separate placentas, making it impossible to use the number of placentas to determine zygosity. Zygosity determination is further complicated because incorrect assumptions are often made, such as that only DZ pairs have two placentas and that all MZ pairs are phenotypically identical. These assumptions, made by twins, their families and health professionals, along with the lack of universal zygosity testing for same-sex twins, have led to confusion within the twin community, yet little research has been conducted with twins about their understanding of and assumptions about zygosity. We aimed to explore and quantify understanding and assumptions about zygosity among twins attending an Australian twin festival. We recruited 91 twin pairs younger than 18 years of age and their parents, and 30 adult twin pairs, who were all uncertain of their zygosity, to complete one pen-and-paper questionnaire and one online questionnaire about their assumed zygosity, the reasons for their assumptions and the importance of accurate zygosity knowledge. Responses were compared with their true zygosity measured using a genetic test. We found that a substantial proportion of parents and twins had been misinformed by their own parents or by medical professionals, and that knowledge of their true zygosity status provided peace of mind and positive emotional responses. For these reasons we propose universal zygosity testing of same-sex twins as early in life as possible and increased education of medical professionals, twins and families of twins about zygosity issues.

  20. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured with six in vitro tests and regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  1. Robust, accurate and fast automatic segmentation of the spinal cord.

    Science.gov (United States)

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

    Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
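
    As a rough illustration of the first (initialization) step, the Python sketch below runs a circular Hough transform on a single axial slice with scikit-image; the file name, radius range and parameters are placeholders, and this is not the PropSeg implementation:

        import numpy as np
        from skimage import io, feature, transform

        # Placeholder axial slice; in practice this would be extracted from the MR volume.
        slice_img = io.imread("axial_slice.png", as_gray=True)

        # Edge map, then a circular Hough transform over plausible cord radii (pixels).
        edges = feature.canny(slice_img, sigma=2.0)
        radii = np.arange(3, 12)
        hough_res = transform.hough_circle(edges, radii)

        # The strongest circle gives a rough cord centre and radius, which could seed
        # the initial elliptical tubular mesh described in the abstract.
        _, cx, cy, best_radii = transform.hough_circle_peaks(hough_res, radii, total_num_peaks=1)
        print("cord centre (x, y):", cx[0], cy[0], "radius:", best_radii[0])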

  2. Imaging tests for accurate diagnosis of acute biliary pancreatitis.

    Science.gov (United States)

    Şurlin, Valeriu; Săftoiu, Adrian; Dumitrescu, Daniela

    2014-11-28

    Gallstones represent the most frequent aetiology of acute pancreatitis in many statistics all over the world, estimated between 40%-60%. Accurate diagnosis of acute biliary pancreatitis (ABP) is of utmost importance because clearance of lithiasis [gallbladder and common bile duct (CBD)] rules out recurrences. Confirmation of biliary lithiasis is done by imaging. The sensitivity of ultrasonography (US) in the detection of gallstones is over 95% in uncomplicated cases, but in ABP the sensitivity for gallstone detection is lower, being less than 80% due to ileus and bowel distension. The sensitivity of transabdominal ultrasonography (TUS) for choledocholithiasis varies between 50%-80%, but the specificity is high, reaching 95%. The diameter of the bile duct may be indicative for diagnosis. Endoscopic ultrasonography (EUS) seems to be a more effective tool to diagnose ABP than endoscopic retrograde cholangiopancreatography (ERCP), which should be performed only for therapeutic purposes. As the sensitivity and specificity of computed tomography are lower compared to state-of-the-art magnetic resonance cholangiopancreatography (MRCP) or EUS, especially for small stones and a small diameter of the CBD, the latter techniques are nowadays preferred for the evaluation of ABP patients. ERCP has the highest accuracy for the diagnosis of choledocholithiasis and is used as a reference standard in many studies, especially after sphincterotomy and balloon extraction of CBD stones. Laparoscopic ultrasonography is a useful tool for the intraoperative diagnosis of choledocholithiasis. Routine exploration of the CBD in patients scheduled for cholecystectomy after an attack of ABP has not been proven useful. A significant proportion of so-called idiopathic pancreatitis is actually caused by microlithiasis and/or biliary sludge. In conclusion, the general algorithm for CBD stone detection starts with anamnesis, serum biochemistry and then TUS, followed by EUS or MRCP. In the end

  3. Guidelines for accurate EC50/IC50 estimation.

    Science.gov (United States)

    Sebaugh, J L

    2011-01-01

    This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.
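
    A minimal Python sketch of fitting a 4-parameter logistic curve to synthetic data and reading off both definitions is given below; the parametrization and the 0-100 response scale are assumptions for illustration, not prescriptions from the article:

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, top, hill, c, bottom):
            # 4-parameter logistic: c is the relative EC50 (the inflection point).
            return bottom + (top - bottom) / (1.0 + (c / x) ** hill)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # synthetic data
        resp = np.array([2.0, 4.0, 10.0, 28.0, 55.0, 80.0, 93.0, 97.0])

        (top, hill, c, bottom), _ = curve_fit(four_pl, conc, resp, p0=[100.0, 1.0, 1.0, 0.0])
        print(f"relative EC50 (parameter c): {c:.2f}")

        # Absolute EC50: the concentration whose fitted response equals the 50% control,
        # here assumed to be 50 on a 0-100 scale with stable 0% and 100% controls.
        y50 = 50.0
        abs_ec50 = c / ((top - bottom) / (y50 - bottom) - 1.0) ** (1.0 / hill)
        print(f"absolute EC50: {abs_ec50:.2f}")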

  4. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant player in the smartphone operating system market, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing detection of abnormal apps through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5,560 malware samples and 6,000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service.

  5. Accurate estimation of the boundaries of a structured light pattern.

    Science.gov (United States)

    Lee, Sukhan; Bui, Lam Quang

    2011-06-01

    Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image. This is because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is considered difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes rather severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed due to the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are then used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal. A canonical pattern implies the virtual pattern that would have resulted if there were neither the reflectance variation nor the blurring in imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method results in significant improvements in the accuracy of the estimated boundaries under various adverse conditions.
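
    A heavily simplified per-pixel Python sketch of how the two reference projections can be used is shown below; it ignores the blur model in the paper, and the array names and values are illustrative only:

        import numpy as np

        def to_canonical(captured, all_bright, all_dark, eps=1e-6):
            """Normalize a captured stripe image with the all-bright and all-dark
            references so that texture-induced reflectance variation cancels out.
            Boundaries can then be found by intersecting the canonical pattern
            with the canonical form of its inverse pattern."""
            return (captured - all_dark) / np.maximum(all_bright - all_dark, eps)

        # One synthetic scanline with strong texture (albedo) variation.
        all_dark = np.array([10.0, 12.0, 11.0, 10.0])
        all_bright = np.array([200.0, 120.0, 180.0, 90.0])
        captured = np.array([195.0, 115.0, 14.0, 11.0])   # stripe edge between pixels 1 and 2

        print(to_canonical(captured, all_bright, all_dark))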

  6. Can ultrasound biomicroscopy be used to predict accommodation accurately?

    Science.gov (United States)

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2015-04-01

    Clinical accommodation testing involves measuring either accommodative optical changes or accommodative biometric changes. Quantifying both optical and biometric changes during accommodation might be helpful in the design and evaluation of accommodation restoration concepts. This study aims to establish the accuracy of ultrasound biomicroscopy (UBM) in predicting the accommodative optical response (AOR) from biometric changes. Static AOR from 0 to 6 diopters (D) stimuli in 1-D steps were measured with infrared photorefraction and a Grand Seiko autorefractor (WR-5100 K; Shigiya Machinery Works Ltd., Hiroshima, Japan) in 26 human subjects aged 21 to 36 years. Objective measurements of accommodative biometric changes to the same stimulus demands were measured from UBM (Vu-MAX; Sonomed Escalon, Lake Success, NY) images in the same group of subjects. AOR was predicted from biometry using linear regressions, 95% confidence intervals, and 95% prediction intervals. Bland-Altman analysis showed 0.52 D greater AOR with photorefraction than with the Grand Seiko autorefractor. Per-diopter changes in accommodative biometry were: anterior chamber depth (ACD): -0.055 mm/D, lens thickness (LT): +0.076 mm/D, anterior lens radii of curvature (ALRC): -0.854 mm/D, posterior lens radii of curvature (PLRC): -0.222 mm/D, and anterior segment length (ASL): +0.030 mm/D. The standard deviation of AOR predicted from linear regressions for various biometry parameters were: ACD: 0.24 D, LT: 0.30 D, ALRC: 0.24 D, PLRC: 0.43 D, ASL: 0.50 D. UBM measured parameters can, on average, predict AOR with a standard deviation of 0.50 D or less using linear regression. UBM is a useful and accurate objective technique for measuring accommodation in young phakic eyes. Copyright 2015, SLACK Incorporated.

  7. Artificial neural network accurately predicts hepatitis B surface antigen seroclearance.

    Directory of Open Access Journals (Sweden)

    Ming-Hua Zheng

    Full Text Available BACKGROUND & AIMS: Hepatitis B surface antigen (HBsAg) seroclearance and seroconversion are regarded as favorable outcomes of chronic hepatitis B (CHB). This study aimed to develop artificial neural networks (ANNs) that could accurately predict HBsAg seroclearance or seroconversion on the basis of available serum variables. METHODS: Data from 203 untreated, HBeAg-negative CHB patients with spontaneous HBsAg seroclearance (63 with HBsAg seroconversion), and 203 age- and sex-matched HBeAg-negative controls were analyzed. ANNs and logistic regression models (LRMs) were built and tested according to HBsAg seroclearance and seroconversion. Predictive accuracy was assessed with the area under the receiver operating characteristic curve (AUROC). RESULTS: Serum quantitative HBsAg (qHBsAg) and HBV DNA levels, and qHBsAg and HBV DNA reduction, were related to HBsAg seroclearance (P<0.001) and were used for ANN/LRM-HBsAg seroclearance building, whereas qHBsAg reduction was not associated with ANN-HBsAg seroconversion (P = 0.197) and LRM-HBsAg seroconversion was solely based on qHBsAg (P = 0.01). For HBsAg seroclearance, AUROCs of the ANN were 0.96, 0.93 and 0.95 for the training, testing and genotype B subgroups respectively. They were significantly higher than those of LRM, qHBsAg and HBV DNA (all P<0.05). Although the performance of ANN-HBsAg seroconversion (AUROC 0.757) was inferior to that for HBsAg seroclearance, it tended to be better than those of LRM, qHBsAg and HBV DNA. CONCLUSIONS: ANN identifies spontaneous HBsAg seroclearance in HBeAg-negative CHB patients with better accuracy, on the basis of easily available serum data. More useful predictors for HBsAg seroconversion still need to be explored in the future.

  8. DNACLUST: accurate and efficient clustering of phylogenetic marker genes

    Directory of Open Access Journals (Sweden)

    Liu Bo

    2011-06-01

    Full Text Available Abstract Background Clustering is a fundamental operation in the analysis of biological sequence data. New DNA sequencing technologies have dramatically increased the rate at which we can generate data, resulting in datasets that cannot be efficiently analyzed by traditional clustering methods. This is particularly true in the context of taxonomic profiling of microbial communities through direct sequencing of phylogenetic markers (e.g., 16S rRNA) - the domain that motivated the work described in this paper. Many analysis approaches rely on an initial clustering step aimed at identifying sequences that belong to the same operational taxonomic unit (OTU). When defining OTUs (which have no universally accepted definition), scientists must balance a trade-off between computational efficiency and biological accuracy, as accurately estimating an environment's phylogenetic composition requires computationally-intensive analyses. We propose that efficient and mathematically well defined clustering methods can benefit existing taxonomic profiling approaches in two ways: (i) the resulting clusters can be substituted for OTUs in certain applications; and (ii) the clustering effectively reduces the size of the datasets that need to be analyzed by complex phylogenetic pipelines (e.g., only one sequence per cluster needs to be provided to downstream analyses). Results To address the challenges outlined above, we developed DNACLUST, a fast clustering tool specifically designed for clustering highly-similar DNA sequences. Given a set of sequences and a sequence similarity threshold, DNACLUST creates clusters whose radius is guaranteed not to exceed the specified threshold. Underlying DNACLUST is a greedy clustering strategy that owes its performance to novel sequence alignment and k-mer based filtering algorithms. DNACLUST can also produce multiple sequence alignments for every cluster, allowing users to manually inspect clustering results, and enabling more
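
    A toy Python sketch of a greedy, radius-bounded clustering in the spirit described above is shown below; it uses a plain Hamming distance on equal-length strings in place of DNACLUST's alignment and k-mer filtering machinery:

        def distance(a, b):
            """Fraction of mismatching positions between two equal-length sequences."""
            return sum(x != y for x, y in zip(a, b)) / len(a)

        def greedy_cluster(sequences, radius=0.10):
            """Each sequence joins the first cluster whose seed is within `radius`;
            otherwise it seeds a new cluster, so no cluster's radius exceeds the threshold."""
            clusters = []  # list of (seed, members)
            for seq in sequences:
                for seed, members in clusters:
                    if distance(seed, seq) <= radius:
                        members.append(seq)
                        break
                else:
                    clusters.append((seq, [seq]))
            return clusters

        reads = ["ACGTACGTAC", "ACGTACGTAT", "TTGAACGAAC", "TTGAACGAAC"]
        for seed, members in greedy_cluster(reads):
            print(seed, len(members), "sequences")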

  9. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-06-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, their amplitude at sensor recordings and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, which is based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided an adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel time for the four first infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
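
    As a schematic of the probabilistic formulation (not the BISL implementation), the one-dimensional Python toy below evaluates a posterior over candidate source positions from two arrival-time picks and a Gaussian celerity model; all numbers are placeholders:

        import numpy as np

        station_pos = np.array([0.0, 250.0])         # station positions along a line (km)
        obs_time = np.array([700.0, 100.0])          # observed travel times (s), placeholder picks

        celerity_mean = 0.30                         # km/s, assumed prior celerity model
        celerity_std = 0.02                          # km/s

        candidates = np.linspace(-500.0, 500.0, 2001)   # candidate source positions (km)
        log_post = np.empty_like(candidates)
        for i, x in enumerate(candidates):
            ranges = np.abs(x - station_pos)
            predicted = ranges / celerity_mean
            # Crude travel-time uncertainty from the celerity spread plus a pick-error floor;
            # with a flat location prior the posterior is proportional to this likelihood.
            sigma = ranges * celerity_std / celerity_mean**2 + 5.0
            log_post[i] = -0.5 * np.sum(((obs_time - predicted) / sigma) ** 2)

        posterior = np.exp(log_post - log_post.max())
        posterior /= posterior.sum()
        print("MAP source position:", candidates[np.argmax(posterior)], "km")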

  10. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    Science.gov (United States)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable to systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  11. Fast and accurate line scanner based on white light interferometry

    Science.gov (United States)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is

  12. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    Science.gov (United States)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24-48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  13. UAV multirotor platform for accurate turbulence measurements in the atmosphere

    Science.gov (United States)

    Carbajo Fuertes, Fernando; Wilhelm, Lionel; Sin, Kevin Edgar; Hofer, Matthias; Porté-Agel, Fernando

    2017-04-01

    One of the most challenging tasks in atmospheric field studies for wind energy is to obtain accurate turbulence measurements at any location inside the region of interest for a wind farm study. This volume would ideally extend from several hundred meters to several kilometers around the site and from the ground to the top of the boundary layer. An array of meteorological masts equipped with several sonic anemometers to cover all points of interest would be the best in terms of accuracy and data availability, but it is obviously an unfeasible solution. On the other hand, the evolution of wind LiDAR technology allows measurements at any point in space, but unfortunately it involves two important limitations: the first is the relatively low spatial and temporal resolution compared to a sonic anemometer, and the second is the fact that the measurements are limited to the velocity component parallel to the laser beam (radial velocity). To overcome the aforementioned drawbacks, a UAV multirotor platform has been developed. It is based on a state-of-the-art octocopter with enough payload to carry laboratory-grade instruments for the measurement of time-resolved atmospheric pressure, the three-component velocity vector and temperature, and enough autonomy to fly for 10 to 20 minutes, which is a standard averaging time in most atmospheric measurement applications. The UAV carries a gyroscope, an accelerometer and a GPS, and an algorithm has been developed and integrated to correct for its orientation and movement. This UAV platform opens many possibilities for the study of features that until now have been studied almost exclusively in wind tunnels, such as wind turbine blade tip vortex characteristics, the near-wake to far-wake transition, momentum entrainment from the higher part of the boundary layer in wind farms, etc. The validation of this new measurement technique has been performed against sonic anemometry in terms of wind speed and temperature time series as well as

  14. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  15. Fast and accurate predictions of covalent bonds in chemical space

    Science.gov (United States)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (~1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
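
    For orientation, the first-order (vertical, fixed-geometry) alchemical estimate discussed above has the generic Taylor form (our notation, not necessarily the paper's):

        E(\lambda) \approx E(0) + \lambda \left.\frac{\partial E}{\partial \lambda}\right|_{\lambda=0},
        \qquad
        v_{\mathrm{ext}}(\lambda) = (1-\lambda)\, v_{\mathrm{ext}}^{\mathrm{ref}} + \lambda\, v_{\mathrm{ext}}^{\mathrm{target}},
        \quad 0 \le \lambda \le 1,

    where λ linearly interpolates the external (nuclear) potential between the reference and target molecules at fixed geometry.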

  16. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define the clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  17. Integrated Translatome and Proteome: Approach for Accurate Portraying of Widespread Multifunctional Aspects of Trichoderma

    Science.gov (United States)

    Sharma, Vivek; Salwan, Richa; Sharma, P. N.; Gulati, Arvind

    2017-01-01

    Genome-wide studies of transcript expression help in the systematic monitoring of genes and allow targeting of candidate genes for future research. In contrast to relatively stable genomic data, the expression of genes is dynamic and is regulated at different levels in both time and space. The variation in the rate of translation is specific to each protein. Both the inherent nature of an mRNA molecule to be translated and external environmental stimuli can affect the efficiency of the translation process. In biocontrol agents (BCAs), the molecular response at the translational level may represent both a noise-like response relative to absolute transcript levels and an adaptive response to physiological and pathological situations, reflected in the subset of the mRNA population actively translated in a cell. The molecular responses of biocontrol are complex and involve multistage regulation of a number of genes. The use of high-throughput techniques has led to a rapid increase in the volume of transcriptomics data for Trichoderma. In general, almost half of the variation between transcriptome and protein levels is due to translational control. Thus, studies are required that integrate raw information from different “omics” approaches for an accurate depiction of the translational response of BCAs in interaction with plants and plant pathogens. Studies of the translational status of actively translated mRNAs, bridged with proteome data, will help in the accurate characterization of the subset of mRNAs actively engaged in translation. This review highlights the associated bottlenecks and the use of state-of-the-art procedures in addressing the gap to accelerate future elucidation of biocontrol mechanisms. PMID:28900417

  18. Methods for accurate analysis of galaxy clustering on non-linear scales

    Science.gov (United States)

    Vakili, Mohammadjavad

    2017-01-01

    Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on the computation of likelihood functions, which requires estimation of the covariance matrix of the observables used in our analyses. Therefore, accurate estimation of the covariance matrices serves as one of the key ingredients in precise cosmological parameter inference. This requires the generation of a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method based on low-resolution N-body simulations and an approximate galaxy biasing technique for generating mock catalogs. Using a reference catalog that was created with the high-resolution Big-MultiDark N-body simulation, we show that our method is able to produce catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real space and redshift space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, modeling of galaxy bias can suffer systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high-resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.

  19. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for a one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is used to validate the coefficients calculated from pure DSMC simulation with the Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  20. MITF accurately highlights epidermal melanocytes in atypical intraepidermal melanocytic proliferations.

    Science.gov (United States)

    Nybakken, Grant E; Sargen, Michael; Abraham, Ronnie; Zhang, Paul J; Ming, Michael; Xu, Xiaowei

    2013-02-01

    Atypical intraepidermal melanocytic proliferations (AIMP) have random cytologic atypia and other histologic features that are concerning for malignancy and often require immunohistochemistry to differentiate from melanoma in situ. Immunostaining with S100, Melan-A, and microphthalmia-associated transcription factor (MITF) was performed for 49 morphologically well-characterized AIMP lesions. The percentage of cells in the basal layer of the epidermis that were identified as melanocytes by immunohistochemistry was compared with the percentage observed by morphology on hematoxylin and eosin staining, which is the gold standard stain for identifying cytologic atypia within an AIMP. Melan-A estimated the highest percentage of melanocytes and S100 the fewest in 47 of the 49 lesions examined. The estimated percentage of melanocytes was 23.3% (95% confidence interval: 18.6-28.1; P Melanocyte estimates were similar for hematoxylin and eosin and MITF (P = 0.15) although S100 estimated 21.8% (95% confidence interval: -27.2 to -16.4; P melanocytes than hematoxylin and eosin. Melan-A staining produces higher estimates of epidermal melanocytes than S100 and MITF, which may increase the likelihood of diagnosing melanoma in situ. In contrast, melanoma in situ may be underdiagnosed with the use of S100, which results in lower estimates of melanocytes than the other 2 immunostains. Therefore, the best immunohistochemical marker for epidermal melanocytes is MITF.

  1. Accurate length control of supramolecular oligomerization: Vernier assemblies.

    Science.gov (United States)

    Hunter, Christopher A; Tomas, Salvador

    2006-07-12

    Linear oligomeric supramolecular assemblies of defined length have been generated using the Vernier principle. Two molecules, containing a different number (n and m) of mutually complementary binding sites, separated by the same distance, interact with each other to form an assembly of length (n x m). The assembly grows in the same way as simple supramolecular polymers, but at a molecular stop signal, when the binding sites come into register, the assembly terminates giving an oligomer of defined length. This strategy has been realized using tin and zinc porphyrin oligomers as the molecular building blocks. In the presence of isonicotinic acid, a zinc porphyrin trimer and a tin porphyrin dimer form a 3:4 triple stranded Vernier assembly six porphyrins long. The triple strand Vernier architecture introduced here adds an additional level of cooperativity, yielding a stability and selectivity that cannot be achieved via a simple Vernier approach. The assembly properties of the system were characterized using fluorescence titrations and size-exclusion chromatography (SEC). Assembly of the Vernier complex is efficient at micromolar concentrations in nonpolar solvents, and under more competitive conditions, a variety of fragmentation assemblies can be detected, allowing determination of the stability constants for this system and detailed speciation profiles to be constructed.

  2. Accurate determination of the absolute phase and temporal-pulse phase of few-cycle laser pulses

    Institute of Scientific and Technical Information of China (English)

    Xia Ke-Yu; Gong Shang-Qing; Niu Yue-Ping; Li Ru-Xin; Xu Zhi-Zhan

    2007-01-01

    A Fourier analysis method is used to accurately determine not only the absolute phase but also the temporal-pulse phase of an isolated few-cycle (chirped) laser pulse. This method is independent of the pulse shape and can fully characterize the light wave even though only a few samples per optical cycle are available. It paves the way for investigating the absolute phase-dependent extreme nonlinear optics, and the evolutions of the absolute phase and the temporal-pulse phase of few-cycle laser pulses.

  3. History and progress on accurate measurements of the Planck constant

    Science.gov (United States)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10⁻³⁴ J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, the precision of the value of h expanded. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred year old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions for the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10⁸ from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of a kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  4. Critical Considerations for Accurate Soil CO2 Flux Measurement

    Science.gov (United States)

    Xu, L.; Furtaw, M.; Madsen, R.; Welles, J.; Demetriades-Shah, T.; Anderson, D.; Garcia, R.; McDermitt, D.

    2005-12-01

    Soil respiration is a significant component of the carbon balance of an ecosystem, but the environmental (soil moisture, rain events, temperature, etc.) and biological (photosynthesis, LAI, etc.) factors that contribute to soil respiration remain poorly understood. This limits our ability to understand the carbon budget at the ecosystem level, making it difficult to predict the impacts of climate change. Two important reasons for this poor understanding have been the difficulty in making accurate soil respiration measurements and the lack of continuous, long-term soil respiration data at sufficiently fine temporal and spatial scales. To meet these needs, we have developed a new automated multiplexing system, the LI-8100M, for obtaining reliable soil CO2 flux data at high spatial and temporal resolution. The system has the capability to continuously measure the soil CO2 flux at up to 16 locations. Soil CO2 flux is driven primarily by the CO2 diffusion gradient across the soil surface. Ideally, the flux measurement should be made without affecting the diffusion gradient and without any chamber-induced pressure perturbation. In a closed-chamber system the slope of dCO2/dt is required to compute the flux. To obtain the slope of dCO2/dt, the chamber CO2 concentration must be allowed to rise. Consequently, the soil CO2 flux will be affected because of the decreased CO2 diffusion gradient. To minimize the impact of the decreased CO2 diffusion gradient on the CO2 flux measurement in the LI-8100M, the chamber CO2 concentration versus time is fitted with an exponential function. Soil CO2 flux is then estimated by calculating the initial slope of the exponential function at time zero, when the chamber first touches the soil and the chamber CO2 concentration is equal to the ambient. Our results show that the flux estimated from a linear function, the widely used method, could underestimate the CO2 flux by more than 10% compared with that from the exponential function. An
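
    A minimal Python sketch of the exponential-fit idea, using synthetic chamber data (the functional form is a common choice and not necessarily the exact LI-8100M algorithm), is:

        import numpy as np
        from scipy.optimize import curve_fit

        def chamber_co2(t, c_s, c_0, a):
            # Chamber CO2 rising from c_0 toward an asymptote c_s after closure.
            return c_s + (c_0 - c_s) * np.exp(-a * t)

        t = np.linspace(0.0, 120.0, 25)                  # s since chamber closure
        co2 = 460.0 - 60.0 * np.exp(-0.01 * t)           # synthetic record, ppm
        co2 = co2 + np.random.normal(0.0, 0.3, t.size)   # measurement noise

        (c_s, c_0, a), _ = curve_fit(chamber_co2, t, co2, p0=[460.0, 400.0, 0.01])

        # Initial slope at t = 0, before the weakened diffusion gradient suppresses the
        # flux; a linear fit over the same window would underestimate it.  Converting
        # ppm/s to a flux additionally needs chamber volume, area, pressure and temperature.
        print(f"dCO2/dt at closure: {a * (c_s - c_0):.3f} ppm/s")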

  5. Interaction potentials for water from accurate cluster calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xantheas, Sotiris S.

    2006-03-15

    simulations such as the first computer simulation of liquid water by Barker and Watts and by Rahman and Stillinger, the first parametrization of a pair potential for water from ab-initio calculations by Clementi and co-workers, and the first simulation of liquid water from first principles by Car and Parrinello. Many of the empirical pair potentials for water that are widely used even nowadays were developed in the early 1980s. These early models were mainly parameterized to reproduce measured thermodynamic bulk properties, due to the fact that molecular-level information for small water clusters was limited or even nonexistent at that time. Subsequent attempts have focused on introducing self-consistent polarization as a means of explicitly accounting for the magnitude of the non-additive many-body effects via an induction scheme. Again, the lack of accurate experimental or theoretical water cluster energetic information has prevented the assessment of the accuracy of those models.

  6. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    Science.gov (United States)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known tracer gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
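
    The far-field arithmetic is simple enough to show directly; the Python snippet below uses made-up numbers and assumes acetylene as the tracer:

        # In the well-mixed far field, the methane emission rate equals the measured
        # CH4:tracer enhancement ratio times the known tracer release rate.
        tracer_release_kg_h = 0.50      # metered tracer (acetylene) release rate, kg/h
        ch4_enhancement_ppm = 2.10      # CH4 above background in the plume
        tracer_enhancement_ppm = 0.035  # tracer above background in the plume

        # Convert the mole ratio to a mass ratio (16.04 g/mol CH4, 26.04 g/mol C2H2).
        mole_ratio = ch4_enhancement_ppm / tracer_enhancement_ppm
        ch4_flux_kg_h = tracer_release_kg_h * mole_ratio * (16.04 / 26.04)
        print(f"estimated CH4 emission rate: {ch4_flux_kg_h:.1f} kg/h")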

  7. Accurate, reliable control of process gases by mass flow controllers

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, J.; McKnight, T.

    1997-02-01

    The thermal mass flow controller, or MFC, has become an instrument of choice for monitoring and controlling process gas flow throughout the materials processing industry. These MFCs are used on CVD processes, etching tools, and furnaces and, within the semiconductor industry, are used on 70% of the processing tools. Reliability and accuracy are major concerns for the users of MFCs. Calibration and characterization technologies for the development and implementation of mass flow devices are described. A test facility is available to industry and universities to test and develop gas flow sensors and controllers and to evaluate their performance related to environmental effects, reliability, reproducibility, and accuracy. Additional work has been conducted in the area of accuracy. A gravimetric calibrator was invented that allows flow sensors to be calibrated in corrosive, reactive gases to an accuracy of 0.3% of reading, at least an order of magnitude better than previously possible. Although MFCs are typically specified with accuracies of 1% of full scale, MFCs may often be implemented with unwarranted confidence due to the conventional use of surrogate gas factors. Surrogate gas factors are corrections applied to process flow indications when an MFC has been calibrated on a laboratory-safe surrogate gas but is actually used on a toxic or corrosive process gas. Previous studies have indicated that the use of these factors may cause process flow errors of typically 10%, but possibly as great as 40% of full scale. This paper presents possible sources of error in MFC process gas flow monitoring and control, and gives an overview of corrective measures which may be implemented with MFC use to significantly reduce these sources of error.

  8. Improved Benefit of SPECT/CT Compared to SPECT Alone for the Accurate Localization of Endocrine and Neuroendocrine Tumors

    Directory of Open Access Journals (Sweden)

    Gonca G. Bural

    2012-12-01

    Full Text Available Objective: To assess the clinical utility of SPECT/CT in subjects with endocrine and neuroendocrine tumors compared to SPECT alone. Material and Methods: 48 subjects (31 women; 17 men; mean age 54±11) with clinical suspicion or diagnosis of endocrine and neuroendocrine tumor had 50 SPECT/CT scans (32 Tc-99m MIBI, 5 post-treatment I-131, 8 In-111 Pentetreotide, and 5 I-123 MIBG). SPECT alone findings were compared to SPECT/CT and to pathology or radiological follow-up. Results: From the 32 Tc-99m MIBI scans, SPECT accurately localized the lesion in 22 positive subjects while SPECT/CT did in 31 subjects. Parathyroid lesions not seen on SPECT alone were smaller than 10 mm. In five post-treatment I-131 scans, SPECT alone neither characterized nor localized any lesions accurately. SPECT/CT revealed 3 benign etiologies, a metastatic lymph node, and one equivocal lesion. In 8 In-111 Pentetreotide scans, SPECT alone could not localize primary or metastatic lesions in 6 subjects, all of which were localized with SPECT/CT. In five I-123 MIBG scans, SPECT alone could not detect a 1.1 cm adrenal lesion or correctly characterize normal physiologic adrenal uptake in consecutive scans of the same patient with a prior history of adrenalectomy, all of which were correctly localized and characterized with SPECT/CT. Conclusion: SPECT/CT is superior to SPECT alone in the assessment of endocrine and neuroendocrine tumors. It is better in lesion localization and lesion characterization, leading to a decrease in the number of equivocal findings. SPECT/CT should be included in the clinical work-up of all patients with a diagnosis or suspicion of endocrine and neuroendocrine tumors. (MIRT 2012;21:91-96)

  9. Decision Fusion Based on Hyperspectral and Multispectral Satellite Imagery for Accurate Forest Species Mapping

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2014-07-01

    Full Text Available This study investigates the effectiveness of combining multispectral very high resolution (VHR) and hyperspectral satellite imagery through a decision fusion approach, for accurate forest species mapping. Initially, two fuzzy classifications are conducted, one for each satellite image, using a fuzzy output support vector machine (SVM). The classification result from the hyperspectral image is then resampled to the multispectral image's spatial resolution and the two sources are combined using a simple yet efficient fusion operator. Thus, the complementary information provided by the two sources is effectively exploited, without having to resort to computationally demanding and time-consuming typical data fusion or vector stacking approaches. The effectiveness of the proposed methodology is validated in a complex Mediterranean forest landscape, comprising spectrally similar and spatially intermingled species. The decision fusion scheme resulted in an accuracy increase of 8% compared to the classification using only the multispectral imagery, whereas the increase was even higher compared to the classification using only the hyperspectral satellite image. Perhaps most importantly, its accuracy was significantly higher than alternative multisource fusion approaches, although the latter are characterized by much higher computation, storage, and time requirements.
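
    The paper's fusion operator is not specified in this abstract, so the sketch below only illustrates the general idea of combining two co-registered soft (fuzzy) classification outputs per pixel; the element-wise product, the renormalization, and the toy memberships are assumptions, not the authors' operator.

```python
import numpy as np

# Hedged sketch of per-pixel decision fusion of two fuzzy (soft) classifications.
# The element-wise product below is only one plausible choice of fusion operator.

def fuse_soft_outputs(memb_a, memb_b):
    """memb_a, memb_b: arrays of shape (n_pixels, n_classes) holding fuzzy
    class memberships from the two sources (already co-registered and resampled
    to a common spatial resolution). Returns fused hard labels."""
    fused = memb_a * memb_b                      # combine evidence per class
    fused /= fused.sum(axis=1, keepdims=True)    # renormalize memberships
    return fused.argmax(axis=1)                  # hard class decision

# Tiny example: two pixels, three classes.
a = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
b = np.array([[0.5, 0.4, 0.1], [0.1, 0.7, 0.2]])
print(fuse_soft_outputs(a, b))  # e.g., [0 1]
```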

  10. High-accurate optical vector analysis based on optical single-sideband modulation

    Science.gov (United States)

    Xue, Min; Pan, Shilong

    2016-11-01

    Most of the effort in optical communications has been devoted to improving the optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase, and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve a resolution of sub-Hz. However, the accuracy is restricted by the measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical conversion and optical-to-electrical conversion are essentially required to achieve high-resolution frequency sweeping and to extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the measurement errors induced by the unwanted sidebands and techniques for implementing highly accurate OSSB-based OVAs are discussed.

  11. Accurate determination of minority carrier mobility in silicon from quasi-steady-state photoluminescence

    Science.gov (United States)

    Giesecke, J. A.; Schindler, F.; Bühler, M.; Schubert, M. C.; Warta, W.

    2013-06-01

    Minority carrier mobility is a crucial transport property affecting the performance of semiconductor devices such as solar cells. Compensation of dopant species and novel multicrystalline materials call for accurate knowledge of minority carrier mobility for device simulation and characterization. Yet, measurement techniques of minority carrier mobility are scarce, and published data scatter significantly even on monocrystalline material. In this paper, the determination of minority carrier mobility from self-consistent quasi-steady-state photoluminescence measurements of effective carrier lifetime is presented. The measurement design is distinguished by a limitation of carrier recombination through minority carrier transport—with excess carrier generation and recombination confined to opposite interfaces, respectively. Minority carrier mobility is inferred from the minority carrier diffusion coefficient via the Einstein relation. An experimental proof of concept on monocrystalline p-type material is provided, showing good agreement with state-of-the-art data and models. Considerations for the applicability of the method to compensated and multicrystalline silicon materials are discussed.
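
    The last step mentioned above, converting a measured minority carrier diffusion coefficient into a mobility via the Einstein relation, is a one-line calculation; the sketch below uses illustrative values rather than results from the paper.

```python
# Minimal sketch of the Einstein relation mentioned above, which converts a
# measured minority carrier diffusion coefficient into a mobility. Values are
# illustrative, not taken from the paper.

k_B = 1.380649e-23     # Boltzmann constant, J/K
q   = 1.602176634e-19  # elementary charge, C

def mobility_from_diffusivity(D_cm2_per_s, T_kelvin=300.0):
    """Einstein relation: mu = q*D/(k_B*T). Returns mobility in cm^2/(V*s)."""
    return D_cm2_per_s * q / (k_B * T_kelvin)

# Example: D = 11.6 cm^2/s at 300 K gives a mobility of roughly 450 cm^2/(V*s).
print(mobility_from_diffusivity(11.6))
```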

  12. Accurate discrimination of conserved coding and non-coding regions through multiple indicators of evolutionary dynamics

    Directory of Open Access Journals (Sweden)

    Pesole Graziano

    2009-09-01

    Full Text Available Abstract Background The conservation of sequences between related genomes has long been recognised as an indication of functional significance, and recognition of sequence homology is one of the principal approaches used in the annotation of newly sequenced genomes. In the context of recent findings that the number of non-coding transcripts in higher organisms is likely to be much higher than previously imagined, discrimination between conserved coding and non-coding sequences is a topic of considerable interest. Additionally, it is desirable to discriminate between coding and non-coding conserved sequences without recourse to sequence similarity searches of protein databases, as such approaches exclude the identification of novel conserved proteins without characterized homologs and may be influenced by the presence in databases of sequences which are erroneously annotated as coding. Results Here we present a machine learning-based approach for the discrimination of conserved coding sequences. Our method calculates various statistics related to the evolutionary dynamics of two aligned sequences. These features are considered by a Support Vector Machine which designates the alignment as coding or non-coding with an associated probability score. Conclusion We show that our approach is both sensitive and accurate with respect to comparable methods and illustrate several situations in which it may be applied, including the identification of conserved coding regions in genome sequences and the discrimination of coding from non-coding cDNA sequences.

  13. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    Science.gov (United States)

    Cataneo, Matteo; Rapetti, David; Lombriser, Lucas; Li, Baojiu

    2016-12-01

    We refine the mass and environment dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method also allows it to be applied to other scalar-tensor theories characterized by a mass and environment dependent spherical collapse.

  14. A Statistical Method for Assessing Peptide Identification Confidence in Accurate Mass and Time Tag Proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-07-15

    High-throughput proteomics is rapidly evolving to require high mass measurement accuracy for a variety of different applications. Increased mass measurement accuracy in bottom-up proteomics specifically allows for an improved ability to distinguish and characterize detected MS features, which may in turn be identified by, e.g., matching to entries in a database for both precursor and fragmentation mass identification methods. Many tools exist with which to score the identification of peptides from LC-MS/MS measurements or to assess matches to an accurate mass and time (AMT) tag database, but these two calculations remain distinctly unrelated. Here we present a statistical method, Statistical Tools for AMT tag Confidence (STAC), which extends our previous work incorporating prior probabilities of correct sequence identification from LC-MS/MS, as well as the quality with which LC-MS features match AMT tags, to evaluate peptide identification confidence. Compared to existing tools, we are able to obtain significantly more high-confidence peptide identifications at a given false discovery rate and additionally assign confidence estimates to individual peptide identifications. Freely available software implementations of STAC are available in both command line and as a Windows graphical application.

  15. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements.

    Science.gov (United States)

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained. Copyright © 2017 Elsevier Inc. All rights reserved.
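
    The factorization described above, a data matrix written as the product of two non-negative matrices, is the classic non-negative matrix factorization setting. A minimal sketch using scikit-learn's NMF, with hypothetical array sizes and random data standing in for the stacked NMR distributions:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hedged sketch of the blind-source-separation idea described above: the
# multi-dimensional NMR distributions from many depths are flattened into the
# rows of a non-negative data matrix X, which is factored as X ~ W @ H.
# H holds the shared fluid signatures and W the per-sample contributions.
# Shapes, the number of fluid components, and the data are all assumptions.

rng = np.random.default_rng(0)
n_samples, n_bins, n_fluids = 200, 500, 3           # hypothetical sizes
X = rng.random((n_samples, n_bins))                  # stand-in for real log data

model = NMF(n_components=n_fluids, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)    # (n_samples, n_fluids): contribution of each fluid
H = model.components_         # (n_fluids, n_bins): the resolved fluid signatures

print(W.shape, H.shape)
```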

  16. Accurate characterization of 3D diffraction gratings using time domain discontinuous Galerkin method with exact absorbing boundary conditions

    KAUST Repository

    Sirenko, Kostyantyn

    2013-07-01

    Exact absorbing and periodic boundary conditions allow grating problems' infinite physical domains to be truncated without introducing any errors. This work presents exact absorbing boundary conditions for 3D diffraction gratings and describes their discretization within a high-order time-domain discontinuous Galerkin finite element method (TD-DG-FEM). The error introduced by the boundary condition discretization matches that of the TD-DG-FEM; this results in an optimal solver in terms of accuracy and computation time. Numerical results demonstrate the superiority of this solver over TD-DG-FEM with perfectly matched layers (PML)-based domain truncation. © 2013 IEEE.

  17. Develop Accurate Methods for Characterizing And Quantifying Cohesive Sediment Erosion Under Combined Current Wave Conditions: Project ER 1497

    Science.gov (United States)

    2017-09-01

    Task 1: Determine shear-stress time histories and turbulent structure within the enclosed SEAWOLF channel with smooth and rough walls. Task 2: Create a database of shear-stress time histories by demonstrating and applying the newly developed optical sensing. Shear-stress time histories collected with the optical sensors within SEAWOLF will be used to develop a means to program the computer-driven piston...

  18. A New Amide Proton R1ρ Experiment Permits Accurate Characterization of Microsecond Time-scale Conformational Exchange

    Energy Technology Data Exchange (ETDEWEB)

    Eichmueller, Christian; Skrynnikov, Nikolai R. [Purdue University, Department of Chemistry (United States)], E-mail: nikolai@purdue.edu

    2005-08-15

    A new off-resonance spin-lock experiment to record relaxation dispersion profiles of amide protons is presented. The sensitivity-enhanced HSQC-type sequence is designed to minimize the interference from cross-relaxation effects and ensure that the dispersion profiles in the absence of μs-ms time-scale dynamics are flat. Toward this end (i) the proton background is eliminated by sample deuteration (Ishima et al., 1998), (ii) the ¹H spin lock is applied to two-spin modes 2(Hx sinθ + Hz cosθ)Nz, and (iii) the tilt angle θ ≈ 35° is maintained throughout the series of measurements (Desvaux et al. Mol. Phys., 86 (1995) 1059). The relaxation dispersion profiles recorded in this manner sample a wide range of effective rf field strengths (up to and in excess of 20 kHz), which makes them particularly suitable for studies of motions on the time scale ≤100 μs. The new experiment has been tested on the Ca²⁺-loaded regulatory domain of cardiac troponin C. Many residues show pronounced dispersions with remarkably similar correlation times of 30 μs. Furthermore, these residues are localized in the regions that have been previously implicated in conformational changes (Spyracopoulos et al. Biochemistry, 36 (1997) 12138).

  19. Accurate statistical associating fluid theory for chain molecules formed from Mie segments.

    Science.gov (United States)

    Lafitte, Thomas; Apostolakou, Anastasia; Avendaño, Carlos; Galindo, Amparo; Adjiman, Claire S; Müller, Erich A; Jackson, George

    2013-10-21

    A highly accurate equation of state (EOS) for chain molecules formed from spherical segments interacting through Mie potentials (i.e., a generalized Lennard-Jones form with variable repulsive and attractive exponents) is presented. The quality of the theoretical description of the vapour-liquid equilibria (coexistence densities and vapour pressures) and the second-derivative thermophysical properties (heat capacities, isobaric thermal expansivities, and speed of sound) are critically assessed by comparison with molecular simulation and with experimental data of representative real substances. Our new EOS represents a notable improvement with respect to previous versions of the statistical associating fluid theory for variable range interactions (SAFT-VR) of the generic Mie form. The approach makes rigorous use of the Barker and Henderson high-temperature perturbation expansion up to third order in the free energy of the monomer Mie system. The radial distribution function of the reference monomer fluid, which is a prerequisite for the representation of the properties of the fluid of Mie chains within a Wertheim first-order thermodynamic perturbation theory (TPT1), is calculated from a second-order expansion. The resulting SAFT-VR Mie EOS can now be applied to molecular fluids characterized by a broad range of interactions spanning from soft to very repulsive and short-ranged Mie potentials. A good representation of the corresponding molecular-simulation data is achieved for model monomer and chain fluids. When applied to the particular case of the ubiquitous Lennard-Jones potential, our rigorous description of the thermodynamic properties is of equivalent quality to that obtained with the empirical EOSs for LJ monomer (EOS of Johnson et al.) and LJ chain (soft-SAFT) fluids. A key feature of our reformulated SAFT-VR approach is the greatly enhanced accuracy in the near-critical region for chain molecules. This attribute, combined with the accurate modeling of second

  20. Accurate and Simplified Prediction of AVF for Delay and Energy Efficient Cache Design

    Institute of Scientific and Technical Information of China (English)

    An-Guo Ma; Yu Cheng; Zuo-Cheng Xing

    2011-01-01

    With continuous technology scaling, on-chip structures are becoming more and more susceptible to soft errors. The architectural vulnerability factor (AVF) has been introduced to quantify the architectural vulnerability of on-chip structures to soft errors. Recent studies have found that designing soft error protection techniques with awareness of AVF is greatly helpful to achieve a tradeoff between performance and reliability for several structures (e.g., issue queue, reorder buffer). Cache is one of the most susceptible components to soft errors and is commonly protected with error correcting codes (ECC). However, protecting caches closer to the processor (e.g., the L1 data cache (L1D)) using ECC could result in high overhead. Protecting caches without accurate knowledge of the vulnerability characteristics may lead to over-protection. Therefore, designing AVF-aware ECC is attractive for designers to balance among performance, power and reliability for cache, especially at an early design stage. In this paper, we improve the methodology of cache AVF computation and develop a new AVF estimation framework, soft error reliability analysis based on SimpleScalar. Then we characterize the dynamic vulnerability behavior of L1D and detect the correlations between L1D AVF and various performance metrics. We propose to employ Bayesian additive regression trees to accurately model the variation of L1D AVF and to quantitatively explain the important effects of several key performance metrics on L1D AVF. Then, we employ a bump-hunting technique to reduce the complexity of L1D AVF prediction and extract some simple selection rules based on several key performance metrics, thus enabling a simplified and fast estimation of L1D AVF. Based on the simplified and fast estimation of L1D AVF, intervals of high L1D AVF can be identified online, enabling us to develop the AVF-aware ECC technique to reduce the overhead of ECC. Experimental results show that compared with traditional ECC technique

  1. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Science.gov (United States)

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing commercial organic fertilizer quality. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
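
    A minimal sketch of the NIR-PLS calibration workflow described above, using scikit-learn; the spectral grid, number of latent variables, and synthetic stand-in data are assumptions, so the printed cross-validated R² is only a placeholder for the real figure of merit.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hedged sketch of an NIR-PLS calibration: partial least squares regression maps
# NIR spectra to a measured quality parameter (e.g., total organic matter).
# Array shapes, the component count, and the data are assumptions.

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 104, 700                  # 104 fertilizers, hypothetical grid
X = rng.random((n_samples, n_wavelengths))           # stand-in for NIR spectra
coef = rng.normal(size=n_wavelengths) * (np.arange(n_wavelengths) < 30)
y = X @ coef + rng.normal(scale=0.1, size=n_samples)  # stand-in reference values

pls = PLSRegression(n_components=8)                  # number of latent variables assumed
y_cv = cross_val_predict(pls, X, y, cv=10)           # cross-validated predictions
r2 = 1 - np.sum((y - y_cv.ravel()) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"cross-validated R^2: {r2:.2f}")
```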

  2. On accurate computations of bound state properties in three- and four-electron atomic systems

    CERN Document Server

    Frolov, Alexei M

    2016-01-01

    Results of accurate computations of bound states in three- and four-electron atomic systems are discussed. Bound state properties of the four-electron lithium ion Li$^{-}$ in its ground $2^{2}S-$state are determined from the results of accurate, variational computations. We also consider a closely related problem of accurate numerical evaluation of the half-life of the beryllium-7 isotope. This problem is of paramount importance for modern radiochemistry.

  3. Using highly accurate 3D nanometrology to model the optical properties of highly irregular nanoparticles: a powerful tool for rational design of plasmonic devices.

    Science.gov (United States)

    Perassi, Eduardo M; Hernandez-Garrido, Juan C; Moreno, M Sergio; Encina, Ezequiel R; Coronado, Eduardo A; Midgley, Paul A

    2010-06-09

    The realization of materials at the nanometer scale creates new challenges for quantitative characterization and modeling, as many physical and chemical properties at the nanoscale are highly size- and shape-dependent. In particular, the accurate nanometrological characterization of noble metal nanoparticles (NPs) is crucial for understanding their optical response, which is determined by the collective excitation of conduction electrons, known as localized surface plasmons. Its manipulation gives rise to a variety of applications in ultrasensitive spectroscopies, photonics, improved photovoltaics, imaging, and cancer therapy. Here we show that by combining electron tomography with electrodynamic simulations an accurate optical model of a highly irregular gold NP synthesized by chemical methods could be achieved. This constitutes a novel and rigorous tool for understanding the plasmonic properties of real three-dimensional nano-objects.

  4. Accurate bond energies of biodiesel methyl esters from multireference averaged coupled-pair functional calculations.

    Science.gov (United States)

    Oyeyemi, Victor B; Keith, John A; Carter, Emily A

    2014-09-04

    Accurate bond dissociation energies (BDEs) are important for characterizing combustion chemistry, particularly the initial stages of pyrolysis. Here we contribute to evaluating the thermochemistry of biodiesel methyl ester molecules using ab initio BDEs derived from a multireference averaged coupled-pair functional (MRACPF2)-based scheme. Having previously validated this approach for hydrocarbons and a variety of oxygenates, herein we provide further validation for bonds within carboxylic acids and methyl esters, finding our scheme predicts BDEs within chemical accuracy (i.e., within 1 kcal/mol) for these molecules. Insights into BDE trends with ester size are then analyzed for methyl formate through methyl crotonate. We find that the carbonyl group in the ester moiety has only a local effect on BDEs. C═C double bonds in ester alkyl chains are found to increase the strengths of bonds adjacent to the double bond. An important exception is bonds beta to C═C or C═O bonds, which produce allylic-like radicals upon dissociation. The observed trends arise from different degrees of geometric relaxation and resonance stabilization in the radicals produced. We also compute BDEs in various small alkanes and alkenes as models for the long hydrocarbon chain of actual biodiesel methyl esters. We again show that allylic bonds in the alkenes are much weaker than those in the small methyl esters, indicating that hydrogen abstractions are more likely at the allylic site and even more likely at bis-allylic sites of alkyl chains due to more electrons involved in π-resonance in the latter. Lastly, we use the BDEs in small surrogates to estimate heretofore unknown BDEs in large methyl esters of biodiesel fuels.

  5. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    Science.gov (United States)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or below for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, rescaled power spectra for initial conditions with massive neutrinos; https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.

  6. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  7. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures

    Energy Technology Data Exchange (ETDEWEB)

    Mangipudi, K.R., E-mail: mangipudi@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Radisch, V., E-mail: vradisch@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Holzer, L., E-mail: holz@zhaw.ch [Züricher Hochschule für Angewandte Wissenschaften, Institute of Computational Physics, Wildbachstrasse 21, CH-8400 Winterthur (Switzerland); Volkert, C.A., E-mail: volkert@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)

    2016-04-15

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. - Highlights: • FIB nanotomography of nanoporous structure with features sizes of ∼40 nm or less. • Accurate determination of individual slice thickness with subpixel precision. • The method preserves surface topography. • Quantitative 3D microstructural analysis of materials with open porosity.

  8. Laser characterization with advanced digital signal processing

    DEFF Research Database (Denmark)

    Piels, Molly; Tafur Monroy, Idelfonso; Zibar, Darko

    2015-01-01

    The use of machine learning techniques to characterize lasers with low output power is reviewed. Optimized phase tracking algorithms that can produce accurate noise spectra are discussed, and a method for inferring the amplitude noise spectrum and rate equation model of the laser under test is pr...

  9. Atomic force microscopy characterization of cellulose nanocrystals

    Science.gov (United States)

    Roya R. Lahiji; Xin Xu; Ronald Reifenberger; Arvind Raman; Alan Rudie; Robert J. Moon

    2010-01-01

    Cellulose nanocrystals (CNCs) are gaining interest as a “green” nanomaterial with superior mechanical and chemical properties for high-performance nanocomposite materials; however, there is a lack of accurate material property characterization of individual CNCs. Here, a detailed study of the topography, elastic and adhesive properties of individual wood-derived CNCs...

  10. A thermal displacement sensor in MEMS [Towards accurate small-scale manipulation]

    NARCIS (Netherlands)

    Krijnen, Bram; Hogervorst, Richard; Dijkstra, Jan Willem; Engelen, Johan; Woldering, Léon; Brouwer, Dannis; Abelmann, Leon; Soemers, Herman

    2011-01-01

    Accurate manipulation of small objects is becoming more and more important. Besides accurate manipulation, the demand for small manipulators is also increasing. Some examples are high-density data storage, (digital) light processing, accelerometers, rate sensors, and the use of cantilevers in atomic

  11. Mitigating the Erratic Behavior of the Transportation Working Capital Fund Through Accurate Forecasting

    Science.gov (United States)

    2015-06-19

    The statistical approach references Statistics for Business and Economics, 11th Edition (McClave et al., 2011). Data and Scope: The data used for forecasting were provided by two... Graduate research paper presented to the faculty of the Department of Operational Sciences, Air...

  12. Smart and Accurate State-of-Charge Indication in Portable Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, P.P.L.

    2005-01-01

    Accurate State-of-Charge (SoC) and remaining run-time indication for portable devices is important for the user-convenience and to prolong the lifetime of batteries. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The meth

  13. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    DEFF Research Database (Denmark)

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna

    2011-01-01

    In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT, without the need for structural brain scanning.

  14. A water-renewal system that accurately delivers small volumes of water to exposure chambers

    Science.gov (United States)

    Zumwalt, D. C.; Dwyer, F.J.; Greer, I.E.; Ingersoll, C.G.

    1994-01-01

    This paper describes a system that can accurately deliver small volumes of water (50 ml per cycle) to eight 300-ml beakers. The system is inexpensive (<$100), easy to build (<8 h), and easy to calibrate (<15 min), and accurately delivers small volumes of water (<5% variability).

  15. Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies

    Science.gov (United States)

    Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.

    2012-01-01

    Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…

  16. A new method and instrument for accurately measuring interval between ultrashort pulses

    Institute of Scientific and Technical Information of China (English)

    Zhonggang Ji; Yuxin Leng; Yunpei Deng; Bin Tang; Haihe Lu; Ruxin Li; Zhizhan Xu

    2005-01-01

    Using the second-order autocorrelation concept, a novel method and instrument for accurately measuring the interval between two linearly polarized ultrashort pulses in real time are presented. The experiment demonstrated that the measuring method and instrument are simple and accurate (measurement error < 5 fs). During measurement there are no moving elements, so no dynamic measurement error is introduced.

  17. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, while obtaining an accurate estimate of the target location under an energy constraint. There is no existing mechanism to control and maintain these data, and the wireless communication bandwidth is very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive update algorithm that reduces the number of sensor nodes required for an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position, and this can be extended to four or five nodes to find a more accurate location ...

  18. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Directory of Open Access Journals (Sweden)

    Zhiquan Gao

    2015-09-01

    Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally consistent, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  19. Measuring laser power as a force: a new paradigm to accurately monitor optical power during laser-based machining operations

    Science.gov (United States)

    Williams, Paul; Simonds, Brian; Sowards, Jeffrey; Hadler, Joshua

    2016-03-01

    In laser manufacturing operations, accurate measurement of laser power is important for product quality, operational repeatability, and process validation. Accurate real-time measurement of high-power lasers, however, is difficult. Typical thermal power meters must absorb all the laser power in order to measure it. This constrains power meters to be large, slow and exclusive (that is, the laser cannot be used for its intended purpose during the measurement). To address these limitations, we have developed a different paradigm in laser power measurement where the power is not measured according to its thermal equivalent but rather by measuring the laser beam's momentum (radiation pressure). Very simply, light reflecting from a mirror imparts a small force perpendicular to the mirror which is proportional to the optical power. By mounting a high-reflectivity mirror on a high-sensitivity force transducer (scale), we are able to measure laser power in the range of tens of watts up to ~ 100 kW. The critical parameters for such a device are mirror reflectivity, angle of incidence, and scale sensitivity and accuracy. We will describe our experimental characterization of a radiation-pressure-based optical power meter. We have tested it for modulated and CW laser powers up to 92 kW in the laboratory and up to 20 kW in an experimental laser welding booth. We will describe present accuracy, temporal response, sources of measurement uncertainty, and hurdles which must be overcome to have an accurate power meter capable of routine operation as a turning mirror within a laser delivery head.
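
    The underlying relation is simple: a beam of power P reflecting from a mirror of reflectivity R at angle of incidence θ (from the normal) exerts a normal force F = (1 + R)(P/c)cosθ, so the power follows from the measured force. A minimal sketch with illustrative numbers:

```python
import math

# Hedged sketch of the radiation-pressure relation behind the power meter
# described above. Reflectivity, angle, and power values are illustrative only.

c = 299_792_458.0  # speed of light, m/s

def power_from_force(force_newton, reflectivity=0.999, incidence_deg=45.0):
    """Optical power (W) inferred from the normal force on the mirror."""
    return force_newton * c / ((1.0 + reflectivity) * math.cos(math.radians(incidence_deg)))

# A 10 kW beam at 45 degrees produces a normal force of roughly 47 micronewtons:
F = 10_000.0 * (1 + 0.999) * math.cos(math.radians(45.0)) / c
print(F, power_from_force(F))  # ~4.7e-5 N, ~10000 W
```

    At 10 kW the force is only a few tens of micronewtons, which illustrates why the scale sensitivity and accuracy dominate the critical parameters listed above.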

  20. FRACTURING FLUID CHARACTERIZATION FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Subhash Shah

    2000-08-01

    Hydraulic fracturing technology has been successfully applied for well stimulation of low and high permeability reservoirs for numerous years. Treatment optimization and improved economics have always been the key to the success and it is more so when the reservoirs under consideration are marginal. Fluids are widely used for the stimulation of wells. The Fracturing Fluid Characterization Facility (FFCF) has been established to provide the accurate prediction of the behavior of complex fracturing fluids under downhole conditions. The primary focus of the facility is to provide valuable insight into the various mechanisms that govern the flow of fracturing fluids and slurries through hydraulically created fractures. During the time between September 30, 1992, and March 31, 2000, the research efforts were devoted to the areas of fluid rheology, proppant transport, proppant flowback, dynamic fluid loss, perforation pressure losses, and frictional pressure losses. In this regard, a unique above-the-ground fracture simulator was designed and constructed at the FFCF, labeled "The High Pressure Simulator" (HPS). The FFCF is now available to industry for characterizing and understanding the behavior of complex fluid systems. To better reflect and encompass the broad spectrum of the petroleum industry, the FFCF now operates under the new name of "The Well Construction Technology Center" (WCTC). This report documents the summary of the activities performed during 1992-2000 at the FFCF.

  1. MetaShot: an accurate workflow for taxon classification of host-associated microbiome from shotgun metagenomic data.

    Science.gov (United States)

    Fosso, B; Santamaria, M; D'Antonio, M; Lovero, D; Corrado, G; Vizza, E; Passaro, N; Garbuglia, A R; Capobianchi, M R; Crescenzi, M; Valiente, G; Pesole, G

    2017-06-01

    Shotgun metagenomics by high-throughput sequencing may allow deep and accurate characterization of host-associated total microbiomes, including bacteria, viruses, protists and fungi. However, the analysis of such sequencing data is still extremely challenging in terms of both overall accuracy and computational efficiency, and current methodologies show substantial variability in misclassification rate and resolution at lower taxonomic ranks or are limited to specific life domains (e.g. only bacteria). We present here MetaShot, a workflow for assessing the total microbiome composition from host-associated shotgun sequence data, and show its overall optimal accuracy performance by analyzing both simulated and real datasets. MetaShot is available at https://github.com/bfosso/MetaShot. Contact: graziano.pesole@uniba.it. Supplementary data are available at Bioinformatics online.

  2. Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning

    Science.gov (United States)

    Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.; Barillot, T. R.; Ilchen, M.; Lutman, A. A.; Marinelli, A.; Maxwell, T.; Achner, A.; Agåker, M.; Berrah, N.; Bostedt, C.; Bozek, J. D.; Buck, J.; Bucksbaum, P. H.; Montero, S. Carron; Cooper, B.; Cryan, J. P.; Dong, M.; Feifel, R.; Frasinski, L. J.; Fukuzawa, H.; Galler, A.; Hartmann, G.; Hartmann, N.; Helml, W.; Johnson, A. S.; Knie, A.; Lindahl, A. O.; Liu, J.; Motomura, K.; Mucke, M.; O'Grady, C.; Rubensson, J.-E.; Simpson, E. R.; Squibb, R. J.; Såthe, C.; Ueda, K.; Vacher, M.; Walke, D. J.; Zhaunerchyk, V.; Coffee, R. N.; Marangos, J. P.

    2017-06-01

    Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high-repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy, we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high-repetition rate, by training a model on a small set of fully diagnosed pulses. This opens the door to fully realizing the promise of next-generation high-repetition rate X-ray lasers.
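
    A hedged sketch of the training strategy described above, with a generic regressor standing in for the authors' model and synthetic data standing in for the diagnosed pulses and per-shot machine parameters:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hedged sketch: train a regression model on a small set of fully diagnosed
# pulses, using only machine parameters that are cheap to record on every shot,
# then predict a pulse property (e.g., photon energy) for all subsequent shots.
# The model choice, feature count, and data are assumptions, not the authors'
# implementation.

rng = np.random.default_rng(2)
n_diagnosed, n_machine_params = 1000, 20
X = rng.normal(size=(n_diagnosed, n_machine_params))   # per-shot machine readbacks
y = X[:, 0] * 2.0 + X[:, 3] ** 2 + rng.normal(scale=0.1, size=n_diagnosed)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```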

  3. Mitotic Protein CSPP1 Interacts with CENP-H Protein to Coordinate Accurate Chromosome Oscillation in Mitosis.

    Science.gov (United States)

    Zhu, Lijuan; Wang, Zhikai; Wang, Wenwen; Wang, Chunli; Hua, Shasha; Su, Zeqi; Brako, Larry; Garcia-Barrio, Minerva; Ye, Mingliang; Wei, Xuan; Zou, Hanfa; Ding, Xia; Liu, Lifang; Liu, Xing; Yao, Xuebiao

    2015-11-06

    Mitotic chromosome segregation is orchestrated by the dynamic interaction of spindle microtubules with the kinetochores. During chromosome alignment, kinetochore-bound microtubules undergo dynamic cycles between growth and shrinkage, leading to an oscillatory movement of chromosomes along the spindle axis. Although kinetochore protein CENP-H serves as a molecular control of kinetochore-microtubule dynamics, the mechanistic link between CENP-H and kinetochore microtubules (kMT) has remained less characterized. Here, we show that CSPP1 is a kinetochore protein essential for accurate chromosome movements in mitosis. CSPP1 binds to CENP-H in vitro and in vivo. Suppression of CSPP1 perturbs proper mitotic progression and compromises the satisfaction of spindle assembly checkpoint. In addition, chromosome oscillation is greatly attenuated in CSPP1-depleted cells, similar to what was observed in the CENP-H-depleted cells. Importantly, CSPP1 depletion enhances velocity of kinetochore movement, and overexpression of CSPP1 decreases the speed, suggesting that CSPP1 promotes kMT stability during cell division. Specific perturbation of CENP-H/CSPP1 interaction using a membrane-permeable competing peptide resulted in a transient mitotic arrest and chromosome segregation defect. Based on these findings, we propose that CSPP1 cooperates with CENP-H on kinetochores to serve as a novel regulator of kMT dynamics for accurate chromosome segregation.

  4. Comparison of methods for accurate quantification of DNA mass concentration with traceability to the international system of units.

    Science.gov (United States)

    Bhat, Somanath; Curach, Natalie; Mostyn, Thomas; Bains, Gursharan Singh; Griffiths, Kate R; Emslie, Kerry R

    2010-09-01

    Accurate estimation of total DNA concentration (mass concentration, e.g., ng/μL) that is traceable to the International System of Units (SI) is a crucial starting point for improving reproducible measurements in many applications involving nucleic acid testing and requires a DNA reference material which has been certified for its total DNA concentration. In this study, the concentrations of six different lambda DNA preparations were determined using different measurement platforms: UV absorbance at 260 nm (A(260)) with and without prior sodium hydroxide (NaOH) treatment of the DNA, the PicoGreen assay, and digital polymerase chain reaction (dPCR). DNA concentration estimates by A(260) with and without prior NaOH treatment were significantly different for five of the six samples tested. There were no significant differences in concentration estimates based on A(260) with prior NaOH treatment, PicoGreen analysis, and dPCR for two of the three samples tested using dPCR. Since the measurand in dPCR is amount (copy number) concentration (copies/μL), the results suggest that accurate estimation of DNA mass concentration based on copy number concentration is achievable provided the DNA is fully characterized and in the double-stranded form or amplification is designed to be initiated from only one of the two complementary strands.
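
    Converting a dPCR copy number concentration into a mass concentration requires only the template length and an average base-pair molar mass; the sketch below assumes double-stranded lambda DNA (48,502 bp) and the common ~650 g/mol per base pair approximation, with a made-up copy number.

```python
# Hedged sketch of converting a dPCR amount (copy number) concentration to a
# mass concentration for double-stranded lambda DNA. The 650 g/mol per base
# pair figure is an approximation and the example copy number is made up.

AVOGADRO = 6.02214076e23
LAMBDA_BP = 48_502                 # length of the lambda genome in base pairs
AVG_BP_MOLAR_MASS = 650.0          # g/mol per bp of dsDNA (approximate)

def mass_conc_ng_per_uL(copies_per_uL, length_bp=LAMBDA_BP):
    """Mass concentration (ng/uL) from a copy number concentration (copies/uL)."""
    grams_per_copy = length_bp * AVG_BP_MOLAR_MASS / AVOGADRO
    return copies_per_uL * grams_per_copy * 1e9   # g -> ng

print(mass_conc_ng_per_uL(1.0e8))   # ~5.2 ng/uL for 1e8 copies/uL of lambda DNA
```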

  5. Easy characterization of petroleum fractions: Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Miquel, J. (Univ. Politecnica de Catalunya (Spain)); Castells, F. (Univ. Rovira i Virgili, Catalunya (Spain))

    1993-12-01

    A new method for characterizing petroleum fractions, based on pseudocomponent breakdown using the integral method, has been developed. It requires only an atmospheric true boiling point (tbp) distillation curve and the density of the whole fraction. The proposed characterization procedure is valid for representing any oil fraction (light or heavy) with a boiling point range smaller than 300 K. It is based on the hypothesis of a constant Watson characterization factor, K_w, for all the pseudocomponents. Outside this range, it is less accurate (greater errors in material and molar balances). Therefore, a method considering a variable K_w is best suited to treat these fractions.
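
    A minimal sketch of the constant-K_w assumption: with the Watson characterization factor K_w = T_b^(1/3)/SG (T_b the mean average boiling point in degrees Rankine, SG the specific gravity at 60 °F), each pseudocomponent's specific gravity follows from its cut's boiling point. The K_w value and cut temperatures below are illustrative only.

```python
# Hedged sketch of the constant Watson characterization factor assumption:
# K_w = Tb^(1/3) / SG, with Tb in degrees Rankine and SG the specific gravity.
# Given a whole-fraction K_w, each pseudocomponent's SG follows from its
# boiling point. Cut temperatures and the K_w value are illustrative only.

def kelvin_to_rankine(t_k):
    return t_k * 1.8

def pseudo_sg(tb_kelvin, kw):
    """Specific gravity of a pseudocomponent from its boiling point and K_w."""
    return kelvin_to_rankine(tb_kelvin) ** (1.0 / 3.0) / kw

kw_fraction = 11.8                       # assumed constant Watson factor
cut_midpoints_K = [420.0, 470.0, 520.0]  # hypothetical tbp cut midpoints
for tb in cut_midpoints_K:
    print(tb, round(pseudo_sg(tb, kw_fraction), 3))
```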

  6. Characterization of Cloud Water-Content Distribution

    Science.gov (United States)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
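
    The maximum likelihood step mentioned above can be illustrated with a short sketch: fit a parametric density to sub-grid cloud water content values. The lognormal form and the synthetic data are assumptions for illustration; the actual tool works on CloudSat radar retrievals.

```python
import numpy as np
from scipy import stats

# Hedged sketch of maximum likelihood estimation of a probability density for
# sub-grid cloud water content. The lognormal family and the synthetic data
# are assumptions made only for illustration.

rng = np.random.default_rng(3)
cwc = rng.lognormal(mean=-2.0, sigma=0.8, size=5_000)   # stand-in water contents, g/m^3

# scipy's fit() maximizes the likelihood of the chosen distribution family.
shape, loc, scale = stats.lognorm.fit(cwc, floc=0.0)
print("fitted sigma:", round(shape, 3), "fitted median (g/m^3):", round(scale, 4))
```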

  7. Highly accurate local pseudopotentials of Li, Na, and Mg for orbital free density functional theory

    Science.gov (United States)

    Legrain, Fleur; Manzhos, Sergei

    2015-02-01

    We present a method to make highly accurate pseudopotentials for use with orbital-free density functional theory (OF-DFT) with given exchange-correlation and kinetic energy functionals, which avoids the compounding of errors of Kohn-Sham DFT and OF-DFT. The pseudopotentials are fitted to reference (experimental or highly accurate quantum chemistry) values of interaction energies, geometries, and mechanical properties, using a genetic algorithm. This can enable routine large-scale ab initio simulations of many practically relevant materials. Pseudopotentials for Li, Na, and Mg resulting in accurate geometries and energies of different phases as well as of vacancy formation and bulk moduli are presented as examples.

  8. Three-stage Stiffly Accurate Runge-Kutta Methods for Stiff Stochastic Differential Equations

    Institute of Scientific and Technical Information of China (English)

    WANG PENG

    2011-01-01

    In this paper we discuss diagonally implicit and semi-implicit methods based on the three-stage stiffly accurate Runge-Kutta methods for solving Stratonovich stochastic differential equations (SDEs). Two methods, a three-stage stiffly accurate semi-implicit (SASI3) method and a three-stage stiffly accurate diagonally implicit (SADI3) method, are constructed in this paper. In particular, the truncated random variable is used in the implicit method. The stability properties and numerical results show the effectiveness of these methods in the pathwise approximation of stiff SDEs.

  9. Efficient and Accurate Computational Framework for Injector Design and Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  10. Progress towards an accurate determination of the Boltzmann constant by Doppler spectroscopy

    National Research Council Canada - National Science Library

    Lemarchand, Cyril; Triki, Meriam; Darquié, Benoît; J. Bordé, Christian; Chardonnet, Christian; Daussy, Christophe

    2011-01-01

    In this paper, we present significant progress performed on an experiment dedicated to the determination of the Boltzmann constant, k, by accurately measuring the Doppler absorption profile of a line...

  11. Accurate and Precise Determination of Uranium by Means of Extraction Spectrophotometric

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Uranium is an important nuclear material. Accurate determination of uranium is significant in nuclear fuel production, accountancy, nuclear safeguards, and other procedures of the nuclear fuel cycle.

  12. Signal-intensity-ratio MRI accurately estimates hepatic iron load in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Guy Rostoker

    2017-01-01

    Conclusions: This pilot study shows that liver iron determination based on signal-intensity-ratio MRI (Rennes University algorithm) very accurately identifies iron load in hemodialysis patients, by comparison with liver histology.

  13. Fast and Accurate Discovery of Degenerate Linear Motifs in Protein Sequences

    Science.gov (United States)

    Levy, Emmanuel D.; Michnick, Stephen W.

    2014-01-01

    Linear motifs mediate a wide variety of cellular functions, which makes their characterization in protein sequences crucial to understanding cellular systems. However, the short length and degenerate nature of linear motifs make their discovery a difficult problem. Here, we introduce MotifHound, an algorithm particularly suited for the discovery of small and degenerate linear motifs. MotifHound performs an exact and exhaustive enumeration of all motifs present in proteins of interest, including all of their degenerate forms, and scores the overrepresentation of each motif based on its occurrence in proteins of interest relative to a background (e.g., proteome) using the hypergeometric distribution. To assess MotifHound, we benchmarked it together with state-of-the-art algorithms. The benchmark consists of 11,880 sets of proteins from S. cerevisiae; in each set, we artificially spiked-in one motif varying in terms of three key parameters, (i) number of occurrences, (ii) length and (iii) the number of degenerate or “wildcard” positions. The benchmark enabled the evaluation of the impact of these three properties on the performance of the different algorithms. The results showed that MotifHound and SLiMFinder were the most accurate in detecting degenerate linear motifs. Interestingly, MotifHound was 15 to 20 times faster at comparable accuracy and performed best in the discovery of highly degenerate motifs. We complemented the benchmark by an analysis of proteins experimentally shown to bind the FUS1 SH3 domain from S. cerevisiae. Using the full-length protein partners as sole information, MotifHound recapitulated most experimentally determined motifs binding to the FUS1 SH3 domain. Moreover, these motifs exhibited properties typical of SH3 binding peptides, e.g., high intrinsic disorder and evolutionary conservation, despite the fact that none of these properties were used as prior information. MotifHound is available (http://michnick.bcm.umontreal.ca or http
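
    The hypergeometric overrepresentation score described above has a direct one-line implementation; the counts below are hypothetical.

```python
from scipy.stats import hypergeom

# Hedged sketch of a hypergeometric overrepresentation test: out of N background
# proteins, K contain the motif; out of n proteins of interest, k contain it.
# The enrichment p-value is the probability of observing at least k by chance.
# All counts below are made up.

def motif_enrichment_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    return hypergeom.sf(k - 1, N, K, n)

# Example: motif present in 40 of 6000 background proteins and in 8 of 50
# proteins of interest.
print(motif_enrichment_pvalue(6000, 40, 50, 8))   # a very small p-value
```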

  14. Accurate Mass Determination of Amino Alcohols by Turboionspray/Time-of-Flight Mass Spectrometry

    Institute of Scientific and Technical Information of China (English)

    GENG,Yu(耿昱); GUO,Yin-Long(郭寅龙); ZHAO,Shi-Min(赵士民); MA,Sheng-Ming(麻生明)

    2002-01-01

    Amino alcohols were studied by turboionspray/time-of-flight mass spectrometry (TIS/TOF-MS) with the aim of determining the accurate mass of their protonated molecular ions. Polyethylene glycol (PEG) was used as the internal reference. Compared with the theoretical values, all relative errors were less than 5×10⁻⁶. The effects of nozzle potential, nozzle temperature, acquisition rate, etc. on accurate mass determination were also studied.
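
    The quoted relative errors (below 5×10⁻⁶, i.e. 5 ppm) are simply the difference between the measured and theoretical m/z divided by the theoretical value; a minimal sketch with hypothetical numbers:

        # Minimal sketch of a relative (ppm) mass-error calculation;
        # the m/z values are hypothetical, not taken from the paper.
        def relative_error_ppm(measured_mz, theoretical_mz):
            return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

        print(relative_error_ppm(152.1074, 152.1070))  # about 2.6 ppm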

  15. Defining Allowable Physical Property Variations for High Accurate Measurements on Polymer Parts

    DEFF Research Database (Denmark)

    Mohammadi, Ali; Sonne, Mads Rostgaard; Madruga, Daniel González

    2015-01-01

    ... cooling down after injection molding. In order to obtain accurate simulations, accurate inputs to the model are crucial. In reality, however, the material and physical properties will have some variations. Although these variations may be small, they can act as a source of uncertainty for the measurement. ... In this paper, we investigated how large the variations in material and physical properties are allowed to be in order to reach the 5 μm target for the measurement uncertainty. ...

  16. Equation of state for hard sphere fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Gui, Yuanxing; Mulero, A

    2016-01-01

    The asymptotic expansion method is extended by using currently available accurate values for the first ten virial coefficients for hard sphere fluids. It is then used to yield an equation of state for hard sphere fluids, which accurately represents the currently accepted values for the first sixteen virial coefficients and compressibility factor data in both the stable and the metastable regions of the phase diagram.
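
    For context, a truncated virial series in the packing fraction can be compared with the widely used Carnahan-Starling equation; the sketch below is a generic illustration, not the authors' asymptotic-expansion equation of state, and the reduced virial coefficients are approximate literature values.

        # Generic illustration (not the paper's EOS): hard-sphere compressibility
        # factor Z(eta) from a truncated virial series in the packing fraction eta,
        # compared with the Carnahan-Starling equation. Coefficients are approximate.
        B = [1.0, 4.0, 10.0, 18.36, 28.22, 39.82, 53.34]  # Z = sum_n B[n] * eta**n

        def z_virial(eta):
            return sum(b * eta**n for n, b in enumerate(B))

        def z_carnahan_starling(eta):
            return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

        for eta in (0.1, 0.2, 0.3, 0.4):
            print(eta, round(z_virial(eta), 3), round(z_carnahan_starling(eta), 3))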

  17. Effective Temperatures of Selected Main-sequence Stars with Most Accurate Parameters

    CERN Document Server

    Soydugan, F; Soydugan, E; Bilir, S; Gökçe, E Yaz; Steer, I; Tüysüz, M; Şenyüz, T; Demircan, O

    2014-01-01

    In this study, the distributions of the double-lined detached binaries (DBs) on the mass-luminosity, mass-radius and mass-effective temperature planes have been studied. We improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). With the accurate observational data available to us, a method for improving the effective temperatures of eclipsing binaries with accurate masses and radii was suggested.

  18. Effective Temperatures of Selected Main-Sequence Stars with the Most Accurate Parameters

    Science.gov (United States)

    Soydugan, F.; Eker, Z.; Soydugan, E.; Bilir, S.; Gökçe, E. Y.; Steer, I.; Tüysüz, M.; Şenyüz, T.; Demircan, O.

    2015-07-01

    In this study we investigate the distributions of the properties of detached double-lined binaries (DBs) in the mass-luminosity, mass-radius, and mass-effective temperature diagrams. We have improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). Based on the accurate observational data available to us we propose a method for improving the effective temperatures of eclipsing binaries with accurate mass and radius determinations.

  19. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    CERN Document Server

    Schmidt, Tobias; Makmal, Adi; Kronik, Leeor; Kümmel, Stephan

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potent...

  20. Multi-parametric (ADC/PWI/T2-w) image fusion approach for accurate semi-automatic segmentation of tumorous regions in glioblastoma multiforme.

    Science.gov (United States)

    Fathi Kazerooni, Anahita; Mohseni, Meysam; Rezaei, Sahar; Bakhshandehpour, Gholamreza; Saligheh Rad, Hamidreza

    2015-02-01

    Glioblastoma multiforme (GBM) brain tumor is heterogeneous in nature, so its quantification depends on how accurately the different parts of the tumor, i.e. viable tumor, edema and necrosis, are segmented. This procedure becomes more effective when metabolic and functional information, provided by physiological magnetic resonance (MR) imaging modalities like diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI), is incorporated with anatomical magnetic resonance imaging (MRI). In this preliminary tumor quantification work, the idea is to characterize the different regions of GBM tumors in an MRI-based semi-automatic multi-parametric approach to achieve more accurate characterization of pathogenic regions. For this purpose, three MR sequences, namely T2-weighted imaging (anatomical MR imaging), PWI and DWI, of thirteen GBM patients were acquired. To enhance the delineation of the boundaries of each pathogenic region (peri-tumoral edema, viable tumor and necrosis), the spatial fuzzy C-means algorithm is combined with the region growing method. The results show that exploiting the multi-parametric approach along with the proposed semi-automatic segmentation method can differentiate various tumorous regions with over 80% sensitivity, specificity and Dice score. The proposed MRI-based multi-parametric segmentation approach has the potential to accurately segment tumorous regions, leading to an efficient design of pre-surgical treatment planning.
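
    The reported agreement measures (sensitivity, specificity and Dice score) can be computed directly from binary segmentation masks; the sketch below is a generic illustration with made-up arrays, not the authors' pipeline.

        # Generic sketch: sensitivity, specificity and Dice score between a
        # predicted binary mask and a ground-truth mask (made-up arrays).
        import numpy as np

        def overlap_metrics(pred, truth):
            pred, truth = pred.astype(bool), truth.astype(bool)
            tp = np.sum(pred & truth)
            tn = np.sum(~pred & ~truth)
            fp = np.sum(pred & ~truth)
            fn = np.sum(~pred & truth)
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            dice = 2 * tp / (2 * tp + fp + fn)
            return sensitivity, specificity, dice

        pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
        truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
        print(overlap_metrics(pred, truth))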

  1. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    Science.gov (United States)

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible.

  2. Temporal variation of traffic on highways and the development of accurate temporal allocation factors for air pollution analyses

    Science.gov (United States)

    Batterman, Stuart; Cook, Richard; Justin, Thomas

    2015-04-01

    Traffic activity encompasses the number, mix, speed and acceleration of vehicles on roadways. The temporal pattern and variation of traffic activity reflects vehicle use, congestion and safety issues, and it represents a major influence on emissions and concentrations of traffic-related air pollutants. Accurate characterization of vehicle flows is critical in analyzing and modeling urban and local-scale pollutants, especially in near-road environments and traffic corridors. This study describes methods to improve the characterization of temporal variation of traffic activity. Annual, monthly, daily and hourly temporal allocation factors (TAFs), which describe the expected temporal variation in traffic activity, were developed using four years of hourly traffic activity data recorded at 14 continuous counting stations across the Detroit, Michigan, U.S. region. Five sites also provided vehicle classification. TAF-based models provide a simple means to apportion annual average estimates of traffic volume to hourly estimates. The analysis shows the need to separate TAFs for total and commercial vehicles, and weekdays, Saturdays, Sundays and observed holidays. Using either site-specific or urban-wide TAFs, nearly all of the variation in historical traffic activity at the street scale could be explained; unexplained variation was attributed to adverse weather, traffic accidents and construction. The methods and results presented in this paper can improve air quality dispersion modeling of mobile sources, and can be used to evaluate and model temporal variation in ambient air quality monitoring data and exposure estimates.
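
    Apportioning an annual-average estimate of traffic volume to an hourly estimate with temporal allocation factors reduces to a product of factors; the sketch below uses made-up factor values, not the TAFs derived in the study.

        # Sketch of TAF-based apportionment: hourly volume = annual average daily
        # traffic (AADT) scaled by monthly, day-of-week and hour-of-day factors.
        # All values below are made up for illustration.
        def hourly_volume(aadt, month_factor, day_factor, hour_factor):
            return aadt * month_factor * day_factor * hour_factor

        aadt = 45000.0        # vehicles per day (hypothetical)
        month_factor = 1.05   # month slightly above the annual mean
        day_factor = 0.92     # weekend day below a typical weekday
        hour_factor = 0.075   # share of daily traffic in one morning hour
        print(hourly_volume(aadt, month_factor, day_factor, hour_factor))  # vehicles/hour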

  3. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    Science.gov (United States)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.
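
    As a generic illustration of second-order time-accurate marching with a predictor/corrector structure (not the FDNS algorithm itself), a Heun step for du/dt = f(u) looks as follows:

        # Generic second-order predictor-corrector (Heun) time step for du/dt = f(u);
        # an illustration of time-accurate marching, not the FDNS scheme itself.
        def heun_step(f, u, dt):
            u_pred = u + dt * f(u)                    # predictor: explicit Euler
            return u + 0.5 * dt * (f(u) + f(u_pred))  # corrector: trapezoidal average

        f = lambda u: -u          # test problem du/dt = -u, exact solution exp(-t)
        u, dt = 1.0, 0.01
        for _ in range(100):
            u = heun_step(f, u, dt)
        print(u)                  # close to exp(-1) ~ 0.3679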

  4. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated-panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
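
    Once pixels have been classified as sky or canopy, gap fraction is simply the sky-pixel share of the image; a minimal sketch assuming a hypothetical binary classification:

        # Minimal sketch: gap fraction as the share of sky pixels in a binary
        # sky/canopy classification (1 = sky, 0 = canopy); the array is hypothetical.
        import numpy as np

        def gap_fraction(sky_mask):
            return np.count_nonzero(sky_mask) / sky_mask.size

        sky_mask = np.array([[1, 0, 0, 1],
                             [0, 0, 0, 1],
                             [1, 0, 0, 0]])
        print(gap_fraction(sky_mask))  # 4/12 = 0.333...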

  5. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    Energy Technology Data Exchange (ETDEWEB)

    Helm, Irja; Jalukse, Lauri [University of Tartu, Institute of Chemistry, 14a Ravila str., 50411 Tartu (Estonia); Leito, Ivo, E-mail: ivo.leito@ut.ee [University of Tartu, Institute of Chemistry, 14a Ravila str., 50411 Tartu (Estonia)

    2012-09-05

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012-0.018 mg dm⁻³, corresponding to the k = 2 expanded uncertainty in the range of 0.023-0.035 mg dm⁻³ (0.27-0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
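
    The quoted k = 2 expanded uncertainty follows from combining the individual standard-uncertainty components in quadrature and multiplying by the coverage factor; the sketch below uses hypothetical component values, not the paper's uncertainty budget.

        # Sketch: combine standard-uncertainty components in quadrature and apply the
        # coverage factor k = 2; the component values are hypothetical.
        import math

        def expanded_uncertainty(components, k=2.0):
            u_combined = math.sqrt(sum(u ** 2 for u in components))
            return u_combined, k * u_combined

        components_mg_per_dm3 = [0.008, 0.006, 0.005, 0.004]  # hypothetical budget
        u_c, U = expanded_uncertainty(components_mg_per_dm3)
        print(round(u_c, 4), round(U, 4))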

  6. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates.

    Science.gov (United States)

    Yogurtcu, Osman N; Johnson, Margaret E

    2015-08-28

    The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two-dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate-equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate-behavior predicted from Smoluchowski theory. Using a recently developed single particle reaction-diffusion algorithm we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute

  7. Accurate, robust and reliable calculations of Poisson-Boltzmann solvation energies

    CERN Document Server

    Wang, Bao

    2016-01-01

    Developing accurate solvers for the Poisson-Boltzmann (PB) model is the first step in making the PB model suitable for implicit solvent simulation. Reducing the influence of the grid size on solver performance both increases the speed of the solver and provides accurate electrostatics analysis for solvated molecules. In this work, we explore an accurate coarse-grid PB solver based on the Green's function treatment of the singular charges, the matched interface and boundary (MIB) method for treating the geometric singularities, and a posterior extension of the electrostatic potential field for calculating the reaction field energy. We have made our previous PB software, MIBPB, robust; it now provides almost grid-size-independent reaction field energy calculations. A large number of numerical tests verify the grid-size independence of the MIBPB software. The advantage of the MIBPB software is that the acceleration of the PB solver comes directly from the numerical algorithm rather than from the utilization of advanced computer architectures...

  8. Defining Allowable Physical Property Variations for High Accurate Measurements on Polymer Parts

    DEFF Research Database (Denmark)

    Mohammadi, Ali; Sonne, Mads Rostgaard; Madruga, Daniel González;

    2015-01-01

    ... cooling down after injection molding. In order to obtain accurate simulations, accurate inputs to the model are crucial. In reality, however, the material and physical properties will have some variations. Although these variations may be small, they can act as a source of uncertainty for the measurement. ... highly accurate measurements in a non-controlled ambient. Most polymer parts are manufactured by injection moulding and their inspection is carried out after stabilization, around 200 hours. The overall goal of this work is to reach ±5 μm measurement uncertainty on polymer products, which ... is a challenge in today's production and metrology environments. The residual deformations in polymer products at room temperature after injection molding are important when micrometer accuracy needs to be achieved. Numerical modelling can give valuable insight into what is happening in the polymer during ...

  9. Second-order accurate finite volume method for well-driven flows

    CERN Document Server

    Dotlić, Milan; Pokorni, Boris; Pušić, Milenko; Dimkić, Milan

    2013-01-01

    We consider a finite volume method for well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at a computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman correction. Coupling this correction with a second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still not even first-order accurate on coarse grids. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as the sum of a logarithmic and a linear function. This scheme is second-order accurate.
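
    For context, the Peaceman correction referred to above relates the well-block pressure to the well flux through a logarithmic well index; the sketch below uses the standard textbook form for an isotropic square grid block with hypothetical parameter values, and is not the corrected finite-volume scheme proposed in the paper.

        # Standard Peaceman-type well index for an isotropic square grid block
        # (textbook form with hypothetical values; not the paper's corrected scheme).
        import math

        def peaceman_well_flux(k, h, mu, dx, r_well, p_block, p_well):
            r_eq = 0.2 * dx   # Peaceman equivalent radius (approximately 0.198 * dx)
            well_index = 2.0 * math.pi * k * h / (mu * math.log(r_eq / r_well))
            return well_index * (p_block - p_well)

        # Hypothetical permeability, thickness, viscosity, cell size, radius, pressures
        print(peaceman_well_flux(k=1e-12, h=10.0, mu=1e-3, dx=50.0,
                                 r_well=0.1, p_block=2.0e7, p_well=1.5e7))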

  10. Asymptotic expansion based equation of state for hard-disk fluids offering accurate virial coefficients

    CERN Document Server

    Tian, Jianxiang; Mulero, A

    2016-01-01

    Despite the fact that more than 30 analytical expressions for the equation of state of hard-disk fluids have been proposed in the literature, none of them is capable of reproducing the currently accepted numerical or estimated values for the first eighteen virial coefficients. Using the asymptotic expansion method, extended to the first ten virial coefficients for hard-disk fluids, fifty-seven new expressions for the equation of state have been studied. Of these, a new equation of state is selected which accurately reproduces all of the first eighteen virial coefficients. Comparisons of the compressibility factor with computer simulations show that this new equation is as accurate as other similar expressions with the same number of parameters. Finally, the location of the poles of the 57 new equations shows that there are some particular configurations which could give both the accurate virial coefficients and the correct closest packing fraction in the future when higher virial coefficients than the t...

  11. Fast and accurate calculation of dilute quantum gas using Uehling-Uhlenbeck model equation

    Science.gov (United States)

    Yano, Ryosuke

    2017-02-01

    The Uehling-Uhlenbeck (U-U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U-U model equation. DSMC analysis based on the U-U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U-U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green-Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  12. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    The accurate 3D documentation of architectures and heritages is getting very common and required in different application contexts. The potentialities of the image-based approach are nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  13. Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Sidan Du

    2013-08-01

    Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.

  14. An Accurate Arabic Root-Based Lemmatizer for Information Retrieval Purposes

    CERN Document Server

    El-Shishtawy, Tarek

    2012-01-01

    In spite of its robust syntax, semantic cohesion, and lower ambiguity, lemma-level analysis and generation has not yet been a focus in the Arabic NLP literature. In the current research, we propose the first non-statistical accurate Arabic lemmatizer algorithm that is suitable for information retrieval (IR) systems. The proposed lemmatizer makes use of different Arabic language knowledge resources to generate the accurate lemma form and its relevant features that support IR purposes. As a POS tagger, the experimental results show that the proposed algorithm achieves a maximum accuracy of 94.8%. For first-seen documents, an accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date accurate Stanford Arabic model, on the same dataset.

  15. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated, initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced-order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced-order models need to be utilized.

  16. Ab Initio Treatment of Bond-Breaking Reactions: Accurate Course of HO3 Dissociation and Revisit to Isomerization.

    Science.gov (United States)

    Varandas, A J C

    2012-02-14

    An efficient scheme is devised for accurate studies of bond-breaking/forming reactions and illustrated for HO3. It is suggested and numerically demonstrated that an accurate dissociation path for the title system can be obtained by defining the central OO bond as the reaction coordinate. The approach consists of optimizing the dissociation path at the full-valence-complete-active space level of theory followed by single-point multireference configuration interaction calculations along it. Using large diffusely augmented basis sets of the correlation consistent type, accurate dissociation curves are then obtained for both the cis- and trans-HO3 isomers by extrapolating the calculated raw energies to the complete basis set limit. The profiles show a weak van der Waals type minimum and a small barrier, both lying below the dissociation asymptote. It is shown that this barrier arises at the break-off of the central OO chemical bond and onset of the OH···O2 hydrogen bond. The calculated dissociation energies (De) are 4.5 ± 0.1 and 4.7 ± 0.1 kcal mol⁻¹ for the cis- and trans-HO3 isomers, respectively, with a very conservative estimate of the dissociation energy (D0) for trans-HO3 being 2.7 ± 0.2 kcal mol⁻¹ and a more focused one being 2.8 ± 0.1 kcal mol⁻¹. This result improves upon our previous estimate of this quantity while overlapping in the lower range of 2.9 ± 0.1 kcal mol⁻¹, the commonly accepted value from the low-temperature CRESU experiments. Since the cis-HO3 isomer is predicted to be 0.15 kcal mol⁻¹ less stable than trans-HO3, this may partly explain the failure to obtain a clear characterization of the former. The isomerization (torsional) potential is also revisited and a comparison presented with a curve inferred from spectroscopic measurements. Good agreement is observed, with the accuracy of the new calculated data commending its use for the reanalysis of the available vibrational-rotational spectroscopic data.

  17. Effects of antecedent variables on disruptive behavior and accurate responding in young children in outpatient settings.

    Science.gov (United States)

    Boelter, Eric W; Wacker, David P; Call, Nathan A; Ringdahl, Joel E; Kopelman, Todd; Gardner, Andrew W

    2007-01-01

    The effects of manipulations of task variables on inaccurate responding and disruption were investigated with 3 children who engaged in noncompliance. With 2 children in an outpatient clinic, task directives were first manipulated to identify directives that guided accurate responding; then, additional dimensions of the task were manipulated to evaluate their influence on disruptive behavior. With a 3rd child, similar procedures were employed at school. Results showed one-step directives set the occasion for accurate responding and that other dimensions of the task (e.g., preference) functioned as motivating operations for negative reinforcement.

  18. Building Accurate 3D Spatial Networks to Enable Next Generation Intelligent Transportation Systems

    DEFF Research Database (Denmark)

    Kaul, Manohar; Yang, Bin; Jensen, Christian S.

    2013-01-01

    The use of accurate 3D spatial network models can enable substantial improvements in vehicle routing. Notably, such models enable eco-routing, which reduces the environmental impact of transportation. We propose a novel filtering and lifting framework that augments a standard 2D spatial network model with elevation information extracted from massive aerial laser scan data and thus yields an accurate 3D model. We present a filtering technique that is capable of pruning irrelevant laser scan points in a single pass, but assumes that the 2D network fits in internal memory and that the points...

  19. The SPECIES and ORGANISMS Resources for Fast and Accurate Identification of Taxonomic Names in Text

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Pletscher-Frankild, Sune; Fanini, Lucia

    2013-01-01

    The exponential growth of the biomedical literature is making the need for efficient, accurate text-mining tools increasingly clear. The identification of named biological entities in text is a central and difficult task. We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition, which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of magnitude faster and as accurate as existing tools. The precision and recall were assessed both on an existing gold-standard corpus and on a new corpus...
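
    A dictionary-based tagger of the general kind described (far simpler than the SPECIES implementation) can be sketched as a longest-match scan over a name dictionary; the names and text below are illustrative only.

        # Toy dictionary-based entity tagger (much simpler than SPECIES itself):
        # case-insensitive longest-match scan of the text against a name dictionary.
        def tag_entities(text, dictionary):
            names = sorted(dictionary, key=len, reverse=True)  # prefer longest match
            lowered = text.lower()
            matches, i = [], 0
            while i < len(lowered):
                for name in names:
                    if lowered.startswith(name.lower(), i):
                        matches.append((i, i + len(name), text[i:i + len(name)]))
                        i += len(name)
                        break
                else:
                    i += 1
            return matches

        taxa = ["Homo sapiens", "Escherichia coli", "E. coli"]
        print(tag_entities("Samples of Escherichia coli and Homo sapiens were compared.", taxa))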

  20. An Accurate Approach to Large-Scale IP Traffic Matrix Estimation

    Science.gov (United States)

    Jiang, Dingde; Hu, Guangmin

    This letter proposes a novel method of large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on the partial flow measurement and Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use the real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TM and track its dynamics.
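
    The algebraic reconstruction technique that ARTI builds on is, in its basic form, a Kaczmarz-style row-by-row projection for the under-determined routing system y = A x; the sketch below shows that generic building block with toy data, not the full ARTI method.

        # Generic algebraic reconstruction technique (Kaczmarz) sweep for y = A x;
        # the building block behind ART-style estimation, shown with toy data only.
        import numpy as np

        def art_solve(A, y, n_sweeps=200, relaxation=1.0):
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    a_i = A[i]
                    norm_sq = a_i @ a_i
                    if norm_sq > 0:
                        x += relaxation * (y[i] - a_i @ x) / norm_sq * a_i
            return x

        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])     # toy link-path routing matrix
        x_true = np.array([3.0, 2.0, 5.0])  # unknown origin-destination flows (toy)
        y = A @ x_true                      # observed link loads
        print(art_solve(A, y))              # one estimate consistent with the link loads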