WorldWideScience

Sample records for accurately characterizing openmp

  1. How Good is OpenMP

    Directory of Open Access Journals (Sweden)

    Timothy G. Mattson

    2003-01-01

    Full Text Available The OpenMP standard defines an Application Programming Interface (API) for shared memory computers. Since its introduction in 1997, it has grown to become one of the most commonly used APIs for parallel programming. But success in the market doesn't necessarily imply successful computer science. Is OpenMP a "good" programming environment? What does it even mean to call a programming environment good? And finally, once we understand how good or bad OpenMP is, what can we do to make it even better? In this paper, we address these questions.

  2. OpenMP 4.5 Validation and Verification Suite

    Energy Technology Data Exchange (ETDEWEB)

    2017-12-15

    OpenMP, a directive-based programming API, introduces directives for accelerator devices that programmers are starting to use more frequently in production codes. To ensure that OpenMP directives work correctly across architectures, it is critical to have a mechanism that tests an implementation's conformance to the OpenMP standard. This testing process can uncover ambiguities in the OpenMP specification, which helps compiler developers and users make better use of the standard. We fill this gap with our validation and verification test suite, which focuses on the offload directives available in OpenMP 4.5.
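
    A minimal sketch in C of what a conformance-style test for the 4.5 offload directives might look like (hypothetical test, not taken from the actual suite): initialize data on the host, offload a loop with an explicit map clause, and verify the mapped-back results.

        /* Hypothetical V&V-style check of "target map(tofrom:)".
           Compile with an offload-capable compiler, e.g. clang -fopenmp. */
        #include <stdio.h>

        #define N 1024

        int main(void) {
            int a[N], errors = 0;

            for (int i = 0; i < N; ++i)
                a[i] = i;

            /* Offload to the default device; map the array in and out. */
            #pragma omp target map(tofrom: a[0:N])
            for (int i = 0; i < N; ++i)
                a[i] += 1;

            /* Verify on the host that the mapped-back data is correct. */
            for (int i = 0; i < N; ++i)
                if (a[i] != i + 1)
                    ++errors;

            printf("%s\n", errors ? "FAIL" : "PASS");
            return errors;
        }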

  3. Extending OpenMP for NUMA Machines

    Directory of Open Access Journals (Sweden)

    John Bircsak

    2000-01-01

    Full Text Available This paper describes extensions to OpenMP that implement data placement features needed for NUMA architectures. OpenMP is a collection of compiler directives and library routines used to write portable parallel programs for shared-memory architectures. Writing efficient parallel programs for NUMA architectures, which have characteristics of both shared-memory and distributed-memory architectures, requires that a programmer control the placement of data in memory and the placement of computations that operate on that data. Optimal performance is obtained when computations occur on processors that have fast access to the data needed by those computations. OpenMP -- designed for shared-memory architectures -- does not by itself address these issues. The extensions to OpenMP Fortran presented here have been mainly taken from High Performance Fortran. The paper describes some of the techniques that the Compaq Fortran compiler uses to generate efficient code based on these extensions. It also describes some additional compiler optimizations, and concludes with some preliminary results.

  4. OpenMP for Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, J C; Stotzer, E J; Hart, A; de Supinski, B R

    2011-03-15

    OpenMP [13] is the dominant programming model for shared-memory parallelism in C, C++ and Fortran due to its easy-to-use directive-based style, portability and broad support by compiler vendors. Similar characteristics are needed for a programming model for devices such as GPUs and DSPs that are gaining popularity to accelerate compute-intensive application regions. This paper presents extensions to OpenMP that provide that programming model. Our results demonstrate that a high-level programming model can provide accelerated performance comparable to hand-coded implementations in CUDA.

  5. A Case for Including Transactions in OpenMP

    Energy Technology Data Exchange (ETDEWEB)

    Wong, M; Bihari, B L; de Supinski, B R; Wu, P; Michael, M; Liu, Y; Chen, W

    2010-01-25

    Transactional Memory (TM) has received significant attention recently as a mechanism to reduce the complexity of shared memory programming. We explore the potential of TM to improve OpenMP applications. We combine a software TM (STM) system to support transactions with an OpenMP implementation to start thread teams and provide task and loop-level parallelization. We apply this system to two application scenarios that reflect realistic TM use cases. Our results with this system demonstrate that even with the relatively high overheads of STM, transactions can outperform OpenMP critical sections by 10%. Overall, our study demonstrates that extending OpenMP to include transactions would ease programming effort while allowing improved performance.
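
    A short C sketch of the contrast the paper studies. The critical section is standard OpenMP; the "omp transaction" pragma shown in the comment is purely hypothetical illustration of the proposed extension and is not part of any OpenMP standard.

        /* Contrast: mutual exclusion today vs. a speculative
           transactional region of the kind the paper argues for. */
        void add_sample(double *table, int idx, double v) {
            /* Current OpenMP: one lock serializes all updates, even when
               threads touch different table entries. */
            #pragma omp critical
            table[idx] += v;

            /* With TM support (hypothetical syntax), non-conflicting
               updates could commit concurrently and only true conflicts
               would roll back:

               #pragma omp transaction
               table[idx] += v;
            */
        }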

  6. A Multiprogramming Aware OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Vasileios K. Barekas

    2003-01-01

    Full Text Available In this work, we present an OpenMP implementation suitable for multiprogrammed environments on Intel-based SMP systems. This implementation consists of a runtime system and a resource manager, while we use the NanosCompiler to transform OpenMP-coded applications into code with calls to our runtime system. The resource manager acts as the operating system scheduler for the applications built with our runtime system. It executes a custom-made scheduling policy to distribute the available physical processors to the active applications. The runtime system cooperates with the resource manager in order to adapt each application's generated parallelism to the number of processors allocated to it, according to the resource manager's scheduling policy. We use the OpenMP version of the NAS Parallel Benchmark suite in order to evaluate the performance of our implementation. In our experiments we compare the performance of our implementation with that of a commercial OpenMP implementation. The comparison shows that our approach performs better in both dedicated and heavily multiprogrammed environments.

  7. Overlapping Communication and Computation with OpenMP and MPI

    Directory of Open Access Journals (Sweden)

    Timothy H. Kaiser

    2001-01-01

    Full Text Available Machines comprising a distributed collection of shared memory or SMP nodes are becoming common for parallel computing. OpenMP can be combined with MPI on many such machines. Motivations for combining OpenMP and MPI are discussed. While OpenMP is typically used for exploiting loop-level parallelism, it can also be used to enable coarse-grain parallelism, potentially leading to less overhead. We show how coarse-grain OpenMP parallelism can also be used to facilitate overlapping MPI communication and computation for stencil-based grid programs, such as a program performing Gauss-Seidel iteration with red-black ordering. Spatial subdivision or domain decomposition is used to assign a portion of the grid to each thread. One thread is assigned a null calculation region so that it is free to perform communication. Example calculations were run on an IBM SP using both the Kuck & Associates and IBM compilers.
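
    A minimal C sketch (not the paper's code) of the coarse-grain pattern described above: within one parallel region, the designated thread performs the MPI halo exchange while the other threads update interior points. halo_exchange and update_interior are hypothetical helpers standing in for the real stencil code.

        #include <mpi.h>
        #include <omp.h>

        void halo_exchange(double *grid, int nx, int ny);    /* assumed: MPI_Isend/Irecv + Waitall */
        void update_interior(double *grid, int nx, int ny,
                             int worker, int nworkers);      /* assumed: red-black sweep of a sub-block */

        void step(double *grid, int nx, int ny) {
            #pragma omp parallel
            {
                if (omp_get_thread_num() == 0) {
                    /* The "null calculation" thread: communicate while others compute. */
                    halo_exchange(grid, nx, ny);
                } else {
                    /* Workers update interior points that do not need the halo. */
                    update_interior(grid, nx, ny,
                                    omp_get_thread_num() - 1,
                                    omp_get_num_threads() - 1);
                }
                /* Everyone meets here before boundary-adjacent points are
                   updated with the freshly received halo values. */
                #pragma omp barrier
            }
        }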

  8. A Transparent Runtime Data Distribution Engine for OpenMP

    Directory of Open Access Journals (Sweden)

    Dimitrios S. Nikolopoulos

    2000-01-01

    Full Text Available This paper makes two important contributions. First, the paper investigates the performance implications of data placement in OpenMP programs running on modern NUMA multiprocessors. Data locality and minimization of the rate of remote memory accesses are critical for sustaining high performance on these systems. We show that, due to the low remote-to-local memory access latency ratio of contemporary NUMA architectures, reasonably balanced page placement schemes, such as round-robin or random distribution, incur modest performance losses. Second, the paper presents a transparent, user-level page migration engine with the ability to gain back any performance loss that stems from suboptimal placement of pages in iterative OpenMP programs. The main body of the paper describes how our OpenMP runtime environment uses page migration for implementing implicit data distribution and redistribution schemes without programmer intervention. Our experimental results verify the effectiveness of the proposed framework and provide a proof of concept that it is not necessary to introduce data distribution directives in OpenMP and jeopardize the simplicity or the portability of the programming model.

  9. Early Experiences Writing Performance Portable OpenMP 4 Codes

    Energy Technology Data Exchange (ETDEWEB)

    Joubert, Wayne [ORNL; Hernandez, Oscar R [ORNL

    2016-01-01

    In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared memory approach and the newer accelerator targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand if the target map directives in OpenMP 4 improve data locality when mapped to a shared memory system, as opposed to the first-touch policy approach of traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned as presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.
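
    An illustrative daxpy kernel written in the two styles compared above (a sketch, not the authors' actual kernel), showing why mapping target directives onto a shared-memory host raises the locality question.

        #define N 1000000
        double x[N], y[N];

        /* Traditional shared-memory OpenMP: data placement follows the
           OS first-touch policy. */
        void daxpy_host(double a) {
            #pragma omp parallel for
            for (int i = 0; i < N; ++i)
                y[i] += a * x[i];
        }

        /* OpenMP 4 accelerator style: explicit target and map clauses;
           when "mapped" to a shared-memory system the copies may become
           no-ops, which is the data-locality question raised above. */
        void daxpy_target(double a) {
            #pragma omp target teams distribute parallel for \
                    map(to: x[0:N]) map(tofrom: y[0:N])
            for (int i = 0; i < N; ++i)
                y[i] += a * x[i];
        }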

  10. Performance monitoring and analysis of task-based OpenMP.

    Directory of Open Access Journals (Sweden)

    Yi Ding

    Full Text Available OpenMP, a typical shared memory programming paradigm, has been extensively applied in the high performance computing community due to the popularity of multicore architectures in recent years. The most significant feature of the OpenMP 3.0 specification is the introduction of the task constructs to express parallelism at a much finer level of detail. This feature, however, has posed new challenges for performance monitoring and analysis. In particular, task creation is separated from its execution, causing the traditional monitoring methods to be ineffective. This paper presents a mechanism to monitor task-based OpenMP programs with interposition and proposes two demonstration graphs for performance analysis as well. The results of two experiments are discussed to evaluate the overhead of the monitoring mechanism and to verify the effects of the demonstration graphs using the BOTS benchmarks.
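
    A C sketch of the OpenMP 3.0 task construct whose monitoring the paper addresses, in the spirit of the BOTS fib benchmark: the "omp task" pragma creates a task at one point, and execution happens later on whichever thread picks the task up, which is exactly what decouples creation from execution.

        #include <stdio.h>

        long fib(int n) {
            long a, b;
            if (n < 2) return n;        /* real codes add a serial cutoff */
            #pragma omp task shared(a)  /* created here, executed later */
            a = fib(n - 1);
            #pragma omp task shared(b)
            b = fib(n - 2);
            #pragma omp taskwait        /* wait for both child tasks */
            return a + b;
        }

        int main(void) {
            long r;
            #pragma omp parallel
            #pragma omp single          /* one thread seeds the task tree */
            r = fib(30);
            printf("fib(30) = %ld\n", r);
            return 0;
        }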

  11. Execution Model of Three Parallel Languages: OpenMP, UPC and CAF

    Directory of Open Access Journals (Sweden)

    Ami Marowka

    2005-01-01

    Full Text Available The aim of this paper is to present a qualitative evaluation of three state-of-the-art parallel languages: OpenMP, Unified Parallel C (UPC) and Co-Array Fortran (CAF). OpenMP and UPC are explicit parallel programming languages based on ANSI standards. CAF is an implicit programming language. On the one hand, OpenMP is designed for shared-memory architectures and extends the base language by using compiler directives that annotate the original source code. On the other hand, UPC and CAF are designed for distributed shared-memory architectures and extend the base language with new parallel constructs. We deconstruct each language into its basic components, show examples, make a detailed analysis, compare them, and finally draw some conclusions.

  12. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Alok [Stony Brook Univ., Stony Brook, NY (United States); Li, Lingda [Brookhaven National Lab. (BNL), Upton, NY (United States); Kong, Martin [Brookhaven National Lab. (BNL), Upton, NY (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Chapman, Barbara [Stony Brook Univ., Stony Brook, NY (United States); Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-01-01

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for a unified memory space. In such systems, the CPU and GPU can access each other's memory transparently; that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open-source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
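
    A C sketch of offloading under unified memory. The paper modified LLVM to obtain this behavior; OpenMP 5.0 later standardized it via the "requires" directive shown here, so treat this as an approximation of the idea rather than the paper's exact mechanism.

        #pragma omp requires unified_shared_memory

        void scale(double *p, long n, double a) {
            /* No map clauses: under unified memory the GPU pages the host
               data in on demand, and GPU memory may even be oversubscribed. */
            #pragma omp target teams distribute parallel for
            for (long i = 0; i < n; ++i)
                p[i] *= a;
        }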

  13. Improving Security at Work with Software that Uses OpenMP

    Directory of Open Access Journals (Sweden)

    P. S. Polishuk

    2010-03-01

    Full Text Available A model of the offender and a list of the major types of threats whose realization is enabled by software that uses OpenMP are considered. A method for verifying software that uses OpenMP for the presence of vulnerabilities associated with multi-threaded execution is offered. We give the basic algorithms and the system architecture that implement the proposed method. The results of testing the method on various programs, including those containing malicious code, as well as an assessment of the possibilities of applying the method in different computing environments, are given.

  14. Benchmarking MILC code with OpenMP and MPI

    International Nuclear Information System (INIS)

    Gottlieb, Steven; Tamhankar, Sonali

    2001-01-01

    A trend in high performance computers that is becoming increasingly popular is the use of symmetric multi-processing (SMP) rather than the older paradigm of MPP. MPI codes that ran and scaled well on MPP machines can often be run on an SMP machine using the vendor's version of MPI. However, this approach may not make optimal use of the (expensive) SMP hardware. More significantly, there are machines like Blue Horizon, an IBM SP with 8-way SMP nodes at the San Diego Supercomputer Center, that can only support 4 MPI processes per node (with the current switch). On such a machine it is imperative to be able to use OpenMP parallelism on the node and MPI between nodes. We describe the challenges of converting the MILC MPI code to use a second level of OpenMP parallelism, and benchmarks on IBM and Sun computers.
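
    A skeleton in C (not MILC itself) of the two-level structure described above: MPI between nodes, OpenMP threads within a node. MPI_Init_thread requests a threading level compatible with OpenMP regions between MPI calls.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank;

            /* FUNNELED: only the main thread makes MPI calls, the common
               pattern when OpenMP regions sit between MPI calls. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double local = 0.0, global = 0.0;

            /* Second level of parallelism: OpenMP on the node. */
            #pragma omp parallel for reduction(+:local)
            for (int i = 0; i < 1000000; ++i)
                local += 1e-6;

            /* First level: combine across nodes with MPI. */
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("sum = %f\n", global);
            MPI_Finalize();
            return 0;
        }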

  15. Effective Vectorization with OpenMP 4.5

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Joseph N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hernandez, Oscar R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lopez, Matthew Graham [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

    This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how they are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
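
    Two of the OpenMP SIMD directives discussed above, in a short C sketch: "omp simd" asserts a loop is safe to vectorize, and "declare simd" requests a vector variant of a function so it can be called from vectorized loops.

        #pragma omp declare simd
        float scale(float v) { return 2.0f * v; }

        float dot(const float *a, const float *b, int n) {
            float sum = 0.0f;
            /* Vectorize the loop and combine the per-lane partial sums. */
            #pragma omp simd reduction(+:sum)
            for (int i = 0; i < n; ++i)
                sum += a[i] * scale(b[i]);
            return sum;
        }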

  16. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has gradually been adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid codes, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standards organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  17. A Proposal for User-defined Reductions in OpenMP

    Energy Technology Data Exchange (ETDEWEB)

    Duran, A; Ferrer, R; Klemm, M; de Supinski, B R; Ayguade, E

    2010-03-22

    Reductions are commonly used in parallel programs to produce a global result from partial results computed in parallel. Currently, OpenMP only supports reductions for primitive data types and a limited set of base language operators. This is a significant limitation for those applications that employ user-defined data types (e.g., objects). Implementing manual reduction algorithms makes software development more complex and error-prone. Additionally, an OpenMP runtime system cannot optimize a manual reduction algorithm in ways typically applied to reductions on primitive types. In this paper, we propose new mechanisms to allow the use of most pre-existing binary functions on user-defined data types as User-Defined Reduction (UDR) operators. Our measurements show that our UDR prototype implementation provides consistently good performance across a range of thread counts without increasing general runtime overheads.
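
    A C sketch of a user-defined reduction in the style this proposal led to ("declare reduction", adopted in OpenMP 4.0): a reduction over a user-defined struct, which the built-in reduction clause cannot handle.

        #include <stdio.h>

        typedef struct { double re, im; } cplx;

        /* Combiner (omp_out += omp_in) plus the per-thread identity value. */
        #pragma omp declare reduction(cadd : cplx : \
                omp_out.re += omp_in.re, omp_out.im += omp_in.im) \
                initializer(omp_priv = (cplx){0.0, 0.0})

        int main(void) {
            cplx sum = {0.0, 0.0};
            #pragma omp parallel for reduction(cadd : sum)
            for (int i = 0; i < 1000; ++i) {
                sum.re += i;
                sum.im += 2.0 * i;
            }
            printf("(%g, %g)\n", sum.re, sum.im);
            return 0;
        }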

  18. Development of Mixed Mode MPI / OpenMP Applications

    Directory of Open Access Journals (Sweden)

    Lorna Smith

    2001-01-01

    Full Text Available MPI / OpenMP mixed mode codes could potentially offer the most effective parallelisation strategy for an SMP cluster, as well as allowing the different characteristics of both paradigms to be exploited to give the best performance on a single SMP. This paper discusses the implementation, development and performance of mixed mode MPI / OpenMP applications. The results demonstrate that this style of programming will not always be the most effective mechanism on SMP systems and cannot be regarded as the ideal programming model for all codes. In some situations, however, significant benefit may be obtained from a mixed mode implementation. For example, benefit may be obtained if the parallel (MPI) code suffers from: poor scaling with MPI processes due to load imbalance or too fine-grained a problem size, memory limitations due to the use of a replicated data strategy, or a restriction on the number of MPI processes. In addition, if the system has a poorly optimised or limited scaling MPI implementation then a mixed mode code may increase the code performance.

  19. Experiences with OpenMP in tmLQCD

    Energy Technology Data Exchange (ETDEWEB)

    Deuzeman, A. [Bern Univ. (Switzerland). Albert Einstein Center for Fundamental Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Kostrzewa, B. [Humboldt Univ. Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Urbach, C. [Bonn Univ. (Germany). HISKP (Theory); Collaboration: European Twisted Mass Collaboration

    2013-11-15

    An overview is given of the lessons learned from the introduction of multi-threading using OpenMP in tmLQCD. In particular, programming style, performance measurements, cache misses, scaling, thread distribution for hybrid codes, race conditions, the overlapping of communication and computation and the measurement and reduction of certain overheads are discussed. Performance measurements and sampling profiles are given for different implementations of the hopping matrix computational kernel.

  20. Experiences with OpenMP in tmLQCD

    International Nuclear Information System (INIS)

    Deuzeman, A.

    2013-11-01

    An overview is given of the lessons learned from the introduction of multi-threading using OpenMP in tmLQCD. In particular, programming style, performance measurements, cache misses, scaling, thread distribution for hybrid codes, race conditions, the overlapping of communication and computation and the measurement and reduction of certain overheads are discussed. Performance measurements and sampling profiles are given for different implementations of the hopping matrix computational kernel.

  1. Performance Tuning of x86 OpenMP Codes with MAQAO

    Science.gov (United States)

    Barthou, Denis; Charif Rubial, Andres; Jalby, William; Koliai, Souad; Valensi, Cédric

    Failing to find the best optimization sequence for a given application code can lead to compiler-generated code with poor performance or inappropriate code. It is necessary to analyze performance from the generated assembly code to improve on the compilation process. This paper presents a tool for the performance analysis of multithreaded codes (currently supporting OpenMP programs). MAQAO relies on static performance evaluation to identify compiler optimizations and assess the performance of loops. It exploits static binary rewriting for reading and instrumenting object files or executables. Static binary instrumentation allows the insertion of probes at the instruction level. Memory accesses can be captured to help tune the code, but such traces need to be compressed. MAQAO can analyze the results and provide hints for tuning the code. We show on some examples how this can help users improve their OpenMP applications.

  2. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    Science.gov (United States)

    Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.

  3. OpenMP performance for benchmark 2D shallow water equations using LBM

    Science.gov (United States)

    Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry

    2018-03-01

    Shallow water equations, commonly referred to as Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically using several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using the LBM (known as LABSWE) and simulated in parallel using OpenMP. To evaluate the performance of the parallel algorithm with 2 and 4 threads, ten different grid sizes Lx and Ly are elaborated. The results show that, using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, using a grid size of 1000 × 500, the observed computation times with 2 and 4 threads are 93.54 s and 333.243 s, respectively.

  4. OpenMP Issues Arising in the Development of Parallel BLAS and LAPACK Libraries

    Directory of Open Access Journals (Sweden)

    C. Addison

    2003-01-01

    Full Text Available Dense linear algebra libraries need to cope efficiently with a range of input problem sizes and shapes. Inherently this means that parallel implementations have to exploit parallelism wherever it is present. While OpenMP allows relatively fine grain parallelism to be exploited in a shared memory environment it currently lacks features to make it easy to partition computation over multiple array indices or to overlap sequential and parallel computations. The inherent flexible nature of shared memory paradigms such as OpenMP poses other difficulties when it becomes necessary to optimise performance across successive parallel library calls. Notions borrowed from distributed memory paradigms, such as explicit data distributions help address some of these problems, but the focus on data rather than work distribution appears misplaced in an SMP context.

  5. Position Paper: OpenMP scheduling on ARM big.LITTLE architecture

    OpenAIRE

    Butko, Anastasiia; Bessad, Louisa; Novo, David; Bruguier, Florent; Gamatié, Abdoulaye; Sassatelli, Gilles; Torres, Lionel; Robert, Michel

    2016-01-01

    Single-ISA heterogeneous multicore systems are emerging as a promising direction to achieve a more suitable balance between performance and energy consumption. However, proper utilization of these architectures is essential to reach the energy benefits. In this paper, we demonstrate the ineffectiveness of popular OpenMP scheduling policies executing the Rodinia benchmark on the Exynos 5 Octa (5422) SoC, which integrates the ARM big.LITTLE architecture.

  6. Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems

    Directory of Open Access Journals (Sweden)

    Sol Ji Kang

    2015-01-01

    Full Text Available With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared memory systems. MPI is the de facto industry standard for distributed memory systems. The MapReduce framework has become the de facto standard for large-scale data-intensive applications. Qualitative pros and cons of each framework are known, but quantitative performance indexes help give a good picture of which framework to use for a given application. As benchmark problems to compare the frameworks, two problems are chosen: the all-pairs shortest-path problem and a data join problem. This paper presents parallel programs for these problems implemented on the three frameworks, respectively. It shows the experimental results on a cluster of computers. It also discusses which is the right tool for the job by analyzing the characteristics and performance of the paradigms.
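
    A minimal OpenMP version in C of the all-pairs shortest-path benchmark (Floyd-Warshall), illustrative rather than the paper's code: the k loop carries a dependence and stays serial, while the row updates within one k iteration are independent and are parallelized.

        #define V   512
        #define INF 1000000000   /* "no edge"; INF + INF still fits in int */

        static int d[V][V];      /* distance matrix, d[i][i] = 0 */

        void apsp(void) {
            for (int k = 0; k < V; ++k) {
                #pragma omp parallel for
                for (int i = 0; i < V; ++i)
                    for (int j = 0; j < V; ++j)
                        if (d[i][k] + d[k][j] < d[i][j])
                            d[i][j] = d[i][k] + d[k][j];
            }
        }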

  7. Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP

    Science.gov (United States)

    Norton, C.; Balsara, D.

    1999-01-01

    Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.

  8. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    Science.gov (United States)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for the numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of the capabilities of modern computing facilities. There is an opportunity to considerably accelerate the calculations by using parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone, and the cost of using such systems is much lower than the cost of clusters. This also allows programmers to apply multithreaded algorithms on the desktop computers of researchers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing processes, which yields almost linear scalability of the program. In the new version of NAMI DANCE, the use of OpenMP technology and a multithreaded algorithm provides an 80% gain in speed in comparison with the single-threaded version on a dual-processor unit, and a 320% gain was attained on a four-core processor unit. Thus, it was possible to considerably reduce the computation time on scientific workstations (desktops) without a complete redesign of the program and user interfaces. Further modernization of the algorithms for the preparation of initial data and the processing of results using OpenMP looks reasonable. The final version of NAMI DANCE with increased computational speed can be used not only for research purposes but also in real-time Tsunami Warning Systems.

  9. Efficient Programming for Multicore Processor Heterogeneity: OpenMP versus OmpSs

    OpenAIRE

    Butko, Anastasiia; Bruguier, Florent; Gamatié, Abdoulaye; Sassatelli, Gilles

    2017-01-01

    ARM single-ISA heterogeneous multicore processors combine high-performance big cores with power-efficient small cores. They aim at achieving a suitable balance between performance and energy. However, a main challenge is to program such architectures so as to efficiently exploit their features. In this paper, we study the impact on performance and energy trade-offs of single-ISA architecture according to the OpenMP 3.0 and OmpSs programming models. We consider differ...

  10. An OpenMP Parallelisation of Real-time Processing of CERN LHC Beam Position Monitor Data

    CERN Document Server

    Renshall, H

    2012-01-01

    SUSSIX is a FORTRAN program for the post-processing of turn-by-turn Beam Position Monitor (BPM) data, which computes the frequency, amplitude, and phase of tunes and resonant lines to a high degree of precision. For analysis of LHC BPM data, a specific version run through a C steering code has been implemented in the CERN Control Centre to run on a server under the Linux operating system, but it became a real-time computational bottleneck preventing truly online study of the BPM data. Timing studies showed that the independent processing of each BPM's data was a candidate for parallelization, and the Open Multiprocessing (OpenMP) package, with its simple insertion of compiler directives, was tried. It proved to be easy to learn and use, problem-free, and efficient, in this case reaching a factor-of-ten reduction in real time over twelve cores on a dedicated server. This paper reviews the problem, shows the critical code fragments with their OpenMP directives, and presents the results obtained.

  11. Solution of finite element problems using hybrid parallelization with MPI and OpenMP

    Directory of Open Access Journals (Sweden)

    José Miguel Vargas-Félix

    2012-11-01

    Full Text Available The Finite Element Method (FEM) is used to solve problems like solid deformation and heat diffusion in domains with complex geometries. This kind of geometry requires discretization with millions of elements; this is equivalent to solving systems of equations with sparse matrices and tens or hundreds of millions of variables. The aim is to use computer clusters to solve these systems. The solution method used is Schur substructuring. Using it, it is possible to divide a large system of equations into many small ones and solve them more efficiently. This method allows parallelization. MPI (Message Passing Interface) is used to distribute the systems of equations to solve each one on a computer of a cluster. Each system of equations is solved using a solver implemented to use OpenMP as a local parallelization method.

  12. Issues Identified During September 2016 IBM OpenMP 4.5 Hackathon

    Energy Technology Data Exchange (ETDEWEB)

    Richards, David F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-15

    In September 2016, IBM hosted an OpenMP 4.5 Hackathon at the TJ Watson Research Center. Teams from LLNL, ORNL, SNL, LANL, and LBNL attended the event. As with the 2015 hackathon, IBM produced an extremely useful and successful event with unmatched support from the compiler team, applications staff, and facilities. Approximately 24 IBM staff supported the 4-day hackathon and spent significant time in the 4-6 weeks beforehand preparing the environment and becoming familiar with the apps. This hackathon was also the first event to feature the LLVM and XL C/C++ and Fortran compilers. This report records many of the issues encountered by the LLNL teams during the hackathon.

  13. Performance of a Code Migration for the Simulation of Supersonic Ejector Flow to SMP, MIC, and GPU Using OpenMP, OpenMP+LEO, and OpenACC Directives

    Directory of Open Access Journals (Sweden)

    C. Couder-Castañeda

    2015-01-01

    Full Text Available A serial source code for simulating a supersonic ejector flow is accelerated using parallelization based on OpenMP and OpenACC directives. The purpose is to reduce development costs and to simplify the maintenance of the application, given the complexity of the FORTRAN source code. This research follows well-proven strategies in order to obtain the best performance from both OpenMP and OpenACC. OpenMP has become the programming standard for scientific multicore software, and OpenACC is a true alternative for graphics accelerators that avoids the need to program low-level kernels. The strategies using OpenMP are oriented towards reducing the creation of parallel regions, creating tasks to handle boundary conditions, and nested control of the time loop for programming in offload mode, specifically for the Xeon Phi. In OpenACC, the strategy focuses on maintaining the data regions between executions of the kernels. Experiments for performance and validation are conducted here on a 12-core Xeon CPU, a Xeon Phi 5110p, and a Tesla C2070, obtaining the best performance from the latter. The Tesla C2070 presented acceleration factors of 9.86X, 1.6X, and 4.5X compared against the serial version on CPU, the 12-core Xeon CPU, and the Xeon Phi, respectively.
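
    The same loop written in the two directive dialects compared above, as a C sketch (illustrative; the paper's FORTRAN kernels are more involved).

        void saxpy_openmp(int n, float a, const float *x, float *y) {
            /* OpenMP 4 offload style: explicit target and map clauses. */
            #pragma omp target teams distribute parallel for \
                    map(to: x[0:n]) map(tofrom: y[0:n])
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }

        void saxpy_openacc(int n, float a, const float *x, float *y) {
            /* OpenACC: data movement via copyin/copy clauses; keeping data
               regions alive across kernels is the paper's key OpenACC strategy. */
            #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }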

  14. Algorithmic differentiation of pragma-defined parallel regions differentiating computer programs containing OpenMP

    CERN Document Server

    Förster, Michael

    2014-01-01

    Numerical programs often use parallel programming techniques such as OpenMP to compute the program's output values as efficiently as possible. In addition, derivative values of these output values with respect to certain input values play a crucial role. To achieve code that computes not only the output values simultaneously but also the derivative values, this work introduces several source-to-source transformation rules. These rules are based on a technique called algorithmic differentiation. The main focus of this work lies on the important reverse mode of algorithmic differentiation. The inh

  15. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches for the minimum by using the gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming in the case of complex functions, with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...

  16. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    Science.gov (United States)

    Doerner, Edgardo

    2016-05-01

    In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
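
    A C sketch of the pattern (not EGSnrc code): Monte Carlo histories are independent, so threads tally in parallel and combine with a reduction. A tiny per-thread LCG stands in for real parallel RNG streams, and estimating pi stands in for particle-transport scoring.

        #include <stdio.h>
        #include <omp.h>

        static double lcg(unsigned int *s) {   /* uniform in [0,1) */
            *s = *s * 1664525u + 1013904223u;
            return *s / 4294967296.0;
        }

        int main(void) {
            const long histories = 10000000;
            long hits = 0;

            #pragma omp parallel reduction(+:hits)
            {
                /* Decorrelated per-thread seed. */
                unsigned int seed = 1234u + 977u * (unsigned)omp_get_thread_num();
                #pragma omp for
                for (long i = 0; i < histories; ++i) {
                    double x = lcg(&seed), y = lcg(&seed);
                    if (x * x + y * y <= 1.0) ++hits;
                }
            }
            printf("pi ~= %f\n", 4.0 * (double)hits / (double)histories);
            return 0;
        }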

  17. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Locality-Aware Task Scheduling and Data Distribution for OpenMP Programs on NUMA Systems and Manycore Processors

    Directory of Open Access Journals (Sweden)

    Ananya Muddukrishna

    2015-01-01

    Full Text Available Performance degradation due to nonuniform data access latencies has worsened on NUMA systems and can now be felt on-chip in manycore processors. Distributing data across NUMA nodes and manycore processor caches is necessary to reduce the impact of nonuniform latencies. However, techniques for distributing data are error-prone and fragile and require low-level architectural knowledge. Existing task scheduling policies favor quick load-balancing at the expense of locality and ignore NUMA node/manycore cache access latencies while scheduling. Locality-aware scheduling, in conjunction with or as a replacement for existing scheduling, is necessary to minimize NUMA effects and sustain performance. We present a data distribution and locality-aware scheduling technique for task-based OpenMP programs executing on NUMA systems and manycore processors. Our technique relieves the programmer from thinking of NUMA system/manycore processor architecture details by delegating data distribution to the runtime system and uses task data dependence information to guide the scheduling of OpenMP tasks to reduce data stall times. We demonstrate our technique on a four-socket AMD Opteron machine with eight NUMA nodes and on the TILEPro64 processor and identify that data distribution and locality-aware task scheduling improve performance up to 69% for scientific benchmarks compared to default policies and yet provide an architecture-oblivious approach for programmers.
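
    A hand-written C sketch of the data-distribution idea the paper automates in its runtime: on a NUMA system with first-touch page placement, initializing an array with the same static schedule later used for computation puts each page on the node of the thread that will work on it.

        #include <stdlib.h>

        #define N 10000000L

        void init_and_compute(void) {
            double *a = malloc(N * sizeof *a);

            /* First touch: each page lands on the toucher's NUMA node. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; ++i)
                a[i] = 0.0;

            /* Same static schedule: threads mostly access node-local memory. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; ++i)
                a[i] = 2.0 * a[i] + 1.0;

            free(a);
        }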

  19. Getting More From Your Multicore: Exploiting OpenMP for Astronomy

    Science.gov (United States)

    Noble, M. S.

    2008-08-01

    Motivated by the emergence of multicore architectures, and the reality that parallelism is rarely used for analysis in observational astronomy, we demonstrate how general users may employ tightly-coupled multiprocessors in scriptable research calculations while requiring no special knowledge of parallel programming. Our method rests on the observation that much of the appeal of high-level vectorized languages like IDL or MATLAB stems from relatively simple internal loops over regular array structures, and that these loops are highly amenable to automatic parallelization with OpenMP. We discuss how ISIS, an open-source astrophysical analysis system embedding the S-Lang numerical language, was easily adapted to exploit this pattern. Drawing from a common astrophysical problem, model fitting, we present beneficial speedups for several machine and compiler configurations. These results complement our previous efforts with PVM, and together lead us to believe that ISIS is the only general-purpose spectroscopy system in which such a range of parallelism - from single processors on multiple machines to multiple processors on single machines - has been demonstrated.

  20. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    International Nuclear Information System (INIS)

    Zhu, Xiaoli; Liu, Min; Zhang, Huihui; Wang, Haiyan; Li, Genxi

    2013-01-01

    Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with certain molecules before use. However, information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike traditional physical methods, this method, which is based on the catalytic properties of AuNPs, can give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been demonstrated by adopting three independent molecules to functionalize AuNPs, covering both covalent and non-covalent functionalization. Some interesting results are thereby obtained, some of which are revealed for the first time. The method may also be further developed as a useful tool for the characterization of solid surfaces.

  1. A chemical approach to accurately characterize the coverage rate of gold nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xiaoli; Liu, Min; Zhang, Huihui [Shanghai University, Laboratory of Biosensing Technology, School of Life Sciences (China); Wang, Haiyan [Nanjing University, State Key Laboratory of Pharmaceutical Biotechnology, Department of Biochemistry (China); Li, Genxi, E-mail: genxili@nju.edu.cn [Shanghai University, Laboratory of Biosensing Technology, School of Life Sciences (China)

    2013-09-15

    Gold nanoparticles (AuNPs) have been widely used in many areas, and the nanoparticles usually have to be functionalized with certain molecules before use. However, information about the characterization of the functionalization of the nanoparticles is still limited or unclear, which has greatly restricted better functionalization and application of AuNPs. Here, we propose a chemical way to accurately characterize the functionalization of AuNPs. Unlike traditional physical methods, this method, which is based on the catalytic properties of AuNPs, can give an accurate coverage rate and some derivative information about the functionalization of the nanoparticles with different kinds of molecules. The performance of the characterization has been demonstrated by adopting three independent molecules to functionalize AuNPs, covering both covalent and non-covalent functionalization. Some interesting results are thereby obtained, some of which are revealed for the first time. The method may also be further developed as a useful tool for the characterization of solid surfaces.

  2. OpenMP parallelization of a gridded SWAT (SWATG)

    Science.gov (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The time-consuming simulations restrict applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with a 15-thread configuration on one CPU. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  3. Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS

    Science.gov (United States)

    Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.

    2018-03-01

    A simulation of breaking waves using the Navier-Stokes equations via the moving particle semi-implicit method (MPS) over a closed domain is given. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. Here, a comparison of two computer architectures (AMD and Intel) is performed. The results show that the Intel architecture performs better than AMD in CPU time. However, in efficiency, the computer with the AMD architecture scores slightly higher than the Intel. For the simulation with 1512 particles, the CPU times using Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, for the efficiency using a similar number of particles, AMD obtains 50.09% and Intel up to 49.42%.

  4. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    Science.gov (United States)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    We present Open Multi-Processing (OpenMP) versions of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both the commercially-licensed Intel Fortran and the popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for the new versions of the programs. Program files doi: http://dx.doi.org/10.17632/y8zk3jgn84.2. Licensing provisions: Apache License 2.0. Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid. 204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede the previous Fortran programs from both references above, but not the OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with the commercially-licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for

  5. The impact of OpenMP parallelization on the performance and solution quality of a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Henrique de Oliveira Gressler

    2014-11-01

    Full Text Available The Vehicle Routing Problem (VRP) is a hard-to-solve combinatorial problem, applicable both to the logistics of transport companies and to better occupancy of public roads. Solving it by testing all possible combinations (the brute-force method) becomes infeasible as the problem scales, since it demands a very large amount of computation time. Genetic Algorithms (GA) are metaheuristics capable of finding solutions within an acceptable computational time. However, even GAs can demand long processing times, depending on the configuration used. With the evolution of computer architectures and the spread of multicore architectures, multithreaded programming becomes an alternative for reducing the time involved in solving combinatorial problems. This article aims to accelerate the solution of the VRP by parallelizing the GA with OpenMP, a widely adopted standard for multithreaded programming. Our results reached a speedup above 2 using 4 threads on a quad-core processor. This gain is limited by the way the GA is implemented. Beyond the impact on GA performance, it was also verified that the use of OpenMP does not affect the quality of the solutions. Additionally, the use of OpenMP allowed the GA to find better solutions due to the increased number of evolutions computed in the same time interval.

  6. Experiences in the parallelization of the discrete ordinates method using OpenMP and MPI

    Energy Technology Data Exchange (ETDEWEB)

    Pautz, A. [TUV Hannover/Sachsen-Anhalt e.V. (Germany); Langenbuch, S. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) mbH (Germany)

    2003-07-01

    The method of Discrete Ordinates is in principle parallelizable to a high degree, since the transport 'mesh sweeps' are mutually independent for all angular directions. However, in the well-known production code Dort such a type of angular domain decomposition has to be done on a spatial line-by-line basis, causing the parallelism in the code to be very fine-grained. The construction of scalar fluxes and moments requires a large effort for inter-thread or inter-process communication. We have implemented two different parallelization approaches in Dort: firstly, we have used a shared-memory model suitable for SMP (Symmetric Multiprocessor) machines, based on the OpenMP standard. The second approach uses the well-known Message Passing Interface (MPI) to establish communication between parallel processes running in a distributed-memory environment. We investigate the benefits and drawbacks of both models and show first results on the performance and scaling behaviour of the parallel Dort code. (authors)

  7. Experiences in the parallelization of the discrete ordinates method using OpenMP and MPI

    International Nuclear Information System (INIS)

    Pautz, A.; Langenbuch, S.

    2003-01-01

    The method of Discrete Ordinates is in principle parallelizable to a high degree, since the transport 'mesh sweeps' are mutually independent for all angular directions. However, in the well-known production code Dort such a type of angular domain decomposition has to be done on a spatial line-by-line basis, causing the parallelism in the code to be very fine-grained. The construction of scalar fluxes and moments requires a large effort for inter-thread or inter-process communication. We have implemented two different parallelization approaches in Dort: firstly, we have used a shared-memory model suitable for SMP (Symmetric Multiprocessor) machines, based on the OpenMP standard. The second approach uses the well-known Message Passing Interface (MPI) to establish communication between parallel processes running in a distributed-memory environment. We investigate the benefits and drawbacks of both models and show first results on the performance and scaling behaviour of the parallel Dort code. (authors)

  8. Python for Development of OpenMP and CUDA Kernels for Multidimensional Data

    International Nuclear Information System (INIS)

    Bell, Zane W.; Davidson, Gregory G.; D'Azevedo, Ed F.; Evans, Thomas M.; Joubert, Wayne; Munro, John K. Jr.; Patlolla, Dilip Reddy; Vacaliuc, Bogdan

    2011-01-01

    Design of data structures for high performance computing (HPC) is one of the principal challenges facing researchers looking to utilize heterogeneous computing machinery. Heterogeneous systems derive cost, power, and speed efficiency by being composed of the appropriate hardware for the task. Yet, each type of processor requires a specific organization of the application state in order to achieve peak performance. Discovering this and refactoring the code can be a challenging and time-consuming task for the researcher, as the data structures and the computational model must be co-designed. We present a methodology that uses Python as the environment in which to explore tradeoffs in both the data structure design as well as the code executing on the computation accelerator. Our method enables multi-dimensional arrays to be used effectively in any target environment. We have chosen to focus on OpenMP and CUDA environments, thus exploring the development of optimized kernels for the two most common classes of computing hardware available today: multi-core CPU and GPU. Python's large palette of file and network access routines, its associative indexing syntax and support for common HPC environments make it relevant for diverse hardware ranging from laptops through computing clusters to the highest performance supercomputers. Our work enables researchers to accelerate the development of their codes on the computing hardware of their choice.
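
    A small C/OpenMP sketch of the data-structure point made above: a multidimensional array stored flat, with the index mapping isolated in one macro so the layout can be co-designed with the kernel (a GPU port might swap the macro for a transposed layout). Names are illustrative:

        #include <stddef.h>
        #include <omp.h>

        /* Row-major 3-D indexing into a flat allocation. */
        #define IDX3(i, j, k, ny, nz) \
            (((size_t)(i) * (ny) + (j)) * (size_t)(nz) + (k))

        /* Example kernel: scale a 3-D field in place. */
        void scale_field(double *f, int nx, int ny, int nz, double s)
        {
            #pragma omp parallel for collapse(2)
            for (int i = 0; i < nx; i++)
                for (int j = 0; j < ny; j++)
                    for (int k = 0; k < nz; k++)
                        f[IDX3(i, j, k, ny, nz)] *= s;
        }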

  9. Characterization of 3D PET systems for accurate quantification of myocardial blood flow

    OpenAIRE

    Renaud, Jennifer M.; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Éric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C.; Turkington, Timothy G

    2016-01-01

    Three-dimensional (3D) mode imaging is the current standard for positron emission tomography-computed tomography (PET-CT) systems. Dynamic imaging for quantification of myocardial blood flow (MBF) with short-lived tracers, such as Rb-82-chloride (Rb-82), requires accuracy to be maintained over a wide range of isotope activities and scanner count-rates. We propose new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative...

  10. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    Directory of Open Access Journals (Sweden)

    Stephen L. Olivier

    2013-01-01

    Full Text Available Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
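
    One common mitigation for the NUMA-induced work time inflation described above is first-touch placement: initialize data with the same thread layout the compute loops will use, so pages land on the NUMA node that later accesses them. A minimal C/OpenMP sketch of the general technique (not the authors' Qthreads locality framework):

        #include <stdlib.h>
        #include <omp.h>

        /* Allocate and first-touch an array with a static schedule, so
           each thread faults in its own pages on its own NUMA node. */
        double *alloc_first_touch(size_t n)
        {
            double *a = malloc(n * sizeof *a);
            if (!a) return NULL;
            #pragma omp parallel for schedule(static)
            for (size_t i = 0; i < n; i++)
                a[i] = 0.0;
            return a;   /* compute loops should reuse schedule(static) */
        }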

  11. Accurate mode characterization of two-mode optical fibers by in-fiber acousto-optics.

    Science.gov (United States)

    Alcusa-Sáez, E; Díez, A; Andrés, M V

    2016-03-07

    Acousto-optic interaction in optical fibers is exploited for the accurate and broadband characterization of two-mode optical fibers. Coupling between LP01 and LP1m modes is produced in a broadband wavelength range. Differences in effective indices, group indices, and chromatic dispersions between the guided modes are obtained from experimental measurements. Additionally, we show that the technique is suitable to investigate the fine mode structure of LP modes, and some other intriguing features related to the modes' cut-off.

  12. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    … 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient-specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  13. Design of electromagnetic field FDTD multi-core parallel program based on OpenMP

    Institute of Scientific and Technical Information of China (English)

    吕忠亭; 张玉强; 崔巍

    2013-01-01

    The design of a multi-core parallel program for electromagnetic field FDTD computation based on OpenMP is discussed, with the aim of achieving a useful performance improvement when the method is applied to more sophisticated algorithms. For a one-dimensional electromagnetic field FDTD algorithm, the calculation method and process are described briefly. In a Fortran language environment, parallelization is achieved with OpenMP in a fine-grained way, i.e., parallel computation is performed only for the loop part. The parallel method was then verified in a three-dimensional transient electromagnetic field FDTD program for dipole radiation. The parallel algorithm achieved faster speedup and higher efficiency than other parallel FDTD algorithms. The results indicate that the electromagnetic field FDTD parallel algorithm based on OpenMP offers very good speedup and efficiency.
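
    A minimal sketch of the fine-grained strategy this abstract describes, where only the spatial update loops are threaded and time stepping stays sequential; written in C with OpenMP for consistency with the other sketches here (the paper's code is Fortran), with generic field and coefficient names:

        #include <omp.h>

        /* One time step of a 1-D FDTD update (Yee-style leapfrog). */
        void fdtd_step_1d(int nx, double *ez, double *hy,
                          double ce, double ch)
        {
            /* Electric-field update: each i is independent. */
            #pragma omp parallel for
            for (int i = 1; i < nx; i++)
                ez[i] += ce * (hy[i] - hy[i - 1]);

            /* Magnetic-field update: again loop-level parallelism only. */
            #pragma omp parallel for
            for (int i = 0; i < nx - 1; i++)
                hy[i] += ch * (ez[i + 1] - ez[i]);
        }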

  14. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  15. Study on Factors for Accurate Open Circuit Voltage Characterizations in Mn-Type Li-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Natthawuth Somakettarin

    2017-03-01

    Full Text Available Open circuit voltage (OCV) of lithium batteries has been of interest since the battery management system (BMS) requires an accurate knowledge of the voltage characteristics of any Li-ion battery. This article presents an OCV characteristic for lithium manganese oxide (LMO) batteries under several experimental operating conditions, and discusses factors for accurate OCV determination. A test system is developed for OCV characterization based on the OCV pulse test method. Various factors affecting the OCV behavior, such as resting period, step-size of the pulse test, testing current amplitude, hysteresis phenomena, and terminal voltage relationship, are investigated and evaluated. To this end, a general OCV model based on state of charge (SOC) tracking is developed and validated with satisfactory results.

  16. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuels reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  17. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    Science.gov (United States)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on Intel Xeon multi-core CPU (by using OpenMP and OpenCL) and NVIDIA Tesla many-core GPU (by using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested based on the Griewank benchmark function. Comparison results indicate the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, albeit with the most complex source code. The parallel SCE-UA has bright prospects to be applied in real-world applications.
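
    A small C/OpenMP sketch of the CPU-parallel piece this abstract implies: the Griewank benchmark named in the text, evaluated across many candidate points at once (in SCE-UA, the points of all complexes). Loop structure and names are illustrative, not the authors' code:

        #include <math.h>
        #include <stddef.h>
        #include <omp.h>

        /* Griewank test function: global minimum 0 at the origin. */
        double griewank(const double *x, int dim)
        {
            double sum = 0.0, prod = 1.0;
            for (int i = 0; i < dim; i++) {
                sum  += x[i] * x[i] / 4000.0;
                prod *= cos(x[i] / sqrt((double)(i + 1)));
            }
            return 1.0 + sum - prod;
        }

        /* Evaluate npts candidate points (stored row-major) in parallel. */
        void evaluate_points(const double *pts, double *f,
                             int npts, int dim)
        {
            #pragma omp parallel for
            for (int p = 0; p < npts; p++)
                f[p] = griewank(&pts[(size_t)p * dim], dim);
        }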

  18. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    Science.gov (United States)

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  19. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide.

    Science.gov (United States)

    Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C

    2016-06-28

    Ceramides are a central unit of all sphingolipids which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data of starting materials and products all in one acquisition as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. NMR data was able to differentiate straight chain versus branched chain alkyl groups not easily obtained from mass spectrometry.

  20. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide

    Directory of Open Access Journals (Sweden)

    Charles W. Ross

    2016-06-01

    Full Text Available Ceramides are a central unit of all sphingolipids which have been identified as sites of biological recognition on cellular membranes mediating cell growth and differentiation. Several glycosphingolipids have been isolated, displaying immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS) and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative ion sheath flow ESI provided data of starting materials and products all in one acquisition as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules complementing the results obtained from MS analyses. NMR data was able to differentiate straight chain versus branched chain alkyl groups not easily obtained from mass spectrometry.

  1. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Kenneth W., E-mail: kenneth.allen@gtri.gatech.edu; Scott, Mark M.; Reid, David R.; Bean, Jeffrey A.; Ellis, Jeremy D.; Morris, Andrew P.; Marsh, Jeramy M. [Advanced Concepts Laboratory, Georgia Tech Research Institute, Atlanta, Georgia 30318 (United States)

    2016-05-15

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10⁻³ for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimens over the entire X-band range. This technique could easily be extended to other frequency bands.

  2. Accurate spectroscopic characterization of oxirane: A valuable route to its identification in Titan's atmosphere and the assignment of unidentified infrared bands

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Biczysko, Malgorzata; Bloino, Julien; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-04-20

    In an effort to provide an accurate spectroscopic characterization of oxirane, state-of-the-art computational methods and approaches have been employed to determine highly accurate fundamental vibrational frequencies and rotational parameters. Available experimental data were used to assess the reliability of our computations, and an accuracy on average of 10 cm⁻¹ for fundamental transitions as well as overtones and combination bands has been pointed out. Moving to rotational spectroscopy, relative discrepancies of 0.1%, 2%-3%, and 3%-4% were observed for rotational, quartic, and sextic centrifugal-distortion constants, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for identification of oxirane in Titan's atmosphere and the assignment of unidentified infrared bands. Since oxirane was already observed in the interstellar medium and some astronomical objects are characterized by very high D/H ratios, we also considered the accurate determination of the spectroscopic parameters for the mono-deuterated species, oxirane-d1. For the latter, an empirical scaling procedure allowed us to improve our computed data and to provide predictions for rotational transitions with a relative accuracy of about 0.02% (i.e., an uncertainty of about 40 MHz for a transition lying at 200 GHz).

  3. TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble

    Directory of Open Access Journals (Sweden)

    Carlos Couder-Castañeda

    2013-01-01

    Full Text Available An implementation with the CUDA technology in a single and in several graphics processing units (GPUs) is presented for the calculation of the forward modeling of gravitational fields from a tridimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing, which has led to the development of applications in a variety of fields. Nevertheless, in some applications the decomposition of the tasks is not trivial, as can be appreciated in this paper. Unlike a trivial decomposition of the domain, we proposed to decompose the problem by sets of prisms and use different memory spaces per processing CUDA core, avoiding the performance decay that would result from the constant calls to kernel functions needed in a parallelization by observation points. The design and implementation created are the main contributions of this work, because the parallelization scheme implemented is not trivial. The performance results obtained are comparable to those of a small processing cluster.

  4. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    International Nuclear Information System (INIS)

    Puzzarini, Cristina; Ali, Ashraf; Biczysko, Malgorzata; Barone, Vincenzo

    2014-01-01

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  5. Accurate spectroscopic characterization of protonated oxirane: a potential prebiotic species in Titan's atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, Cristina [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via Selmi 2, I-40126 Bologna (Italy); Ali, Ashraf [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Biczysko, Malgorzata; Barone, Vincenzo, E-mail: cristina.puzzarini@unibo.it [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2014-09-10

    An accurate spectroscopic characterization of protonated oxirane has been carried out by means of state-of-the-art computational methods and approaches. The calculated spectroscopic parameters from our recent computational investigation of oxirane together with the corresponding experimental data available were used to assess the accuracy of our predicted rotational and IR spectra of protonated oxirane. We found an accuracy of about 10 cm⁻¹ for vibrational transitions (fundamentals as well as overtones and combination bands) and, in relative terms, of 0.1% for rotational transitions. We are therefore confident that the spectroscopic data provided herein are a valuable support for the detection of protonated oxirane not only in Titan's atmosphere but also in the interstellar medium.

  6. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    Science.gov (United States)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations compared to the existing tsunami observation networks.

  7. Phase rainbow refractometry for accurate droplet variation characterization.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  8. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
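
    A hand-written C/OpenMP illustration of the classification step described above: for the loop below, a tool like CAPO must recognize that the scalar t is private and s is a sum reduction before it can emit a correct directive. This shows the kind of directive such a tool generates, not actual CAPO output (which targets FORTRAN):

        #include <omp.h>

        double weighted_norm2(const double *a, const double *w, int n)
        {
            double s = 0.0;
            /* i and t are private; s is a reduction variable. */
            #pragma omp parallel for reduction(+ : s)
            for (int i = 0; i < n; i++) {
                double t = w[i] * a[i];   /* private by block scope */
                s += t * a[i];
            }
            return s;
        }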

  9. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  10. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  11. In-depth glycoproteomic characterization of γ-conglutin by high-resolution accurate mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Silvia Schiarea

    Full Text Available The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography-multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl)(Fuc)GlcNAc2, Man3(Xyl)(Fuc)GlcNAc2, GlcNAcMan3(Xyl)(Fuc)GlcNAc2 and GlcNAc2Man3(Xyl)(Fuc)GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well-known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, which were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit.

  12. Spherical near-field antenna measurements — The most accurate antenna measurement technique

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2016-01-01

    The spherical near-field antenna measurement technique combines several advantages and generally constitutes the most accurate technique for experimental characterization of radiation from antennas. This paper/presentation discusses these advantages, briefly reviews the early history and present...

  13. CLOMP v1.5

    Energy Technology Data Exchange (ETDEWEB)

    Gyllenhaal, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-02-01

    CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading. For simplicity, it does not use MPI by default but it is expected to be run on the resources a threaded MPI task would use (e.g., a portion of a shared memory compute node). Compiling with -DWITH_MPI allows packing one or more nodes with CLOMP tasks and having CLOMP report OpenMP performance for the slowest MPI task. On current systems, the strong scaling performance results for 4, 8, or 16 threads are of the most interest. Suggested weak scaling inputs are provided for evaluating future systems. Since MPI is often used to place at least one MPI task per coherence or NUMA domain, it is recommended to focus OpenMP runtime measurements on a subset of node hardware where it is most possible to have low OpenMP overheads (e.g., within one coherence domain or NUMA domain).
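
    A generic C/OpenMP sketch of the kind of measurement an overhead benchmark like CLOMP performs: timing many near-empty parallel regions to expose the runtime's fork/join cost. This is illustrative only, not CLOMP's actual code:

        #include <stdio.h>
        #include <omp.h>

        int main(void)
        {
            enum { REPS = 100000 };
            volatile int sink = 0;          /* defeat over-optimization */

            double t0 = omp_get_wtime();
            for (int r = 0; r < REPS; r++) {
                #pragma omp parallel
                {
                    /* trivial body: measured time is mostly overhead */
                    if (omp_get_thread_num() < 0) sink = 1;
                }
            }
            double per_region = (omp_get_wtime() - t0) / REPS;

            printf("threads=%d, overhead per parallel region: %.3f us\n",
                   omp_get_max_threads(), per_region * 1e6);
            return sink;
        }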

  14. Accurate characterization of organic thin film transistors in the presence of gate leakage current

    Directory of Open Access Journals (Sweden)

    Vinay K. Singh

    2011-12-01

    Full Text Available The presence of gate leakage through the polymer dielectric in organic thin film transistors (OTFTs) prevents accurate estimation of transistor characteristics, especially in the subthreshold regime. To mitigate the impact of gate leakage on transfer characteristics and allow accurate estimation of mobility, subthreshold slope and on/off current ratio, a measurement technique involving simultaneous sweep of both gate and drain voltages is proposed. Two-dimensional numerical device simulation is used to illustrate the validity of the proposed technique. Experimental results obtained with a Pentacene/PMMA OTFT with significant gate leakage show a low on/off current ratio of ∼10² and a subthreshold slope of 10 V/decade when the conventional measurement technique is used. The proposed technique reveals that the channel on/off current ratio is more than two orders of magnitude higher at ∼10⁴ and the subthreshold slope is 4.5 V/decade.

  15. Waste Characterization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vigil-Holterman, Luciana R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Naranjo, Felicia Danielle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-02

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.

  16. Waste Characterization Methods

    International Nuclear Information System (INIS)

    Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle

    2016-01-01

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream's generation, characterization, and management; and not merely a list of information sources.

  17. On a model of three-dimensional bursting and its parallel implementation

    Science.gov (United States)

    Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.

    2008-04-01

    A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables, and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a linearly-implicit finite difference method, second-order accurate in both space and time, on equally-spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared address space paradigm. These three implementations are evaluated on two current high performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results that emphasizes the most relevant factors affecting the performance of the parallel implementations is proposed. The comparative analysis of the computational results shows that the MPI and the OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous and then asynchronous oscillations, before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.
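
    The OpenMP variant's core is easy to sketch: each PCG iteration is dominated by vector kernels whose loops thread directly. A minimal C sketch of two such kernels (generic names, not the paper's code):

        #include <stddef.h>
        #include <omp.h>

        /* Dot product with a sum reduction, as used for residual norms. */
        double dot(const double *x, const double *y, size_t n)
        {
            double s = 0.0;
            #pragma omp parallel for reduction(+ : s)
            for (size_t i = 0; i < n; i++)
                s += x[i] * y[i];
            return s;
        }

        /* y += a * x, the update step applied to iterates and residuals. */
        void axpy(double a, const double *x, double *y, size_t n)
        {
            #pragma omp parallel for
            for (size_t i = 0; i < n; i++)
                y[i] += a * x[i];
        }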

  18. Final Technical Report: Characterizing Emerging Technologies.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Riley, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gonzalez, Sigifredo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    The Characterizing Emerging Technologies project focuses on developing, improving and validating characterization methods for PV modules, inverters and embedded power electronics. Characterization methods and associated analysis techniques are at the heart of technology assessments and accurate component and system modeling. Outputs of the project include measurement and analysis procedures that industry can use to accurately model performance of PV system components, in order to better distinguish and understand the performance differences between competing products (module and inverters) and new component designs and technologies (e.g., new PV cell designs, inverter topologies, etc.).

  19. Accurate Online Full Charge Capacity Modeling of Smartphone Batteries

    OpenAIRE

    Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu

    2016-01-01

    Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...

  20. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan; Bordage, Cyril; Bosilca, George; Brooks, Alex; Carns, Philip; Castello, Adrian; Genet, Damien; Herault, Thomas; Iwasaki, Shintaro; Jindal, Prateek; Kale, Laxmikant V.; Krishnamoorthy, Sriram; Lifflander, Jonathan; Lu, Huiwei; Meneses, Esteban; Snir, Marc; Sun, Yanhua; Taura, Kenjiro; Beckman, Pete

    2018-03-01

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, either are too specific to applications or architectures or are not as powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by end users or high-level programming models. We describe the design, implementation, and performance characterization of Argobots and present integrations with three high-level models: OpenMP, MPI, and colocated I/O services. Evaluations show that (1) Argobots, while providing richer capabilities, is competitive with existing simpler generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency-hiding capabilities; and (4) I/O services with Argobots reduce interference with colocated applications while achieving performance competitive with that of a Pthreads approach.

  1. Quantum-Accurate Molecular Dynamics Potential for Tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Mitchell; Thompson, Aidan P.

    2017-03-01

    The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.

  2. Accurate electron channeling contrast analysis of a low angle sub-grain boundary

    International Nuclear Information System (INIS)

    Mansour, H.; Crimp, M.A.; Gey, N.; Maloufi, N.

    2015-01-01

    High resolution selected area channeling pattern (HR-SACP) assisted accurate electron channeling contrast imaging (A-ECCI) was used to unambiguously characterize the structure of a low angle grain boundary in an interstitial-free steel. The boundary dislocations were characterized using TEM-style contrast analysis. The boundary was determined to be tilt in nature with a misorientation angle of 0.13°, consistent with the HR-SACP measurements. The results were verified using high accuracy electron backscatter diffraction (EBSD), confirming the approach as a discriminating tool for assessing low angle boundaries.

  3. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  4. Accurate means of detecting and characterizing abnormal patterns of ventricular activation by phase image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Botvinick, E.H.; Frais, M.A.; Shosa, D.W.; O'Connell, J.W.; Pacheco-Alvarez, J.A.; Scheinman, M.; Hattner, R.S.; Morady, F.; Faulkner, D.B.

    1982-08-01

    The ability of scintigraphic phase image analysis to characterize patterns of abnormal ventricular activation was investigated. The pattern of phase distribution and sequential phase changes over both right and left ventricular regions of interest were evaluated in 16 patients with normal electrical activation and wall motion and compared with those in 8 patients with an artificial pacemaker and 4 patients with sinus rhythm with the Wolff-Parkinson-White syndrome and delta waves. Normally, the site of earliest phase angle was seen at the base of the interventricular septum, with sequential change affecting the body of the septum and the cardiac apex and then spreading laterally to involve the body of both ventricles. The site of earliest phase angle was located at the apex of the right ventricle in seven patients with a right ventricular endocardial pacemaker and on the lateral left ventricular wall in one patient with a left ventricular epicardial pacemaker. In each case the site corresponded exactly to the position of the pacing electrode as seen on posteroanterior and left lateral chest X-ray films, and sequential phase changes spread from the initial focus to affect both ventricles. In each of the patients with the Wolff-Parkinson-White syndrome, the site of earliest ventricular phase angle was located, and it corresponded exactly to the site of the bypass tract as determined by endocardial mapping. In this way, four bypass pathways, two posterior left paraseptal, one left lateral and one right lateral, were correctly localized scintigraphically. On the basis of the sequence of mechanical contraction, phase image analysis provides an accurate noninvasive method of detecting abnormal foci of ventricular activation.

  5. Accurate means of detecting and characterizing abnormal patterns of ventricular activation by phase image analysis

    International Nuclear Information System (INIS)

    Botvinick, E.H.; Frais, M.A.; Shosa, D.W.; O'Connell, J.W.; Pacheco-Alvarez, J.A.; Scheinman, M.; Hattner, R.S.; Morady, F.; Faulkner, D.B.

    1982-01-01

    The ability of scintigraphic phase image analysis to characterize patterns of abnormal ventricular activation was investigated. The pattern of phase distribution and sequential phase changes over both right and left ventricular regions of interest were evaluated in 16 patients with normal electrical activation and wall motion and compared with those in 8 patients with an artificial pacemaker and 4 patients with sinus rhythm with the Wolff-Parkinson-White syndrome and delta waves. Normally, the site of earliest phase angle was seen at the base of the interventricular septum, with sequential change affecting the body of the septum and the cardiac apex and then spreading laterally to involve the body of both ventricles. The site of earliest phase angle was located at the apex of the right ventricle in seven patients with a right ventricular endocardial pacemaker and on the lateral left ventricular wall in one patient with a left ventricular epicardial pacemaker. In each case the site corresponded exactly to the position of the pacing electrode as seen on posteroanterior and left lateral chest X-ray films, and sequential phase changes spread from the initial focus to affect both ventricles. In each of the patients with the Wolff-Parkinson-White syndrome, the site of earliest ventricular phase angle was located, and it corresponded exactly to the site of the bypass tract as determined by endocardial mapping. In this way, four bypass pathways, two posterior left paraseptal, one left lateral and one right lateral, were correctly localized scintigraphically. On the basis of the sequence of mechanical contraction, phase image analysis provides an accurate noninvasive method of detecting abnormal foci of ventricular activation

  6. Comparison of the efficiency of the OpenMP, nVidia CUDA, and StarPU technologies on the example of the matrix multiplication problem

    OpenAIRE

    Ханкин, К. М.; Khankin, K. M.

    2013-01-01

    The article describes the OpenMP, nVidia CUDA, and StarPU technologies, possible solutions to the problem of multiplying two matrices using each of these technologies, and the results of comparing the implementations by their resource consumption.
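
    The shared-memory variant of the comparison's benchmark problem is simple to sketch in C with OpenMP: a dense row-major product C = A × B with the outer loop threaded. The CUDA and StarPU versions distribute the same arithmetic differently; this is a generic sketch, not the article's code:

        #include <stddef.h>
        #include <omp.h>

        /* C = A * B for n x n row-major matrices. */
        void matmul(const double *A, const double *B, double *C, int n)
        {
            #pragma omp parallel for
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    double s = 0.0;
                    for (int k = 0; k < n; k++)
                        s += A[(size_t)i * n + k] * B[(size_t)k * n + j];
                    C[(size_t)i * n + j] = s;
                }
        }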

  7. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  8. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    We have characterized the relationship between accurate phylogenetic reconstruction and sequence similarity, testing whether high levels of sequence similarity can consistently produce accurate evolutionary trees. We generated protein families with known phylogenies using a modified version of the PAML/EVOLVER program that produces insertions and deletions as well as substitutions. Protein families were evolved over a range of 100-400 point accepted mutations; at these distances 63% of the families shared significant sequence similarity. Protein families were evolved using balanced and unbalanced trees, with ancient or recent radiations. In families sharing statistically significant similarity, about 60% of multiple sequence alignments were 95% identical to true alignments. To compare recovered topologies with true topologies, we used a score that reflects the fraction of clades that were correctly clustered. As expected, the accuracy of the phylogenies was greatest in the least divergent families. About 88% of phylogenies clustered over 80% of clades in families that shared significant sequence similarity, using Bayesian, parsimony, distance, and maximum likelihood methods. However, for protein families with short ancient branches (ancient radiation), only 30% of the most divergent (but statistically significant) families produced accurate phylogenies, and only about 70% of the second most highly conserved families, with median expectation values better than 10⁻⁶⁰, produced accurate trees. These values represent upper bounds on expected tree accuracy for sequences with a simple divergence history; proteins from 700 Giardia families, with a similar range of sequence similarities but considerably more gaps, produced much less accurate trees. For our simulated insertions and deletions, correct multiple sequence alignments did not perform much better than those produced by T-COFFEE, and including sequences with expressed sequence tag-like sequencing errors did not

  9. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on a successive calculating of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.

  10. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-01-01

    to impose the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite...

  11. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most...... pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also...... deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection....

  12. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2003-01-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMP cluster's potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.
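
    The record describes directive-based loop parallelism but shows no code. As a minimal sketch of the approach, assuming a generic, independent per-cell update in place of the actual MINOS flux kernel (update_cell, N, and flux are illustrative names, not from the paper):

        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        /* Stand-in for the real per-cell solver work, which the record
         * does not show; any independent update parallelizes the same way. */
        static void update_cell(double *flux, int i) {
            flux[i] = 0.5 * (flux[i] + 1.0 / (i + 1));
        }

        int main(void) {
            static double flux[N];
            double t0 = omp_get_wtime();

            /* Directive-based loop parallelism, in the style used for MINOS. */
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < N; i++)
                update_cell(flux, i);

            printf("threads=%d elapsed=%.4f s (flux[0]=%g)\n",
                   omp_get_max_threads(), omp_get_wtime() - t0, flux[0]);
            return 0;
        }

    Timing runs like this at 1, 2, and 4 threads is how efficiency figures such as the 80% and 60% quoted above would be reproduced.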

  13. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Direction de l'Energie Nucleaire, Service d'Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)

    2003-07-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMP cluster's potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.

  14. MRI as an accurate tool for the diagnosis and characterization of different knee joint meniscal injuries

    Directory of Open Access Journals (Sweden)

    Ayman F. Ahmed

    2017-12-01

    Conclusion: MRI of the knee gives orthopedic surgeons the ability to select suitable treatment and arthroscopic intervention for their patients. MRI has high accuracy in diagnosing meniscal tears, allowing them to be graded accurately.

  15. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  16. Development of a large-scale general purpose two-phase flow analysis code

    International Nuclear Information System (INIS)

    Terasaka, Haruo; Shimizu, Sensuke

    2001-01-01

    A general purpose three-dimensional two-phase flow analysis code has been developed for solving large-scale problems in industrial fields. The code uses a two-fluid model to describe the conservation equations for two-phase flow in order to be applicable to various phenomena. Complicated geometrical conditions are modeled by the FAVOR method in structured grid systems, and the discretization equations are solved by a modified SIMPLEST scheme. To reduce computing time, the matrix solver for the pressure correction equation is parallelized with OpenMP. Results of numerical examples show that accurate solutions can be obtained efficiently and stably. (author)
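
    The record does not name the matrix solver that was parallelized, so the sketch below assumes a simple Jacobi sweep on a 1-D Poisson-like system purely to illustrate the OpenMP parallelization pattern for a pressure-correction solve (jacobi, b, p, and pnew are hypothetical names):

        #include <omp.h>

        /* One Jacobi relaxation pass per outer iteration; b is the scaled
         * right-hand side. Each sweep writes pnew from p, so the inner-loop
         * iterations are independent and safe to run in parallel. */
        void jacobi(const double *b, double *p, double *pnew, int n, int iters) {
            for (int k = 0; k < iters; k++) {
                #pragma omp parallel for
                for (int i = 1; i < n - 1; i++)
                    pnew[i] = 0.5 * (p[i - 1] + p[i + 1] - b[i]);
                #pragma omp parallel for
                for (int i = 1; i < n - 1; i++)
                    p[i] = pnew[i];
            }
        }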

  17. Characterization of condenser microphones under different environmental conditions for accurate speed of sound measurements with acoustic resonators

    Energy Technology Data Exchange (ETDEWEB)

    Guianvarc'h, Cecile; Pitre, Laurent [Laboratoire Commun de Metrologie LNE/Cnam, 61 rue du Landy, 93210 La Plaine Saint Denis (France); Gavioso, Roberto M.; Benedetto, Giuliana [Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Turin (Italy); Bruneau, Michel [Laboratoire d'Acoustique de l'Universite du Maine UMR CNRS 6613, av. Olivier Messiaen, 72085 Le Mans Cedex 9 (France)

    2009-07-15

    Condenser microphones are commonly used and have been extensively modeled and characterized in air at ambient temperature and static pressure. However, several applications of interest for metrology and physical acoustics require the use of these transducers in significantly different environmental conditions. In particular, the extremely accurate determination of the speed of sound in monoatomic gases, which is pursued for a determination of the Boltzmann constant k by an acoustic method, entails the use of condenser microphones mounted within a spherical cavity, over a wide range of static pressures, at the temperature of the triple point of water (273.16 K). To further increase the accuracy achievable in this application, the microphone frequency response and its acoustic input impedance need to be precisely determined over the same static pressure and temperature range. Few previous works have examined the influence of static pressure, temperature, and gas composition on the microphone's sensitivity. In this work, the results of relative calibrations of 1/4 in. condenser microphones obtained using an electrostatic actuator technique are presented. The calibrations are performed in pure helium and argon gas at temperatures near 273 K and in the pressure range between 10 and 600 kPa. These experimental results are compared with the predictions of a realistic model available in the literature, finding remarkably good agreement. The model provides an estimate of the acoustic impedance of 1/4 in. condenser microphones as a function of frequency and static pressure and is used to calculate the corresponding frequency perturbations induced on the normal modes of a spherical cavity when it is filled with helium or argon gas.

  18. The accurate assessment of small-angle X-ray scattering data.

    Science.gov (United States)

    Grant, Thomas D; Luft, Joseph R; Carter, Lester G; Matsui, Tsutomu; Weiss, Thomas M; Martel, Anne; Snell, Edward H

    2015-01-01

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  19. Accurate characterization and understanding of interface trap density trends between atomic layer deposited dielectrics and AlGaN/GaN with bonding constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Ramanan, Narayanan; Lee, Bongmook; Misra, Veena, E-mail: vmisra@ncsu.edu [Department of Electrical and Computer Engineering, North Carolina State University, 2410 Campus Shore Drive, Raleigh, North Carolina 27695 (United States)

    2015-06-15

    Many dielectrics have been proposed for the gate stack or passivation of AlGaN/GaN based metal oxide semiconductor heterojunction field effect transistors, to reduce gate leakage and current collapse, both for power and RF applications. Atomic Layer Deposition (ALD) is preferred for dielectric deposition as it provides uniform, conformal, and high quality films with precise monolayer control of film thickness. Identification of the optimum ALD dielectric for the gate stack or passivation requires a critical investigation of traps created at the dielectric/AlGaN interface. In this work, a pulsed-IV traps characterization method has been used for accurate characterization of interface traps with a variety of ALD dielectrics. High-k dielectrics (HfO₂, HfAlO, and Al₂O₃) are found to host a high density of interface traps with AlGaN. In contrast, ALD SiO₂ shows the lowest interface trap density (<2 × 10¹² cm⁻²) after annealing above 600 °C in N₂ for 60 s. The trend in observed trap densities is subsequently explained with bonding constraint theory, which predicts a high density of interface traps due to a higher coordination state and bond strain in high-k dielectrics.

  20. A highly efficient sharp-interface immersed boundary method with adaptive mesh refinement for bio-inspired flow simulations

    Science.gov (United States)

    Deng, Xiaolong; Dong, Haibo

    2017-11-01

    Developing a high-fidelity, high-efficiency numerical method for bio-inspired flow problems with flow-structure interaction is important for understanding the related physics and developing many bio-inspired technologies. To simulate a fast-swimming big fish with multiple finlets or fish schooling, we need fine grids and/or a big computational domain, which are big challenges for 3-D simulations. In the current work, based on the 3-D finite-difference sharp-interface immersed boundary method for incompressible flows (Mittal et al., JCP 2008), we developed an octree-like Adaptive Mesh Refinement (AMR) technique to enhance the computational ability and increase the computational efficiency. The AMR is coupled with a multigrid acceleration technique and an MPI+OpenMP hybrid parallelization. In this work, different AMR layers are treated separately; synchronization is performed in the buffer regions, and iterations are performed until the solution converges. Each big region is calculated by an MPI process, which then uses multiple OpenMP threads for further acceleration, so that the communication cost is reduced. With these acceleration techniques, various canonical and bio-inspired flow problems with complex boundaries can be simulated accurately and efficiently. This work is supported by the MURI Grant Number N00014-14-1-0533 and NSF Grant CBET-1605434.
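
    A minimal sketch of the MPI+OpenMP hybrid layout described above, assuming one MPI process per region with OpenMP threads working inside it; the per-cell work and all names are placeholders, not the authors' solver:

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank;
            /* FUNNELED: only the master thread makes MPI calls, so the
             * expensive inter-process communication stays between regions. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double local = 0.0;
            #pragma omp parallel for reduction(+:local)
            for (int cell = 0; cell < 100000; cell++)
                local += 1e-5;                /* placeholder per-cell work */

            double global;
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);
            if (rank == 0) printf("global = %g\n", global);
            MPI_Finalize();
            return 0;
        }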

  1. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  2. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    Science.gov (United States)

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also demonstrate a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold across a wide variety of input data files. We have developed our own load balancing and cache optimization technique for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various sizes of input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
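
    The record's communication model is not reproduced here; the sketch below only shows how the fixed fork/join cost of an OpenMP loop can be measured as the thread count grows, using a deliberately tiny trip count so the overhead dominates the useful work (all names are illustrative):

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            double sink = 0.0;
            for (int t = 1; t <= omp_get_max_threads(); t *= 2) {
                omp_set_num_threads(t);
                double t0 = omp_get_wtime();
                for (int rep = 0; rep < 1000; rep++) {
                    /* 64 iterations of trivial work: almost all of the
                     * measured time is parallel-region overhead. */
                    #pragma omp parallel for reduction(+:sink)
                    for (int i = 0; i < 64; i++)
                        sink += i;
                }
                printf("%2d threads: %8.2f us/region\n",
                       t, (omp_get_wtime() - t0) * 1e3);
            }
            printf("(sink=%g)\n", sink);  /* keep the work from being elided */
            return 0;
        }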

  3. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes at the cost of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also demonstrate a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold across a wide variety of input data files. We have developed our own load balancing and cache optimization technique for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various sizes of input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868

  4. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  5. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  6. Characterization of Cloud Water-Content Distribution

    Science.gov (United States)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.

  7. Accurate particle speed prediction by improved particle speed measurement and 3-dimensional particle size and shape characterization technique

    DEFF Research Database (Denmark)

    Cernuschi, Federico; Rothleitner, Christian; Clausen, Sønnik

    2017-01-01

    Accurate particle mass and velocity measurement is needed for interpreting test results in erosion tests of materials and coatings. The impact and damage of a surface is influenced by the kinetic energy of a particle, i.e. particle mass and velocity. Particle mass is usually determined with optic...

  8. Implementing the PM Programming Language using MPI and OpenMP - a New Tool for Programming Geophysical Models on Parallel Systems

    Science.gov (United States)

    Bellerby, Tim

    2015-04-01

    PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks number of processors

  9. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values rather than wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
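
    As a minimal illustration of the extrapolation step, applied here to a central-difference derivative rather than the paper's finite-difference Schrödinger solver, combining results at step sizes h and h/2 cancels the leading O(h^2) error term:

        #include <math.h>
        #include <stdio.h>

        /* Second-order central difference for f'(x). */
        static double central_diff(double (*f)(double), double x, double h) {
            return (f(x + h) - f(x - h)) / (2.0 * h);
        }

        int main(void) {
            double x = 1.0, h = 0.1;
            double a1 = central_diff(cos, x, h);
            double a2 = central_diff(cos, x, h / 2.0);
            double r  = (4.0 * a2 - a1) / 3.0;   /* Richardson: O(h^4) */
            /* Exact derivative of cos is -sin, so each line prints an error. */
            printf("error at h   : % .2e\n", a1 + sin(x));
            printf("error at h/2 : % .2e\n", a2 + sin(x));
            printf("extrapolated : % .2e\n", r + sin(x));
            return 0;
        }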

  10. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn; Liu, Meilin; Bagci, Hakan

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing

  11. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information, travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
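
    A minimal sketch of the boundedly rational choice rule stated above; BR is the record's threshold, while the function and travel-time names are illustrative:

        #include <stdlib.h>

        /* Returns 0 for route A, 1 for route B. When the reported travel
         * times differ by less than the threshold br, the two routes are
         * chosen with equal probability; otherwise the faster one wins. */
        int choose_route(double time_a, double time_b, double br) {
            double diff = time_a - time_b;
            if (diff > -br && diff < br)
                return rand() % 2;
            return (time_a < time_b) ? 0 : 1;
        }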

  12. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time.

  13. K.I.S.S. Parallel Coding (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    K.I.S.S.ing parallel computing means, finally, loving it. Parallel computing will be approached in a theoretical and experimental way, using the most advanced and used C API: OpenMP. OpenMP is an open source project constantly developed and updated to hide the awful complexity of parallel coding in an awesome interface. The result is a tool which leaves plenty of space for clever solutions and terrific results in terms of efficiency and performance maximisation.

  14. CHARACTERIZING AND MODELING FERRITE-CORE PROBES

    International Nuclear Information System (INIS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.

    2010-01-01

    In this paper, we accurately and carefully characterize a ferrite-core probe that is widely used for aircraft inspections. The characterization starts with the development of a model that can be executed using the proprietary volume-integral code, VIC-3D(c), and then the model is fitted to measured multifrequency impedance data taken with the probe in freespace and over samples of a titanium alloy and aluminum. Excellent results are achieved, and will be discussed.

  15. Triple Pulse Tester - Efficient Power Loss Characterization of Power Modules

    DEFF Research Database (Denmark)

    Trintis, Ionut; Poulsen, Thomas; Beczkowski, Szymon

    2015-01-01

    In this paper the triple pulse testing method and circuit for power loss characterization of power modules is introduced. The proposed test platform is able to accurately characterize both the switching and conduction losses of power modules in a single automated process. A configuration of a half...

  16. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    Science.gov (United States)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  17. Automation Of An l-V Characterization System

    OpenAIRE

    Noriega, J. R.; Vera-Marquina, A.; Acosta Enríquez, C.

    2010-01-01

    In this paper, an accurate I-V virtual instrument (VI) that has been developed to characterize electronic devices for research and teaching purposes is demonstrated. The virtual instrument can be used to highlight principles of measurement, instrumentation, fundamental principles of electronics, VI programming, device testing and characterization in wafer or discrete device level. It consists of a Keithley electrometer, model 6514, a programmable power supply BK Precision, model 1770, a Keith...

  18. Characterization of a signal recording system for accurate velocity estimation using a VISAR

    Science.gov (United States)

    Rav, Amit; Joshi, K. D.; Singh, Kulbhushan; Kaushik, T. C.

    2018-02-01

    The linearity of a signal recording system (SRS) in time as well as in amplitude is important for the accurate estimation of the free surface velocity history of a moving target during shock loading and unloading when measured using optical interferometers such as a velocity interferometer system for any reflector (VISAR). Signal recording being the first step in a long sequence of signal processes, the incorporation of errors due to nonlinearity and low signal-to-noise ratio (SNR) affects the overall accuracy and precision of the estimation of the velocity history. In shock experiments, the small duration (a few µs) of loading/unloading, the reflectivity of the moving target surface, and the properties of optical components control the amount of light input to the SRS of a VISAR, and this in turn affects the linearity and SNR of the overall measurement. These factors make it essential to develop in situ procedures for (i) minimizing the effect of signal-induced noise and (ii) determining the linear region of operation for the SRS. Here we report on a procedure for the optimization of SRS parameters such as photodetector gain, optical power, and aperture, so as to achieve a linear region of operation with a high SNR. The linear region of operation so determined has been utilized successfully to estimate the temporal history of the free surface velocity of the moving target in shock experiments.

  19. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)

  20. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    Science.gov (United States)

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approach. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  1. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Science.gov (United States)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

    We consider a standing wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed Sine-Gordon equations. We make an OpenMP realization which exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show 2 times better performance on the KNL processor.
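
    A minimal sketch of the two levels of parallelism the record exploits, assuming a generic sine-Gordon-like stencil update (the authors' actual discretization is not shown): OpenMP threads split the grid rows, while each row is vectorized with an omp simd directive.

        #include <math.h>

        #define NX 1024
        #define NY 1024

        /* One explicit time step: threads across rows, SIMD along rows. */
        void step(double u[NX][NY], double unew[NX][NY], double dt2) {
            #pragma omp parallel for
            for (int i = 1; i < NX - 1; i++) {
                #pragma omp simd
                for (int j = 1; j < NY - 1; j++) {
                    double lap = u[i-1][j] + u[i+1][j] + u[i][j-1]
                               + u[i][j+1] - 4.0 * u[i][j];
                    unew[i][j] = u[i][j] + dt2 * (lap - sin(u[i][j]));
                }
            }
        }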

  2. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    OpenAIRE

    Hristov Ivan; Goranov Goran; Hristova Radoslava

    2018-01-01

    We consider a standing wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed Sine-Gordon equations. We make an OpenMP realization which exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named “Ivy Bridge-EP”) in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named “Knights Landing”, KNL). The results show 2 times better per...

  3. Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors

    KAUST Repository

    Choi, Hyun Ho; Rodionov, Yaroslav I.; Paterson, Alexandra F.; Panidi, Julianna; Saranin, Danila; Kharlamov, Nikolai; Didenko, Sergei I.; Anthopoulos, Thomas D.; Cho, Kilwon; Podzorov, Vitaly

    2018-01-01

    Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how the Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.

  4. Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors

    KAUST Repository

    Choi, Hyun Ho

    2018-04-30

    Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how the Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.

  5. Multilevel Parallelization of AutoDock 4.2

    Directory of Open Access Journals (Sweden)

    Norgan Andrew P

    2011-04-01

    Full Text Available Abstract Background Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Results Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Conclusions Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  6. Multilevel Parallelization of AutoDock 4.2.

    Science.gov (United States)

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  7. RA radiological characterization database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu)

    2005-01-01

    Radiological characterization of the RA research reactor is one of the main activities in the first two years of the reactor decommissioning project. The raw characterization data from direct measurements or laboratory analyses (defined within the existing sampling and measurement programme) have to be interpreted, organized and summarized in order to prepare the final characterization survey report. This report should be made so that the radiological condition of the entire site is completely and accurately shown, with the radiological condition of the components clearly depicted. This paper presents an electronic database application, designed as a serviceable and efficient tool for characterization data storage, review and analysis, as well as for report generation. A relational database model was designed, and the application was built using Microsoft Access 2002 (SP1), a 32-bit RDBMS for desktop and client/server database applications that run under Windows XP. (author)

  8. OSM-Classic : An optical imaging technique for accurately determining strain

    Science.gov (United States)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

    OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and hence the strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly, with active feedback provided during processing.

  9. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    Directory of Open Access Journals (Sweden)

    Hristov Ivan

    2018-01-01

    Full Text Available We consider a standing wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed Sine-Gordon equations. We make an OpenMP realization which exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named “Ivy Bridge-EP”) in the Hybrilit cluster, and the Xeon Phi 7250 processor (code-named “Knights Landing”, KNL). The results show 2 times better performance on the KNL processor.

  10. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.

  11. SAXS Combined with UV-vis Spectroscopy and QELS: Accurate Characterization of Silver Sols Synthesized in Polymer Matrices.

    Science.gov (United States)

    Bulavin, Leonid; Kutsevol, Nataliya; Chumachenko, Vasyl; Soloviov, Dmytro; Kuklin, Alexander; Marynin, Andrii

    2016-12-01

    The present work demonstrates a validation of small-angle X-ray scattering (SAXS) combined with ultraviolet and visible (UV-vis) spectroscopy and quasi-elastic light scattering (QELS) analysis for the characterization of silver sols synthesized in polymer matrices. The internal structure and chemical nature of the polymer matrix controlled the sol size characteristics. It was shown that for precise analysis of the nanoparticle size distribution these techniques should be used simultaneously. All applied methods were in good agreement for the characterization of the size distribution of small particles (less than 60 nm) in the sols. Some deviations of the theoretical curves from the experimental ones were observed. The most probable cause is that the nanoparticles were not entirely spherical in form.

  12. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...

  13. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer, using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains only the low-order form error, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  14. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Przybylski, D.; Shelyag, S.; Cally, P. S. [Monash Center for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2015-07-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.

  15. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    International Nuclear Information System (INIS)

    Przybylski, D.; Shelyag, S.; Cally, P. S.

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave

  16. Methods for characterizing convective cryoprobe heat transfer in ultrasound gel phantoms.

    Science.gov (United States)

    Etheridge, Michael L; Choi, Jeunghwan; Ramadhyani, Satish; Bischof, John C

    2013-02-01

    While cryosurgery has proven capable of treating a variety of conditions, it has met with some resistance among physicians, in part due to shortcomings in the ability to predict treatment outcomes. Here we attempt to address several key issues related to predictive modeling by demonstrating methods for accurately characterizing heat transfer from cryoprobes, reporting temperature-dependent thermal properties for ultrasound gel (a convenient tissue phantom) down to cryogenic temperatures, and demonstrating the ability of convective exchange heat transfer boundary conditions to accurately describe freezing in the case of single and multiple interacting cryoprobe(s). Temperature-dependent changes in the specific heat and thermal conductivity of ultrasound gel are reported down to -150 °C for the first time here, and these data were used to accurately describe freezing in ultrasound gel in subsequent modeling. Freezing around a single and two interacting cryoprobe(s) was characterized in the ultrasound gel phantom by mapping the temperature in and around the "iceball" with carefully placed thermocouple arrays. These experimental data were fit with finite-element modeling in COMSOL Multiphysics, which was used to investigate the sensitivity and effectiveness of convective boundary conditions in describing heat transfer from the cryoprobes. Heat transfer at the probe tip was described in terms of a convective coefficient and the cryogen temperature. While model accuracy depended strongly on spatial (i.e., along the exchange surface) variation in the convective coefficient, it was much less sensitive to spatial and transient variations in the cryogen temperature parameter. The optimized-fit convective exchange conditions for the single-probe case also provided close agreement with the experimental data for the case of two interacting cryoprobes, suggesting that this basic characterization and modeling approach can be extended to accurately describe more complicated
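
    For reference, the convective exchange boundary condition referred to above is presumably the usual Newton-cooling statement on the probe surface Γ (our notation; the abstract names only the convective coefficient and the cryogen temperature):

        -k \, \frac{\partial T}{\partial n} \Big|_{\Gamma} = h(\mathbf{x}) \left( T \big|_{\Gamma} - T_{\mathrm{cryogen}} \right)

    where k is the thermal conductivity of the frozen gel; the reported sensitivity study then concerns how h varies spatially along the exchange surface, while T_cryogen can be treated as nearly constant.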

  17. Integrated Characterization of DNAPL Source Zone Architecture in Clay Till and Limestone Bedrock

    DEFF Research Database (Denmark)

    Broholm, Mette Martina; Janniche, Gry Sander; Fjordbøge, Annika Sidelmann

    2014-01-01

    Background/Objectives. Characterization of dense non-aqueous phase liquid (DNAPL) source zone architecture is essential to develop accurate site-specific conceptual models, delineate and quantify contaminant mass, perform risk assessment, and select and design remediation alternatives. The activities reported here combine innovative investigation methods and characterize the source zone hydrogeology and contamination to obtain an improved conceptual understanding of DNAPL source zone architecture in clay till and bryozoan limestone bedrock. Approach/Activities. A wide range of innovative and current site investigative tools for direct and indirect documentation and/or evaluation of DNAPL presence were combined in a multiple-lines-of-evidence approach. Results/Lessons Learned. Though no single technique was sufficient for characterization of DNAPL source zone architecture, the combined use of membrane interphase probing (MIP...

  18. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
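
    For concreteness, a commonly used scalar form of the network Lasso objective is sketched below (our notation, not necessarily the paper's): given a graph G = (V, E) with edge weights W_ij and noisy samples y_i on a sampling set M ⊆ V,

        \hat{x} = \arg\min_{x \in \mathbb{R}^{|V|}} \sum_{i \in M} (x_i - y_i)^2 + \lambda \sum_{(i,j) \in E} W_{ij} \, |x_i - x_j|

    The second term is the weighted total variation used for regularization; the paper's conditions describe which combinations of network topology and sampling set M make the minimizer an accurate estimate of the entire signal.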

  19. Improvements of an objective model of compressed breasts undergoing mammography: Generation and characterization of breast shapes

    NARCIS (Netherlands)

    Rodriguez Ruiz, A.; Feng, S.S.J.; Zelst, J.C.M. van; Vreemann, S.; Mann, J.R.; D'Orsi, C.J.; Sechopoulos, I.

    2017-01-01

    PURPOSE: To develop a set of accurate 2D models of compressed breasts undergoing mammography or breast tomosynthesis, based on objective analysis, to accurately characterize mammograms with few linearly independent parameters, and to generate novel clinically realistic paired cranio-caudal (CC) and

  20. NAA in characterization of matrices at ACD, BARC

    International Nuclear Information System (INIS)

    Rajan A, Nicy; Swain, Kallola K.; Kayasth, Satish

    2006-01-01

    In materials science, characterization refers to those features of a material that relate to its composition and structure. In simple terms, characterization of a material means determining what atoms are present and where they are. In general, characterization provides complete and accurate information on the composition of the material (major, minor, and trace elements) for a specific application. Neutron activation analysis (NAA) is identified as one of the most powerful analytical techniques for performing both qualitative and quantitative multi-element analysis of major, minor, and trace elements in almost every conceivable field of scientific/technical interest, and it is extensively used for a wide variety of materials

  1. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  2. A Mobile Automated Characterization System (MACS) for indoor floor characterization

    International Nuclear Information System (INIS)

    Richardson, B.S.; Haley, D.C.; Dudar, A.M.; Ward, C.R.

    1995-01-01

    The Savannah River Technology Center (SRTC) and Oak Ridge National Laboratory are developing an advanced Mobile Automated Characterization System (MACS) to characterize contaminated indoor floors. MACS is based upon the Semi-Intelligent Mobile Observing Navigator (SIMON), an earlier floor characterization system developed at SRTC. MACS will feature enhanced navigation systems, an improved operator interface, and an interface to simplify the integration of additional sensors. The enhanced navigation system will provide the capability to survey large open areas much more accurately than is now possible with SIMON, which is better suited for hallways and corridors that provide the means for recalibrating position and heading. The MACS operator interface is designed to facilitate its use as a tool for health physicists, thus eliminating the need for additional training in the robot's control language. Initial implementation of MACS will use radiation detectors; additional sensors, such as the PCB sensors currently being developed, will be integrated on MACS in the future. Initial use of MACS will focus on obtaining comparative results with manual methods: surveys will be conducted both manually and with MACS to compare relative costs and data quality. While clear cost benefits are anticipated, data quality benefits should be even more significant

  3. Characterization of photo-transformation products of the antibiotic drug Ciprofloxacin with liquid chromatography-tandem mass spectrometry in combination with accurate mass determination using an LTQ-Orbitrap.

    Science.gov (United States)

    Haddad, Tarek; Kümmerer, Klaus

    2014-11-01

    The presence of pharmaceuticals, especially antibiotics, in the aquatic environment is of growing concern. Several studies have been carried out on the occurrence and environmental risk of these compounds. Ciprofloxacin (CIP), a broad-spectrum antimicrobial second-generation fluoroquinolone, is widely used in human and veterinary medicine. In this work, photo-degradation of CIP in aqueous solution using UV and xenon lamps was studied. The transformation products (TPs) created from CIP were initially analyzed by an ion trap in the MS, MS/MS and MS(3) modes. These data were used to clarify the structures of the degradation products. Furthermore, the proposed products were confirmed by accurate mass measurement and empirical formula calculation for the molecular ions of the TPs using an LTQ-Orbitrap XL mass spectrometer. The degree of mineralization, the abundance of detected TPs and the degradation pathways were determined. Eleven TPs were detected in the present study. TP1, which had never been detected before, was structurally characterized in this work. All TPs still retained the core quinolone structure, which is responsible for the biological activity. As mineralization of CIP and its transformation products did not occur, the formation of stable TPs can be expected in wastewater treatment and in surface water, with further follow-up problems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Micro-cantilevers for non-destructive characterization of nanograss uniformity

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Wang, Fei; Olesen, Mikkel Buster

    2011-01-01

    We demonstrate an application of three-way flexible micro four-point probes for indirect uniformity characterization of surface morphology. The mean sheet conductance of a quasi-planar 3D nanostructured surface is highly dependent on the surface morphology, and thus accurate sheet conductance measurements may be useful for process uniformity characterization. The method is applied for characterization of TiW-coated nanograss uniformity. Three-way flexible L-shaped cantilever electrodes are used to avoid damage to the fragile surface, and a relative standard deviation on measurement repeatability of 0.12% is obtained with a measurement yield of 97%. Finally, variations in measured sheet conductance are correlated to the surface morphology as characterized by electron microscopy.

  5. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  6. High strain-rate soft material characterization via inertial cavitation

    Science.gov (United States)

    Estrada, Jonathan B.; Barajas, Carlos; Henann, David L.; Johnsen, Eric; Franck, Christian

    2018-03-01

    Mechanical characterization of soft materials at high strain-rates is challenging due to their high compliance, slow wave speeds, and non-linear viscoelasticity. Yet, knowledge of their material behavior is paramount across a spectrum of biological and engineering applications, from minimizing tissue damage in ultrasound and laser surgeries to diagnosing and mitigating impact injuries. To address this significant experimental hurdle and the need to accurately measure the viscoelastic properties of soft materials at high strain-rates (10³–10⁸ s⁻¹), we present a minimally invasive, local 3D microrheology technique based on inertial microcavitation. By combining high-speed time-lapse imaging with an appropriate theoretical cavitation framework, we demonstrate that this technique has the capability to accurately determine the general viscoelastic material properties of soft matter as compliant as a few kilopascals. Similar to commercial characterization algorithms, we provide the user with significant flexibility in evaluating several constitutive laws to determine the most appropriate physical model for the material under investigation. Given its straightforward implementation into most current microscopy setups, we anticipate that this technique can be easily adopted by anyone interested in characterizing soft material properties at high loading rates, including hydrogels, tissues and various polymeric specimens.
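
    The abstract does not spell out the theoretical cavitation framework; a classical starting point for inertial-microcavitation rheometry of this kind is the Rayleigh-Plesset equation for the bubble radius R(t) (our illustration, not necessarily the authors' exact model):

        \rho \left( R \ddot{R} + \frac{3}{2} \dot{R}^2 \right) = p_B(t) - p_\infty - \frac{2\sigma}{R} - \frac{4\mu \dot{R}}{R}

    where R(t) is extracted from the high-speed imaging, ρ and σ are the density and surface tension of the surrounding medium, and a viscoelastic constitutive law for the soft material enters through additional stress terms that generalize the Newtonian viscosity μ.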

  7. Characterization of geometrical random uncertainty distribution for a group of patients in radiotherapy

    International Nuclear Information System (INIS)

    Munoz Montplet, C.; Jurado Bruggeman, D.

    2010-01-01

    Geometrical random uncertainty in radiotherapy is usually characterized by a single value for each group of patients. We propose a novel approach based on a statistically accurate characterization of the uncertainty distribution, thus reducing the risk of obtaining potentially unsafe results in CTV-PTV margins or in the selection of correction protocols.
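
    For context, group-level uncertainty characterizations of this kind typically feed margin recipes such as the widely used van Herk formula (quoted here as background; it is not claimed by the abstract):

        M_{\mathrm{CTV \to PTV}} = 2.5\,\Sigma + 0.7\,\sigma

    where Σ and σ are the standard deviations of the systematic and random geometrical errors over the patient group, which is why an accurate characterization of the random-error distribution matters for safe margins.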

  8. Atomic force microscopy characterization of cellulose nanocrystals

    Science.gov (United States)

    Roya R. Lahiji; Xin Xu; Ronald Reifenberger; Arvind Raman; Alan Rudie; Robert J. Moon

    2010-01-01

    Cellulose nanocrystals (CNCs) are gaining interest as a “green” nanomaterial with superior mechanical and chemical properties for high-performance nanocomposite materials; however, there is a lack of accurate material property characterization of individual CNCs. Here, a detailed study of the topography, elastic and adhesive properties of individual wood-derived CNCs...

  9. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurements of these spectra. This report aims to provide background on these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data

  10. Transport processes investigation: A necessary first step in site scale characterization plans

    International Nuclear Information System (INIS)

    Roepke, C.; Glass, R.J.; Brainard, J.; Mann, M.; Kriel, K.; Holt, R.; Schwing, J.

    1995-01-01

    We propose an approach, which we call the Transport Processes Investigation or TPI, to identify and verify site-scale transport processes and their controls. The TPI aids in the formulation of an accurate conceptual model of flow and transport, an essential first step in the development of a cost effective site characterization strategy. The TPI is demonstrated in the highly complex vadose zone of glacial tills that underlie the Fernald Environmental Remediation Project (FEMP) in Fernald, Ohio. As a result of the TPI, we identify and verify the pertinent flow processes and their controls, such as extensive macropore and fracture flow through layered clays, which must be included in an accurate conceptual model of site-scale contaminant transport. We are able to conclude that the classical modeling and sampling methods employed in some site characterization programs will be insufficient to characterize contaminant concentrations or distributions at contaminated or hazardous waste facilities sited in such media

  11. Importance of geologic characterization of potential low-level radioactive waste disposal sites

    Science.gov (United States)

    Weibel, C.P.; Berg, R.C.

    1991-01-01

    Using the example of the Geff Alternative Site in Wayne County, Illinois, for the disposal of low-level radioactive waste, this paper demonstrates, from a policy and public opinion perspective, the importance of accurately determining site stratigraphy. Complete and accurate characterization of geologic materials and determination of site stratigraphy at potential low-level waste disposal sites provides the framework for subsequent hydrologic and geochemical investigations. Proper geologic characterization is critical to determine the long-term site stability and the extent of interactions of groundwater between the site and its surroundings. Failure to adequately characterize site stratigraphy can lead to the incorrect evaluation of the geology of a site, which in turn may result in a lack of public confidence. A potential problem of lack of public confidence was alleviated as a result of the resolution and proper definition of the Geff Alternative Site stratigraphy. The integrity of the investigation was not questioned and public perception was not compromised. © 1991 Springer-Verlag New York Inc.

  12. Characterization of natural groundwater colloids at Palmottu

    International Nuclear Information System (INIS)

    Vuorinen, U.; Kumpulainen, H.

    1993-01-01

    Characterization of groundwater colloids (size range from 2 nm to 500 nm) in the Palmottu natural analogue (for radioactive waste disposal in Finland) area was continued by sampling another drill hole, 346, at three depths. Results evaluated so far indicate the presence of both organic and inorganic colloids. In terms of chemical composition and morphology, the inorganic colloids differ from those found in previous studies. According to SEM/EDS and STEM/EDS they mostly contain Ca and are spherical in shape. At this stage further characterization and evaluation of results is provisional and does not allow very accurate conclusions to be drawn

  13. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    Full Text Available The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  14. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    Science.gov (United States)

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  15. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Directory of Open Access Journals (Sweden)

    Zhiquan Gao

    2015-09-01

    Full Text Available Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally coherent, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, even during rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  16. Accurately bearing measurement in non-cooperative passive location system

    International Nuclear Information System (INIS)

    Liu Zhiqiang; Ma Hongguang; Yang Lifeng

    2007-01-01

    A system for non-cooperative passive location based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, beam width and multiple-constraint adaptive beamforming techniques are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)
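
    As background on why a long baseline is accurate but ambiguous, the standard two-element interferometer relation is (our worked form, not taken from the abstract): for sub-arrays separated by baseline d, signal wavelength λ and bearing θ, the measured phase difference is

        \Delta\phi = \frac{2\pi d}{\lambda} \sin\theta \pmod{2\pi}

    so bearing precision improves with d/λ, but once d > λ/2 the phase wraps and several bearings map to the same Δφ; a coarse beamforming estimate is what selects the correct wrap.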

  17. Accurate radiotherapy positioning system investigation based on video

    International Nuclear Information System (INIS)

    Tao Shengxiang; Wu Yican

    2006-01-01

    This paper introduces the latest research results on patient positioning methods for accurate radiotherapy from the Accurate Radiotherapy Treating System (ARTS) research team of the Institute of Plasma Physics, Chinese Academy of Sciences, such as the positioning system based on binocular vision, the position-measuring system based on contour matching, and the breathing-gate control system for positioning. Their basic principles, typical applications and prospects are briefly described. (authors)

  18. Method for accurate determination of dissociation constants of optical ratiometric systems: chemical probes, genetically encoded sensors, and interacting molecules.

    Science.gov (United States)

    Pomorski, Adam; Kochańczyk, Tomasz; Miłoch, Anna; Krężel, Artur

    2013-12-03

    Ratiometric chemical probes and genetically encoded sensors are of high interest for both analytical chemists and molecular biologists. Their high sensitivity toward the target ligand and the ability to obtain quantitative results without a known sensor concentration have made them a very useful tool in both in vitro and in vivo assays. Although ratiometric sensors are widely used in many applications, their successful and accurate usage depends on how they are characterized in terms of sensing target molecules. The most important feature of probes and sensors, besides their optical parameters, is the affinity constant toward the analyzed molecules. The literature shows that different analytical approaches are used to determine the stability constants, with the ratio approach being most popular. However, oversimplification and lack of attention to detail result in inaccurate determination of stability constants, which in turn affects the results obtained using these sensors. Here, we present a new method where the ratio signal is calibrated for borderline values of the intensities at both wavelengths, instead of borderline ratio values that generate errors in many studies. At the same time, the equation takes into account the cooperativity factor or fluorescence artifacts and therefore can be used to characterize systems with various stoichiometries and experimental conditions. Accurate determination of stability constants is demonstrated utilizing four known optical ratiometric probes and sensors, together with a discussion regarding other, currently used methods.
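
    As a point of reference for the ratio approach discussed above, the classical single-site calibration relation (Grynkiewicz-type, quoted as background rather than as the authors' new equation) reads

        [L]_{\mathrm{free}} = K_d \cdot \frac{R - R_{\min}}{R_{\max} - R} \cdot \frac{F_{\mathrm{free}}(\lambda_2)}{F_{\mathrm{sat}}(\lambda_2)}

    where R is the intensity ratio at the two wavelengths, R_min and R_max are its values for the fully free and fully saturated sensor, and the last factor is a single-wavelength intensity correction; the abstract's point is that calibrating against borderline single-wavelength intensities rather than borderline ratio values avoids a systematic error in the fitted K_d.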

  19. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
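
    To make the hybrid MPI/OpenMP model concrete, the sketch below shows a minimal weak-scaling kernel in the STREAM-triad style used for the memory-bandwidth analysis above (a generic illustration in C; the problem size, names and timing are ours, not taken from GTC or the paper):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 10000000L  /* per-rank problem size: weak scaling keeps this fixed */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, nranks;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

            double t0 = MPI_Wtime();

            /* OpenMP threads share the node's memory bus: this loop is where
               memory-bandwidth contention between threads shows up. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];   /* STREAM triad */

            /* A token MPI collective stands in for the communication term. */
            double local = a[N - 1], global;
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            double t1 = MPI_Wtime();

            if (rank == 0)
                printf("ranks=%d threads=%d sum=%g time=%.3f s\n",
                       nranks, omp_get_max_threads(), global, t1 - t0);

            free(a); free(b); free(c);
            MPI_Finalize();
            return 0;
        }

    Under weak scaling, N stays fixed per rank while ranks are added; the OpenMP loop stresses the shared memory bandwidth of each node (the contention term in the model), while the MPI call corresponds to the parameterized communication term.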

  20. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  1. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  2. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  3. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied based on the determination of carbon in gallium arsenide using the ¹²C(d,n)¹³N reaction, and the following results are obtained: (1) The average stopping power method with thick-target yield is useful as an accurate and practical standardization method. (2) The front surface of the sample has to be etched for an accurate estimate of the incident energy. (3) CPAA is utilized for calibration of light-element analysis by physical methods. (4) The calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) × 10¹⁵ cm⁻¹. (author)

  4. Accurate 3D Mapping Algorithm for Flexible Antennas

    Directory of Open Access Journals (Sweden)

    Saed Asaly

    2018-01-01

    Full Text Available This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna: even a submillimeter change in the antenna surface may lead to a considerable loss in the antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires only a small monocamera and known patterns on the antenna surface. The experimental results show that the presented mapping method can detect changes down to 0.1-millimeter accuracy while the camera is located 1 meter away from the dish, allowing RF antenna optimization for Ka- and Ku-band frequencies. Such an optimization process can improve the gain of flexible antennas and allow adaptive beam shaping. The presented method is currently being implemented on a nanosatellite which is scheduled to be launched at the end of 2018.

  5. Development of a method to accurately calculate the Dpb and quickly predict the strength of a chemical bond

    International Nuclear Information System (INIS)

    Du, Xia; Zhao, Dong-Xia; Yang, Zhong-Zhi

    2013-01-01

    Highlights: ► A method to characterize and measure bond strength from a new perspective is proposed. ► We calculate the Dpb of a series of various bonds to validate our approach. ► A good linear relationship between the Dpb and the bond length is shown for a series of various bonds. ► The prediction of C–H and N–H bond strengths in DNA base pairs is given as a practical application of our method. - Abstract: A new approach to characterize and measure bond strength has been developed. First, we propose a method to accurately calculate the potential acting on an electron in a molecule (PAEM) at the saddle point along a chemical bond in situ, denoted by Dpb. Then, a direct method to quickly evaluate bond strength is established. We choose some familiar molecules as models for benchmarking this method. As a practical application, the Dpb values of base pairs in DNA along the C–H and N–H bonds are obtained for the first time. All results show that C7–H of A–T and C8–H of G–C are the relatively weak bonds, i.e., the positions injured first in DNA damage. The significance of this work is twofold: (i) a method is developed to calculate the Dpb of sizable molecules in situ quickly and accurately; (ii) this work demonstrates the feasibility of quickly predicting bond strength in macromolecules

  6. Analytical method comparisons for the accurate determination of PCBs in sediments

    Energy Technology Data Exchange (ETDEWEB)

    Numata, M.; Yarita, T.; Aoyagi, Y.; Yamazaki, M.; Takatsu, A. [National Metrology Institute of Japan, Tsukuba (Japan)

    2004-09-15

    The National Metrology Institute of Japan in the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has been developing several matrix reference materials, for example sediments, water and biological tissues, for the determination of heavy metals and organometallic compounds. The matrix compositions of those certified reference materials (CRMs) are similar to the compositions of actual samples, and they are useful for validating analytical procedures. "Primary methods of measurement" are essential to obtain accurate and SI-traceable certified values for the reference materials, because these methods have the highest quality of measurement. However, inappropriate analytical operations, such as incomplete extraction of analytes or cross-contamination during analytical procedures, will cause errors in analytical results, even if one of the primary methods, isotope dilution, is utilized. To avoid possible procedural bias in the certification of reference materials, we employ more than two analytical methods, each optimized beforehand. Because the accurate determination of trace POPs in the environment is important to evaluate their risk, reliable CRMs are required by environmental chemists. Therefore, we have also been preparing matrix CRMs for the determination of POPs. To establish accurate analytical procedures for the certification of POPs, extraction is one of the critical steps, as described above. In general, conventional extraction techniques for the determination of POPs, such as Soxhlet extraction (SOX) and saponification (SAP), have been well characterized and introduced as official methods for environmental analysis. On the other hand, emerging techniques, such as microwave-assisted extraction (MAE), pressurized fluid extraction (PFE) and supercritical fluid extraction (SFE), give higher recovery yields of analytes with relatively short extraction times and small amounts of solvent, by reason of the high

  7. Low-level waste characterization plan for the WSCF Laboratory Complex

    International Nuclear Information System (INIS)

    Morrison, J.A.

    1994-01-01

    The Waste Characterization Plan for the Waste Sampling and Characterization Facility (WSCF) complex describes the organization and methodology for characterization of all waste streams that are transferred from the WSCF Laboratory Complex to the Hanford Site 200 Areas storage and disposal facilities. Waste generated at the WSCF complex typically originates from analytical or radiological procedures; process knowledge derived from these operations should be considered an accurate description of WSCF-generated waste. Sample contributions are accounted for in the laboratory waste designation process, and unused or excess samples are returned to the originator for disposal. The report describes procedures and processes common to all waste streams, the individual waste streams, and the radionuclide characterization methodology

  8. Accurate spectroscopic characterization of ethyl mercaptan and dimethyl sulfide isotopologues: a route toward their astrophysical detection

    Energy Technology Data Exchange (ETDEWEB)

    Puzzarini, C. [Dipartimento di Chimica "Giacomo Ciamician," Università di Bologna, Via F. Selmi 2, I-40126 Bologna (Italy); Senent, M. L. [Departamento de Química y Física Teóricas, Instituto de Estructura de la Materia, IEM-C.S.I.C., Serrano 121, Madrid E-28006 (Spain); Domínguez-Gómez, R. [Doctora Vinculada IEM-CSIC, Departamento de Ingeniería Civil, Cátedra de Química, E.U.I.T. Obras Públicas, Universidad Politécnica de Madrid (Spain); Carvajal, M. [Departamento de Física Aplicada, Facultad de Ciencias Experimentales, Unidad Asociada IEM-CSIC-U.Huelva, Universidad de Huelva, E-21071 Huelva (Spain); Hochlaf, M. [Université Paris-Est, Laboratoire de Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 boulevard Descartes, F-77454 Marne-la-Vallée (France); Al-Mogren, M. Mogren, E-mail: cristina.puzzarini@unibo.it, E-mail: senent@iem.cfmac.csic.es, E-mail: rosa.dominguez@upm.es, E-mail: miguel.carvajal@dfa.uhu.es, E-mail: majdi.hochlaf@u-pem.fr, E-mail: mmogren@ksu.edu.sa [Chemistry Department, Faculty of Science, King Saud University, PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2014-11-20

    Using state-of-the-art computational methodologies, we predict a set of reliable rotational and torsional parameters for ethyl mercaptan and dimethyl sulfide monosubstituted isotopologues. This includes rotational, quartic, and sextic centrifugal-distortion constants, torsional levels, and torsional splittings. The accuracy of the present data was assessed from a comparison to the available experimental data. Generally, our computed parameters should help in the characterization and the identification of these organo-sulfur molecules in laboratory settings and in the interstellar medium.

  9. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation (valence, social impact, rationality, and human mind) inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  10. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  11. Density and viscosity modeling and characterization of heavy oils

    DEFF Research Database (Denmark)

    Cisneros, Sergio; Andersen, Simon Ivar; Creek, J

    2005-01-01

    Essential to the presented extended approach for heavy oils is, first, achievement of accurate PνT results for the EOS-characterized fluid. In particular, it has been determined that, for accurate viscosity modeling of heavy oils, a compressibility correction in the way ... are widely used within the oil industry. Further work also established the basis for extending the approach to heavy oils. Thus, in this work, the extended f-theory approach is further discussed with the study and modeling of a wider set of representative heavy reservoir fluids with viscosities up to thousands of mPa·s.

  12. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes and travel times, and their respective likelihoods, both for routes traveled as well as for sub-routes thereof. InTraTime allows the user to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...

  13. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best-case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus- and family-level assignment. We used BLAST queries of each sequence against the entire library and retrieved the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and for families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with the number of species per genus and genera per family in the library; above five genera per family and fifteen species per genus, all higher-taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
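
    A minimal sketch of the thresholding logic described above, assuming pairwise-aligned sequences of equal length and the heuristic cutoffs quoted in the abstract (the toy sequences and function names are ours, purely illustrative):

        #include <stdio.h>
        #include <string.h>

        /* Percent identity between two aligned, equal-length sequences. */
        static double percent_identity(const char *a, const char *b)
        {
            size_t n = strlen(a), matches = 0;
            for (size_t i = 0; i < n; i++)
                if (a[i] == b[i])
                    matches++;
            return n ? 100.0 * (double)matches / (double)n : 0.0;
        }

        int main(void)
        {
            /* Hypothetical aligned CO1 fragments. */
            const char *query = "ACTGGTACTGGATTAGTAGG";
            const char *hit   = "ACTGGTACTGGCTTAGTAGG";

            double pident = percent_identity(query, hit);
            printf("PIdent = %.1f%%\n", pident);

            /* Heuristic thresholds from the study: >95 for genus, >=91 for family. */
            if (pident > 95.0)
                puts("assign query to the hit's genus");
            else if (pident >= 91.0)
                puts("assign query to the hit's family");
            else
                puts("no reliable higher-taxon assignment");
            return 0;
        }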

  14. X-ray wavefront characterization using a rotating shearing interferometer technique.

    Science.gov (United States)

    Wang, Hongchang; Sawhney, Kawal; Berujon, Sébastien; Ziegler, Eric; Rutishauser, Simon; David, Christian

    2011-08-15

    A fast and accurate method to characterize the X-ray wavefront by rotating one of the two gratings of an X-ray shearing interferometer is described and investigated step by step. Such a shearing interferometer consists of a phase grating mounted on a rotation stage, and an absorption grating used as a transmission mask. The mathematical relations for X-ray Moiré fringe analysis when using this device are derived and discussed in the context of the previous literature assumptions. X-ray beam wavefronts without and after X-ray reflective optical elements have been characterized at beamline B16 at Diamond Light Source (DLS) using the presented X-ray rotating shearing interferometer (RSI) technique. It has been demonstrated that this improved method allows accurate calculation of the wavefront radius of curvature and the wavefront distortion, even when one has no previous information on the grating projection pattern period, magnification ratio and the initial grating orientation. As the RSI technique does not require any a priori knowledge of the beam features, it is suitable for routine characterization of wavefronts of a wide range of radii of curvature. © 2011 Optical Society of America

  15. Characterization of photomultiplier tubes with a realistic model through GPU-boosted simulation

    Science.gov (United States)

    Anthony, M.; Aprile, E.; Grandi, L.; Lin, Q.; Saldanha, R.

    2018-02-01

    The accurate characterization of a photomultiplier tube (PMT) is crucial in a wide variety of applications. However, current methods do not give fully accurate representations of the response of a PMT, especially at very low light levels. In this work, we present a new and more realistic model of the response of a PMT, called the cascade model, and use it to characterize two different PMTs at various voltages and light levels. The cascade model is shown to outperform the more common Gaussian model in almost all circumstances and to agree well with a newly introduced model-independent approach. The technical and computational challenges of this model are also presented, along with the employed solution of developing a robust GPU-based analysis framework for this and other non-analytical models.
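
    For reference, the Gaussian model that the cascade model is compared against is commonly written as a Poisson-weighted sum of Gaussians over the photoelectron count n (our paraphrase of the standard form, not necessarily the paper's exact parameterization):

        S(q) = \sum_{n=0}^{\infty} \frac{\mu^n e^{-\mu}}{n!} \, \mathcal{N}\!\left(q;\; q_0 + n q_1,\; \sqrt{\sigma_0^2 + n \sigma_1^2}\right)

    where μ is the mean number of photoelectrons, q_1 and σ_1 are the single-photoelectron gain and width, and q_0, σ_0 describe the pedestal; at very low light levels the Gaussian approximation of the dynode amplification becomes questionable, which is the regime the cascade model targets.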

  16. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new effort has been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large-scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage
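
    To illustrate the directive-based style being evaluated, here is a minimal OpenMP loop in C (a generic sketch in the spirit of the NPB kernels, not code taken from NPB itself):

        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        static double x[N], y[N];

        int main(void)
        {
            double dot = 0.0;

            /* One directive parallelizes the loop; the sequential code is otherwise
               unchanged, which is the simplicity argument made for OpenMP above. */
            #pragma omp parallel for reduction(+:dot)
            for (int i = 0; i < N; i++) {
                x[i] = 1.0;              /* initialization folded in for brevity */
                y[i] = 2.0;
                dot += x[i] * y[i];
            }

            printf("dot = %f (max threads: %d)\n", dot, omp_get_max_threads());
            return 0;
        }

    Removing the pragma leaves a valid serial program, in contrast to an MPI version, which would require explicit data decomposition and communication calls.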

  17. Three dimensional characterization and archiving system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.; Gallman, P.

    1996-01-01

    The Three Dimensional Characterization and Archiving System (3D-ICAS) is being developed as a remote system to perform rapid in situ analysis of hazardous organics and radionuclide contamination on structural materials. Coleman Research and its subcontractors, Thermedics Detection, Inc. (TD) and the University of Idaho (UI), are in the second phase of a three-phase program to develop 3D-ICAS to support Decontamination and Decommissioning (D and D) operations. Accurate physical characterization of surfaces and of radioactive and organic contaminants is a critical D and D task. Surface characterization includes identification of potentially dangerous inorganic materials, such as asbestos and transite. Real-time, remotely operable characterization instrumentation will significantly advance analysis capabilities beyond those currently employed. Chemical analysis is a primary area where the characterization process will be improved. The 3D-ICAS system robotically conveys a multisensor probe near the surfaces to be inspected. The sensor position and orientation are monitored and controlled using coherent laser radar (CLR) tracking. The CLR also provides 3D facility maps which establish a 3D world view within which the robotic sensor system can operate

  18. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  19. An Accurate Mass Determination for Kepler-1655b, a Moderately Irradiated World with a Significant Volatile Envelope

    Science.gov (United States)

    Haywood, Raphaëlle D.; Vanderburg, Andrew; Mortier, Annelies; Giles, Helen A. C.; López-Morales, Mercedes; Lopez, Eric D.; Malavolta, Luca; Charbonneau, David; Collier Cameron, Andrew; Coughlin, Jeffrey L.; Dressing, Courtney D.; Nava, Chantanelle; Latham, David W.; Dumusque, Xavier; Lovis, Christophe; Molinari, Emilio; Pepe, Francesco; Sozzetti, Alessandro; Udry, Stéphane; Bouchy, François; Johnson, John A.; Mayor, Michel; Micela, Giusi; Phillips, David; Piotto, Giampaolo; Rice, Ken; Sasselov, Dimitar; Ségransan, Damien; Watson, Chris; Affer, Laura; Bonomo, Aldo S.; Buchhave, Lars A.; Ciardi, David R.; Fiorenzano, Aldo F.; Harutyunyan, Avet

    2018-05-01

    We present the confirmation of a small, moderately irradiated (F = 155 ± 7 F⊕) Neptune with a substantial gas envelope in a P = 11.8728787 ± 0.0000085 day orbit about a quiet, Sun-like G0V star Kepler-1655. Based on our analysis of the Kepler light curve, we determined Kepler-1655b's radius to be 2.213 ± 0.082 R⊕. We acquired 95 high-resolution spectra with Telescopio Nazionale Galileo/HARPS-N, enabling us to characterize the host star and determine an accurate mass for Kepler-1655b of 5.0 +3.1/−2.8 M⊕ via Gaussian-process regression. Our mass determination excludes an Earth-like composition with 98% confidence. Kepler-1655b falls on the upper edge of the evaporation valley, in the relatively sparsely occupied transition region between rocky and gas-rich planets. It is therefore part of a population of planets that we should actively seek to characterize further.

  20. Accurate formulas for the penalty caused by interferometric crosstalk

    DEFF Research Database (Denmark)

    Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle

    2000-01-01

    New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.

  1. Accurate Compton scattering measurements for N₂ molecules

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)

    2011-06-14

    The accurate Compton profiles of N₂ gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement shows better agreement with the CI (configuration interaction) calculation than with the Hartree-Fock calculation, and suggests the importance of multi-excitation in the CI calculations for the accuracy of ground-state wavefunctions.

  2. Characterizing dispersal patterns in a threatened seabird with limited genetic structure

    NARCIS (Netherlands)

    Hall, Laurie A.; Palsboll, Per J.; Beissinger, Steven R.; Harvey, James T.; Berube, Martine; Raphael, Martin G.; Nelson, S. Kim; Golightly, Richard T.; Mcfarlane-Tranquilla, Laura; Newman, Scott H.; Peery, M. Zachariah

    2009-01-01

    Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age

  3. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  4. Characterization of lipopeptides produced by Bacillus licheniformis using liquid chromatography with accurate tandem mass spectrometry.

    Science.gov (United States)

    Favaro, Gabriella; Bogialli, Sara; Di Gangi, Iole Maria; Nigris, Sebastiano; Baldan, Enrico; Squartini, Andrea; Pastore, Paolo; Baldan, Barbara

    2016-10-30

    The plant endophyte Bacillus licheniformis, isolated from leaves of Vitis vinifera, was studied to detect and characterize bioactive lipopeptides having amino acid structures. Crude extracts of liquid cultures were analyzed by ultra-high-performance liquid chromatography (UHPLC) coupled to a quadrupole time-of-flight (QTOF) mass analyzer. Chromatographic conditions were optimized in order to obtain an efficient separation of the different isobaric lipopeptides, avoiding merged fragmentations of co-eluted isomeric compounds and reducing possible cross-talk phenomena. The amino acid composition was outlined through interpretation of the fragmentation behavior in tandem high-resolution mass spectrometry (HRMS/MS) mode, which showed both common-class and peculiar fragment ions. Both [M + H](+) and [M + Na](+) precursor ions were fragmented in order to differentiate some isobaric amino acids, i.e. Leu/Ile. Neutral losses characteristic of the iso acyl chain were also evidenced. More than 90 compounds belonging to the classes of surfactins and lichenysins, known as biosurfactant molecules, were detected. Sequential LC/HRMS/MS analysis was used to identify linear and cyclic lipopeptides, and to single out the presence of a large number of isomers not previously reported. Some critical issues related to the simultaneous selection of different compounds by the quadrupole filter were highlighted and partially solved, leading to tentative assignments of several structures. Linear lichenysins are described here for the first time. The approach proved useful for the characterization of non-target lipopeptides, and proposes a rational MS experimental scheme aimed at investigating the differences in amino acid sequence and/or in the acyl chain of the various congeners when standards are not available. Results expanded the knowledge about production of linear and cyclic bioactive compounds from Bacillus licheniformis, clarifying the

  5. Comparing the efficiency of the OpenMP, nVidia CUDA and StarPU technologies using matrix multiplication as an example

    OpenAIRE

    Ханкин, Константин

    2013-01-01

    The paper describes the OpenMP, nVidia CUDA and StarPU technologies, presents solutions to the problem of multiplying two matrices using each of these technologies, and reports the results of comparing the implementations in terms of their resource requirements.
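
    As a hedged illustration of the shared-memory variant of the benchmark task compared above (the paper's actual implementations are not reproduced here), an OpenMP matrix multiplication needs only a single directive over the collapsed outer loops; the CUDA and StarPU versions would express the same kernel as a device kernel and as scheduled codelet tasks, respectively.

        /* Illustrative OpenMP matrix multiplication; not the paper's code.
           Row-major square matrices; the two outer loops are collapsed so
           their iterations are shared among threads. */
        void matmul(const double *A, const double *B, double *C, int n)
        {
            #pragma omp parallel for collapse(2)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    double sum = 0.0;  /* private per (i, j) iteration */
                    for (int k = 0; k < n; k++)
                        sum += A[i * n + k] * B[k * n + j];
                    C[i * n + j] = sum;
                }
        }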

  6. Remote Underwater Characterization System - Innovative Technology Summary Report

    International Nuclear Information System (INIS)

    Willis, W.D.

    1999-01-01

    Characterization and inspection of water-cooled and moderated nuclear reactors and fuel storage pools requires equipment capable of operating underwater. Similarly, the deactivation and decommissioning of older nuclear facilities often requires the facility owner to accurately characterize underwater structures and equipment which may have been sitting idle for years. The Remote Underwater Characterization System (RUCS) is a small, remotely operated submersible vehicle intended to serve multiple purposes in underwater nuclear operations. It is based on the commercially available Scallop vehicle, but has been modified by the Department of Energy's Robotics Technology Development Program to add auto-depth control, and vehicle orientation and depth monitoring at the operator control panel. The RUCS is designed to provide visual and gamma radiation characterization, even in confined or limited access areas. It was demonstrated in August 1998 at the Idaho National Engineering and Environmental Laboratory (INEEL) as part of the INEEL Large Scale Demonstration and Deployment Project. During the demonstration it was compared in a "head-to-head" fashion with the baseline characterization technology. This paper summarizes the results of the demonstration and lessons learned, comparing and contrasting both technologies in the areas of cost, visual characterization, radiological characterization, and overall operations.

  7. Accurate forced-choice recognition without awareness of memory retrieval

    OpenAIRE

    Voss, Joel L.; Baym, Carol L.; Paller, Ken A.

    2008-01-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit ...

  8. Three dimensional characterization and archiving system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastian, R.L.; Clark, R.; Gallman, P. [Coleman Research Corp., Springfield, VA (United States)] [and others]

    1995-10-01

    The Three Dimensional Characterization and Archiving System (3D-ICAS) is being developed as a remote system to perform rapid in situ analysis of hazardous organics and radionuclide contamination on structural materials. Coleman Research and its subcontractors, Thermedics Detection, Inc. (TD) and the University of Idaho (UI), are in the second phase of a three-phase program to develop 3D-ICAS to support Decontamination and Decommissioning (D&D) operations. Accurate physical characterization of surfaces and of the radioactive and organic contaminants is a critical D&D task. Surface characterization includes identification of potentially dangerous inorganic materials, such as asbestos and transite. The 3D-ICAS system robotically conveys a multisensor probe near the surface to be inspected. The sensor position and orientation are monitored and controlled by coherent laser radar (CLR) tracking. The ICAS fills the need for high-speed automated organic analysis by means of gas chromatography-mass spectrometry sensors, and also by radionuclide sensors which combine alpha, beta, and gamma counting.

  9. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  10. HIFU Transducer Characterization Using a Robust Needle Hydrophone

    Science.gov (United States)

    Howard, Samuel M.; Zanelli, Claudio I.

    2007-05-01

    A robust needle hydrophone has been developed for HIFU transducer characterization and reported on earlier. After a brief review of the hydrophone design and performance, we demonstrate its use to characterize a 1.5 MHz, 10 cm diameter, F-number 1.5 spherically focused source driven to exceed an intensity of 1400 W/cm² at its focus. Quantitative characterization of this source at high powers is assisted by deconvolving the hydrophone's calibrated frequency response in order to accurately reflect the contribution of harmonics generated by nonlinear propagation in the water testing environment. Results are compared to measurements with a membrane hydrophone at 0.3% duty cycle and to theoretical calculations, using measurements of the field at the source's radiating surface as input to a numerical solution of the KZK equation.

  11. Holographic characterization of colloidal particles in turbid media

    Science.gov (United States)

    Cheong, Fook Chiong; Kasimbeg, Priya; Ruffner, David B.; Hlaing, Ei Hnin; Blusewicz, Jaroslaw M.; Philips, Laura A.; Grier, David G.

    2017-10-01

    Holographic particle characterization uses in-line holographic microscopy and the Lorenz-Mie theory of light scattering to measure the diameter and the refractive index of individual colloidal particles in their native dispersions. This wealth of information has proved invaluable in fields as diverse as soft-matter physics, biopharmaceuticals, wastewater management, and food science but so far has been available only for dispersions in transparent media. Here, we demonstrate that holographic characterization can yield precise and accurate results even when the particles of interest are dispersed in turbid media. By elucidating how multiple light scattering contributes to image formation in holographic microscopy, we establish the range of conditions under which holographic characterization can reliably probe turbid samples. We validate the technique with measurements on model colloidal spheres dispersed in commercial nanoparticle slurries.

  12. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    Science.gov (United States)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.

  13. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    Science.gov (United States)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to poor scalability affected the take-up of this programming model. Significant progress has been made in hardware and software technologies; as a result, the performance of parallel programs using compiler directives has also improved. The introduction of an industry standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis carried out by the toolkit; an example of the kind of directive placement involved is sketched below. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.
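
    As a generic illustration of the directive placement such a toolkit automates (this is not output of the Greenwich toolkit), consider a loop that interprocedural analysis must classify as a reduction before a directive can safely be inserted:

        /* A loop type the analysis must recognize: the accumulation into
           'sum' is a reduction, so a plain 'parallel for' would introduce
           a data race without the reduction clause. Illustrative only. */
        double dot(const double *x, const double *y, int n)
        {
            double sum = 0.0;
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; i++)
                sum += x[i] * y[i];
            return sum;
        }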

  14. Smart and accurate state-of-charge indication in portable applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, P.P.L.

    2005-01-01

    Accurate state-of-charge (SoC) and remaining run-time indication for portable devices is important for user convenience and for prolonging battery lifetime. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The

  15. Smart and accurate State-of-Charge indication in Portable Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, Paulus P.L.

    2006-01-01

    Accurate state-of-charge (SoC) and remaining run-time indication for portable devices is important for user convenience and for prolonging battery lifetime. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The

  16. Three dimensional characterization and archiving system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.; Gallman, P.

    1995-01-01

    The Three Dimensional Characterization and Archiving System (3D-ICAS) is being developed as a remote system to perform rapid in situ analysis of hazardous organics and radionuclide contamination on structural materials. Coleman Research and its subcontractors, Thermedics Detection, Inc. (TD) and the University of Idaho (UI), are in the second phase of a three-phase program to develop 3D-ICAS to support Decontamination and Decommissioning (D&D) operations. Accurate physical characterization of surfaces and of the radioactive and organic contaminants is a critical D&D task. Surface characterization includes identification of potentially dangerous inorganic materials, such as asbestos and transite. Real-time remotely operable characterization instrumentation will significantly advance the analysis capabilities beyond those currently employed. Chemical analysis is a primary area where the characterization process will be improved. Chemical analysis plays a vital role throughout the process of decontamination. Before clean-up operations can begin, the site must be characterized with respect to the type and concentration of contaminants, and detailed site mapping must clarify areas of both high and low risk. During remediation activities, chemical analysis provides a means to measure progress and to adjust the clean-up strategy. Once the clean-up process has been completed, the results of chemical analysis will verify that the site is in compliance with federal and local regulations.

  17. MEMS-based platforms for mechanical manipulation and characterization of cells

    Science.gov (United States)

    Pan, Peng; Wang, Wenhui; Ru, Changhai; Sun, Yu; Liu, Xinyu

    2017-12-01

    Mechanical manipulation and characterization of single cells are important experimental techniques in biological and medical research. Because of the microscale sizes and highly fragile structures of cells, conventional cell manipulation and characterization techniques are not sufficiently accurate and/or efficient, or simply cannot meet the increasingly demanding needs of different types of cell-based studies. To this end, novel microelectromechanical systems (MEMS)-based technologies have been developed to improve the accuracy, efficiency, and consistency of various cell manipulation and characterization tasks, and to enable new types of cell research. This article summarizes existing MEMS-based platforms developed for cell mechanical manipulation and characterization, highlights the specific design considerations making them suitable for their designated tasks, and discusses their advantages and limitations. In closing, an outlook into future trends is also provided.

  18. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for the fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ′). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm therefore circumvents direct integration of the highly oscillatory spherical Bessel functions.
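
    For orientation, the simplest (ν = 0) case of the configuration-space projection takes the standard form used throughout the large-scale-structure literature (the paper's generalized ν convention is not reproduced here). In LaTeX:

        \xi_\ell(r) = \int_0^\infty \frac{k^2 \, \mathrm{d}k}{2\pi^2} \, P(k) \, j_\ell(kr)

    Here j_ℓ is a spherical Bessel function, and the harmonic-space quantity Cℓ(χ, χ′) carries two Bessel factors, j_ℓ(kχ) j_ℓ(kχ′). The rapid oscillation of these factors at large argument is what makes brute-force quadrature expensive and motivates the FFTLog-based approach.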

  19. A refined model for characterizing x-ray multilayers

    International Nuclear Information System (INIS)

    Oren, A.L.; Henke, B.L.

    1987-12-01

    The ability to characterize arbitrary multilayers quickly and accurately is very valuable: not only can we use the characterizations to predict the reflectivity of a multilayer at any soft x-ray wavelength, we can also generalize the results to apply to other multilayers of the same type. In addition, we can use the characterizations as a means of evaluating various sputtering environments and refining sputtering techniques to obtain better multilayers. In this report we have obtained improved characterizations for sample molybdenum-silicon and vanadium-silicon multilayers. However, we only examined five crystals overall, so the conclusions that we could draw about the structure of general multilayers are limited. Research involving many multilayers manufactured under the same sputtering conditions is clearly in order. To best understand multilayer structures it may be necessary to further refine our model, e.g., by adopting a Gaussian form for the interface regions. With such improvements we can expect even better agreement with experimental values and continued concurrence with other characterization techniques. 18 refs., 30 figs., 7 tabs

  20. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (10/20/04). It was conceived, designed, built, and deployed in just 120 days. Columbia is a 20-node supercomputer built on proven 512-processor nodes and is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors. It provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.
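
    Scaling results of the kind summarized above are typically gathered with a small timing harness; the following sketch (illustrative only, not NASA code) times a parallel loop with omp_get_wtime so that wall-clock speedup can be compared across OMP_NUM_THREADS settings.

        /* Minimal timing harness for OpenMP scaling runs; illustrative only.
           Run with OMP_NUM_THREADS=1,2,4,... and compare the wall times. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        int main(void)
        {
            const long n = 1L << 25;            /* ~33M doubles, ~268 MB */
            double *x = malloc(n * sizeof *x);
            if (!x) return 1;

            double t0 = omp_get_wtime();
            #pragma omp parallel for
            for (long i = 0; i < n; i++)
                x[i] = 0.5 * (double)i;
            double t1 = omp_get_wtime();

            printf("%d threads: %.3f s\n", omp_get_max_threads(), t1 - t0);
            free(x);
            return 0;
        }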

  1. Towards a Transferable UAV-Based Framework for River Hydromorphological Characterization.

    Science.gov (United States)

    Rivas Casado, Mónica; González, Rocío Ballesteros; Ortega, José Fernando; Leinster, Paul; Wright, Ros

    2017-09-26

    The multiple protocols that have been developed to characterize river hydromorphology, partly in response to legislative drivers such as the European Union Water Framework Directive (EU WFD), make the comparison of results obtained in different countries challenging. Recent studies have analyzed the comparability of existing methods, with remote sensing-based approaches being proposed as a potential means of harmonizing hydromorphological characterization protocols. However, the resolution achieved by remote sensing products may not be sufficient to assess some of the key hydromorphological features that are required to allow an accurate characterization. Methodologies based on high resolution aerial photography taken from Unmanned Aerial Vehicles (UAVs) have been proposed by several authors as potential approaches to overcome these limitations. Here, we explore the applicability of an existing UAV-based framework for hydromorphological characterization to three different fluvial settings representing some of the distinct ecoregions defined by the WFD geographical intercalibration groups (GIGs). The framework is based on the automated recognition of hydromorphological features via tested and validated Artificial Neural Networks (ANNs). Results show that the framework is transferable to the Central-Baltic and Mediterranean GIGs with accuracies in feature identification above 70%. Accuracies of 50% are achieved when the framework is implemented in the Very Large Rivers GIG. The framework successfully identified vegetation, deep water, shallow water, riffles, side bars and shadows for the majority of the reaches. However, further algorithm development is required to ensure that a wider range of features (e.g., chutes, structures and erosion) is accurately identified. This study also highlights the need to develop an objective and fit-for-purpose hydromorphological characterization framework to be adopted within all EU member states to facilitate comparison of results.

  2. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described.

  3. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is used to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models.

  4. ROLAIDS-CPM: A code for accurate resonance absorption calculations

    International Nuclear Information System (INIS)

    Kruijf, W.J.M. de.

    1993-08-01

    ROLAIDS is used to calculate group-averaged cross sections for specific zones in a one-dimensional geometry. This report describes ROLAIDS-CPM, an extended version of ROLAIDS. The main extension in ROLAIDS-CPM is the possibility of using the collision probability method for a slab or cylinder geometry instead of the less accurate interface-currents method. In this way accurate resonance absorption calculations can be performed with ROLAIDS-CPM. ROLAIDS-CPM has been developed at ECN. (orig.)

  5. Unexpected structural complexity of supernumerary marker chromosomes characterized by microarray comparative genomic hybridization

    Directory of Open Access Journals (Sweden)

    Hing Anne V

    2008-04-01

    Background: Supernumerary marker chromosomes (SMCs) are structurally abnormal extra chromosomes that cannot be unambiguously identified by conventional banding techniques. In the past, SMCs have been characterized using a variety of different molecular cytogenetic techniques. Although these techniques can sometimes identify the chromosome of origin of SMCs, they are cumbersome to perform and are not available in many clinical cytogenetic laboratories. Furthermore, they cannot precisely determine the region or breakpoints of the chromosome(s) involved. In this study, we describe four patients who possess one or more SMCs (a total of eight SMCs in all four patients) that were characterized by microarray comparative genomic hybridization (array CGH). Results: In at least one SMC from all four patients, array CGH uncovered unexpected complexity, in the form of complex rearrangements, that could have gone undetected using other molecular cytogenetic techniques. Although array CGH accurately defined the chromosome content of all but two minute SMCs, fluorescence in situ hybridization was necessary to determine the structure of the markers. Conclusion: The increasing use of array CGH in clinical cytogenetic laboratories will provide an efficient method for more comprehensive characterization of SMCs. Improved SMC characterization, facilitated by array CGH, will allow for more accurate SMC/phenotype correlation.

  6. An accurate metric for the spacetime around neutron stars

    OpenAIRE

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to inf...

  7. A grid for the accurate positioning of fuel batteries in a reactor core

    International Nuclear Information System (INIS)

    Berens, T.; Maansson, R.; Gunnarsson, C.

    1976-01-01

    A grid for the accurate positioning of the fuel batteries in a reactor core, said grid being constituted by a large number of so-called first and second metal rails of rectangular cross-section, resting on their upper edge, said first rails being in parallel relationship and at right angles to said second rails, welded couplings and slots being provided at the intersections of said rails, characterized by the relatively great height of said first rails and by the relatively small height of said second rails, and also by the construction of said slots in the high rails, said slots being in the form of elongated recesses, the height of which is smaller than the maximum height of the smaller rails, and one long side of which is provided with a few pins pointing towards the other long side and welded to the surface of a small-height rail located in said recess. (author)

  8. Reference and counter electrode positions affect electrochemical characterization of bioanodes in different bioelectrochemical systems

    KAUST Repository

    Zhang, Fang; Liu, Jia; Ivanov, Ivan; Hatzell, Marta C.; Yang, Wulin; Ahn, Yongtae; Logan, Bruce E.

    2014-01-01

    in biofilm characteristics because the CVs were electrochemically independent of conditions resulting from changing CE to RE distances. Placing the RE outside of the current path enabled accurate bioanode characterization using CVs and EIS due to negligible

  9. Roofline model toolkit: A practical tool for architectural and program analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Yu Jung [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Van Straalen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ligocki, Terry J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cordery, Matthew J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wright, Nicholas J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hall, Mary W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-18

    We present preliminary results of the Roofline Toolkit for multicore, manycore, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented microbenchmarks implemented with the Message Passing Interface (MPI) and with OpenMP to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory management mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
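
    Memory-hierarchy microbenchmarks of the kind described are commonly variations on a streaming kernel; the following OpenMP sketch (in the spirit of such a toolkit, not its actual source) estimates sustained bandwidth, the horizontal ceiling of a Roofline plot.

        /* STREAM-style triad: bytes moved per second approximate the
           sustained-bandwidth ceiling of a Roofline plot. Illustrative
           sketch, not the Roofline Toolkit source. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        int main(void)
        {
            const long n = 1L << 24;              /* ~16M elements per array */
            double *a = malloc(n * sizeof *a);
            double *b = malloc(n * sizeof *b);
            double *c = malloc(n * sizeof *c);
            if (!a || !b || !c) return 1;

            #pragma omp parallel for
            for (long i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

            double t0 = omp_get_wtime();
            #pragma omp parallel for
            for (long i = 0; i < n; i++)
                a[i] = b[i] + 3.0 * c[i];         /* 2 loads + 1 store */
            double t1 = omp_get_wtime();

            printf("triad: %.2f GB/s\n",
                   3.0 * (double)n * sizeof(double) / (t1 - t0) / 1e9);
            free(a); free(b); free(c);
            return 0;
        }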

  10. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    Science.gov (United States)

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately value assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.

  11. Accurate activity recognition in a home setting

    NARCIS (Netherlands)

    van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.

    2008-01-01

    A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its

  12. Improvement of defect characterization in ultrasonic testing by adaptive learning network

    International Nuclear Information System (INIS)

    Bieth, M.; Adamonis, D.C.; Jusino, A.

    1982-01-01

    Numerous methods now exist for signal analysis in ultrasonic testing. These methods give more or less accurate information for defect characterization. This paper presents the development of a particular system based on computer signal processing: the Adaptive Learning Network (ALN), which allows defects to be discriminated according to their nature. The ultrasonic signal is sampled and characterized by amplitude-time and amplitude-frequency parameters. The method was tested on stainless steel tube welds showing fatigue cracks. The ALN model developed allows, under certain conditions, the discrimination of cracks from other defects.

  13. Final characterization report for the 108-F Biological Laboratory

    International Nuclear Information System (INIS)

    Harris, R.A.

    1996-09-01

    This report provides a compilation of characterization data for the 108-F Biological Laboratory collected during the period from May 7, 1996 through August 29, 1996. The 108-F Biology Laboratory is located on the Hanford Site in Richland, Washington. The characterization activities were organized and implemented to evaluate the radiological status of the laboratory and to identify hazardous materials. This report reflects the current conditions and status of the laboratory. Information in this report is intended to be used to prepare an accurate cost estimate for building demolition, to aid in planning decontamination and demolition activities, and to allow proper disposal of demolition debris.

  14. Characterization of a system for measurements on soft ferrites

    International Nuclear Information System (INIS)

    Adamo, F; Attivissimo, F; Marracci, M; Tellini, B

    2012-01-01

    This paper deals with the characterization of a system for measurements on soft ferrites through a volt-amperometric method. The accurate control of the driving input field is discussed as a critical aspect for the definition of the correct operating conditions on the magnetic sample. A custom-built transimpedance amplifier is characterized in terms of total harmonic distortion and signal-to-noise and distortion ratio of the primary current and shown as a valid configuration for the required purposes. As a main contribution, the uncertainty analyses of the major loop measurement and of the magnetic accommodation measurement of minor asymmetric loops are provided.

  15. Micromechanical Characterization of Complex Polypropylene Morphologies by HarmoniX AFM

    Directory of Open Access Journals (Sweden)

    S. Liparoti

    2017-01-01

    This paper examines the capability of the HarmoniX Atomic Force Microscopy (AFM) technique to provide accurate and reliable micromechanical characterization of the complex polymer morphologies generally found in conventional thermoplastic polymers. To that purpose, injection molded polypropylene samples containing representative morphologies have been characterized by HarmoniX AFM. Maps and distributions of the mechanical properties of the sample surface are determined and analyzed. Effects of sample preparation and test conditions are also analyzed. Finally, the AFM determination of surface elastic moduli has been compared with that obtained by indentation tests, finding good agreement among the results.

  16. Minimally invasive three-dimensional site characterization system

    International Nuclear Information System (INIS)

    Steedman, D.; Seusy, F.E.; Gibbons, J.; Bratton, J.L.

    1993-09-01

    This paper presents an improved method for hazardous site characterization. The major components of the system are: (1) an enhanced cone penetrometer test, (2) surface geophysical surveys and (3) a field database and visualization code. The objective of the effort was to develop a method of combining geophysical data with cone penetrometer data in the field to produce a synergistic effect. Various aspects of the method were tested at three sites. The results from each site are discussed and the data compared. This method allows the data to be interpreted more fully and with greater certainty, is faster and cheaper, and leads to a more accurate site characterization. Utilizing the cone penetrometer test rather than standard drilling, sampling and laboratory testing reduces the workers' exposure to hazardous materials and minimizes the hazardous material disposal problems. The technologies employed in this effort are, for the most part, state-of-the-art procedures. The approach of using data from various measurement systems to develop a synergistic effect was a unique contribution to environmental site characterization. The use of the cone penetrometer for providing "ground truth" data and as a platform for subsurface sensors represents a significant advancement in environmental site characterization.

  17. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but have difficulty appreciating music due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show the unexpected result that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  18. Seeing and Being Seen: Predictors of Accurate Perceptions about Classmates’ Relationships

    Science.gov (United States)

    Neal, Jennifer Watling; Neal, Zachary P.; Cappella, Elise

    2015-01-01

    This study examines predictors of observer accuracy (i.e. seeing) and target accuracy (i.e. being seen) in perceptions of classmates’ relationships in a predominantly African American sample of 420 second through fourth graders (ages 7 – 11). Girls, children in higher grades, and children in smaller classrooms were more accurate observers. Targets (i.e. pairs of children) were more accurately observed when they occurred in smaller classrooms of higher grades and involved same-sex, high-popularity, and similar-popularity children. Moreover, relationships between pairs of girls were more accurately observed than relationships between pairs of boys. As a set, these findings suggest the importance of both observer and target characteristics for children’s accurate perceptions of classroom relationships. Moreover, the substantial variation in observer accuracy and target accuracy has methodological implications for both peer-reported assessments of classroom relationships and the use of stochastic actor-based models to understand peer selection and socialization processes. PMID:26347582

  19. Parallel protein secondary structure prediction based on neural networks.

    Science.gov (United States)

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. A hydrophobicity matrix, an orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) are tested separately as encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. A new binary classifier for Helix versus not-Helix (∼H) on the DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to the best existing prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Because training the neural networks is time-consuming, Pthreads and OpenMP are employed to parallelize the DBNN on a hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture, both superior to speedups reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that Intel's hyperthreading technology is efficient for parallel biological algorithms.
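
    The coarse-grained parallelization described, distributing independent inputs across OpenMP threads, follows a standard pattern; the sketch below is a hypothetical illustration (Window, forward_pass and NWINDOWS are placeholder names, not the authors' code).

        /* Coarse-grained data parallelism over independent amino-acid
           windows, in the spirit of the parallel DBNN training described
           above. All names here are illustrative placeholders. */
        #include <stdio.h>
        #include <omp.h>

        #define NWINDOWS 4096

        typedef struct { double feature; } Window;

        /* Placeholder for the per-window work (forward pass + error). */
        static double forward_pass(const Window *w) { return w->feature * 0.5; }

        int main(void)
        {
            static Window windows[NWINDOWS];
            static double errors[NWINDOWS];

            for (int i = 0; i < NWINDOWS; i++) windows[i].feature = i;

            /* Each window is independent, so iterations are simply shared
               among threads; no synchronization is needed in the loop. */
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < NWINDOWS; i++)
                errors[i] = forward_pass(&windows[i]);

            printf("processed %d windows with up to %d threads\n",
                   NWINDOWS, omp_get_max_threads());
            return 0;
        }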

  20. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    International Nuclear Information System (INIS)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan; Bordage, Cyril; Bosilca, George

    2017-01-01

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this article, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. Here, we describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency hiding capabilities; and (4) I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.
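
    The fine-grained concurrency such runtimes must schedule cheaply is conventionally stressed with recursive task benchmarks; the following plain OpenMP sketch (illustrative only; the Argobots C API itself is not reproduced here) creates the kind of short-lived tasks that a user-level-threading-backed OpenMP runtime would map onto lightweight threads.

        /* Recursive task parallelism of the kind a lightweight user-level
           threading runtime must schedule cheaply. Plain OpenMP tasks are
           used here for illustration. */
        #include <stdio.h>

        static long fib(long n)
        {
            if (n < 2) return n;
            long x, y;
            #pragma omp task shared(x)
            x = fib(n - 1);
            #pragma omp task shared(y)
            y = fib(n - 2);
            #pragma omp taskwait
            return x + y;
        }

        int main(void)
        {
            long r = 0;
            #pragma omp parallel
            #pragma omp single
            r = fib(30);
            printf("fib(30) = %ld\n", r);
            return 0;
        }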

  1. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  2. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  3. Multivariate correlation analysis technique based on euclidean distance map for network traffic characterization

    NARCIS (Netherlands)

    Tan, Zhiyuan; Jamdagni, Aruna; He, Xiangjian; Nanda, Priyadarsi; Liu, Ren Ping; Qing, Sihan; Susilo, Willy; Wang, Guilin; Liu, Dongmei

    2011-01-01

    The quality of feature has significant impact on the performance of detection techniques used for Denial-of-Service (DoS) attack. The features that fail to provide accurate characterization for network traffic records make the techniques suffer from low accuracy in detection. Although researches

  4. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of the MPP enables the PV system to deliver the maximum available power. ... adaptive artificial neural network: proposition for a new sizing procedure.

  5. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    Energy Technology Data Exchange (ETDEWEB)

    Lalonde, A; Bouchard, H [University of Montreal, Montreal, Qc (Canada)

    2016-06-15

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PCs) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PCs is enough to achieve submillimetric precision in proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PCs and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.

  6. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    International Nuclear Information System (INIS)

    Lalonde, A; Bouchard, H

    2016-01-01

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PCs) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PCs is enough to achieve submillimetric precision in proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PCs and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.

  7. Integrated Translatome and Proteome: Approach for Accurate Portraying of Widespread Multifunctional Aspects of Trichoderma

    Science.gov (United States)

    Sharma, Vivek; Salwan, Richa; Sharma, P. N.; Gulati, Arvind

    2017-01-01

    Genome-wide studies of transcript expression help in the systematic monitoring of genes and allow candidate genes to be targeted for future research. In contrast to relatively stable genomic data, the expression of genes is dynamic and regulated at multiple levels in both time and space. The variation in the rate of translation is specific for each protein. Both the inherent nature of an mRNA molecule to be translated and external environmental stimuli can affect the efficiency of the translation process. In biocontrol agents (BCAs), the molecular response at the translational level may represent both a noise-like response of the absolute transcript level and an adaptive response to physiological and pathological situations, reflecting the subset of the mRNA population actively translated in a cell. The molecular responses of biocontrol are complex and involve multistage regulation of a number of genes. The use of high-throughput techniques has led to a rapid increase in the volume of transcriptomics data on Trichoderma. In general, almost half of the variation between transcriptome and protein levels is due to translational control. Thus, studies are required that integrate raw information from different “omics” approaches for an accurate depiction of the translational response of BCAs in interaction with plants and plant pathogens. Studies of the translational status of active mRNAs, bridged with proteome data, will help to accurately characterize the subset of mRNAs actively engaged in translation. This review highlights the associated bottlenecks and the use of state-of-the-art procedures in addressing these gaps, to accelerate future elucidation of biocontrol mechanisms. PMID:28900417

  8. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...

  9. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys

  10. High accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed, which includes a GPS receiver, a 1 MC frequency source and a self-made clock system. The GPS second signal is used to synchronize the clock system, and information can be collected automatically by a computer. With this system, the difficulty of eliminating the timekeeper can be overcome.

  11. Meta-analytic approach to the accurate prediction of secreted virulence effectors in gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    Sato Yoshiharu

    2011-11-01

    Full Text Available Abstract Background Many pathogens use a type III secretion system to translocate virulence proteins (called effectors) in order to adapt to the host environment. To date, many prediction tools for effector identification have been developed. However, these tools are insufficiently accurate for producing a list of putative effectors that can be applied directly for labor-intensive experimental verification. This also suggests that important features of effectors have yet to be fully characterized. Results In this study, we have constructed an accurate approach to predicting secreted virulence effectors from Gram-negative bacteria. It consists of a support vector machine (SVM)-based discriminant analysis followed by simple criteria-based filtering. The accuracy was assessed by estimating the average number of true positives in the top-20 ranking in genome-wide screening. In the validation, 10 sets of 20 training and 20 testing examples were randomly selected from 40 known effectors of Salmonella enterica serovar Typhimurium LT2. On average, the SVM portion of our system predicted 9.7 true positives from 20 testing examples in the top-20 of the prediction. Removal of the N-terminal instability, codon adaptation index and ProtParam indices decreased the score to 7.6, 8.9 and 7.9, respectively. These discrimination features suggested that the following characteristics of effectors had been uncovered: an unstable N-terminus, non-optimal codon usage, and a hydrophilic, less aliphatic character. The secondary filtering process, represented by coexpression analysis and domain distribution analysis, further refined the average true-positive count to 12.3. We further confirmed that our system can correctly predict known effectors of P. syringae DC3000, strongly indicating its feasibility. Conclusions We have successfully developed an accurate prediction system for screening effectors on a genome-wide scale. We confirmed the accuracy of our system by external validation

  12. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2014-06-01

    Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources, such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of the betatron functions to maximize the brilliance of undulator radiation is discussed.

  13. Network Characterization Service (NCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Guojun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, George [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Crowley, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2001-06-06

    Distributed applications require information to effectively utilize the network. Some of the information they require is the current and maximum bandwidth, current and minimum latency, bottlenecks, burst frequency, and congestion extent. This type of information allows applications to determine parameters like optimal TCP buffer size. In this paper, we present a cooperative information-gathering tool called the network characterization service (NCS). NCS runs in user space and is used to acquire network information. Its protocol is designed for scalable and distributed deployment, similar to DNS. Its algorithms provide efficient, speedy and accurate detection of bottlenecks, especially dynamic bottlenecks. On current and future networks, dynamic bottlenecks do and will affect network performance dramatically.

  14. Dimensional characterization of extracellular vesicles using atomic force microscopy

    International Nuclear Information System (INIS)

    Sebaihi, N; De Boeck, B; Pétry, J; Yuana, Y; Nieuwland, R

    2017-01-01

    Extracellular vesicles (EV) are small biological entities released from cells into body fluids. EV are recognized as mediators in intercellular communication and influence important physiological processes. It has been shown that the concentration and composition of EV in body fluids may differ between healthy subjects and patients suffering from a particular disease. EV have therefore gained strong scientific and clinical interest as potential biomarkers for the diagnosis and prognosis of disease. Due to their small size, accurate detection and characterization of EV remain challenging. The aim of the presented work is to propose a characterization method for erythrocyte-derived EV using atomic force microscopy (AFM). The vesicles are immobilized on anti-CD235a-modified mica and analyzed by AFM under buffer liquid and dry conditions. EV detected under both conditions show very similar sizes, namely ∼30 nm high and ∼90 nm wide. The size of these vesicles remains stable over drying times as long as 7 d at room temperature. Since the detected vesicles are not spherical, EV are characterized by their height and diameter, and not only by the height as is usually done for spherical nanoparticles. In order to obtain an accurate measurement of EV diameters, the geometry of the AFM tip was evaluated to account for the lateral broadening artifact inherent to AFM measurements. To do so, spherical polystyrene (PS) nanobeads and EV were concomitantly deposited on the same mica substrate and simultaneously measured by AFM under dry conditions. By applying this procedure, direct calibration of the AFM tip could be performed together with EV characterization under identical experimental conditions, minimizing external sources of uncertainty on the shape and size of the tip and thus allowing standardization of EV measurement. (paper)

  15. Implementing Shared Memory Parallelism in MCBEND

    Directory of Open Access Journals (Sweden)

    Bird Adam

    2017-01-01

    Full Text Available MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheelers’s ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To more effectively utilise parallel hardware OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
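
    As a rough illustration of the pattern the abstract describes (one shared copy of the problem data, many threads running independent histories), the sketch below shows a generic OpenMP Monte Carlo loop with per-thread random streams and a reduced tally. It is not MCBEND code; the history routine and seeds are placeholders:

    // Generic shared-memory Monte Carlo pattern (illustrative, not MCBEND):
    // the geometry/model lives once in shared memory, each thread runs
    // independent histories with its own RNG, and tallies are reduced.
    #include <omp.h>
    #include <cstdio>
    #include <random>

    double simulateHistory(std::mt19937_64& rng) {
        // Placeholder for particle transport through the shared model.
        std::uniform_real_distribution<double> u(0.0, 1.0);
        return u(rng);  // hypothetical scored quantity
    }

    int main() {
        const long nHistories = 1000000;
        double tally = 0.0;
        #pragma omp parallel
        {
            // Seed each thread differently so histories are uncorrelated.
            std::mt19937_64 rng(12345u + omp_get_thread_num());
            #pragma omp for reduction(+ : tally)
            for (long i = 0; i < nHistories; ++i)
                tally += simulateHistory(rng);
        }
        std::printf("mean score = %f\n", tally / nHistories);
        return 0;
    }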

  16. Diagnostic Setup for Characterization of Near-Anode Processes in Hall Thrusters

    International Nuclear Information System (INIS)

    Dorf, L.; Raitses, Y.; Fisch, N.J.

    2003-01-01

    A diagnostic setup for characterization of near-anode processes in Hall-current plasma thrusters consisting of biased and emissive electrostatic probes, high-precision positioning system and low-noise electronic circuitry was developed and tested. Experimental results show that radial probe insertion does not cause perturbations to the discharge and therefore can be used for accurate near-anode measurements

  17. Design and development of a profilometer for the fast and accurate characterization of optical surfaces

    Science.gov (United States)

    Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.

    2015-09-01

    With the advent in recent years of techniques devised for the mass production of optical components made with surfaces of arbitrary form (also known as free-form surfaces), the parallel development of measuring systems adapted to this new kind of surface has become a real necessity for the industry. Profilometry is one of the preferred methods for assessing the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of movement, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus is formed by three linear motorized positioners (X, Y, Z) plus an additional angular positioner and a tilt positioner, employed to accurately locate the surface to be measured and the probe, which can be mechanical or optical, the optical probe being a confocal sensor based on chromatic aberration. Both optical and mechanical probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller have been designed and produced in order to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvature of around 10 D can be measured in less than 300 seconds (using three axes), keeping the resolution in height and curvature at the figures mentioned above.

  18. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere were given in full detail (infinitely accurate information about wind speed, etc.) and infinitely fast computers were available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and of the input data introduces uncertainties and errors into the results. Many different issues therefore have to be studied carefully in order to diminish these uncertainties and to develop an accurate transport model. Among these are the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and the parameterization of sub-grid scale phenomena (e.g. parameterizations of the 2nd and higher order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying the transport and dispersion of air pollution caused by a single but strong source is under development. Model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or by using a meteorological driver. Different parameterizations of the mixing height and of the vertical exchange are also compared. (author)

  19. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, accurate alignment is still provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  20. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available Abstract In this work, we present a highly-accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model, which is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamical simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  1. Pseudo-stochastic signal characterization in wavelet-domain

    International Nuclear Information System (INIS)

    Zaytsev, Kirill I; Zhirnov, Andrei A; Alekhnovich, Valentin I; Yurchenko, Stanislav O

    2015-01-01

    In this paper we present a method for the fast and accurate characterization of pseudo-stochastic signals, which contain a large number of similar but randomly-located fragments. This method allows the statistical characteristics of a pseudo-stochastic signal to be estimated, and it is based on digital signal processing in the wavelet domain. The continuous wavelet transform and a criterion for wavelet scale power density are utilized. We implement this method experimentally for the purpose of sand granulometry, estimating the statistical parameters of test sand fractions

  2. Accurate isotope ratio mass spectrometry. Some problems and possibilities

    International Nuclear Information System (INIS)

    Bievre, P. de

    1978-01-01

    The review includes reference to 190 papers, mainly published during the last 10 years. It covers the following: important factors in accurate isotope ratio measurements (precision and accuracy of isotope ratio measurements, exemplified by determinations of 235U/238U and of other elements including 239Pu/240Pu; isotope fractionation, exemplified by curves for Rb, U); applications (atomic weights); the Oklo natural nuclear reactor (discovered by UF6 mass spectrometry at Pierrelatte); nuclear and other constants; isotope ratio measurements in nuclear geology and isotope cosmology - accurate age determination; isotope ratio measurements on very small samples - archaeometry; isotope dilution; miscellaneous applications; and future prospects. (U.K.)

  3. Numerical modeling of exciton-polariton Bose-Einstein condensate in a microcavity

    Science.gov (United States)

    Voronych, Oksana; Buraczewski, Adam; Matuszewski, Michał; Stobińska, Magdalena

    2017-06-01

    within a semiconductor microcavity. It is described by a set of nonlinear differential equations similar in spirit to the Gross-Pitaevskii (GP) equation, but their unique properties do not allow standard GP solving frameworks to be utilized. Finding an accurate and efficient numerical algorithm, as well as developing optimized numerical software, is necessary for effective theoretical investigation of exciton-polaritons. Solution method: A Runge-Kutta method of 4th order was employed to solve the set of differential equations describing exciton-polariton superfluids. The method was adapted to the exciton-polariton equations and further optimized. The C++ programs utilize OpenMP extensions and vector operations in order to fully utilize the computer hardware. Running time: 6 h for 100 ps of evolution, depending on the values of the parameters
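
    A minimal sketch of the numerical pattern named above, assuming a single 1-D GP-type field with periodic boundaries; the real polariton model couples several fields and uses optimized vector code, so this shows only the 4th-order Runge-Kutta step with the right-hand side evaluated under OpenMP:

    // Hedged sketch: RK4 step for a 1-D GP-type field, with the RHS
    // evaluated in parallel via OpenMP. Illustrative model, not the
    // authors' coupled polariton equations.
    #include <complex>
    #include <vector>
    #include <omp.h>

    using Field = std::vector<std::complex<double>>;
    const std::complex<double> I(0.0, 1.0);

    // i d(psi)/dt = -0.5 d2(psi)/dx2 + g |psi|^2 psi   (illustrative RHS)
    void rhs(const Field& psi, Field& out, double dx, double g) {
        const std::size_t n = psi.size();
        #pragma omp parallel for
        for (std::size_t j = 0; j < n; ++j) {
            auto lap = (psi[(j + 1) % n] - 2.0 * psi[j] + psi[(j + n - 1) % n])
                       / (dx * dx);
            out[j] = -I * (-0.5 * lap + g * std::norm(psi[j]) * psi[j]);
        }
    }

    void rk4Step(Field& psi, double dt, double dx, double g) {
        const std::size_t n = psi.size();
        Field k1(n), k2(n), k3(n), k4(n), tmp(n);
        rhs(psi, k1, dx, g);
        for (std::size_t j = 0; j < n; ++j) tmp[j] = psi[j] + 0.5 * dt * k1[j];
        rhs(tmp, k2, dx, g);
        for (std::size_t j = 0; j < n; ++j) tmp[j] = psi[j] + 0.5 * dt * k2[j];
        rhs(tmp, k3, dx, g);
        for (std::size_t j = 0; j < n; ++j) tmp[j] = psi[j] + dt * k3[j];
        rhs(tmp, k4, dx, g);
        for (std::size_t j = 0; j < n; ++j)
            psi[j] += dt / 6.0 * (k1[j] + 2.0 * k2[j] + 2.0 * k3[j] + k4[j]);
    }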

  4. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared, to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
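
    For readers unfamiliar with the BoP quantity compared above, it is simply the pressure-weighted average of the sensor positions. A minimal sketch, with illustrative sensor coordinates and pressures rather than the W-inshoe® layout:

    // Barycenter of pressure (BoP) from discrete sensors: a
    // pressure-weighted average of sensor positions. Values illustrative.
    #include <cstdio>
    #include <vector>

    struct Sensor { double x, y, pressure; };

    void barycenterOfPressure(const std::vector<Sensor>& s,
                              double& bx, double& by, double& total) {
        bx = by = total = 0.0;
        for (const auto& p : s) {
            bx += p.pressure * p.x;
            by += p.pressure * p.y;
            total += p.pressure;
        }
        if (total > 0.0) { bx /= total; by /= total; }
    }

    int main() {
        std::vector<Sensor> foot = {{0.02, 0.05, 40.0}, {0.04, 0.12, 85.0},
                                    {0.03, 0.20, 60.0}};  // metres, kPa
        double bx, by, total;
        barycenterOfPressure(foot, bx, by, total);
        std::printf("BoP = (%.3f, %.3f) m\n", bx, by);
    }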

  5. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: Application to pure copper, platinum, tungsten, and nickel at very high temperatures

    International Nuclear Information System (INIS)

    Abadlia, L.; Mayoufi, M.; Gasser, F.; Khalouk, K.; Gasser, J. G.

    2014-01-01

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called the absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in the present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce the thermal conductivity with good accuracy using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new, very accurate results for pure copper, platinum, and nickel, especially at very high temperatures. But resistivity and absolute thermopower measurements can be more than an objective in themselves. Resistivity characterizes the bulk of a material, while absolute thermoelectric power characterizes the material at the point where electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power advantageously characterizes a change of phase, probably as well as DSC (if not better), since the change of phase can easily be followed for several hours/days at constant temperature

  6. New experimental methodology, setup and LabView program for accurate absolute thermoelectric power and electrical resistivity measurements between 25 and 1600 K: application to pure copper, platinum, tungsten, and nickel at very high temperatures.

    Science.gov (United States)

    Abadlia, L; Gasser, F; Khalouk, K; Mayoufi, M; Gasser, J G

    2014-09-01

    In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called the absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in the present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce the thermal conductivity with good accuracy using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new, very accurate results for pure copper, platinum, and nickel, especially at very high temperatures. But resistivity and absolute thermopower measurements can be more than an objective in themselves. Resistivity characterizes the bulk of a material, while absolute thermoelectric power characterizes the material at the point where electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power advantageously characterizes a change of phase, probably as well as DSC (if not better), since the change of phase can easily be followed for several hours/days at constant temperature.

  7. Characterization of materials for prosthetic implants using the BEAMnrc Monte Carlo code

    International Nuclear Information System (INIS)

    Spezi, E; Palleri, F; Angelini, A L; Ferri, A; Baruffaldi, F

    2007-01-01

    Metallic implants degrade image quality and severely perturb the patient dose distribution in external beam radiotherapy. Furthermore, conventional treatment planning systems (TPS) do not accurately account for tissue heterogeneities, especially at interfaces where high-Z gradients are present. This work deals with the accurate and systematic characterization of materials used for prosthetic implants. The dose calculation engine used in this investigation is the BEAMnrc Monte Carlo code. A detailed comparison against experimental data was carried out for two clinical photon beam energies (6 MV and 18 MV). Our results show that in both cases very good agreement (within ±2%) between calculations and experiments was achieved

  8. More accurate picture of human body organs

    International Nuclear Information System (INIS)

    Kolar, J.

    1985-01-01

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow a more accurate image of human body organs to be obtained. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been used clinically for only three years. It does not burden the organism with ionizing radiation. (Ha)

  9. Second-order accurate volume-of-fluid algorithms for tracking material interfaces

    International Nuclear Information System (INIS)

    Pilliod, James Edward; Puckett, Elbridge Gerry

    2004-01-01

    We introduce two new volume-of-fluid interface reconstruction algorithms and compare the accuracy of these algorithms to four other widely used volume-of-fluid interface reconstruction algorithms. We find that when the interface is smooth (e.g., continuous with two continuous derivatives) the new methods are second-order accurate and the other algorithms are first-order accurate. We propose a design criterion for a volume-of-fluid interface reconstruction algorithm to be second-order accurate: namely, that it reproduce lines in two space dimensions, or planes in three space dimensions, exactly. We also introduce a second-order, unsplit, volume-of-fluid advection algorithm that is based on a second-order finite difference method for scalar conservation laws due to Bell, Dawson and Shubin. We test this advection algorithm by modeling several different interface shapes propagating in two simple incompressible flows and compare the results with the standard second-order, operator-split advection algorithm. Although both methods are second-order accurate when the interface is smooth, we find that the unsplit algorithm exhibits noticeably better resolution in regions where the interface has discontinuous derivatives, such as at corners

  10. Scanning probe microscopy techniques for mechanical characterization at nanoscale

    International Nuclear Information System (INIS)

    Passeri, D.; Anastasiadis, P.; Tamburri, E.; Gugkielmotti, V.; Rossi, M.

    2013-01-01

    Three atomic force microscopy (AFM)-based techniques are reviewed that allow accurate measurements of the mechanical properties of either stiff or compliant materials at the nanometer scale. Atomic force acoustic microscopy, AFM-based depth-sensing indentation, and torsional harmonic AFM are briefly described. Examples and results of the quantitative characterization of stiff (an ultrathin SeSn film), soft polymeric (polyaniline fibers doped with detonation nanodiamond) and biological (collagen fibers) materials are reported.

  11. A systematic method for characterizing the time-range performance of ground penetrating radar

    International Nuclear Information System (INIS)

    Strange, A D

    2013-01-01

    The fundamental performance of ground penetrating radar (GPR) is linked to the ability to measure the signal time-of-flight in order to provide an accurate radar-to-target range estimate. Knowledge of the actual time range and timing nonlinearities of a trace is therefore important when seeking to make quantitative range estimates. However, very few practical methods for characterizing GPR time-range performance have been formally reported in the literature. This paper describes a method to accurately measure the true time range of a GPR, providing a quantitative assessment of the timing system performance and detecting and quantifying the effects of timing nonlinearity due to timing jitter. The effect of varying the number of samples per trace on the true time range has also been investigated, and recommendations on how to minimize the effects of timing errors are given. The approach has been applied in practice to characterize the timing performance of two commercial GPR systems. The importance of the method is that it provides the GPR community with a practical way to readily characterize the underlying accuracy of GPR systems. This in turn leads to enhanced target depth estimation and supports the accuracy of more sophisticated GPR signal processing methods. (paper)
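
    The paper's method is more rigorous than the following, but the basic idea of characterizing time-range performance can be sketched as fitting measured against known reference delays and reading scale error, offset, and nonlinearity off the fit; all values below are placeholders:

    // Sketch of a timing-linearity check: fit measured time against known
    // reference delays (e.g. calibrated cable lengths); the slope/offset
    // give the scale error, the residuals bound the nonlinearity.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> trueNs     = {10.0, 20.0, 30.0, 40.0, 50.0};
        std::vector<double> measuredNs = {10.1, 20.0, 30.3, 39.9, 50.2};
        // Least-squares line: measured = a + b * true.
        double n = trueNs.size(), sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < trueNs.size(); ++i) {
            sx += trueNs[i]; sy += measuredNs[i];
            sxx += trueNs[i] * trueNs[i]; sxy += trueNs[i] * measuredNs[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        double worst = 0.0;  // max residual = nonlinearity estimate
        for (std::size_t i = 0; i < trueNs.size(); ++i)
            worst = std::max(worst,
                             std::fabs(measuredNs[i] - (a + b * trueNs[i])));
        std::printf("scale = %.4f, offset = %.3f ns, nonlinearity <= %.3f ns\n",
                    b, a, worst);
    }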

  12. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
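
    The nested-parallelism arrangement described above can be sketched as follows; the integrand and quadrature rules are placeholders rather than the QUADPACK/PARINT routines, but the dynamic outer schedule and the nested inner team follow the abstract's description:

    // Nested OpenMP quadrature sketch: independent outer rule evaluations
    // are assigned dynamically, and each may spawn a nested inner team.
    #include <omp.h>
    #include <cmath>
    #include <cstdio>

    double innerIntegral(double x, int nPts) {
        double sum = 0.0;
        // Inner midpoint rule, parallelized as a nested team.
        #pragma omp parallel for reduction(+ : sum)
        for (int j = 0; j < nPts; ++j) {
            double y = (j + 0.5) / nPts;
            sum += std::exp(-x * y);  // illustrative 2-D integrand
        }
        return sum / nPts;
    }

    int main() {
        omp_set_max_active_levels(2);  // enable one level of nesting
        const int nOuter = 256, nInner = 4096;
        double result = 0.0;
        // Dynamic scheduling balances variation in outer-evaluation cost.
        #pragma omp parallel for schedule(dynamic) reduction(+ : result)
        for (int i = 0; i < nOuter; ++i) {
            double x = (i + 0.5) / nOuter;
            result += innerIntegral(x, nInner);
        }
        std::printf("integral ~ %f\n", result / nOuter);
    }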

  13. Leg mass characteristics of accurate and inaccurate kickers--an Australian football perspective.

    Science.gov (United States)

    Hart, Nicolas H; Nimphius, Sophia; Cochrane, Jodie L; Newton, Robert U

    2013-01-01

    Athletic profiling provides valuable information to sport scientists, assisting in the optimal design of strength and conditioning programmes. Understanding the influence these physical characteristics may have on the generation of kicking accuracy is advantageous. The aim of this study was to profile and compare the lower limb mass characteristics of accurate and inaccurate Australian footballers. Thirty-one players were recruited from the Western Australian Football League to perform ten drop punt kicks over 20 metres to a player target. Players were separated into accurate (n = 15) and inaccurate (n = 16) groups, with leg mass characteristics assessed using whole body dual energy x-ray absorptiometry (DXA) scans. Accurate kickers demonstrated significantly greater relative lean mass (P ≤ 0.004) and significantly lower relative fat mass (P ≤ 0.024) across all segments of the kicking and support limbs, while also exhibiting significantly higher intra-limb lean-to-fat mass ratios for all segments across both limbs (P ≤ 0.009). Inaccurate kickers also produced significantly larger asymmetries between limbs than accurate kickers (P ≤ 0.028), showing considerably lower lean mass in their support leg. These results illustrate a difference in leg mass characteristics between accurate and inaccurate kickers, highlighting the potential influence these may have on technical proficiency of the drop punt.

  14. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul [Heat Island Group, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m^-2, and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m^-2, and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer. (author)
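
    The stakes of the metric choice can be made concrete with a one-line energy balance: solar heat gain is (1 − R) times the incident irradiance, so any bias dR in the rated reflectance becomes a bias of dR × I in predicted heat gain. A toy calculation with illustrative reflectance values, not measured data:

    // Toy heat-gain comparison: gain = (1 - R) * irradiance, so a
    // reflectance bias maps directly to a heat-gain bias. Values invented.
    #include <cstdio>

    int main() {
        const double irradiance = 1000.0;  // W m^-2, typical clear-sky peak
        const double rBeamNormal = 0.35;   // illustrative beam-normal rating
        const double rAM1GH      = 0.27;   // illustrative AM1GH value
        double gainRated  = (1.0 - rBeamNormal) * irradiance;
        double gainActual = (1.0 - rAM1GH) * irradiance;
        std::printf("underestimate of peak heat gain: %.0f W m^-2\n",
                    gainActual - gainRated);
    }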

  15. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m^-2, and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m^-2, and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  16. Accurate joint space quantification in knee osteoarthritis: a digital x-ray tomosynthesis phantom study

    Science.gov (United States)

    Sewell, Tanzania S.; Piacsek, Kelly L.; Heckel, Beth A.; Sabol, John M.

    2011-03-01

    The current imaging standard for the diagnosis and monitoring of knee osteoarthritis (OA) is projection radiography. However, radiographs may be insensitive to markers of early disease such as osteophytes and joint space narrowing (JSN). Relative to standard radiography, digital X-ray tomosynthesis (DTS) may provide improved visualization of the markers of knee OA without the interference of superimposed anatomy. DTS utilizes a series of low-dose projection images over an arc of ±20 degrees to reconstruct tomographic images parallel to the detector. We propose that DTS can increase accuracy and precision in JSN quantification. The geometric accuracy of DTS was characterized by quantifying joint space width (JSW) as a function of knee flexion and position using physical and anthropomorphic phantoms. Using a commercially available digital X-ray system, projection and DTS images were acquired of a Lucite rod phantom with known gaps at various source-object distances and angles of flexion. Gap width, representative of JSW, was measured using a validated algorithm. Over an object-to-detector distance range of 5-21 cm, a 3.0 mm gap width was reproducibly measured in the DTS images, independent of magnification. A simulated 0.50 mm (±0.13 mm) JSN was quantified accurately (95% CI 0.44-0.56 mm) in the DTS images. With the rods angled to represent knee flexion, the minimum gap could be precisely determined from the DTS images and was independent of flexion angle. JSN quantification using DTS was insensitive to distance from the patient barrier and to flexion angle. Potential exists for the optimization of DTS for accurate radiographic quantification of knee OA independent of patient positioning.

  17. A quantum-classical simulation of a multi-surface multi-mode ...

    Indian Academy of Sciences (India)

    Multi-surface multi-mode quantum dynamics; parallelized quantum-classical approach; TDDVR method. ... cal simulation on a molecular system is a great challenge for ... on a multiple-core cluster with shared memory using OpenMP based ...

  18. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fix a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (convolutional neural networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  19. How Accurate are Government Forecasts of Economic Fundamentals?

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)

    2009-01-01

    A government's ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among others. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an

  20. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  1. Structuring a cost-effective site characterization

    International Nuclear Information System (INIS)

    Berven, B.A.; Little, C.A.; Swaja, R.E.

    1990-01-01

    Successful chemical and radiological site characterizations are complex activities that require meticulously detailed planning. Each layer of investigation is based upon previously generated information about the site. Baseline historical, physical, geological, and regulatory information is a prerequisite for preliminary studies at a site. Preliminary studies then provide samples and measurements that establish the identity of potential contaminants and define boundaries around the area to be investigated. The goal of a full site characterization is to accurately determine the extent and magnitude of contamination and to define the site conditions carefully enough that the future movement of site contaminants can be assessed for potential exposure of human occupants and/or environmental impacts. Critical to this process are the selection of appropriate measurement and sampling methodology, the selection and use of appropriate instrumentation, and the management and interpretation of site information. Site investigations require optimization between the need for information, to maximize the understanding of site conditions, and the cost of acquiring that information. 5 refs., 1 tab

  2. Current characterization methods for cellulose nanomaterials.

    Science.gov (United States)

    Foster, E Johan; Moon, Robert J; Agarwal, Umesh P; Bortner, Michael J; Bras, Julien; Camarero-Espinosa, Sandra; Chan, Kathleen J; Clift, Martin J D; Cranston, Emily D; Eichhorn, Stephen J; Fox, Douglas M; Hamad, Wadood Y; Heux, Laurent; Jean, Bruno; Korey, Matthew; Nieh, World; Ong, Kimberly J; Reid, Michael S; Renneckar, Scott; Roberts, Rose; Shatkin, Jo Anne; Simonsen, John; Stinson-Bagby, Kelly; Wanasekara, Nandula; Youngblood, Jeff

    2018-04-23

    A new family of materials composed of cellulose, cellulose nanomaterials (CNMs), having properties and functionalities distinct from molecular cellulose and wood pulp, is being developed for applications that were once thought impossible for cellulosic materials. Commercialization, paralleled by research in this field, is fueled by the unique combination of characteristics, such as high on-axis stiffness, sustainability, scalability, and the mechanical reinforcement of a wide variety of materials, leading to their utility across a broad spectrum of high-performance material applications. However, with this exponential growth in interest and activity, the development of the measurement protocols necessary for consistent, reliable and accurate materials characterization has been outpaced. These protocols, developed in the broader research community, are critical for advancing the understanding, process optimization, and utilization of CNMs in materials development. This review establishes detailed best practices, methods and techniques for characterizing CNM particle morphology, surface chemistry, surface charge, purity, crystallinity, rheological properties, mechanical properties, and toxicity for two distinct forms of CNMs: cellulose nanocrystals and cellulose nanofibrils.

  3. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    International Nuclear Information System (INIS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-01-01

    Here we present a characterization of the short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will occur. We provide a formal proof of this formula, and empirically show that its predictions are accurate. Previous work applies only to special cases of balanced families, and it has been observed that such characterizations fail for unbalanced families, yet unbalanced families are widespread in real biological networks.
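
    The quantity underlying stability formulas of this kind is the average sensitivity of the transfer functions: the expected number of inputs whose flip changes a function's output. In the standard annealed approximation, damage spreads when the mean sensitivity over the function distribution exceeds 1; the paper's formula generalizes criteria of this kind. A minimal sketch for a single function given as a truth table:

    // Average sensitivity of a Boolean function: mean (over uniform
    // inputs) count of single-bit flips that change the output.
    #include <cstdio>
    #include <vector>

    // truthTable has 2^k entries, one output bit per input assignment.
    double averageSensitivity(const std::vector<int>& truthTable, int k) {
        const int n = 1 << k;
        long flips = 0;
        for (int x = 0; x < n; ++x)
            for (int bit = 0; bit < k; ++bit)
                if (truthTable[x] != truthTable[x ^ (1 << bit)]) ++flips;
        return double(flips) / n;  // averaged over inputs, summed over bits
    }

    int main() {
        std::vector<int> xorGate = {0, 1, 1, 0};  // 2-input XOR
        std::vector<int> andGate = {0, 0, 0, 1};  // 2-input AND
        // XOR: every flip changes the output, sensitivity 2.00 (> 1).
        std::printf("XOR sensitivity = %.2f\n", averageSensitivity(xorGate, 2));
        // AND: sensitivity 1.00, at the order-chaos boundary.
        std::printf("AND sensitivity = %.2f\n", averageSensitivity(andGate, 2));
    }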

  4. Characterizing human activity induced impulse and slip-pulse excitations through structural vibration

    Science.gov (United States)

    Pan, Shijia; Mirshekari, Mostafa; Fagert, Jonathon; Ramirez, Ceferino Gabriel; Chung, Albert Jin; Hu, Chih Chi; Shen, John Paul; Zhang, Pei; Noh, Hae Young

    2018-02-01

    Many human activities induce excitations on ambient structures through various objects, causing the structures to vibrate. Accurate detection and characterization of the vibration excitation source enables the inference of human activity information, allowing human activity monitoring for various smart building applications. By utilizing structural vibrations, we can achieve sparse and non-intrusive sensing, unlike pressure- and vision-based methods. Many approaches to vibration-based source characterization have been presented, and they often either focus on one excitation type or have limited performance due to the dispersion and attenuation effects of the structures. In this paper, we present our method for characterizing the two main types of excitations induced by human activities (impulse and slip-pulse) on multiple structures. By understanding the physical properties of waves and their propagation, the system can achieve accurate excitation tracking on different structures without large-scale labeled training data. Specifically, our algorithm takes into account the properties of the surface waves generated by impulses and of the body waves generated by slip-pulses in order to handle dispersion and attenuation when different types of excitations occur on various structures. We then evaluate the algorithm in multiple scenarios. Our method achieves up to a sixfold improvement in impulse localization accuracy and a threefold improvement in slip-pulse trajectory length estimation compared to existing methods that do not take wave properties into account.

  5. Prevalence of accurate nursing documentation in patient records

    NARCIS (Netherlands)

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees

    2010-01-01

    Aim: This paper is a report of a study conducted to describe the accuracy of nursing documentation in patient records in hospitals. Background: Accurate nursing documentation enables nurses to systematically review the nursing process and to evaluate the quality of care. Assessing nurses' reports

  6. Dynamic weighing for accurate fertilizer application and monitoring

    NARCIS (Netherlands)

    Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.

    2001-01-01

    The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizer used. To obtain accurate fertilizer application, manual calibration of the actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on

  7. Laser guided automated calibrating system for accurate bracket ...

    African Journals Online (AJOL)

    It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...

  8. Neutron Environment Characterization of the Central Cavity in the Annular Core Research Reactor *

    Directory of Open Access Journals (Sweden)

    Parma Edward J.

    2016-01-01

    Full Text Available Characterization of the neutron environment in the central cavity of the Sandia National Laboratories' Annular Core Research Reactor (ACRR) is important in order to provide experimenters with the most accurate spectral information and to maintain a high degree of fidelity in performing reactor experiments. Characterization includes both modeling and experimental efforts. Building accurate neutronic models of the ACRR and the central cavity “bucket” environments that can be used by experimenters is important in planning and designing experiments, as well as in assessing experimental results and quantifying uncertainties. Neutron fluence characterizations of two bucket environments, LB44 and PLG, are presented. These two environments are used frequently and represent two extremes in the neutron spectrum. The LB44 bucket is designed to remove the thermal component of the neutron spectrum and significantly attenuate the gamma-ray fluence. The PLG bucket is designed to enhance the thermal component of the neutron spectrum and attenuate the gamma-ray fluence. The neutron characterization for each bucket was performed by irradiating 20 different activation foil types, some of which were cadmium covered, resulting in 37 different reactions at the peak axial flux location in each bucket. The dosimetry results were used in the LSL-M2 spectrum adjustment code, with a 640-energy-group MCNP-generated trial spectrum, self-shielding correction factors, the SNLRML or IRDFF dosimetry cross-section library, the trial spectrum uncertainty, and the trial covariance matrix, to generate a least-squares adjusted neutron spectrum, spectrum uncertainty, and covariance matrix. Both environment characterizations are well documented and the environments are available for use by experimenters.

  9. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    Directory of Open Access Journals (Sweden)

    Daniel Benjamin

    2017-06-01

    Full Text Available There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.
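
    One standard way to score such probabilistic forecasts, shown here as a worked example rather than as part of the authors' analysis, is the Brier score: the mean squared gap between the forecast probability and the 0/1 outcome. A uniform 75% forecast over five failed replications scores 0.75^2 = 0.5625 per study, versus 0 for a perfectly calibrated forecaster:

    // Brier score for binary-outcome forecasts: mean of (p - outcome)^2.
    #include <cstdio>
    #include <vector>

    double brier(const std::vector<double>& p, const std::vector<int>& outcome) {
        double s = 0.0;
        for (std::size_t i = 0; i < p.size(); ++i)
            s += (p[i] - outcome[i]) * (p[i] - outcome[i]);
        return s / p.size();
    }

    int main() {
        std::vector<double> forecasts(5, 0.75);  // average reported forecast
        std::vector<int> replicated(5, 0);       // none replicated
        std::printf("Brier score = %.4f\n", brier(forecasts, replicated));
    }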

  10. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates are generally larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, however, TST fails below 400 K if the mass of the transferred atom changes between the isotopic reactions

  11. A multiple regression analysis for accurate background subtraction in 99Tcm-DTPA renography

    International Nuclear Information System (INIS)

    Middleton, G.W.; Thomson, W.H.; Davies, I.H.; Morgan, A.

    1989-01-01

    A technique for accurate background subtraction in 99Tcm-DTPA renography is described. The technique is based on a multiple regression analysis of the renal curves and separate heart and soft tissue curves which together represent background activity. It is compared, in over 100 renograms, with a previously described linear regression technique. Results show that the method provides accurate background subtraction, even in very poorly functioning kidneys, thus enabling relative renal filtration and excretion to be accurately estimated. (author)
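
    A hedged sketch of the core computation such a technique implies (not the authors' implementation, and with the choice of fitting window left out): regress the renal curve on the heart and soft-tissue curves, then subtract the fitted combination as background:

    // Two-regressor ordinary least squares via the normal equations:
    // fit r(t) ~ a*h(t) + b*s(t) (no intercept, illustrative), then
    // a*h(t) + b*s(t) is the background to subtract from the renogram.
    #include <vector>

    void fitBackground(const std::vector<double>& r,
                       const std::vector<double>& h,
                       const std::vector<double>& s,
                       double& a, double& b) {
        double hh = 0, hs = 0, ss = 0, hr = 0, sr = 0;
        for (std::size_t t = 0; t < r.size(); ++t) {
            hh += h[t] * h[t]; hs += h[t] * s[t]; ss += s[t] * s[t];
            hr += h[t] * r[t]; sr += s[t] * r[t];
        }
        double det = hh * ss - hs * hs;  // assumes h and s not collinear
        a = (ss * hr - hs * sr) / det;
        b = (hh * sr - hs * hr) / det;
    }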

  12. Ocean outfall plume characterization using an Autonomous Underwater Vehicle.

    Science.gov (United States)

    Rogowski, Peter; Terrill, Eric; Otero, Mark; Hazard, Lisa; Middleton, William

    2013-01-01

    A monitoring mission to map and characterize the Point Loma Ocean Outfall (PLOO) wastewater plume using an Autonomous Underwater Vehicle (AUV) was performed on 3 March 2011. The mobility of an AUV provides a significant advantage in surveying discharge plumes over traditional cast-based methods and, when combined with optical and oceanographic sensors, provides a capability for both detecting plumes and assessing their mixing in the near and far fields. Unique to this study is the measurement of Colored Dissolved Organic Matter (CDOM) in the discharge plume and its application for quantitative estimates of the plume's dilution. AUV mission planning methodologies for discharge plume sampling, plume characterization using onboard optical sensors, and comparison of observational data to model results are presented. The results suggest that, even under variable oceanic conditions, properly planned missions for AUVs equipped with an optical CDOM sensor in addition to traditional oceanographic sensors can accurately characterize and track ocean outfall plumes at higher resolutions than cast-based techniques.

  13. Medipix2 as a tool for proton beam characterization

    Science.gov (United States)

    Bisogni, M. G.; Cirrone, G. A. P.; Cuttone, G.; Del Guerra, A.; Lojacono, P.; Piliero, M. A.; Romano, F.; Rosso, V.; Sipala, V.; Stefanini, A.

    2009-08-01

    Proton therapy is a technique used to deliver a highly accurate and effective dose for the treatment of a variety of tumor diseases. The possibility to have an instrument able to give online information could reduce the time necessary to characterize the proton beam. To this aim we propose a detection system for online proton beam characterization based on the Medipix2 chip. Medipix2 is a detection system based on a single event counter read-out chip, bump-bonded to a silicon pixel detector. The read-out chip is a matrix of 256×256 cells, 55×55 μm² each. To demonstrate the capabilities of Medipix2 as a proton detector, we have used a 62 MeV proton beam at the CATANA beam line of the LNS-INFN laboratory. The measurements performed confirmed the good imaging performance of the Medipix2 system also for the characterization of proton beams.

  14. Medipix2 as a tool for proton beam characterization

    Energy Technology Data Exchange (ETDEWEB)

    Bisogni, M.G. [Department of Physics, University of Pisa and INFN Sezione di Pisa, Pisa (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN Laboratori Nazionali del Sud, Catania (Italy); Del Guerra, A. [Department of Physics, University of Pisa and INFN Sezione di Pisa, Pisa (Italy); Lojacono, P. [INFN Laboratori Nazionali del Sud, Catania (Italy); Piliero, M.A. [Department of Physics, University of Pisa and INFN Sezione di Pisa, Pisa (Italy); Romano, F. [INFN Laboratori Nazionali del Sud, Catania (Italy); Rosso, V. [Department of Physics, University of Pisa and INFN Sezione di Pisa, Pisa (Italy)], E-mail: valeria.rosso@pi.infn.it; Sipala, V. [Department of Physics and Astronomy, University of Catania and INFN Sezione di Catania, Catania (Italy); Stefanini, A. [Department of Physics, University of Pisa and INFN Sezione di Pisa, Pisa (Italy)

    2009-08-01

    Proton therapy is a technique used to deliver a highly accurate and effective dose for the treatment of a variety of tumor diseases. The possibility to have an instrument able to give online information could reduce the time necessary to characterize the proton beam. To this aim we propose a detection system for online proton beam characterization based on the Medipix2 chip. Medipix2 is a detection system based on a single event counter read-out chip, bump-bonded to a silicon pixel detector. The read-out chip is a matrix of 256×256 cells, 55×55 μm² each. To demonstrate the capabilities of Medipix2 as a proton detector, we have used a 62 MeV proton beam at the CATANA beam line of the LNS-INFN laboratory. The measurements performed confirmed the good imaging performance of the Medipix2 system also for the characterization of proton beams.

  15. Medipix2 as a tool for proton beam characterization

    International Nuclear Information System (INIS)

    Bisogni, M.G.; Cirrone, G.A.P.; Cuttone, G.; Del Guerra, A.; Lojacono, P.; Piliero, M.A.; Romano, F.; Rosso, V.; Sipala, V.; Stefanini, A.

    2009-01-01

    Proton therapy is a technique used to deliver a highly accurate and effective dose for the treatment of a variety of tumor diseases. The possibility to have an instrument able to give online information could reduce the time necessary to characterize the proton beam. To this aim we propose a detection system for online proton beam characterization based on the Medipix2 chip. Medipix2 is a detection system based on a single event counter read-out chip, bump-bonded to a silicon pixel detector. The read-out chip is a matrix of 256×256 cells, 55×55 μm² each. To demonstrate the capabilities of Medipix2 as a proton detector, we have used a 62 MeV proton beam at the CATANA beam line of the LNS-INFN laboratory. The measurements performed confirmed the good imaging performance of the Medipix2 system also for the characterization of proton beams.

  16. Accurate automatic tuning circuit for bipolar integrated filters

    NARCIS (Netherlands)

    de Heij, Wim J.A.; Hoen, Klaas; Seevinck, Evert

    1990-01-01

    An accurate automatic tuning circuit for tuning the cutoff frequency and Q-factor of high-frequency bipolar filters is presented. The circuit is based on a voltage controlled quadrature oscillator (VCO). The frequency and the RMS (root mean square) amplitude of the oscillator output signal are

  17. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

    Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal...... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...

  18. Importance of molecular diagnosis in the accurate diagnosis of ...

    Indian Academy of Sciences (India)

    Department of Health and Environmental Sciences, Kyoto University Graduate School of Medicine, Yoshida Konoecho, ... of molecular diagnosis in the accurate diagnosis of systemic carnitine deficiency. .... 'affecting protein function' by SIFT.

  19. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    Science.gov (United States)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventional approaches based on linear basis functions. Importantly, QBFs reduce the error in computing the exchange field as the mesh density is increased, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  20. Foresight begins with FMEA. Delivering accurate risk assessments.

    Science.gov (United States)

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.
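
    FMEA implementations conventionally rank failure modes by a Risk Priority Number, RPN = severity × occurrence × detection, with each factor rated on a 1-10 scale. The sketch below illustrates that common scheme (the article's specific two- and three-stage format is not reproduced, and the example failure modes are hypothetical):

        def risk_priority(severity, occurrence, detection):
            # Classic FMEA Risk Priority Number; each factor is rated
            # from 1 (best case) to 10 (worst case).
            for rating in (severity, occurrence, detection):
                if not 1 <= rating <= 10:
                    raise ValueError("FMEA ratings must lie in 1..10")
            return severity * occurrence * detection

        failure_modes = {"seal leak": (8, 3, 4), "sensor drift": (5, 6, 7)}
        ranked = sorted(failure_modes,
                        key=lambda m: risk_priority(*failure_modes[m]),
                        reverse=True)  # highest-risk mode first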

  1. Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to perform automatic segmentation of the images in a deep learning way. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation, and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and 3D visualized. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for the personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.

  2. An accurate determination of the flux within a slab

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Lapenta, G.

    1993-01-01

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available.
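
    For orientation, the monoenergetic transport equation referred to above, written for a slab with isotropic scattering in optical-depth units, has the standard form

        \[
          \mu\,\frac{\partial \psi(x,\mu)}{\partial x} + \psi(x,\mu)
          \;=\; \frac{c}{2}\int_{-1}^{1} \psi(x,\mu')\,d\mu' + S(x,\mu)
        \]

    where \(\mu\) is the direction cosine, \(c\) the mean number of secondaries per collision, and \(S\) the source. The Green's function on which the formulation is based is the solution for a point-beam source, \(S(x,\mu) = \delta(x - x_0)\,\delta(\mu - \mu_0)\).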

  3. Accurate multiplicity scaling in isotopically conjugate reactions

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The generation of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
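
    The scaling described is of the KNO type: every multiplicity distribution is obtained by stretching one universal curve with an energy-dependent scale parameter λ(s) (generic form shown here; the paper's exact definition of the scale parameter is not reproduced):

        \[
          P_n(s) \;=\; \frac{1}{\lambda(s)}\,\Psi\!\left(\frac{n}{\lambda(s)}\right)
        \]

    In the original Koba-Nielsen-Olesen formulation the scale parameter is simply the mean multiplicity, \(\lambda(s) = \langle n \rangle(s)\).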

  4. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing transient electromagnetic wave interactions on two-dimensional waveguides. Numerical results demonstrate the proposed method's superiority over the TD-DG-FEM that employs approximate boundary conditions and perfectly matched layers. Additionally, it is shown that the proposed method can produce the solution with ten-eleven digit accuracy when high-order spatial basis functions are used to discretize the Maxwell equations as well as the EACs. © 1963-2012 IEEE.

  5. The Synthesis, Characterization and Catalytic Reaction Studies of Monodisperse Platinum Nanoparticles in Mesoporous Oxide Materials

    Energy Technology Data Exchange (ETDEWEB)

    Rioux, Robert M. [Univ. of California, Berkeley, CA (United States)

    2006-01-01

    A catalyst design program was implemented in which Pt nanoparticles, either of monodisperse size and/or shape, were synthesized, characterized and studied in a number of hydrocarbon conversion reactions. The novel preparation of these materials enables exquisite control over their physical and chemical properties, which could therefore be rationally tuned during synthesis. The ability to synthesize rather than merely prepare catalysts, followed by thorough characterization, enables accurate structure-function relationships to be elucidated. This thesis emphasizes all three aspects of catalyst design: synthesis, characterization and reactivity studies. The precise control of metal nanoparticle size, surface structure and composition may enable the development of highly active and selective heterogeneous catalysts.

  6. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-01-01

    inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid

  7. Remote Underwater Characterization System - Innovative Technology Summary Report

    International Nuclear Information System (INIS)

    Willis, Walter David

    1999-01-01

    Characterization and inspection of water-cooled and moderated nuclear reactors and fuel storage pools requires equipment capable of operating underwater. Similarly, the deactivation and decommissioning of older nuclear facilities often requires the facility owner to accurately characterize underwater structures and equipment which may have been sitting idle for years. The underwater characterization equipment is often required to operate at depths exceeding 20 ft (6.1 m) and in relatively confined or congested spaces. The typical baseline approach has been the use of radiation detectors and underwater cameras mounted on long poles, or stationary cameras with pan and tilt features mounted on the sides of the underwater facility. There is a perceived need for an inexpensive, more mobile method of performing close-up inspection and radiation measurements in confined spaces underwater. The Remote Underwater Characterization System (RUCS) is a small, remotely operated submersible vehicle intended to serve multiple purposes in underwater nuclear operations. It is based on the commercially available "Scallop" vehicle, but has been modified by the Department of Energy's Robotics Technology Development Program to add auto-depth control and monitoring of vehicle orientation and depth at the operator control panel. The RUCS is designed to provide visual and gamma radiation characterization, even in confined or limited-access areas. It was demonstrated in August 1998 at the Idaho National Engineering and Environmental Laboratory (INEEL) as part of the INEEL Large Scale Demonstration and Deployment Project. During the demonstration it was compared in a "head-to-head" fashion with the baseline characterization technology. This paper summarizes the results of the demonstration and lessons learned, comparing and contrasting both technologies in the areas of cost, visual characterization, radiological characterization, and overall operations.

  8. Can Older Adults Accurately Report their Use of Physical Rehabilitation Services?

    Science.gov (United States)

    Freedman, Vicki A; Kasper, Judith D; Jette, Alan

    2018-04-10

    To explore accuracy of rehabilitation service use reports by older adults and variation in accuracy by demographic characteristics, time since use, duration, and setting (inpatient, outpatient, home). We calculate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of survey-based measures from an observational panel study, the National Health and Aging Trends Study (NHATS), relative to measures developed from linked Medicare claims. Community-dwelling sample of Medicare fee-for-service beneficiaries in 2015 NHATS who were enrolled in Medicare Parts A and B for 12 months prior to their interview (N=4,228). Respondents were asked whether they received rehabilitation services in the last year and the duration and location of services. Healthcare Common Procedure Coding System codes and Revenue Center codes were used to identify Medicare-eligible rehabilitation service. Survey-based reports and Medicare claims yielded similar estimates of rehabilitation use over the last year. Self-reported measures had high sensitivity (77%) and PPV (80%) and even higher specificity and NPV (approaching 95%). However, in adjusted models sensitivity was lower for Black enrollees, the very old, and those with lower education levels. Survey-based measures of rehabilitation accurately captured use over the past year but differential reporting should be considered when characterizing rehabilitation use in certain subgroups of older Americans. Copyright © 2018. Published by Elsevier Inc.

  9. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm⁻³, corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm⁻³ (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
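
    The quoted figures follow the usual GUM convention: independent standard uncertainty components are combined in quadrature and the result is multiplied by a coverage factor k = 2. A minimal sketch (the component values are illustrative, not the paper's actual uncertainty budget):

        import math

        def expanded_uncertainty(components, k=2.0):
            # Combine independent standard uncertainties in quadrature
            # (root sum of squares), then apply the coverage factor k.
            u_c = math.sqrt(sum(u * u for u in components))
            return u_c, k * u_c

        # e.g. weighing, end-point detection, reagent-oxygen correction ...
        u_c, U = expanded_uncertainty([0.008, 0.006, 0.005])  # mg dm^-3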

  10. Full-waveform modeling of Zero-Offset Electromagnetic Induction for Accurate Characterization of Subsurface Electrical Properties

    Science.gov (United States)

    Moghadas, D.; André, F.; Vereecken, H.; Lambot, S.

    2009-04-01

    singularities. We tested the model in controlled laboratory conditions for EMI measurements at different heights above a copper sheet, playing the role of a perfect electrical conductor. Good agreement was obtained between the measurements and the model, especially for the resonance frequency of the loop antenna. The loop antenna height could be retrieved by inversion of the Green's function. For practical applications, the method is still limited by the low sensitivity of the antenna with respect to the dynamic range of the VNA. Once this will be resolved, we believe that the proposed method should be very flexible and promising for accurate, multi-frequency EMI data inversion.

  11. A focal-spot diagnostic for on-shot characterization of high-energy petawatt lasers.

    Science.gov (United States)

    Bromage, J; Bahk, S-W; Irwin, D; Kwiatkowski, J; Pruyne, A; Millecchia, M; Moore, M; Zuegel, J D

    2008-10-13

    An on-shot focal-spot diagnostic for characterizing high-energy, petawatt-class laser systems is presented. Accurate measurements at full energy are demonstrated using high-resolution wavefront sensing in combination with techniques to calibrate on-shot measurements with low-power sample beams. Results are shown for full-energy activation shots of the OMEGA EP Laser System.

  12. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Directory of Open Access Journals (Sweden)

    Chang Wang

    Full Text Available The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.

  13. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    Science.gov (United States)

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions of the total organic matter, water soluble organic nitrogen, pH, and germination index; less accurate results for the moisture, total nitrogen, and electrical conductivity; and the least accurate results for water soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.
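
    A minimal sketch of an NIR-PLS calibration of this kind (the papers do not name their software; scikit-learn is used here for illustration, and the arrays are synthetic stand-ins for real spectra and laboratory reference values):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # X: NIR spectra (samples x wavelengths); y: a lab-measured property
        # such as total organic matter. Synthetic data for illustration only.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(104, 700))
        y = rng.normal(size=104)

        pls = PLSRegression(n_components=8)   # component count chosen by CV
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))

    The cross-validated RMSE is the usual figure of merit for deciding which properties a calibration predicts accurately and which, like water soluble organic carbon above, it does not.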

  14. A comparison of atomic force microscopy (AFM) and dynamic light scattering (DLS) methods to characterize nanoparticle size distributions

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Starostin, Natasha; West, Paul; Mecartney, Martha L.

    2008-01-01

    This paper compares the accuracy of conventional dynamic light scattering (DLS) and atomic force microscopy (AFM) for characterizing size distributions of polystyrene nanoparticles in the size range of 20-100 nm. Average DLS values for monosize dispersed particles are slightly higher than the nominal values whereas AFM values were slightly lower than nominal values. Bimodal distributions were easily identified with AFM, but DLS results were skewed toward larger particles. AFM characterization of nanoparticles using automated analysis software provides an accurate and rapid analysis for nanoparticle characterization and has advantages over DLS for non-monodispersed solutions.
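
    The DLS sizes referred to above are hydrodynamic diameters recovered from the measured diffusion coefficient through the Stokes-Einstein relation; a short sketch (solvent values assume water at 25 °C):

        import math

        def hydrodynamic_diameter(D, T=298.15, eta=8.9e-4):
            # Stokes-Einstein: d_H = k_B * T / (3 * pi * eta * D)
            # D in m^2/s, eta in Pa*s (water at 25 C), result in metres.
            k_B = 1.380649e-23  # Boltzmann constant, J/K
            return k_B * T / (3 * math.pi * eta * D)

        # A particle with D = 9.8e-12 m^2/s comes out at about 50 nm,
        # within the 20-100 nm range studied above.

    Because scattered intensity rises steeply with particle size, larger particles dominate the DLS signal, which is consistent with the skew toward larger particles reported for the bimodal samples.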

  15. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for different groups participating in the study.

  16. Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers

    Science.gov (United States)

    Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.

    2013-09-01

    Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.

  17. 9 CFR 442.3 - Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Scale requirements for accurate... PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat and...

  18. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of the combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  19. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    based and size-based estimates is able to accurately plan, launch, and execute on schedule.

  20. Dispersive liquid-liquid microextraction and gas chromatography accurate mass spectrometry for extraction and non-targeted profiling of volatile and semi-volatile compounds in grape marc distillates.

    Science.gov (United States)

    Fontana, Ariel; Rodríguez, Isaac; Cela, Rafael

    2018-04-20

    The suitability of dispersive liquid-liquid microextraction (DLLME) and gas chromatography accurate mass spectrometry (GC-MS), based on a time-of-flight (TOF) MS analyzer and using electron ionization (EI), for the characterization of volatile and semi-volatile profiles of grape marc distillates (grappa) is evaluated. DLLME conditions are optimized with a selection of compounds, from different chemical families, present in the distillate spirit. Under final working conditions, 2.5 mL of sample and 0.5 mL of organic solvents are consumed in the sample preparation process. The absolute extraction efficiencies ranged from 30 to 100%, depending on the compound. For the same sample volume, DLLME provided higher responses than solid-phase microextraction (SPME) for most of the model compounds. The GC-EI-TOF-MS records of grappa samples were processed using a data mining non-targeted search algorithm. In this way, chromatographic peaks and accurate EI-MS spectra of sample components were linked. The identities of more than 140 of these components are proposed from comparison of their accurate spectra with those in a low resolution EI-MS database, accurate masses of most intense fragment ions of known structure, and available chromatographic retention indices. The use of chromatographic and spectral data, associated to the set of components mined from different grappa samples, for multivariate analysis purposes is also illustrated in the study. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Characterization of reactor neutron environments at Sandia National Laboratories

    International Nuclear Information System (INIS)

    Kelly, J.G.; Luera, T.F.; Griffin, P.J.; Vehar, D.W.

    1994-01-01

    To assure quality in the testing of electronic parts in neutron radiation environments, Sandia National Laboratories (SNL) has incorporated modern techniques and procedures, developed in the last two decades by the radiation effects community, into all of its experimental programs. Attention to the application of all of these methodologies, experiment designs, nuclear data, procedures and controls to the SNL radiation services has led to the much more accurate and reliable environment characterizations required to correlate the effects observed with the radiation delivered

  2. Radball Technology Testing For Hot Cell Characterization

    International Nuclear Information System (INIS)

    Farfan, E.; Jannik, T.

    2010-01-01

    Operations at various U.S. Department of Energy sites have resulted in substantial radiological contamination of tools, equipment, and facilities. It is essential to use remote technologies for characterization and decommissioning to keep worker exposures as low as reasonably achievable in these highly contaminated environments. A significant initial step in planning and implementing D&D of contaminated facilities involves the development of an accurate assessment of the radiological, chemical, and structural conditions inside the facilities. Collected information describing facility conditions using remote technologies could reduce the conservatism associated with planning initial worker entry (and associated cost).

  3. Automatic Fault Characterization via Abnormality-Enhanced Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    2010-12-20

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  4. Characterizing Topology of Probabilistic Biological Networks.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events, that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with large number of interactions prefer to interact with those with small number of interactions more frequently than expected.
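
    The degree of a node whose incident edges exist independently with given probabilities follows a Poisson-binomial distribution, which can be computed exactly by dynamic programming, consistent with the polynomial-time claim above (the paper's own algorithm is not reproduced here). A generic sketch:

        def degree_distribution(edge_probs):
            # P(degree = k) for a node whose m candidate edges are present
            # independently with probabilities edge_probs[i] (Poisson-binomial).
            # O(m^2) dynamic programming over the candidate edges.
            dist = [1.0]
            for p in edge_probs:
                nxt = [0.0] * (len(dist) + 1)
                for k, q in enumerate(dist):
                    nxt[k] += q * (1.0 - p)   # edge absent
                    nxt[k + 1] += q * p       # edge present
                dist = nxt
            return dist

        # degree_distribution([0.9, 0.5, 0.1])
        #   -> [0.045, 0.455, 0.455, 0.045]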

  5. Laser Guided Automated Calibrating System for Accurate Bracket ...

    African Journals Online (AJOL)

    Background: The basic premise of the preadjusted bracket system is accurate bracket positioning. ... using MATLAB ver. 7 software (The MathWorks Inc.). These images are in the form of matrices of size 640 × 480. A 650 nm (red light) type III diode laser is used as ... motion control and Pitch, Yaw, Roll degrees of freedom (DOF).

  6. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    Science.gov (United States)

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  7. Fluid characterization for miscible EOR projects and CO2 sequestration

    DEFF Research Database (Denmark)

    Jessen, Kristian; Stenby, Erling Halfdan

    2007-01-01

    Accurate performance prediction of miscible enhanced-oil-recovery (EOR) projects or CO2 sequestration in depleted oil and gas reservoirs relies in part on the ability of an equation-of-state (EOS) model to adequately represent the properties of a wide range of mixtures of the resident fluid...... in the data reduction and demonstrate that for some gas/oil systems, swelling tests do not contribute to a more accurate prediction of multicontact miscibility. Finally, we report on the impact that use of EOS models based on different characterization procedures can have on recovery predictions from dynamic...... and the injected fluid(s). The mixtures that form when gas displaces oil in a porous medium will, in many cases, differ significantly from compositions created in swelling tests and other standard pressure/volume/temperature (PVT) experiments. Multicontact experiments (e.g., slimtube displacements) are often used...

  8. Indexed variation graphs for efficient and accurate resistome profiling.

    Science.gov (United States)

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
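
    GROOT's indexed variation graphs are not reproduced here, but the similarity-search layer that this kind of classification builds on can be sketched: sequences are reduced to k-mer sets, each set to a MinHash signature, and signature agreement estimates Jaccard similarity, so reads can be assigned to candidate genes without full alignment first. A pure-Python illustration (parameter values are arbitrary; assumes sequences are at least k long):

        import hashlib

        def kmer_set(seq, k=7):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def minhash_signature(seq, k=7, num_perm=64):
            # One salted hash per "permutation"; keep the minimum per salt.
            return [min(int(hashlib.md5(f"{salt}:{km}".encode()).hexdigest(), 16)
                        for km in kmer_set(seq, k))
                    for salt in range(num_perm)]

        def jaccard_estimate(sig_a, sig_b):
            # Fraction of agreeing minima estimates k-mer Jaccard similarity.
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)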

  9. Individual Differences in Accurately Judging Personality From Text.

    Science.gov (United States)

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.

  10. Design of an expert system based on neuro-fuzzy inference analyzer for on-line microstructural characterization using magnetic NDT method

    International Nuclear Information System (INIS)

    Ghanei, S.; Vafaeenezhad, H.; Kashefi, M.; Eivani, A.R.; Mazinani, M.

    2015-01-01

    Tracing microstructural evolution has a significant importance and priority in manufacturing lines of dual-phase steels. In this paper, an artificial intelligence method is presented for on-line microstructural characterization of dual-phase steels. A new method for microstructure characterization, based on the theory of the magnetic Barkhausen noise nondestructive testing method, is introduced using an adaptive neuro-fuzzy inference system (ANFIS). In order to predict the accurate martensite volume fraction of dual-phase steels while eliminating the effect and interference of frequency on the magnetic Barkhausen noise outputs, the magnetic responses were fed into the ANFIS structure in terms of position, height and width of the Barkhausen profiles. The results showed that the ANFIS approach has the potential to detect and characterize microstructural evolution even when the considerable effect of the frequency on the magnetic outputs is overlooked. In fact, using multiple outputs simultaneously enables ANFIS to arrive at accurate results from only the height, position and width of the magnetic Barkhausen noise peaks, without knowing the value of the frequency used. - Highlights: • New NDT system for microstructural evaluation based on MBN using ANFIS modeling. • Sensitivity of magnetic Barkhausen noise to microstructure changes of the DP steels. • Accurate prediction of martensite by feeding multiple MBN outputs simultaneously. • Obtaining the modeled output without knowing the amount of the used frequency

  11. Design of an expert system based on neuro-fuzzy inference analyzer for on-line microstructural characterization using magnetic NDT method

    Energy Technology Data Exchange (ETDEWEB)

    Ghanei, S., E-mail: Sadegh.Ghanei@yahoo.com [Department of Materials Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Azadi Square, Mashhad (Iran, Islamic Republic of); Vafaeenezhad, H. [Centre of Excellence for High Strength Alloys Technology (CEHSAT), School of Metallurgical and Materials Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran (Iran, Islamic Republic of); Kashefi, M. [Department of Materials Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Azadi Square, Mashhad (Iran, Islamic Republic of); Eivani, A.R. [Centre of Excellence for High Strength Alloys Technology (CEHSAT), School of Metallurgical and Materials Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran (Iran, Islamic Republic of); Mazinani, M. [Department of Materials Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Azadi Square, Mashhad (Iran, Islamic Republic of)

    2015-04-01

    Tracing microstructural evolution has a significant importance and priority in manufacturing lines of dual-phase steels. In this paper, an artificial intelligence method is presented for on-line microstructural characterization of dual-phase steels. A new method for microstructure characterization, based on the theory of the magnetic Barkhausen noise nondestructive testing method, is introduced using an adaptive neuro-fuzzy inference system (ANFIS). In order to predict the accurate martensite volume fraction of dual-phase steels while eliminating the effect and interference of frequency on the magnetic Barkhausen noise outputs, the magnetic responses were fed into the ANFIS structure in terms of position, height and width of the Barkhausen profiles. The results showed that the ANFIS approach has the potential to detect and characterize microstructural evolution even when the considerable effect of the frequency on the magnetic outputs is overlooked. In fact, using multiple outputs simultaneously enables ANFIS to arrive at accurate results from only the height, position and width of the magnetic Barkhausen noise peaks, without knowing the value of the frequency used. - Highlights: • New NDT system for microstructural evaluation based on MBN using ANFIS modeling. • Sensitivity of magnetic Barkhausen noise to microstructure changes of the DP steels. • Accurate prediction of martensite by feeding multiple MBN outputs simultaneously. • Obtaining the modeled output without knowing the amount of the used frequency.

  12. In-vivo analysis of ankle joint movement for patient-specific kinematic characterization.

    Science.gov (United States)

    Ferraresi, Carlo; De Benedictis, Carlo; Franco, Walter; Maffiodo, Daniela; Leardini, Alberto

    2017-09-01

    In this article, a method for the experimental in-vivo characterization of ankle kinematics is proposed. The method is meant to improve the personalization of various ankle joint treatments, such as surgical decision-making or the design and application of an orthosis, and thereby possibly increase their effectiveness. This characterization would in fact make the treatments more compatible with the specific patient's physiological joint conditions. This article describes the experimental procedure and the analytical method adopted, based on the instantaneous and mean helical axis theories. The results obtained in this experimental analysis reveal that more accurate techniques are necessary for a robust in-vivo assessment of the tibio-talar axis of rotation.

  13. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb; Feki, Saber

    2014-01-01

    , with the introduction of high level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC accelerated applications usually

  14. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  15. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    Science.gov (United States)

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  16. Analyses of GPR signals for characterization of ground conditions in urban areas

    Science.gov (United States)

    Hong, Won-Taek; Kang, Seonghun; Lee, Sung Jin; Lee, Jong-Sub

    2018-05-01

    Ground penetrating radar (GPR) is applied for the characterization of ground conditions in urban areas. In addition, time domain reflectometry (TDR) and dynamic cone penetrometer (DCP) tests are conducted for the accurate analysis of the GPR images. The GPR images are acquired near a ground excavation site where a ground subsidence occurred and was repaired. Moreover, the relative permittivity and dynamic cone penetration index (DCPI) are profiled through the TDR and DCP tests, respectively. As the ground in urban areas is kept in a low-moisture condition, the relative permittivity, which is inversely related to the electromagnetic impedance, is mainly affected by the dry density and is inversely proportional to the DCPI value. Because the first strong signal in the GPR image is shifted 180° from the emitted signal, the polarity of the electromagnetic wave reflected at a dense layer, where the reflection coefficient is negative, is identical to that of the first strong signal. The temporal-scaled GPR images can be accurately converted into spatial-scaled GPR images using the relative permittivity determined by the TDR test, as sketched below. The distribution of the loose layer can be accurately estimated by using the spatial-scaled GPR images and the reflection characteristics of the electromagnetic wave. Note that the loose layer distribution estimated in this study matches well with the DCPI profile and is visually verified from the endoscopic images. This study demonstrates that the GPR survey, complemented by the TDR and DCP tests, may be an effective method for the characterization of ground conditions in an urban area.
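
    The time-to-depth conversion rests on the relation v = c/√εr: a reflection at two-way travel time t lies at depth v·t/2. A minimal sketch using a TDR-derived relative permittivity (illustrative values):

        C = 2.998e8  # speed of light in vacuum, m/s

        def reflector_depth(two_way_time_ns, rel_permittivity):
            # EM velocity in the ground: v = c / sqrt(eps_r).
            # Depth is v * t / 2 because the wave travels down and back.
            v = C / rel_permittivity ** 0.5
            return v * (two_way_time_ns * 1e-9) / 2.0

        # With eps_r = 9 from a TDR test, a reflection at 20 ns
        # maps to a depth of about 1.0 m.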

  17. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    The velocity of fluid flow in underground porous media is 6-12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in this kind of simulation, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accurate methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accurate finite difference method is confirmed to have a numerical error as low as 10⁻⁵% while the high-accurate numerical integration method has a numerical error around 0%. Thus, the approach combining the high-accurate finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of general full-tensor permeability such as maximum and minimum permeability components, principal direction and anisotropic ratio. Copyright © Global-Science Press 2016.
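
    The accuracy gain discussed above comes from the order of the difference stencils. The sketch below contrasts second- and fourth-order central differences for a first derivative (a generic illustration, not the paper's discretization of the flow equations):

        import math

        def d1_order2(f, x, h):
            # Central difference, truncation error O(h^2).
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def d1_order4(f, x, h):
            # Five-point central difference, truncation error O(h^4).
            return (-f(x + 2*h) + 8*f(x + h)
                    - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

        exact = math.cos(1.0)
        for h in (1e-2, 1e-3):
            print(h, abs(d1_order2(math.sin, 1.0, h) - exact),
                     abs(d1_order4(math.sin, 1.0, h) - exact))

    Halving h cuts the second-order error by roughly 4x and the fourth-order error by roughly 16x.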

  18. Design Concepts, Fabrication and Advanced Characterization Methods of Innovative Piezoelectric Sensors Based on ZnO Nanowires

    Directory of Open Access Journals (Sweden)

    Rodolfo Araneo

    2014-12-01

    Full Text Available Micro- and nano-scale materials and systems based on zinc oxide are expected to see explosive growth in their applications in electronics and photonics, including nano-arrays of addressable optoelectronic devices and sensors, due to their outstanding properties, including semiconductivity and the presence of a direct bandgap, piezoelectricity, pyroelectricity and biocompatibility. Most applications are based on the cooperative and average response of a large number of ZnO micro/nanostructures. However, in order to assess the quality of the materials and their performance, it is fundamental to characterize and then accurately model the specific electrical and piezoelectric properties of single ZnO structures. In this paper, we report on focused ion beam machined high aspect ratio nanowires and their mechanical and electrical (by means of conductive atomic force microscopy) characterization. Then, we investigate the suitability of new power-law design concepts to accurately model the relevant electrical and mechanical size effects, whose existence has been emphasized in recent reviews.

  19. Site characterization of the highest-priority geologic formations for CO2 storage in Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Surdam, Ronald C. [Univ. of Wyoming, Laramie, WY (United States); Bentley, Ramsey [Univ. of Wyoming, Laramie, WY (United States); Campbell-Stone, Erin [Univ. of Wyoming, Laramie, WY (United States); Dahl, Shanna [Univ. of Wyoming, Laramie, WY (United States); Deiss, Allory [Univ. of Wyoming, Laramie, WY (United States); Ganshin, Yuri [Univ. of Wyoming, Laramie, WY (United States); Jiao, Zunsheng [Univ. of Wyoming, Laramie, WY (United States); Kaszuba, John [Univ. of Wyoming, Laramie, WY (United States); Mallick, Subhashis [Univ. of Wyoming, Laramie, WY (United States); McLaughlin, Fred [Univ. of Wyoming, Laramie, WY (United States); Myers, James [Univ. of Wyoming, Laramie, WY (United States); Quillinan, Scott [Univ. of Wyoming, Laramie, WY (United States)

    2013-12-07

    This study, funded by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0002142 along with the state of Wyoming, uses outcrop and core observations, a diverse electric log suite, a VSP survey, in-bore testing (DST, injection tests, and fluid sampling), a variety of rock/fluid analyses, and a wide range of seismic attributes derived from a 3-D seismic survey to thoroughly characterize the highest-potential storage reservoirs and confining layers at the premier CO2 geological storage site in Wyoming. An accurate site characterization was essential to assessing the following critical aspects of the storage site: (1) more accurately estimating the CO2 reservoir storage capacity (Madison Limestone and Weber Sandstone at the Rock Springs Uplift (RSU)), (2) evaluating the distribution, long-term integrity, and permanence of the confining layers, (3) managing CO2 injection pressures by removing formation fluids (brine production/treatment), and (4) evaluating potential utilization of the stored CO2.

  20. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Science.gov (United States)

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  1. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Directory of Open Access Journals (Sweden)

    Markus Niklasson

    2015-01-01

    Full Text Available The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  2. Accurate determination of selected pesticides in soya beans by liquid chromatography coupled to isotope dilution mass spectrometry.

    Science.gov (United States)

    Huertas Pérez, J F; Sejerøe-Olsen, B; Fernández Alba, A R; Schimmel, H; Dabrio, M

    2015-05-01

    A sensitive, accurate and simple liquid chromatography coupled with mass spectrometry method for the determination of 10 selected pesticides in soya beans has been developed and validated. The method is intended for use during the characterization of selected pesticides in a reference material. In this process, high accuracy and appropriate uncertainty levels associated with the analytical measurements are of utmost importance. The analytical procedure is based on sample extraction by the use of a modified QuEChERS (quick, easy, cheap, effective, rugged, safe) extraction and subsequent clean-up of the extract with C18, PSA and Florisil. Analytes were separated on a C18 column using gradient elution with water-methanol/2.5 mM ammonium acetate mobile phase, and finally identified and quantified by triple quadrupole mass spectrometry in the multiple reaction monitoring mode (MRM). Reliable and accurate quantification of the analytes was achieved by means of stable isotope-labelled analogues employed as internal standards (IS) and calibration with pure substance solutions containing both the isotopically labelled and native compounds. Exceptions were made for thiodicarb and malaoxon, for which the isotopically labelled congeners were not commercially available at the time of analysis. For the quantification of those compounds, methomyl-(13)C2(15)N and malathion-D10 were used, respectively. The method was validated according to the general principles covered by DG SANCO guidelines. However, validation criteria were set more stringently. Mean recoveries were in the range of 86-103% with RSDs lower than 8.1%. Repeatability and intermediate precision were in the range of 3.9-7.6% and 1.9-8.7% respectively. LODs were theoretically estimated and experimentally confirmed to be in the range 0.001-0.005 mg kg(-1) in the matrix, while LOQs, established as the lowest spiked mass fraction level, were in the range 0.01-0.05 mg kg(-1). The method reliably identifies and quantifies the

  3. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparing the results with those of SPICE simulations.
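
    To illustrate why a Burr-type fit yields closed-form metrics, assume the normalized step response is modelled by the Burr XII CDF F(t) = 1 - (1 + t^c)^(-k); the parameters c and k below are fit constants not given in the record, and the code is a sketch rather than the authors' model:

        #include <math.h>

        /* Burr XII CDF used as a normalized step response. */
        double burr_cdf(double t, double c, double k)
        {
            return 1.0 - pow(1.0 + pow(t, c), -k);
        }

        /* 50% delay metric: invert the CDF at 0.5,
         * giving t50 = (2^(1/k) - 1)^(1/c). */
        double burr_delay50(double c, double k)
        {
            return pow(pow(2.0, 1.0 / k) - 1.0, 1.0 / c);
        }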

  4. Learning a Weighted Sequence Model of the Nucleosome Core and Linker Yields More Accurate Predictions in Saccharomyces cerevisiae and Homo sapiens

    Science.gov (United States)

    Reynolds, Sheila M.; Bilmes, Jeff A.; Noble, William Stafford

    2010-01-01

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence—301 base pairs, centered at the position to be scored—with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  5. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Directory of Open Access Journals (Sweden)

    Sheila M Reynolds

    2010-07-01

    Full Text Available DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence-301 base pairs, centered at the position to be scored-with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the

  6. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Science.gov (United States)

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence-301 base pairs, centered at the position to be scored-with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the

  7. How Accurately can we Calculate Thermal Systems?

    International Nuclear Information System (INIS)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-01-01

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this, I propose a number of extremely simple systems involving thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  8. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10^-9 m/s and a compacted illite clay having a permeability coefficient of 2.0×10^-11 m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.

  9. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  10. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de; Oliker, Leonid

    2014-10-10

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP so that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient for attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new hybrid MPI+OpenMP implementations attain up to 65x better performance for the triples part of CCSD(T), due in large part to the fact that the limited on-card memory restricts the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix construction when compared with the best MPI implementations running multiple processes per card.
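
    The record contains no source code, but the "straightforward application of OpenMP to the deep loop nests" of a tensor contraction can be sketched as follows; the array layout and kernel are invented for illustration and are not NWChem code:

        /* Illustrative contraction loop nest threaded with OpenMP: the two
         * outer loops are collapsed so every thread gets enough work. */
        void contract(int n, const double *a, const double *b, double *t)
        {
            #pragma omp parallel for collapse(2) schedule(static)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    double sum = 0.0;
                    for (int k = 0; k < n; k++)
                        sum += a[i*n + k] * b[k*n + j];
                    t[i*n + j] += sum;
                }
        }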

  11. Hierarchical resilience with lightweight threads

    International Nuclear Information System (INIS)

    Wheeler, Kyle Bruce

    2011-01-01

    This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes is toward self-contained functions that mimic functional programming. Software designers increasingly specify their core functions in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona StarSs system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g., the PGI OpenMP extensions).
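
    The input/output "barrier" idea maps naturally onto task dependences; a minimal OpenMP sketch (generic, not taken from SPR or any of the systems named above) in which each task declares exactly what it reads and writes:

        #include <stdio.h>

        int main(void)
        {
            double a = 1.0, b = 0.0, c = 0.0;
            #pragma omp parallel
            #pragma omp single
            {
                /* stage 1: reads a, writes b */
                #pragma omp task depend(in: a) depend(out: b)
                b = a * 2.0;
                /* stage 2: reads b, writes c; runs only after stage 1 */
                #pragma omp task depend(in: b) depend(out: c)
                c = b + 1.0;
            }
            printf("%f\n", c);   /* prints 3.000000 */
            return 0;
        }

    Because the inputs and outputs are explicit, a runtime could in principle checkpoint or re-execute either task in isolation, which is the property the proposed failure barriers exploit.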

  12. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  13. The diagnostic capability of laser induced fluorescence in the characterization of excised breast tissues

    Science.gov (United States)

    Galmed, A. H.; Elshemey, Wael M.

    2017-08-01

    Differentiating between normal, benign and malignant excised breast tissues is one of the major worldwide challenges that need a quantitative, fast and reliable technique in order to avoid personal errors in diagnosis. Laser induced fluorescence (LIF) is a promising technique that has been applied for the characterization of biological tissues including breast tissue. Unfortunately, only few studies have adopted a quantitative approach that can be directly applied for breast tissue characterization. This work provides a quantitative means for such characterization via introduction of several LIF characterization parameters and determining the diagnostic accuracy of each parameter in the differentiation between normal, benign and malignant excised breast tissues. Extensive analysis on 41 lyophilized breast samples using scatter diagrams, cut-off values, diagnostic indices and receiver operating characteristic (ROC) curves, shows that some spectral parameters (peak height and area under the peak) are superior for characterization of normal, benign and malignant breast tissues with high sensitivity (up to 0.91), specificity (up to 0.91) and accuracy ranking (highly accurate).

  14. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    …to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared… …of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important… …using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility…

  15. NNLOPS accurate predictions for $W^+W^-$ production arXiv

    CERN Document Server

    Re, Emanuele; Zanderighi, Giulia

    We present novel predictions for the production of $W^+W^-$ pairs in hadron collisions that are next-to-next-to-leading order accurate and consistently matched to a parton shower (NNLOPS). All diagrams that lead to the process $pp \to e^- \bar{\nu}_e \mu^+ \nu_\mu$ …

  16. Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging

    Science.gov (United States)

    Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing

    2015-01-01

    Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5-15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 ± 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic

  17. Actinide analytical program for characterization of Hanford waste

    International Nuclear Information System (INIS)

    Johnson, S.J.; Winters, W.I.

    1977-01-01

    The objective of this program has been to develop faster, more accurate methods for the concentration and determination of actinides at their maximum permissible concentration (MPC) levels in a controlled zone. These analyses are needed to characterize various forms of Hanford high-rad waste and to support characterization of products and effluents from new waste management processes. The most acceptable methods developed for the determination of 239Pu, 238Pu, 237Np, 241Am, and 243Cm employ solvent extraction with the addition of tracer isotopes. Plutonium and neptunium are extracted from acidified waste solutions into Aliquat-336. Americium and curium are then extracted from the waste solution at the same acidity into dihexyl-N,N-diethylcarbamylmethylenephosphonate (DHDECMP). After back-extraction into an aqueous matrix, these actinides are electrodeposited on steel disks for alpha energy analysis. Total uranium and total thorium are also isolated by solvent extraction and determined spectrophotometrically.

  18. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  19. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...

  20. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.

  1. Simulating streamer discharges in 3D with the parallel adaptive Afivo framework

    NARCIS (Netherlands)

    H.J. Teunissen (Jannis); U. M. Ebert (Ute)

    2017-01-01

    We present an open-source plasma fluid code for 2D, cylindrical and 3D simulations of streamer discharges, based on the Afivo framework that features adaptive mesh refinement, geometric multigrid methods for Poisson's equation, and OpenMP parallelism. We describe the numerical
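
    Afivo's shared-memory parallelism is over grid boxes; the pattern looks schematically like the following C sketch (the box type and update body are placeholders, not Afivo's actual Fortran API):

        typedef struct { double *cc; int n; } box_t;   /* placeholder grid box */

        /* Update all boxes of one refinement level in parallel;
         * boxes are independent, so a plain parallel loop suffices. */
        void update_level(box_t *boxes, int nboxes)
        {
            #pragma omp parallel for schedule(dynamic)
            for (int b = 0; b < nboxes; b++)
                for (int i = 0; i < boxes[b].n; i++)
                    boxes[b].cc[i] += 1.0;   /* stand-in for the real update */
        }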

  2. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available The article describes an implementation of the grayscale-morphology vHGW algorithm for border detection in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images with different resolutions and different sizes of the structuring element.
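
    The vHGW (van Herk/Gil-Werman) kernel referred to above computes a running maximum in roughly three comparisons per pixel regardless of the structuring-element size; a 1D sketch with the segment loop threaded via OpenMP (borders and the trailing partial segment are left unhandled for brevity; this is not the article's code):

        #include <stdlib.h>

        void vhgw_dilate_1d(const double *f, double *g, int n, int p)  /* p odd */
        {
            int h = (p - 1) / 2;
            int m = (n / p) * p;                 /* length covered by full segments */
            double *R = malloc(n * sizeof *R);   /* prefix max within each segment  */
            double *L = malloc(n * sizeof *L);   /* suffix max within each segment  */

            #pragma omp parallel for schedule(static)
            for (int s = 0; s < m; s += p) {
                R[s] = f[s];
                for (int j = s + 1; j < s + p; j++)
                    R[j] = R[j-1] > f[j] ? R[j-1] : f[j];
                L[s+p-1] = f[s+p-1];
                for (int j = s + p - 2; j >= s; j--)
                    L[j] = L[j+1] > f[j] ? L[j+1] : f[j];
            }
            /* A window [x-h, x+h] spans at most two segments, so its max is
             * max(suffix max at its left end, prefix max at its right end). */
            #pragma omp parallel for schedule(static)
            for (int x = h; x + h < m; x++)
                g[x] = L[x-h] > R[x+h] ? L[x-h] : R[x+h];

            free(R);
            free(L);
        }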

  3. Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Kurowski, K.; Ben Belgacem, M.; Chopard, B.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.

    2014-01-01

    We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple-to-use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and

  4. A hybrid version of swan for fast and efficient practical wave modelling

    NARCIS (Netherlands)

    M. Genseberger (Menno); J. Donners

    2016-01-01

    In the Netherlands, for coastal and inland water applications, wave modelling with SWAN has become a main ingredient. However, computational times are relatively high. Therefore we investigated the parallel efficiency of the current MPI and OpenMP versions of SWAN. The MPI version is
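
    The hybrid pattern investigated there (MPI across nodes or domain partitions, OpenMP threads within each rank) has a standard skeleton; the following is a generic sketch, not SWAN code:

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        /* Compile with, e.g., mpicc -fopenmp hybrid.c */
        int main(int argc, char **argv)
        {
            int provided, rank;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());

            MPI_Finalize();
            return 0;
        }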

  5. Characterizing aging effects of lithium ion batteries by impedance spectroscopy

    International Nuclear Information System (INIS)

    Troeltzsch, Uwe; Kanoun, Olfa; Traenkler, Hans-Rolf

    2006-01-01

    Impedance spectroscopy is one of the most promising methods for characterizing aging effects of portable secondary batteries online because it provides information about different aging mechanisms. However, application of impedance spectroscopy 'in the field' imposes higher requirements than laboratory experiments: it requires a fast impedance measurement process, an accurate model applicable to several batteries and a robust method for model parameter estimation. In this paper, we present a method for measuring impedance at different frequencies simultaneously. We propose to use a composite electrode model, capable of describing porous composite electrode materials. A hybrid method for parameter estimation, based on a combination of an evolution strategy and the Levenberg-Marquardt method, allows robust and fast parameter calculation. Based on this approach, an experimental investigation of aging effects of a lithium ion battery was carried out. After 230 discharge/charge cycles, the battery showed a 14% decreased capacity. Modeling results show that the series resistance, charge transfer resistance and Warburg coefficient thereby changed their values by approximately 60%. A single-frequency impedance measurement, usually carried out at 1 kHz, delivers information only about the series resistance. Impedance spectroscopy additionally allows the estimation of the charge transfer resistance and Warburg coefficient. This fact and the high sensitivity of the model parameters to capacity change prove that impedance spectroscopy together with accurate modeling delivers information that significantly improves the characterization of aging effects
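
    The record names three fitted quantities: the series resistance R_s, the charge transfer resistance R_ct and the Warburg coefficient sigma. As a hedged illustration of how they enter an impedance model, a textbook Randles-type circuit (not necessarily the composite electrode model used in the paper) gives

        Z(\omega) = R_s + \frac{R_{ct} + \sigma\,\omega^{-1/2}(1 - j)}
                              {1 + j\omega C_{dl}\left(R_{ct} + \sigma\,\omega^{-1/2}(1 - j)\right)}

    where C_dl is the double-layer capacitance. At high frequency (e.g. 1 kHz) the parallel branch shorts out and Z approaches R_s, which is consistent with the observation above that a single-frequency measurement sees only the series resistance.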

  6. Characterizing aging effects of lithium ion batteries by impedance spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Troeltzsch, Uwe [University of the Bundeswehr Munich Institute for Measurement and Automation, 85579 Neubiberg (Germany)]. E-mail: uwe.troeltzsch@unibw-muenchen.de; Kanoun, Olfa [University of the Bundeswehr Munich Institute for Measurement and Automation, 85579 Neubiberg (Germany); Traenkler, Hans-Rolf [University of the Bundeswehr Munich Institute for Measurement and Automation, 85579 Neubiberg (Germany)

    2006-01-20

    Impedance spectroscopy is one of the most promising methods for characterizing aging effects of portable secondary batteries online because it provides information about different aging mechanisms. However, application of impedance spectroscopy 'in the field' imposes higher requirements than laboratory experiments: it requires a fast impedance measurement process, an accurate model applicable to several batteries and a robust method for model parameter estimation. In this paper, we present a method for measuring impedance at different frequencies simultaneously. We propose to use a composite electrode model, capable of describing porous composite electrode materials. A hybrid method for parameter estimation, based on a combination of an evolution strategy and the Levenberg-Marquardt method, allows robust and fast parameter calculation. Based on this approach, an experimental investigation of aging effects of a lithium ion battery was carried out. After 230 discharge/charge cycles, the battery showed a 14% decreased capacity. Modeling results show that the series resistance, charge transfer resistance and Warburg coefficient thereby changed their values by approximately 60%. A single-frequency impedance measurement, usually carried out at 1 kHz, delivers information only about the series resistance. Impedance spectroscopy additionally allows the estimation of the charge transfer resistance and Warburg coefficient. This fact and the high sensitivity of the model parameters to capacity change prove that impedance spectroscopy together with accurate modeling delivers information that significantly improves the characterization of aging effects.

  7. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On the one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to predict tree bud break and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay

  8. Tissue resonance interaction accurately detects colon lesions: A double-blind pilot study.

    Science.gov (United States)

    Dore, Maria P; Tufano, Marcello O; Pes, Giovanni M; Cuccu, Marianna; Farina, Valentina; Manca, Alessandra; Graham, David Y

    2015-07-07

    We investigated the performance of the tissue resonance interaction method (TRIM) for the non-invasive detection of colon lesions. We performed a prospective single-center blinded pilot study of consecutive adults undergoing colonoscopy at the University Hospital in Sassari, Italy. Before patients underwent colonoscopy, they were examined with the TRIMprob, which detects differences in electromagnetic properties between pathological and normal tissues. All patients had completed the polyethylene glycol-containing bowel prep for the colonoscopy procedure before being screened. During the procedure the subjects remained fully dressed. A hand-held probe was moved over the abdomen and variations in electromagnetic signals were recorded for 3 spectral lines (462-465 MHz, 930 MHz, and 1395 MHz). A single investigator, blind to any clinical information, performed the test using the TRIMprob system. Abnormal signals were identified and recorded as malignant or benign (adenoma or hyperplastic polyps). Findings were compared with those from colonoscopy with histologic confirmation. Statistical analysis was performed by the χ² test. A total of 305 consecutive patients fulfilling the inclusion criteria were enrolled over a period of 12 months. The most frequent indication for colonoscopy was abdominal pain (33%). The TRIMprob was well accepted by all patients; none spontaneously complained about the procedure, and no adverse effects were observed. TRIM proved inaccurate for polyp detection in patients with inflammatory bowel disease (IBD), and they were excluded, leaving 281 subjects (mean age 59 ± 13 years; 107 males). The TRIM detected and accurately characterized all 12 adenocarcinomas and 135/137 polyps (98.5%), including all 64 adenomatous polyps (100%). The method identified cancers and polyps with 98.7% sensitivity, 96.2% specificity, and 97.5% diagnostic accuracy, compared to colonoscopy and histology analyses. The positive predictive value was 96.7% and the negative predictive

  9. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village-dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  10. Accurate conjugate gradient methods for families of shifted systems

    NARCIS (Netherlands)

    Eshof, J. van den; Sleijpen, G.L.G.

    We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The
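
    The structural fact that such variants exploit is that a shift leaves the Krylov subspace unchanged, so one set of matrix-vector products can serve all shifted systems; in standard notation,

        \mathcal{K}_m(A + \sigma I,\, b) = \mathcal{K}_m(A,\, b) \quad \text{for every shift } \sigma,

    so approximate solutions of (A + \sigma_k I) x_k = b for all k can be extracted from a single short recurrence. (This identity is standard background, not a claim about the specifics of the cited variant.)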

  11. Planimetric volumetry of the prostate: how accurate is it?

    NARCIS (Netherlands)

    Aarnink, R. G.; Giesen, R. J.; de la Rosette, J. J.; Huynen, A. L.; Debruyne, F. M.; Wijkstra, H.

    1995-01-01

    Planimetric volumetry is used in clinical practice when accurate volume determination of the prostate is needed. The prostate volume is determined by discretization of the 3D prostate shape. The area of the prostate is calculated in consecutive ultrasonographic cross-sections. This area is multiplied
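
    In code, planimetric volumetry reduces to summing the measured cross-section areas times the slice spacing; a minimal sketch with illustrative names:

        /* Planimetric volume estimate: areas a[0..n-1] (cm^2) measured on
         * consecutive ultrasonographic cross-sections a fixed step d (cm)
         * apart; the volume is the sum of the slice volumes. */
        double planimetric_volume(const double *a, int n, double d)
        {
            double v = 0.0;
            for (int i = 0; i < n; i++)
                v += a[i] * d;
            return v;   /* cm^3 */
        }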

  12. Characterization of polarization-independent phase modulation method for practical plug and play quantum cryptography

    International Nuclear Information System (INIS)

    Kwon, Osung; Lee, Min-Soo; Woo, Min Ki; Park, Byung Kwon; Kim, Il Young; Kim, Yong-Su; Han, Sang-Wook; Moon, Sung

    2015-01-01

    We characterized a polarization-independent phase modulation method, called double phase modulation, for a practical plug and play quantum key distribution (QKD) system. After investigating the theoretical background, we applied the method to a practical QKD system and characterized its performance by comparing single phase modulation (SPM) and double phase modulation. Consequently, we obtained repeatable and accurate phase modulation, confirmed by high-visibility single-photon interference even for input signals with arbitrary polarization. Further, the results show that only 80% of the bias voltage required in the case of single phase modulation is needed to obtain the target amount of phase modulation. (paper)

  13. Cluster abundance in chameleon f(R) gravity I: toward an accurate halo mass function prediction

    Energy Technology Data Exchange (ETDEWEB)

    Cataneo, Matteo; Rapetti, David [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Lombriser, Lucas [Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ (United Kingdom); Li, Baojiu, E-mail: matteoc@dark-cosmology.dk, E-mail: drapetti@dark-cosmology.dk, E-mail: llo@roe.ac.uk, E-mail: baojiu.li@durham.ac.uk [Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom)

    2016-12-01

    We refine the mass- and environment-dependent spherical collapse model of chameleon f(R) gravity by calibrating a phenomenological correction inspired by the parameterized post-Friedmann framework against high-resolution N-body simulations. We employ our method to predict the corresponding modified halo mass function, and provide fitting formulas to calculate the enhancement of the f(R) halo abundance with respect to that of General Relativity (GR) within a precision of ≲5% from the results obtained in the simulations. Similar accuracy can be achieved for the full f(R) mass function on the condition that the modeling of the reference GR abundance of halos is accurate at the percent level. We use our fits to forecast constraints on the additional scalar degree of freedom of the theory, finding that upper bounds competitive with current Solar System tests are within reach of cluster number count analyses from ongoing and upcoming surveys at much larger scales. Importantly, the flexibility of our method also allows it to be applied to other scalar-tensor theories characterized by a mass- and environment-dependent spherical collapse.

  14. Update on Fresh Fuel Characterization of U-Mo Alloys

    International Nuclear Information System (INIS)

    Burkes, D.E.; Wachs, D.M.; Keiser, D.D.; Okuniewski, M.A.; Jue, J.F.; Rice, F.J.; Prabhakaran, R.

    2009-01-01

    The need to provide more accurate property information on U-Mo fuel alloys to operators, modellers, researchers, fabricators, and government increases as the success of the GTRI Reactor Convert program continues. This presentation provides an update on fresh fuel characterization activities that have occurred at the INL since the RERTR 2008 conference in Washington, D.C. The update is particularly focused on properties recently obtained and on the development progress of new measurement techniques. Furthermore, areas where useful and necessary information is still lacking are discussed. The update deals with mechanical, physical, and microstructural properties for both integrated and separate effects. Appropriate discussion of fabrication characteristics, impurities, thermodynamic response, and effects on the topic areas is provided, along with a background on the characterization techniques used and developed to obtain the information. Efforts to measure similar characteristics on irradiated fuel plates are discussed.

  15. Characterization of axial probes used in eddy current testing

    International Nuclear Information System (INIS)

    Wache, G.; Nourrisson, Ph.; Garet, Th.

    2001-01-01

    Customized reference tubes reduce sensitivity discrepancies that can be observed from one probe to the other, due to the gain setting adjustment required for a pre-defined amplitude response of the artificial notch. The use of a reference circuit in place of a reference part makes characterization of the probe matched to its generator more accurate: - the material dependence is cancelled during the compensation process, - the reference signal can be adjusted more accurately in amplitude and phase response, - the manufacturing cost is reduced compared to that of machining the reference part, - the amplitude and phase response of the reference circuit can be simply modelled by using the transformer relations, so that one can appreciate the variations of the probe definition parameters and its connexion to the generator, and make them optimal for use. The method proposed by ALSTOM for the characterization of condenser and exchanger tubing probes takes into account the amplitude and phase response of a reference circuit versus frequency, as can be done by using SURECA tubing provided by ASCOT: it allows one to verify that the frequency values of the probe required for use are inside the useful bandwidth defined by the -6 dB attenuation from the maximum amplitude response of the reference circuit versus frequency. Examples coming from measurements done on more than 200 probes, for which faults have been observed and replacements made by the manufacturer, are displayed and commented on. (authors)

  16. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    Science.gov (United States)

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there is a demanding need to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method. Then the new method was applied in the study of coated capillaries. Results show that the new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
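
    The record does not spell out the arithmetic, but under the stated design one plausible reading is that the pressure-driven component cancels between the two runs, leaving the EOF velocity as half the difference of the apparent marker velocities; a hedged sketch with illustrative names:

        /* Two consecutive runs of a neutral marker over the same effective
         * length L (m): run 1 has pressure-driven flow plus EOF, run 2 has
         * the voltage reversed so the EOF opposes it.  The pressure term
         * cancels in the difference (an assumed reading of the method). */
        double eof_velocity(double L, double t1, double t2)
        {
            double v1 = L / t1;            /* apparent velocity, run 1 */
            double v2 = L / t2;            /* apparent velocity, run 2 */
            return 0.5 * (v1 - v2);        /* m/s; sign gives direction */
        }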

  17. Prediction of Accurate Mixed Mode Fatigue Crack Growth Curves using the Paris' Law

    Science.gov (United States)

    Sajith, S.; Krishna Murthy, K. S. R.; Robi, P. S.

    2017-12-01

    Accurate information regarding crack growth times and structural strength as a function of crack size is mandatory in damage tolerance analysis. Various equivalent stress intensity factor (SIF) models are available for the prediction of mixed-mode fatigue life using the Paris' law. In the present investigation these models have been compared to assess their efficacy in predicting lives close to experimental findings, as there are no guidelines/suggestions available on the selection of these models for accurate and/or conservative predictions of fatigue life. Within the limitations of the available experimental data and currently available numerical simulation techniques, the results of the present study attempt to identify models that would provide accurate and conservative life predictions.
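
    Whatever equivalent-SIF model is chosen, the life prediction itself is an integration of the Paris' law da/dN = C (ΔK)^m; a simple cycle-marching sketch (constant geometry factor Y, and all names and values illustrative only):

        #include <math.h>

        /* Fatigue life from initial crack size a0 to critical size ac,
         * with dK = Y * dsigma * sqrt(pi * a). */
        double paris_life_cycles(double a0, double ac, double C, double m,
                                 double dsigma, double Y)
        {
            const double PI = 3.14159265358979;
            double a = a0, N = 0.0;
            const double dN = 100.0;          /* cycles per integration step */
            while (a < ac) {
                double dK = Y * dsigma * sqrt(PI * a);
                a += C * pow(dK, m) * dN;     /* crack growth over dN cycles */
                N += dN;
            }
            return N;
        }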

  18. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict the traffic state trends over a certain period in the future. From the perspective of dynamic decision-making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes, and four different types of input vectors are compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models for prediction.

  19. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  20. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    International Nuclear Information System (INIS)

    Schmidt, Tobias; Kümmel, Stephan; Kraisler, Eli; Makmal, Adi; Kronik, Leeor

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potentials is not yet satisfactory, and if we choose to optimize their prediction, a rather different value of the functional's parameter is obtained. We put this finding in a larger context by discussing similar observations for other functionals and possible directions for further functional development that our findings suggest

  1. Towards accurate de novo assembly for genomes with repeats

    NARCIS (Netherlands)

    Bucur, Doina

    2017-01-01

    De novo genome assemblers designed for short k-mer length or using short raw reads are unlikely to recover complex features of the underlying genome, such as repeats hundreds of bases long. We implement a stochastic machine-learning method which obtains accurate assemblies with repeats and

  2. General approach for accurate resonance analysis in transformer windings

    NARCIS (Netherlands)

    Popov, M.

    2018-01-01

    In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical

  3. XPS Protocol for the Characterization of Pristine and Functionalized Single Wall Carbon Nanotubes

    Science.gov (United States)

    Sosa, E. D.; Allada, R.; Huffman, C. B.; Arepalli, S.

    2009-01-01

    Recent interest in developing new applications for carbon nanotubes (CNT) has fueled the need to use accurate macroscopic and nanoscopic techniques to characterize and understand their chemistry. X-ray photoelectron spectroscopy (XPS) has proved to be a useful analytical tool for nanoscale surface characterization of materials including carbon nanotubes. Recent nanotechnology research at NASA Johnson Space Center (NASA-JSC) helped to establish a characterization protocol for quality assessment for single wall carbon nanotubes (SWCNTs). Here, a review of some of the major factors of the XPS technique that can influence the quality of analytical data, suggestions for methods to maximize the quality of data obtained by XPS, and the development of a protocol for XPS characterization as a complementary technique for analyzing the purity and surface characteristics of SWCNTs is presented. The XPS protocol is then applied to a number of experiments including impurity analysis and the study of chemical modifications for SWCNTs.

  4. Geometrical modelling of scanning probe microscopes and characterization of errors

    International Nuclear Information System (INIS)

    Marinello, F; Savio, E; Bariani, P; Carmignato, S

    2009-01-01

    Scanning probe microscopes (SPMs) allow quantitative evaluation of surface topography with ultra-high resolution, as a result of accurate actuation combined with the sharpness of tips. SPMs measure sequentially, by scanning surfaces in a raster fashion: topography maps commonly consist of data sets ideally reported in an orthonormal rectilinear Cartesian coordinate system. However, due to scanning errors and measurement distortions, the measurement process is far from the ideal Cartesian condition. The paper addresses geometrical modelling of the scanning system dynamics, presenting a mathematical model which describes the surface metric x-, y- and z- coordinates as a function of the measured x'-, y'- and z'-coordinates respectively. The complete mathematical model provides a relevant contribution to characterization and calibration, and ultimately to traceability, of SPMs, when applied for quantitative characterization

  5. Fast and accurate edge orientation processing during object manipulation

    Science.gov (United States)

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  6. Multigrid time-accurate integration of Navier-Stokes equations

    Science.gov (United States)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
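
    A generic form of such a four-stage Runge-Kutta scheme, advancing the solution u with residual R, local time step Δt and stage coefficients α_k (a sketch of the standard construction, not necessarily the exact coefficients used in this work), is:

    ```latex
    u^{(0)} = u^{n}, \qquad
    u^{(k)} = u^{(0)} - \alpha_k\,\Delta t\,R\!\left(u^{(k-1)}\right), \quad k = 1,\ldots,4, \qquad
    u^{n+1} = u^{(4)}
    ```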

  7. Toward an accurate taxonomic interpretation of Carex fossil fruits (Cyperaceae): a case study in section Phacocystis in the Western Palearctic.

    Science.gov (United States)

    Jiménez-Mejías, Pedro; Martinetto, Edoardo

    2013-08-01

    Despite growing interest in the systematics and evolution of the hyperdiverse genus Carex, few studies have focused on its evolution using an absolute time framework. This is partly due to the limited knowledge of the fossil record. However, Carex fruits are not rare in certain sediments. We analyzed carpological features of modern materials from Carex sect. Phacocystis to characterize the fossil record taxonomically. We studied 374 achenes from modern materials (18 extant species), as well as representatives from related groups, to establish the main traits within and among species. We also studied 99 achenes from sediments of living populations to assess their modification process after decay. Additionally, we characterized 145 fossil achenes from 10 different locations (from 4-0.02 mya), whose taxonomic assignment we discuss. Five main characters were identified for establishing morphological groups of species (epidermis morphology, achene-utricle attachment, achene base, style robustness, and pericarp section). Eleven additional characters allowed the discrimination at species level of most of the taxa. Fossil samples were assigned to two extant species and one unknown, possibly extinct species. The analysis of fruit characters allows the distinction of groups, even up to species level. Carpology is revealed as an accurate tool in Carex paleotaxonomy, which could allow the characterization of Carex fossil fruits and assign them to subgeneric or sectional categories, or to certain species. Our conclusions could be crucial for including a temporal framework in the study of the evolution of Carex.

  8. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    Science.gov (United States)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  9. Decision Fusion Based on Hyperspectral and Multispectral Satellite Imagery for Accurate Forest Species Mapping

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2014-07-01

    Full Text Available This study investigates the effectiveness of combining multispectral very high resolution (VHR and hyperspectral satellite imagery through a decision fusion approach, for accurate forest species mapping. Initially, two fuzzy classifications are conducted, one for each satellite image, using a fuzzy output support vector machine (SVM. The classification result from the hyperspectral image is then resampled to the multispectral’s spatial resolution and the two sources are combined using a simple yet efficient fusion operator. Thus, the complementary information provided from the two sources is effectively exploited, without having to resort to computationally demanding and time-consuming typical data fusion or vector stacking approaches. The effectiveness of the proposed methodology is validated in a complex Mediterranean forest landscape, comprising spectrally similar and spatially intermingled species. The decision fusion scheme resulted in an accuracy increase of 8% compared to the classification using only the multispectral imagery, whereas the increase was even higher compared to the classification using only the hyperspectral satellite image. Perhaps most importantly, its accuracy was significantly higher than alternative multisource fusion approaches, although the latter are characterized by much higher computation, storage, and time requirements.

  10. Characterization of Visual Scanning Patterns in Air Traffic Control.

    Science.gov (United States)

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.

  11. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    Science.gov (United States)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is of the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use the inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimation than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimation. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be evaluated in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
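
    For context, the Kalman filter mentioned above combines a predicted state estimate with a new measurement through the standard textbook update equations (generic form, not Gaia-specific), with prior covariance P_k⁻, measurement matrix H_k and measurement noise covariance R_k:

    ```latex
    K_k = P_k^{-} H_k^{\top}\left(H_k P_k^{-} H_k^{\top} + R_k\right)^{-1}, \qquad
    \hat{x}_k = \hat{x}_k^{-} + K_k\left(z_k - H_k \hat{x}_k^{-}\right), \qquad
    P_k = \left(I - K_k H_k\right) P_k^{-}
    ```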

  12. Using Virtual Testing for Characterization of Composite Materials

    Science.gov (United States)

    Harrington, Joseph

    Composite materials are finally providing uses hitherto reserved for metals in structural systems applications -- airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. They have high strength-to-weight ratios, are durable and resistant to environmental effects, have high impact strength, and can be manufactured in a variety of shapes. Generalized constitutive models are being developed to accurately model composite systems so they can be used in implicit and explicit finite element analysis. These models require extensive characterization of the composite material as input. The particular constitutive model of interest for this research is a three-dimensional orthotropic elasto-plastic composite material model that requires a total of 12 experimental stress-strain curves, yield stresses, and Young's Modulus and Poisson's ratio in the material directions as input. Sometimes it is not possible to carry out reliable experimental tests needed to characterize the composite material. One solution is using virtual testing to fill the gaps in available experimental data. A Virtual Testing Software System (VTSS) has been developed to address the need for a less restrictive method to characterize a three-dimensional orthotropic composite material. The system takes in the material properties of the constituents and completes all 12 of the necessary characterization tests using finite element (FE) models. Verification and validation test cases demonstrate the capabilities of the VTSS.

  13. Parallel sparse direct solvers for Poisson's equation in streamer discharges

    NARCIS (Netherlands)

    M. Nool (Margreet); M. Genseberger (Menno); U. M. Ebert (Ute)

    2017-01-01

    The aim of this paper is to examine whether a hybrid approach of parallel computing, a combination of the message passing model (MPI) with the threads model (OpenMP), can deliver good performance in streamer discharge simulations. Since one of the bottlenecks of almost all streamer
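
    The hybrid MPI+OpenMP pattern examined in the paper can be sketched as follows: one MPI process per node owns a slab of the grid, and OpenMP threads share the loops over that slab. The grid layout and the smoothing update are illustrative placeholders, not the paper's actual Poisson solver:

    ```cpp
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        // One MPI process per node; OpenMP threads share that node's slab.
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        const int n_local = 1000000;  // illustrative slab size per process
        std::vector<double> u(n_local, 1.0), v(n_local, 0.0);

        // Threads split the local loop; MPI would exchange halo cells here.
        #pragma omp parallel for
        for (int i = 1; i < n_local - 1; ++i)
            v[i] = 0.5 * (u[i - 1] + u[i + 1]);  // placeholder smoothing update

        // Combine a diagnostic across nodes (e.g., for a convergence check).
        double local_sum = 0.0, global_sum = 0.0;
        #pragma omp parallel for reduction(+ : local_sum)
        for (int i = 0; i < n_local; ++i) local_sum += v[i];
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }
    ```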

  14. Characterization of size, anisotropy, and density heterogeneity of nanoparticles by sedimentation velocity

    KAUST Repository

    Demeler, Borries

    2014-08-05

    A critical problem in materials science is the accurate characterization of the size dependent properties of colloidal inorganic nanocrystals. Due to the intrinsic polydispersity present during synthesis, dispersions of such materials exhibit simultaneous heterogeneity in density ρ, molar mass M, and particle diameter d. The density increments ∂ρ/∂d and ∂ρ/∂M of these nanoparticles, if known, can then provide important information about crystal growth and particle size distributions. For most classes of nanocrystals, a mixture of surfactants is added during synthesis to control their shape, size, and optical properties. However, it remains a challenge to accurately determine the amount of passivating ligand bound to the particle surface post synthesis. The presence of the ligand shell hampers an accurate determination of the nanocrystal diameter. Using CdSe and PbS semiconductor nanocrystals, and the ultrastable silver nanoparticle (M4Ag44(p-MBA)30), as model systems, we describe a Custom Grid method implemented in UltraScan-III for the characterization of nanoparticles and macromolecules using sedimentation velocity analytical ultracentrifugation. We show that multiple parametrizations are possible, and that the Custom Grid method can be generalized to provide high resolution composition information for mixtures of solutes that are heterogeneous in two out of three parameters. For such cases, our method can simultaneously resolve arbitrary two-dimensional distributions of hydrodynamic parameters when a third property can be held constant. For example, this method extracts partial specific volume and molar mass from sedimentation velocity data for cases where the anisotropy can be held constant, or provides anisotropy and partial specific volume if the molar mass is known. © 2014 American Chemical Society.

  15. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. Accurate Ne-heavier rare gas interatomic potentials

    International Nuclear Information System (INIS)

    Candori, R.; Pirani, F.; Vecchiocattivi, F.

    1983-01-01

    Accurate interatomic potential curves for Ne-heavier rare gas systems are obtained by a multiproperty analysis. The curves are given via a parametric function which consists of a modified Dunham expansion connected at long range with the van der Waals expansion. The experimental properties considered in the analysis are the differential scattering cross sections at two different collision energies, the integral cross sections in the glory energy range and the second virial coefficients. The transport properties are considered indirectly by using the potential energy values recently obtained by inversion of the transport coefficients. (author)
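
    The parametric function described, a modified Dunham expansion joined at long range to the van der Waals series, has the generic shape below (a sketch of the standard forms; the coefficients and the matching point are the fitted quantities of the analysis):

    ```latex
    V(\xi) = a_0\,\xi^{2}\left(1 + a_1\xi + a_2\xi^{2} + \cdots\right), \quad \xi = \frac{r - r_e}{r_e};
    \qquad V(r) \sim -\frac{C_6}{r^{6}} - \frac{C_8}{r^{8}} - \cdots \quad (r \to \infty)
    ```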

  17. Radiological and hazardous material characterization report for the south portion of the 313 Building

    International Nuclear Information System (INIS)

    Harris, R.A.

    1995-12-01

    The objective of the characterization was to determine the extent of radiological contamination and presence of hazardous materials, to allow the preparation of an accurate cost estimate, and to plan for pre-demolition cleanup work to support building isolation. The scope of services for the project included the following tasks: Records Review and Interviews; Site Reconnaissance; Radiological Survey; and Sampling and Analysis

  18. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  19. Accurate measurement of the electron beam polarization in JLab Hall A using Compton polarimetry

    International Nuclear Information System (INIS)

    Escoffier, S.; Bertin, P.Y.; Brossard, M.; Burtin, E.; Cavata, C.; Colombel, N.; Jager, C.W. de; Delbart, A.; Lhuillier, D.; Marie, F.; Mitchell, J.; Neyret, D.; Pussieux, T.

    2005-01-01

    A major advance in accurate electron beam polarization measurement has been achieved at JLab Hall A with a Compton polarimeter based on a Fabry-Perot cavity photon beam amplifier. At an electron energy of 4.6 GeV and a beam current of 40 μA, a total relative uncertainty of 1.5% is typically achieved within 40 min of data taking. Under the same conditions monitoring of the polarization is accurate at a level of 1%. These unprecedented results make Compton polarimetry an essential tool for modern parity-violation experiments, which require very accurate electron beam polarization measurements

  20. Performance evaluation of canny edge detection on a tiled multicore architecture

    Science.gov (United States)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelizing across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
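
    Loop-level parallelism of the kind described maps naturally onto the row loop of each Canny stage. Below is a hedged sketch of one stage (Sobel gradient magnitude) with an OpenMP work-sharing directive; the image layout is assumed, and a complete Canny pipeline has further stages (smoothing, non-maximum suppression, hysteresis):

    ```cpp
    #include <cmath>
    #include <vector>

    // Gradient-magnitude stage of Canny: rows are independent, so the outer
    // loop parallelizes directly with an OpenMP work-sharing construct.
    void sobelMagnitude(const std::vector<float>& in, std::vector<float>& out,
                        int width, int height) {
        #pragma omp parallel for schedule(static)
        for (int y = 1; y < height - 1; ++y) {
            auto p = [&](int yy, int xx) { return in[yy * width + xx]; };
            for (int x = 1; x < width - 1; ++x) {
                float gx = -p(y - 1, x - 1) + p(y - 1, x + 1)
                           - 2 * p(y, x - 1) + 2 * p(y, x + 1)
                           - p(y + 1, x - 1) + p(y + 1, x + 1);
                float gy = -p(y - 1, x - 1) - 2 * p(y - 1, x) - p(y - 1, x + 1)
                           + p(y + 1, x - 1) + 2 * p(y + 1, x) + p(y + 1, x + 1);
                out[y * width + x] = std::sqrt(gx * gx + gy * gy);
            }
        }
    }
    ```

    Domain decomposition, by contrast, hands each thread a subimage (plus a halo of boundary pixels) and lets it run the whole pipeline on that subimage.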

  1. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    Full Text Available The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
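
    A useful baseline when interpreting such measurements is the classical Amdahl speedup model (a standard reference point, not necessarily the paper's quantitative model): if a fraction f of the serial execution time is parallelizable across p processors, the ideal speedup is

    ```latex
    S(p) = \frac{1}{(1 - f) + f/p}
    ```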

  2. Dense and accurate whole-chromosome haplotyping of individual genomes

    NARCIS (Netherlands)

    Porubsky, David; Garg, Shilpa; Sanders, Ashley D.; Korbel, Jan O.; Guryev, Victor; Lansdorp, Peter M.; Marschall, Tobias

    2017-01-01

    The diploid nature of the human genome is neglected in many analyses done today, where a genome is perceived as a set of unphased variants with respect to a reference genome. This lack of haplotype-level analyses can be explained by a lack of methods that can produce dense and accurate

  3. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  4. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  5. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beams configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams of initial tip deflections of few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.
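
    The continuous model named above can be sketched in its standard static form (symbols assumed: deflection w, flexural rigidity EI, beam width b, initial gap g, permittivity ε₀, applied voltage V), with the electrostatic load taken in the parallel-plate approximation:

    ```latex
    EI\,\frac{\partial^{4} w}{\partial x^{4}} = \frac{\varepsilon_0\, b\, V^{2}}{2\left(g - w\right)^{2}}
    ```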

  6. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    Science.gov (United States)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  7. Aerial radiation survey techniques for efficient characterization of large areas

    International Nuclear Information System (INIS)

    Sydelko, T.; Riedhauser, S.

    2006-01-01

    Full text: Accidental or intentional releases of radioactive isotopes over potentially very large surface areas can pose serious health risks to humans and ecological receptors. Timely and appropriate responses to these releases depend upon rapid and accurate characterization of impacted areas. These characterization efforts can be adversely impacted by heavy vegetation, rugged terrain, urban environments, and the presence of unknown levels of radioactivity. Aerial survey techniques have proven highly successful in measuring gamma emissions from radiological contaminants of concern quickly, efficiently, and safely. Examples of accidental releases include the unintentional distribution of uranium mining ores during transportation, the loss of uranium processing and waste materials, unintentional nuclear power plant emissions into the atmosphere, and the distribution of isotopes during major flooding events such as the one recently occurring in New Orleans. Intentional releases have occurred during depleted uranium ammunition test firing and wartime use by military organizations. The threat of radiological dispersion device (dirty bomb) use by terrorists is currently a major concern of many major cities worldwide. The U.S. Department of Energy, in cooperation with its Remote Sensing Laboratory and Argonne National Laboratory, has developed a sophisticated aerial measurement system for identifying the locations, types, and quantities of gamma emitting radionuclides over extremely large areas. Helicopter mounted NaI detectors are flown at low altitude and constant speed along parallel paths measuring the full spectrum of gamma activity. Analytical procedures are capable of distinguishing between radiological contamination and changes in natural background emissions. Mapped and tabular results of these accurate, timely and cost effective aerial gamma radiation surveys can be used to assist with emergency response actions, if necessary, and to focus more

  8. Current status of accurate prognostic awareness in advanced/terminally ill cancer patients: Systematic review and meta-regression analysis.

    Science.gov (United States)

    Chen, Chen Hsiu; Kuo, Su Ching; Tang, Siew Tzuh

    2017-05-01

    No systematic meta-analysis is available on the prevalence of cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. To examine the prevalence of advanced/terminal cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. Systematic review and meta-analysis. MEDLINE, Embase, The Cochrane Library, CINAHL, and PsycINFO were systematically searched on accurate prognostic awareness in adult patients with advanced/terminal cancer (1990-2014). Pooled prevalences were calculated for accurate prognostic awareness by a random-effects model. Differences in weighted estimates of accurate prognostic awareness were compared by meta-regression. In total, 34 articles were retrieved for systematic review and meta-analysis. At best, only about half of advanced/terminal cancer patients accurately understood their prognosis (49.1%; 95% confidence interval: 42.7%-55.5%; range: 5.4%-85.7%). Accurate prognostic awareness was independent of service received and publication year, but highest in Australia, followed by East Asia, North America, and southern Europe and the United Kingdom (67.7%, 60.7%, 52.8%, and 36.0%, respectively; p = 0.019). Accurate prognostic awareness was higher by clinician assessment than by patient report (63.2% vs 44.5%). Overall, fewer than half of advanced/terminal cancer patients accurately understood their prognosis, with significant variations by region and assessment method. Healthcare professionals should thoroughly assess advanced/terminal cancer patients' preferences for prognostic information and engage them in prognostic discussion early in the cancer trajectory, thus facilitating their accurate prognostic awareness and the quality of end-of-life care decision-making.

  9. Accurate absolute measurement of trapped Cs atoms in a MOT

    International Nuclear Information System (INIS)

    Talavera O, M.; Lopez R, M.; Carlos L, E. de; Jimenez S, S.

    2007-01-01

    A Cs-133 Magneto-Optical Trap (MOT) has been developed at the Time and Frequency Division of the Centro Nacional de Metrologia, CENAM, in Mexico. This MOT is part of a primary frequency standard based on ultra-cold Cs atoms, called the CsF-1 clock, under development at CENAM. In this Cs MOT, we use the standard (σ⁺-σ⁻) configuration of 4 horizontal and 2 vertical laser beams 1.9 cm in diameter, with 5 mW each. We use an 852 nm, 5 mW DBR laser as a master laser, which is stabilized by saturation spectroscopy. The emission linewidth of the master laser is 1 MHz. In order to amplify the light of the master laser, a 50 mW, 852 nm AlGaAs laser is used as slave laser. This slave laser is stabilized by the light injection technique. A 12 MHz red shift of the light is performed by two double passes through two Acousto-Optic Modulators (AOMs). The optical part of CENAM's MOT is very robust against mechanical vibration, acoustic noise and temperature changes in our laboratory, because none of our diode lasers use an extended cavity to reduce the linewidth. In this paper, we report results of our MOT characterization as a function of several operation parameters, such as the intensity of the laser beams, the laser beam diameter, the red shift of the light, and the gradient of the magnetic field. We also report an accurate absolute measurement of the number of Cs atoms trapped in our Cs MOT. We found up to 6 × 10⁷ Cs atoms trapped in our MOT, measured with an uncertainty no greater than 6.4%. (Author)

  10. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  11. FRACTURING FLUID CHARACTERIZATION FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Subhash Shah

    2000-08-01

    Hydraulic fracturing technology has been successfully applied for well stimulation of low and high permeability reservoirs for numerous years. Treatment optimization and improved economics have always been the key to the success, and this is more so when the reservoirs under consideration are marginal. Fluids are widely used for the stimulation of wells. The Fracturing Fluid Characterization Facility (FFCF) has been established to provide accurate prediction of the behavior of complex fracturing fluids under downhole conditions. The primary focus of the facility is to provide valuable insight into the various mechanisms that govern the flow of fracturing fluids and slurries through hydraulically created fractures. During the time between September 30, 1992, and March 31, 2000, the research efforts were devoted to the areas of fluid rheology, proppant transport, proppant flowback, dynamic fluid loss, perforation pressure losses, and frictional pressure losses. In this regard, a unique above-the-ground fracture simulator was designed and constructed at the FFCF, labeled "The High Pressure Simulator" (HPS). The FFCF is now available to industry for characterizing and understanding the behavior of complex fluid systems. To better reflect and encompass the broad spectrum of the petroleum industry, the FFCF now operates under the new name of "The Well Construction Technology Center" (WCTC). This report documents a summary of the activities performed during 1992-2000 at the FFCF.

  12. AMID: Accurate Magnetic Indoor Localization Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Namkyoung Lee

    2018-05-01

    Full Text Available Geomagnetic-based indoor positioning has drawn great attention from academia and industry due to its advantage of being operable without infrastructure support and its reliable signal characteristics. However, it must overcome the problem of ambiguity that originates with the nature of geomagnetic data. Most studies manage this problem by incorporating particle filters along with inertial sensors. However, they cannot yield reliable positioning results because the inertial sensors in smartphones cannot precisely predict the movement of users. There have been attempts to recognize the magnetic sequence pattern, but these attempts have been proven only in a one-dimensional space, because magnetic intensity fluctuates severely with even a slight change of location. This paper proposes accurate magnetic indoor localization using deep learning (AMID), an indoor positioning system that recognizes magnetic sequence patterns using a deep neural network. Features are extracted from magnetic sequences, and then the deep neural network is used to classify the sequences by patterns that are generated by nearby magnetic landmarks. Locations are estimated by detecting the landmarks. AMID demonstrated the proposed features and deep learning to be an outstanding classifier, revealing the potential of accurate magnetic positioning with smartphone sensors alone. The landmark detection accuracy was over 80% in a two-dimensional environment.

  13. Genome-wide identification of the regulatory targets of a transcription factor using biochemical characterization and computational genomic analysis

    Directory of Open Access Journals (Sweden)

    Jolly Emmitt R

    2005-11-01

    Full Text Available Background: A major challenge in computational genomics is the development of methodologies that allow accurate genome-wide prediction of the regulatory targets of a transcription factor. We present a method for target identification that combines experimental characterization of binding requirements with computational genomic analysis. Results: Our method identified potential target genes of the transcription factor Ndt80, a key transcriptional regulator involved in yeast sporulation, using the combined information of binding affinity, positional distribution, and conservation of the binding sites across multiple species. We have also developed a mathematical approach to compute the false positive rate and the total number of targets in the genome based on the multiple selection criteria. Conclusion: We have shown that combining biochemical characterization and computational genomic analysis leads to accurate identification of the genome-wide targets of a transcription factor. The method can be extended to other transcription factors and can complement other genomic approaches to transcriptional regulation.

  14. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    2017-02-01

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  15. Using X-ray, K-edge densitometry in spent fuel characterization

    International Nuclear Information System (INIS)

    Jensen, T.; Aljundi, T.; Gray, J.N.

    1998-01-01

    There are instances where records for spent nuclear fuel are incomplete, as well as cases where fuel assemblies have deteriorated during storage. Bringing these materials into compliance for long term storage will require determination of parameters such as enrichment, total fissionable material, and burnup. Obtaining accurate estimates of these parameters will require the combination of information from different inspection techniques. A method which can provide an accurate measure of the total uranium in the spent fuel is X-ray K-edge densitometry. To assess the potential for applying this method in spent fuel characterization, the authors have measured the amount of uranium in stacks of reactor fuel plates containing nuclear materials of different enrichments and alloys. They have obtained good agreement with expected uranium concentrations ranging from 60 mg/cm² to 3,000 mg/cm², and have demonstrated that these measurements can be made in a high radiation field (> 200 mR/hr)
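
    The principle of K-edge densitometry can be stated compactly (standard form, with assumed symbols): transmissions T₋ and T₊ measured just below and just above the uranium K-edge energy give the uranium areal density ρ_A from the step in the mass attenuation coefficient μ across the edge:

    ```latex
    \rho_A = \frac{\ln\left(T_{-}/T_{+}\right)}{\mu_{+} - \mu_{-}}
    ```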

  16. Accurate and precise determination of small quantity uranium by means of automatic potentiometric titration

    International Nuclear Information System (INIS)

    Liu Quanwei; Luo Zhongyan; Zhu Haiqiao; Wu Jizong

    2007-01-01

    Because of the high radioactivity level of dissolved spent fuel solutions and uranium product solutions, the radiation hazard must be considered and reduced as far as possible during accurate determination of uranium. In this work automatic potentiometric titration was applied, and samples containing only 10 mg of uranium were taken in order to reduce the radiation exposure of the analyzer. An RSD < 0.06% was achieved, and at the same time the result can be corrected for a more reliable and accurate measurement. The method effectively reduces the radiation exposure of the analyzer and meets the requirement of reliable, accurate measurement of uranium. (authors)

  17. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases in a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, though the step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.

  18. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...

  19. Device accurately measures and records low gas-flow rates

    Science.gov (United States)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  20. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    Science.gov (United States)

    2009-03-31

    AFRL-RV-HA-TR-2009-1055: Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter. Scientific, Final; dates covered: 02-08-2006 to 31-12-2008. From the abstract: "...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable..."

  1. Carbon Contamination During Ion Irradiation - Accurate Detection and Characterization of its Effect on Microstructure of Ferritic/Martensitic Steels

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jing; Toloczko, Mychailo B.; Kruska, Karen; Schreiber, Daniel K.; Edwards, Danny J.; Zhu, Zihua; Zhang, Jiandong

    2017-11-17

    Accelerator-based ion beam techniques have been used to study radiation effects in materials for decades. Although carbon contamination induced by the ion beam in target materials is a well-known issue, it has been neither fully characterized nor quantified for studies in ferritic/martensitic (F/M) steels, which are candidate materials for applications such as core structural components in advanced nuclear reactors. It is an especially important issue for this class of material because of the effect of the carbon level on precipitate formation. In this paper, the ability to quantify carbon contamination using three common techniques, namely time-of-flight secondary ion mass spectrometry (ToF-SIMS), atom probe tomography (APT) and transmission electron microscopy (TEM), is compared. Their effectiveness and shortcomings in determining carbon contamination are presented and discussed. The corresponding microstructural changes related to carbon contamination in ion irradiated F/M steels are also presented and briefly discussed.

  2. Definition of accurate reference pattern for the DTU-ESA VAST12 antenna

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Breinbjerg, Olav; Burgos, Sara

    2009-01-01

    In this paper, the DTU-ESA 12 GHz validation standard (VAST12) antenna and a dedicated measurement campaign carried out in 2007-2008 for the definition of its accurate reference pattern are first described. Next, a comparison between the results from the three involved measurement facilities...... is presented. Then, an accurate reference pattern of the VAST12 antenna is formed by averaging the three results taking into account the estimated uncertainties of each result. Finally, the potential use of the reference pattern for benchmarking of antenna measurement facilities is outlined....

  3. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In the analysis of minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real time systems, especially for databases holding information on large numbers of fingerprints. Parallel processing and parallel CPU computing can be considered as the distribution of processes over multi-core processors, done by using parallel programming techniques. Reducing the execution time is the main objective in parallel processing. In this study, the pre-processing of a minutiae-based fingerprint system is implemented by parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve the execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that by using parallel processing, execution time is substantially improved. The improvement ratios obtained for different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.
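
    To illustrate the OpenMP side of such a pipeline, here is a hedged sketch of one typical first pre-processing step, normalizing the fingerprint image to a target mean and variance, parallelized over pixels. This is one simple normalization variant with illustrative parameters, not the specific algorithms benchmarked in the paper:

    ```cpp
    #include <cmath>
    #include <vector>

    // Normalize a grayscale fingerprint image to a prescribed mean and
    // variance, a common step before orientation estimation and filtering.
    void normalize(std::vector<float>& img, float targetMean, float targetVar) {
        const long n = static_cast<long>(img.size());

        double mean = 0.0, var = 0.0;
        #pragma omp parallel for reduction(+ : mean)
        for (long i = 0; i < n; ++i) mean += img[i];
        mean /= n;

        #pragma omp parallel for reduction(+ : var)
        for (long i = 0; i < n; ++i) var += (img[i] - mean) * (img[i] - mean);
        var /= n;

        const float scale = std::sqrt(targetVar / (var > 0.0 ? var : 1.0));
        #pragma omp parallel for
        for (long i = 0; i < n; ++i)
            img[i] = targetMean + (img[i] - static_cast<float>(mean)) * scale;
    }
    ```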

  4. A comprehensive three-dimensional model of the cochlea

    International Nuclear Information System (INIS)

    Givelberg, Edward; Bunn, Julian

    2003-01-01

    The human cochlea is a remarkable device, able to discern extremely small amplitude sound pressure waves, and discriminate between very close frequencies. Simulation of the cochlea is computationally challenging due to its complex geometry, intricate construction and small physical size. We have developed, and are continuing to refine, a detailed three-dimensional computational model based on an accurate cochlear geometry obtained from physical measurements. In the model, the immersed boundary method is used to calculate the fluid-structure interactions produced in response to incoming sound waves. The model includes a detailed and realistic description of the various elastic structures present. In this paper, we describe the computational model and its performance on the latest generation of shared memory servers from Hewlett Packard. Using compiler generated threads and OpenMP directives, we have achieved a high degree of parallelism in the executable, which has made possible several large scale numerical simulation experiments that study the interesting features of the cochlear system. We show several results from these simulations, reproducing some of the basic known characteristics of cochlear mechanics
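
    The OpenMP usage described, compiler-generated threads over regular grid loops, follows a pattern like the sketch below. The stencil and array shapes are placeholders, not the model's actual immersed boundary operators:

    ```cpp
    #include <vector>

    // One Jacobi-style relaxation sweep over a 3D fluid grid; collapse(2)
    // merges the two outer loops so all threads get work on modest grids.
    void relax(const std::vector<double>& p, std::vector<double>& pNew,
               const std::vector<double>& rhs, int nx, int ny, int nz) {
        auto idx = [=](int i, int j, int k) { return (i * ny + j) * nz + k; };
        #pragma omp parallel for collapse(2) schedule(static)
        for (int i = 1; i < nx - 1; ++i)
            for (int j = 1; j < ny - 1; ++j)
                for (int k = 1; k < nz - 1; ++k)
                    pNew[idx(i, j, k)] = (p[idx(i - 1, j, k)] + p[idx(i + 1, j, k)] +
                                          p[idx(i, j - 1, k)] + p[idx(i, j + 1, k)] +
                                          p[idx(i, j, k - 1)] + p[idx(i, j, k + 1)] -
                                          rhs[idx(i, j, k)]) / 6.0;
    }
    ```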

  5. Characterization of a photovoltaic-thermal module for Fresnel linear concentrator

    International Nuclear Information System (INIS)

    Chemisana, D.; Ibanez, M.; Rosell, J.I.

    2011-01-01

    Highlights: → A combined domed Fresnel lens - CPC PVT system is designed and characterized. → Electrical and thermal experiments have been performed. → CFD analysis has been used to determine thermal characteristic dimensionless numbers. - Abstract: An advanced solar unit is designed to match the needs of building integration and concentrating photovoltaic/thermal generation. The unit proposed accurately combines three elements: a domed linear Fresnel lens as primary concentrator, a compound parabolic reflector as secondary concentrator and a photovoltaic-thermal module. In this work the photovoltaic-thermal generator is built, analysed and characterized. Models for the electrical and thermal behaviour of the module are developed and validated experimentally. Applying a thermal resistances approach the results from both models are combined. Finally, efficiency electrical and thermal curves are derived from theoretical analysis showing good agreement with experimental measurements.

  6. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  7. RAJA Performance Suite

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-01

    The RAJA Performance Suite is designed to evaluate the performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. Actual computer instructions executed, and how they run in parallel, differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL Sierra machine (initial LLNL delivery expected in late-2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.
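
    The variant scheme is easy to picture with a trivial kernel. Below is a hedged sketch of a Baseline OpenMP variant next to its RAJA OpenMP counterpart (the RAJA names follow the library's documented API; the daxpy kernel itself is illustrative):

    ```cpp
    #include <RAJA/RAJA.hpp>

    void daxpyVariants(double* y, const double* x, double a, int N) {
        // Baseline OpenMP variant: a raw loop with a work-sharing directive.
        #pragma omp parallel for
        for (int i = 0; i < N; ++i)
            y[i] = a * x[i] + y[i];

        // RAJA OpenMP variant: the same loop body, with the execution policy
        // selected through a template parameter instead of a pragma.
        RAJA::forall<RAJA::omp_parallel_for_exec>(
            RAJA::RangeSegment(0, N),
            [=](int i) { y[i] = a * x[i] + y[i]; });
    }
    ```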

  8. Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies

    Science.gov (United States)

    Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.

    2012-01-01

    Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…

  9. Accurate Medium-Term Wind Power Forecasting in a Censored Classification Framework

    DEFF Research Database (Denmark)

    Dahl, Christian M.; Croonenbroeck, Carsten

    2014-01-01

    We provide a wind power forecasting methodology that exploits many of the actual data's statistical features, in particular both-sided censoring. While other tools ignore many of the important “stylized facts” or provide forecasts for short-term horizons only, our approach focuses on medium-term forecasts, which are especially necessary for practitioners in the forward electricity markets of many power trading places; for example, NASDAQ OMX Commodities (formerly Nord Pool OMX Commodities) in northern Europe. We show that our model produces turbine-specific forecasts that are significantly more accurate in comparison to established benchmark models and present an application that illustrates the financial impact of more accurate forecasts obtained using our methodology.

  10. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)

    2016-04-15

    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
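    For background, one common form of the multi-Gaussian beam representation (offered as a sketch of the technique the paper builds on, not as its exact equations) writes the pressure field of a circular piston source of radius a as a superposition of N complex Gaussians with fitted coefficients A_n, B_n:

        \[
        p(\rho, z) = \rho_0 c\, v_0 \sum_{n=1}^{N} \frac{A_n}{1 + i B_n z / D_R}\,
        e^{ikz}\, \exp\!\left( - \frac{B_n \rho^2}{a^2 \left( 1 + i B_n z / D_R \right)} \right),
        \qquad D_R = \frac{k a^2}{2},
        \]

    where ρ is the radial coordinate, z the axial distance, k the wavenumber, v_0 the source velocity amplitude, ρ_0 c the acoustic impedance, and D_R the Rayleigh distance. The diffraction factor 1/(1 + iB_n z/D_R) is what separates cleanly from the plane-wave term e^{ikz}, as the abstract notes.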

  11. Secondary side photographic techniques used in characterization of Surry steam generator

    International Nuclear Information System (INIS)

    Sinclair, R.B.

    1984-10-01

    Characterization of the generator's secondary side prior to destructive removal of tubing presents a significant challenge. Information must be obtained in a radioactive field (up to 15 R/h) throughout the tightly spaced bundle of steam generator tubes. This report discusses the various techniques employed, along with their respective advantages and disadvantages. The most successful approach to nondestructive secondary side characterization and documentation was through use of in-house developed pinhole cameras. These devices provided accurate photographic documentation of generator condition. They could be fabricated in geometries allowing access to all parts of the generator. Semi-remote operation coupled with large area coverage per investigation and short at-location times resulted in significant personnel exposure advantages. The fabrication and use of pinhole cameras for remote inspection is discussed in detail

  12. Accurate measurement of indoor radon concentration using a low-effective volume radon monitor

    International Nuclear Information System (INIS)

    Tanaka, Aya; Minami, Nodoka; Mukai, Takahiro; Yasuoka, Yumi; Iimoto, Takeshi; Omori, Yasutaka; Nagahama, Hiroyuki; Muto, Jun

    2017-01-01

    AlphaGUARD is a low-effective volume detector and one of the most popular portable radon monitors currently available. This study investigated whether AlphaGUARD can accurately measure variable indoor radon levels. The consistency of the radon-concentration data obtained by AlphaGUARD was evaluated against simultaneous measurements by two other monitors (each ∼10 times more sensitive than AlphaGUARD). We found that accurate measurement of radon concentration with AlphaGUARD requires at least 500 net counts, corresponding to a relative percent difference below 25%. AlphaGUARD can provide accurate measurements of radon concentration for the world average level (∼50 Bq m⁻³) and the reference level for workplaces (1000 Bq m⁻³), using data integrated over at least 3 h and 10 min, respectively. (authors)

  13. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt

  14. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth; Virta, Kristoffer

    2014-01-01

    ... to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions ...

  15. ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.

    Science.gov (United States)

    Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P

    2016-11-01

    ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. Whether these scores predict stroke outcome more accurately compared with physicians interested in stroke was investigated. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to scores' predictions in the same scenarios. An estimate was considered accurate if it was within 95% confidence intervals of actual outcome. In all, 244 participants from 32 different countries responded assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate compared with 609 (86.5%) of ASTRAL score estimates. ASTRAL, DRAGON and SEDAN scores predict outcome of acute ischaemic stroke patients with higher accuracy compared to physicians interested in stroke. © 2016 EAN.

  16. Solid State Characterizations of Long-Term Leached Cast Stone Monoliths

    Energy Technology Data Exchange (ETDEWEB)

    Asmussen, Robert M.; Pearce, Carolyn I.; Parker, Kent E.; Miller, Brian W.; Lee, Brady D.; Buck, Edgar C.; Washton, Nancy M.; Bowden, Mark E.; Lawter, Amanda R.; McElroy, Erin M.; Serne, R Jeffrey

    2016-09-30

    This report describes the results from the solid phase characterization of six Cast Stone monoliths from the extended leach tests recently reported on (Serne et al. 2016), which were selected for characterization using multiple state-of-the-art approaches. The Cast Stone samples investigated were leached for >590 d in the EPA Method 1315 test and then archived for >390 d in their final leachate. After reporting the long-term leach behavior of the monoliths (containing radioactive 99Tc and stable 127I spikes and, for the original monoliths fabricated by Westsik et al. (2013), 238U), it was suggested that physical changes to the waste forms and a depleting inventory of contaminants of potential concern may mean that effective diffusivity calculations past 63 d should not be used to accurately represent long-term waste form behavior. These novel investigations, in both length of leaching time and application of solid state techniques, provide an initial arsenal of techniques that can be utilized to perform such Cast Stone solid phase characterization work, which in turn can support upcoming performance assessment maintenance. The work was performed at Pacific Northwest National Laboratory (PNNL) for Washington River Protection Solutions (WRPS) to characterize several properties of the long-term leached Cast Stone monolith samples.

  17. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  18. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters and algorithms are required to decide about the occurrence of fire. Recently, wireless

  19. Towards cycle-accurate performance predictions for real-time embedded systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.; Arabnia, H.R.; Deligiannidis, L.; Jandieri, G.

    2013-01-01

    In this paper we present a model-based performance analysis method for component-based real-time systems, featuring cycle-accurate predictions of latencies and enhanced system robustness. The method incorporates the following phases: (a) instruction-level profiling of SW components, (b) modeling the

  20. Shale gas reservoir characterization using LWD in real time

    Energy Technology Data Exchange (ETDEWEB)

    Han, S.Y.; Kok, J.C.L.; Tollefsen, E.M.; Baihly, J.D.; Malpani, R.; Alford, J. [Schlumberger Canada Ltd., Calgary, AB (Canada)

    2010-07-01

    Wireline logging programs are frequently used to evaluate vertical boreholes in shale gas plays. Data logged from the vertical hole are used to define reservoir profiles for the horizontal target window. The horizontal wells are then steered based on gamma ray measurements obtained using correlations against the vertical pilot wells. Logging-while-drilling tools are used in bottom hole assemblies (BHA) to ensure accurate well placement and to perform detailed reservoir characterizations across the target structure. The LWD measurements are also used to avoid hazards and enhance rates of penetration. LWD can also be used to enhance trajectory placement and provide an improved understanding of reservoirs. In this study, LWD measurements were conducted at a shale gas play in order to obtain accurate well placement, formation evaluation, and completion optimization processes. The study showed how LWD measurements can be used to optimize well completion and stimulation plans by considering well positions in relation to geological targets, reservoir property changes, hydrocarbon saturation disparity, and variations in geomechanical properties. 21 refs., 13 figs.

  1. Fricke Xylenol Gel characterization at megavoltage radiation energy

    Energy Technology Data Exchange (ETDEWEB)

    Del Lama, Lucas Sacchini, E-mail: lucasdellama@gmail.com [Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, DF-FFCLRP/USP, Avenida Bandeirantes, no 3900, Monte Alegre, CEP: 14040-901, Ribeirão Preto, SP (Brazil)]; Petchevist, Paulo César Dias [Oncoville, Centro de Excelência em Radioterapia em Curitiba, Rodovia BR-277, no 1437, Ecoville, CEP: 82305-100, Curitiba, PR (Brazil)]; Almeida, Adelaide de [Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, DF-FFCLRP/USP, Avenida Bandeirantes, no 3900, Monte Alegre, CEP: 14040-901, Ribeirão Preto, SP (Brazil)]

    2017-03-01

    Accurate determination of absorbed dose is of great importance in every medical application of ionizing radiation, mainly when biological tissues are involved. Among the different types of dosimeters, the ferrous sulfate chemical solution, known as Fricke solution, stands out due to its accuracy, reproducibility and linearity, having been used in radiation dosimetry for over 50 years. Besides these characteristics, the Fricke Xylenol Gel (FXG) has become one of the best known dosimeters for absorbed dose spatial distribution because of its high spatial resolution. In this work, we evaluated the FXG dosimeter taking into account different preparation recipes, in order to characterize its response in terms of absorbed dose range, linearity, sensitivity and fading.

  2. Fricke Xylenol Gel characterization at megavoltage radiation energy

    International Nuclear Information System (INIS)

    Del Lama, Lucas Sacchini; Petchevist, Paulo César Dias; Almeida, Adelaide de

    2017-01-01

    Accurate determination of absorbed dose is of great importance in every medical application of ionizing radiation, mainly when biological tissues are involved. Among the different types of dosimeters, the ferrous sulfate chemical solution, known as Fricke solution, stands out due to its accuracy, reproducibility and linearity, having been used in radiation dosimetry for over 50 years. Besides these characteristics, the Fricke Xylenol Gel (FXG) has become one of the best known dosimeters for absorbed dose spatial distribution because of its high spatial resolution. In this work, we evaluated the FXG dosimeter taking into account different preparation recipes, in order to characterize its response in terms of absorbed dose range, linearity, sensitivity and fading.
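    For background, absorbed dose in Fricke-type dosimeters is conventionally obtained from the radiation-induced change in optical absorbance. A standard form of this relation (quoted as general context, not as the specific calibration used in the work above) is

        \[
        D = \frac{\Delta A}{\rho\, \ell\, \varepsilon\, G(\mathrm{Fe}^{3+})},
        \]

    where ΔA is the absorbance change, ρ the solution density, ℓ the optical path length, ε the molar extinction coefficient of the read-out complex, and G(Fe³⁺) the radiation chemical yield. Recipe changes of the kind studied above shift ε and G, and hence the dose range, sensitivity and fading.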

  3. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  4. Characterization of surface position in a liquid dispensing orifice

    Energy Technology Data Exchange (ETDEWEB)

    Farahi, R H [ORNL; Passian, Ali [ORNL; Thundat, Thomas George [ORNL; Lereu, Aude L [ORNL; Tetard, Laurene [University of Tennessee, Knoxville (UTK) & Oak Ridge National Laboratory (ORNL); Jones, Yolanda [ORNL

    2009-01-01

    Precision microdispensing technology delivers picoliter amounts of fluid for printing, electronic, optical, chemical and biomedical applications. In particular, microjetting is capable of accurate, flexible, and non-contact coating with polymers, thus allowing the functionalization of delicate microsensors such as microcantilevers. Information on the various phases of droplet formation is important to control volume, uniformity, velocity and rate. One such aspect is the ringing of the meniscus after droplet breakoff, which can affect subsequent drop formation. We present analysis of an optical characterization technique and experimental results on the behaviour of meniscus oscillations in an orifice of a piezoelectric microjet.

  5. Detector characterization for efficiency calibration in different measurement geometries

    International Nuclear Information System (INIS)

    Toma, M.; Dinescu, L.; Sima, O.

    2005-01-01

    In order to perform an accurate efficiency calibration for different measurement geometries a good knowledge of the detector characteristics is required. The Monte Carlo simulation program GESPECOR is applied. The detector characterization required for Monte Carlo simulation is achieved using the efficiency values obtained from measuring a point source. The point source was measured in two significant geometries: the source placed in a vertical plane containing the vertical symmetry axis of the detector and in a horizontal plane containing the centre of the active volume of the detector. The measurements were made using gamma spectrometry technique. (authors)

  6. Characterization of geometrical random uncertainty distribution for a group of patients in radiotherapy; Caracterizacion de la distribucion de incertidumbres geometricas aleatorias para un grupo de pacientes en radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Montplet, C.; Jurado Bruggeman, D.

    2010-07-01

    Geometrical random uncertainty in radiotherapy is usually characterized by a single value for each group of patients. We propose a novel approach based on a statistically accurate characterization of the uncertainty distribution, thus reducing the risk of obtaining potentially unsafe results in CTV-PTV margins or in the selection of correction protocols.
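    As context for why the shape of the random-uncertainty distribution matters, a widely used population-based recipe (van Herk's margin formula, cited here as background rather than as the authors' proposal) converts the systematic (Σ) and random (σ) standard deviations of geometric uncertainty into a CTV-to-PTV margin:

        \[
        M = 2.5\,\Sigma + 0.7\,\sigma
        \]

    A characterization of σ that is accurate per distribution, rather than a single pooled value, directly changes the margin such a recipe produces.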

  7. An accurate metric for the spacetime around rotating neutron stars

    Science.gov (United States)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e. a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  8. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: its sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  10. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef; Fast P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M. (Peterson AFB, CO); Ray, J. P.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  11. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.

  12. Accurate reconstruction of the jV-characteristic of organic solar cells from measurements of the external quantum efficiency

    Science.gov (United States)

    Meyer, Toni; Körner, Christian; Vandewal, Koen; Leo, Karl

    2018-04-01

    In two terminal tandem solar cells, the current density - voltage (jV) characteristic of the individual subcells is typically not directly measurable, but often required for a rigorous device characterization. In this work, we reconstruct the jV-characteristic of organic solar cells from measurements of the external quantum efficiency under applied bias voltages and illumination. We show that it is necessary to perform a bias irradiance variation at each voltage and subsequently conduct a mathematical correction of the differential to the absolute external quantum efficiency to obtain an accurate jV-characteristic. Furthermore, we show that measuring the external quantum efficiency as a function of voltage for a single bias irradiance of 0.36 AM1.5g equivalent sun provides a good approximation of the photocurrent density over voltage curve. The method is tested on a selection of efficient, common single-junctions. The obtained conclusions can easily be transferred to multi-junction devices with serially connected subcells.

  13. Characterizing chemical systems with on-line computers and graphics

    International Nuclear Information System (INIS)

    Frazer, J.W.; Rigdon, L.P.; Brand, H.R.; Pomernacki, C.L.

    1979-01-01

    Incorporating computers and graphics on-line to chemical experiments and processes opens up new opportunities for the study and control of complex systems. Systems having many variables can be characterized even when the variable interactions are nonlinear, and the system cannot a priori be represented by numerical methods and models. That is, large sets of accurate data can be rapidly acquired, then modeling and graphic techniques can be used to obtain partial interpretation plus design of further experimentation. The experimenter can thus comparatively quickly iterate between experimentation and modeling to obtain a final solution. We have designed and characterized a versatile computer-controlled apparatus for chemical research, which incorporates on-line instrumentation and graphics. It can be used to determine the mechanism of enzyme-induced reactions or to optimize analytical methods. The apparatus can also be operated as a pilot plant to design control strategies. On-line graphics were used to display conventional plots used by biochemists and three-dimensional response-surface plots

  14. Small-Molecule Binding Aptamers: Selection Strategies, Characterization, and Applications

    Directory of Open Access Journals (Sweden)

    Annamaria eRuscito

    2016-05-01

    Full Text Available Aptamers are single-stranded, synthetic oligonucleotides that fold into 3-dimensional shapes capable of binding non-covalently with high affinity and specificity to a target molecule. They are generated via an in vitro process known as the Systematic Evolution of Ligands by EXponential enrichment (SELEX), from which candidates are screened and characterized, and then applied in aptamer-based biosensors for target detection. Aptamers for small molecule targets such as toxins, antibiotics, molecular markers, drugs, and heavy metals will be the focus of this review. Their accurate detection is ultimately needed for the protection and wellbeing of humans and animals. However, issues such as the drastic difference in size between the aptamer and the small molecule make it challenging to select, characterize, and apply aptamers for the detection of small molecules. Thus, recent (since 2012) notable advances in small molecule aptamers, which have overcome some of these challenges, are presented here, while challenges that still remain are defined and discussed.

  15. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  16. Arbitrarily accurate twin composite π-pulse sequences

    Science.gov (United States)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.

  17. Accurate wavelength prediction of photonic crystal resonant reflection and applications in refractive index measurement

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.

    2014-01-01

    In the past decade, photonic crystal resonant reflectors have been increasingly used as the basis for label-free biochemical assays in lab-on-a-chip applications. In both designing and interpreting experimental results, an accurate model describing the optical behavior of such structures is essential. Here, an analytical method for precisely predicting the absolute positions of resonantly reflected wavelengths is presented. The importance of accounting for material dispersion in order to obtain accurate simulation results is highlighted, and a method for doing so using an iterative approach is demonstrated. The model is experimentally verified to be highly accurate using nanoreplicated, polymer-based photonic crystal grating reflectors with varying grating periods and superstrate materials. Furthermore, an application for the model is demonstrated, in which the material dispersion ...
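    As background on why an iterative treatment of dispersion is needed (a generic relation for guided-mode resonance reflectors, not an equation quoted from the paper): the resonantly reflected wavelength approximately satisfies the phase-matching condition

        \[
        \lambda_{\mathrm{res}} \approx \Lambda\, n_{\mathrm{eff}}(\lambda_{\mathrm{res}}),
        \]

    where Λ is the grating period and n_eff is the effective index of the guided mode. Because n_eff itself depends on the material indices at λ_res, the condition must be solved self-consistently, e.g. by iterating until the wavelength converges.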

  18. Feedforward signal prediction for accurate motion systems using digital filters

    NARCIS (Netherlands)

    Butler, H.

    2012-01-01

    A positioning system that needs to accurately track a reference can benefit greatly from using feedforward. When using a force actuator, the feedforward needs to generate a force proportional to the reference acceleration, which can be measured by means of an accelerometer or can be created by

  19. Using an FPGA for Fast Bit Accurate SoC Simulation

    NARCIS (Netherlands)

    Wolkotte, P.T.; Holzenspies, P.K.F.; Smit, Gerardus Johannes Maria

    In this paper we describe a sequential simulation method to simulate large parallel homo- and heterogeneous systems on a single FPGA. The method is applicable for parallel systems where lengthy cycle and bit accurate simulations are required. It is particularly designed for systems that do not fit

  20. Ab initio study of the CO-N2 complex: a new highly accurate intermolecular potential energy surface and rovibrational spectrum

    DEFF Research Database (Denmark)

    Cybulski, Hubert; Henriksen, Christian; Dawes, Richard

    2018-01-01

    A new, highly accurate ab initio ground-state intermolecular potential-energy surface (IPES) for the CO-N2 complex is presented. Thousands of interaction energies calculated with the CCSD(T) method and Dunning's aug-cc-pVQZ basis set extended with midbond functions were fitted to an analytical function. The global minimum of the potential is characterized by an almost T-shaped structure and has an energy of -118.2 cm⁻¹. The symmetry-adapted Lanczos algorithm was used to compute rovibrational energies (up to J = 20) on the new IPES. The RMSE with respect to experiment was found to be on the order of 0.038 cm⁻¹, which confirms the very high accuracy of the potential. This level of agreement is among the best reported in the literature for weakly bound systems and considerably improves on those of previously published potentials.

  1. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    Science.gov (United States)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times less grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights in the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  2. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees
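    A minimal sketch of the Monte Carlo idea for quantifying a top event from minimal cut sets (a generic illustration, not the CUTREE implementation; the basic-event probabilities and cut sets below are invented):

        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            // Hypothetical basic-event failure probabilities.
            const std::vector<double> p = {0.01, 0.02, 0.005, 0.03};
            // Hypothetical minimal cut sets, as lists of basic-event indices.
            const std::vector<std::vector<int>> cutSets = {{0, 1}, {2, 3}, {1, 2}};

            std::mt19937 rng(42);
            std::uniform_real_distribution<double> u(0.0, 1.0);

            const int trials = 1000000;
            long long top = 0;
            for (int t = 0; t < trials; ++t) {
                // Sample the state of every basic event.
                std::vector<bool> failed(p.size());
                for (std::size_t i = 0; i < p.size(); ++i) failed[i] = (u(rng) < p[i]);
                // The top event occurs if every event of any one cut set has failed.
                for (const auto& cs : cutSets) {
                    bool all = true;
                    for (int e : cs) all = all && failed[e];
                    if (all) { ++top; break; }
                }
            }
            std::printf("Estimated top event probability: %g\n",
                        static_cast<double>(top) / trials);
            return 0;
        }

    In the paper's scheme, sampling of this kind estimates the contribution of the discarded (truncated) cut sets, while the retained sets are quantified with the sum of disjoint products and correction factor approaches.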

  3. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  4. Waste site characterization through digital analysis of historical aerial photographs at Los Alamos National Laboratory and Eglin Air Force Base

    International Nuclear Information System (INIS)

    Van Eeckhout, E.; Pope, P.; Wells, B.; Rofer, C.; Martin, B.

    1995-01-01

    Historical aerial photographs are used to provide a physical history and preliminary mapping information for characterizing hazardous waste sites at Los Alamos National Laboratory and Eglin Air Force Base. The examples cited show how imagery was used to accurately locate and identify previous activities at a site, monitor changes that occurred over time, and document the observable remnants of such activities today. The methodology demonstrates how historical imagery (along with any other pertinent data) can be used in the characterization of past environmental damage

  5. Range Information Characterization of the Hokuyo UST-20LX LIDAR Sensor

    Directory of Open Access Journals (Sweden)

    Matthew A. Cooper

    2018-05-01

    Full Text Available This paper presents a study on the data measurements that the Hokuyo UST-20LX Laser Rangefinder produces, which compiles into an overall characterization of the LiDAR sensor relative to indoor environments. The range measurements, beam divergence, angular resolution, error effect due to some common painted and wooden surfaces, and the error due to target surface orientation are analyzed. It was shown that using a statistical average of sensor measurements provides a more accurate range measurement. It was also shown that the major source of errors for the Hokuyo UST-20LX sensor was caused by something that will be referred to as “mixed pixels”. Additional error sources are target surface material, and the range relative to the sensor. The purpose of this paper was twofold: (1) to describe a series of tests that can be performed to characterize various aspects of a LIDAR system from a user perspective, and (2) to present a detailed characterization of the commonly-used Hokuyo UST-20LX LIDAR sensor.

  6. Characterization of low level mixed waste at Los Alamos National Laboratory

    International Nuclear Information System (INIS)

    Hepworth, E.; Montoya, A.; Holizer, B.

    1995-01-01

    The characterization program was conducted to maintain regulatory compliance and support ongoing waste treatment and disposal activities. The characterization team conducted a characterization review of wastes stored at the Laboratory that contain both a low-level radioactive and a hazardous component. The team addressed only those wastes generated before January 1993. The wastes reviewed, referred to as legacy wastes, had been generated before the implementation of comprehensive waste acceptance documentation procedures. The review was performed to verify existing RCRA code assignments and was required as part of the Federal Facility Compliance Agreement (FFCA). The review entailed identifying all legacy LLMW items in storage, collecting existing documentation, contacting and interviewing generators, and reviewing code assignments based upon information from knowledge of process (KOP) as allowed by RCRA. The team identified 7,546 legacy waste items in the current inventory, and determined that 4,200 required further RCRA characterization and documentation. KOP characterization was successful for accurately assigning RCRA codes for all but 117 of the 4,200 items within the scope of work. As a result of KOP interviews, 714 waste items were determined to be non-hazardous, while 276 were determined to be non-radioactive. Other wastes were stored as suspect radioactive. Many of the suspect radioactive wastes were certified by the generators as non-radioactive and will eventually be removed

  7. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  8. Many participants in inpatient rehabilitation can quantify their exercise dosage accurately: an observational study.

    Science.gov (United States)

    Scrivener, Katharine; Sherrington, Catherine; Schurr, Karl; Treacy, Daniel

    2011-01-01

    Are inpatients undergoing rehabilitation who appear able to count exercises able to accurately quantify the amount of exercise they undertake? Observational study. Inpatients in an aged care rehabilitation unit and a neurological rehabilitation unit, who appeared able to count their exercises during a 1-2 min observation by their treating physiotherapist. Participants were observed for 30 min by an external observer while they exercised in the physiotherapy gymnasium. Both the participants and the observer counted exercise repetitions with a hand-held tally counter and the two tallies were compared. Of the 60 people admitted for aged care rehabilitation during the study period, 49 (82%) were judged by their treating therapist to be able to count their own exercise repetitions accurately. Of the 30 people admitted for neurological rehabilitation during the study period, 20 (67%) were judged by their treating therapist to be able to count their repetitions accurately. Of the 69 people judged to be accurate, 40 underwent observation while exercising. There was excellent agreement between these participants' counts of their exercise repetitions and the observers' counts, ICC (3,1) of 0.99 (95% CI 0.98 to 0.99). Eleven participants (28%) were in complete agreement with the observer. A further 19 participants (48%) varied from the observer by less than 10%. Therapists were able to identify a group of rehabilitation participants who were accurate in counting their exercise repetitions. Counting of exercise repetitions by therapist-selected patients is a valid means of quantifying exercise dosage during inpatient rehabilitation. Copyright © 2011 Australian Physiotherapy Association. All rights reserved.

  9. Ultrathin conformal devices for precise and continuous thermal characterization of human skin

    Science.gov (United States)

    Webb, R. Chad; Bonifas, Andrew P.; Behnaz, Alex; Zhang, Yihui; Yu, Ki Jun; Cheng, Huanyu; Shi, Mingxing; Bian, Zuguang; Liu, Zhuangjian; Kim, Yun-Soung; Yeo, Woon-Hong; Park, Jae Suk; Song, Jizhou; Li, Yuhang; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2013-10-01

    Precision thermometry of the skin can, together with other measurements, provide clinically relevant information about cardiovascular health, cognitive state, malignancy and many other important aspects of human physiology. Here, we introduce an ultrathin, compliant skin-like sensor/actuator technology that can pliably laminate onto the epidermis to provide continuous, accurate thermal characterizations that are unavailable with other methods. Examples include non-invasive spatial mapping of skin temperature with millikelvin precision, and simultaneous quantitative assessment of tissue thermal conductivity. Such devices can also be implemented in ways that reveal the time-dynamic influence of blood flow and perfusion on these properties. Experimental and theoretical studies establish the underlying principles of operation, and define engineering guidelines for device design. Evaluation of subtle variations in skin temperature associated with mental activity, physical stimulation and vasoconstriction/dilation along with accurate determination of skin hydration through measurements of thermal conductivity represent some important operational examples.

  10. A highly accurate algorithm for the solution of the point kinetics equations

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2013-01-01

    Highlights: • Point kinetics equations for nuclear reactor transient analysis are numerically solved to extreme accuracy. • Results for classic benchmarks found in the literature are given to 9-digit accuracy. • Recent results of claimed accuracy are shown to be less accurate than claimed. • Arguably brings a chapter of numerical evaluation of the PKEs to a close. - Abstract: Attempts to resolve the point kinetics equations (PKEs) describing nuclear reactor transients have been the subject of numerous articles and texts over the past 50 years. Some very innovative methods, such as the RTS (Reactor Transient Simulation) and CAC (Continuous Analytical Continuation) methods of G.R. Keepin and J. Vigil respectively, have been shown to be exceptionally useful. Recently however, several authors have developed methods they consider accurate without a clear basis for their assertion. In response, this presentation will establish a definitive set of benchmarks to enable those developing PKE methods to truthfully assess the degree of accuracy of their methods. Then, with these benchmarks, two recently published methods, found in this journal will be shown to be less accurate than claimed and a legacy method from 1984 will be confirmed
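    For reference, the point kinetics equations under discussion take the standard form (with the conventional six delayed-neutron groups; n is the neutron density, ρ the reactivity, Λ the neutron generation time, β_i and λ_i the delayed fraction and decay constant of group i, and C_i the precursor concentrations):

        \[
        \frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t),
        \qquad
        \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t),
        \qquad
        \beta = \sum_{i=1}^{6} \beta_i .
        \]

    The system is stiff (λ_i and 1/Λ differ by many orders of magnitude), which is what makes the extreme-accuracy benchmarks described above so demanding.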

  11. Using an eye tracker for accurate eye movement artifact correction

    NARCIS (Netherlands)

    Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.

    2007-01-01

    We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which

  12. How to Build MCNP 6.2

    Energy Technology Data Exchange (ETDEWEB)

    Bull, Jeffrey S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-13

    This presentation describes how to build MCNP 6.2. MCNP®* 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.
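    As a generic illustration of the dual OpenMP/MPI execution model the presentation refers to (a toy program, not MCNP source; it assumes an MPI installation and an OpenMP-capable C++ compiler):

        #include <mpi.h>
        #include <omp.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            // Work is split first across MPI ranks (processes, possibly on
            // different nodes), then across OpenMP threads within each rank.
            MPI_Init(&argc, &argv);
            int rank = 0, size = 1;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            #pragma omp parallel
            {
                #pragma omp critical
                std::printf("rank %d of %d, thread %d of %d\n",
                            rank, size, omp_get_thread_num(), omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }

    With common toolchains a source file like this builds with something along the lines of mpicxx -fopenmp hybrid.cpp, mirroring the two parallel build options described for MCNP 6.2.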

  13. Characterization of turbulence stability through the identification of multifractional Brownian motions

    Science.gov (United States)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models in describing real-life signals of high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and variance, which relates to an energy level, are two parameters that characterize multifractional Brownian motions. This research suggests a combined method of estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating global variance and then accurately estimating the time-changing Hurst exponent. A simulation study shows its performance in estimation of the parameters. The proposed method is applied to characterization of atmospheric stability in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmosphere flows from unstable ones.

  14. Characterization of turbulence stability through the identification of multifractional Brownian motions

    Directory of Open Access Journals (Sweden)

    K. C. Lee

    2013-02-01

    Full Text Available Multifractional Brownian motions have become popular as flexible models in describing real-life signals of high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and variance, which relates to an energy level, are two parameters that characterize multifractional Brownian motions. This research suggests a combined method of estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating global variance and then accurately estimating the time-changing Hurst exponent. A simulation study shows its performance in estimation of the parameters. The proposed method is applied to characterization of atmospheric stability in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmosphere flows from unstable ones.
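    To sketch the local-variation idea in generic form (a standard estimator family, not necessarily the exact statistic used in the article): for a multifractional Brownian motion X with slowly varying Hurst exponent H(t) and scale σ, increments satisfy E[(X(t+δ) - X(t))²] = σ² δ^{2H(t)}, so comparing local variations computed at two scales gives

        \[
        \widehat{H}(t) \approx \frac{1}{2} \log_{2} \frac{V_{2\delta}(t)}{V_{\delta}(t)},
        \qquad
        V_{\delta}(t) = \frac{1}{|N(t)|} \sum_{t_i \in N(t)} \bigl( X(t_i + \delta) - X(t_i) \bigr)^{2},
        \]

    where N(t) is a window of sample times near t. The same scaling relation then ties the local variations to the variance, matching the combined two-phase scheme (global variance first, then the time-changing exponent) described above.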

  15. Installation and Characterization of Charged Particle Sources for Space Environmental Effects Testing

    Science.gov (United States)

    Skevington, Jennifer L.

    2010-01-01

    Charged particle sources are integral devices used by Marshall Space Flight Center's Environmental Effects Branch (EM50) to simulate space environments for accurate testing of materials and systems. By using these sources inside custom vacuum systems, materials can be tested to determine charging and discharging properties as well as resistance to sputter damage. This knowledge can enable scientists and engineers to choose proper materials that will not fail in harsh space environments. This paper describes both the steps used to build a low energy electron gun (the "Skevington 3000") and the methods used to characterize the output of both the Skevington 3000 and a manufactured Xenon ion source. Such characterizations include beam flux, beam uniformity, and beam energy. Both sources were deemed suitable for simulating environments in future testing.

  16. Characterization of spent fuel assemblies for storage facilities using non destructive assay

    International Nuclear Information System (INIS)

    Lebrun, A.; Bignan, G.; Recroix, H.; Huver, M.

    1999-01-01

    Many non destructive assay (NDA) techniques have been developed by the French Atomic Energy Commission (CEA) for spent fuel characterization and management. Passive and active neutron methods as well as gamma spectrometric methods have been carried out and applied to industrial devices like PYTHON™ and NAJA. Many existing NDA methods can be successfully applied to storage, but the most promising are the neutron methods combined with on-line evolution codes. For dry storage applications, active neutron measurements require further R and D to achieve accurate results. Characterization data given by NDA instruments can now be linked to automatic fuel recognition. Both types of information can feed the storage management software in order to meet storage operation requirements such as: fissile mass inventory, operators' declaration consistency, or automatic selection of proper storage conditions. (author)

  17. Characterization of Electrospun Nanofibrous Scaffolds for Nanobiomedical Applications

    Science.gov (United States)

    Emul, E.; Saglam, S.; Ates, H.; Korkusuz, F.; Saglam, N.

    2016-08-01

    The electrospinning method is employed in the production of porous fiber scaffolds, and the use of electrospun scaffolds, especially as drug carriers and bone reconstructive materials such as implants, is promising for future applications in tissue engineering. The number of publications has grown very rapidly in this field through the fabrication of complex scaffolds, novel approaches in nanotechnology, and improvements of imaging methods. Hence, characterization of these materials has also become significantly more important for obtaining satisfactory and accurate results. This advantageous and versatile method is ideal for mimicking the bone extracellular matrix, and many biodegradable and biocompatible polymers are preferred in the field of bone reconstruction. In this study, gelatin, gelatin/nanohydroxyapatite (nHAp) and gelatin/PLLA/nHAp scaffolds were fabricated by the electrospinning process. These composite fibers showed clear and continuous morphology under a scanning electron microscope, and their component analyses were determined by Fourier transform infrared spectrometer analyses. These characterization experiments revealed the great potential of the electrospinning method for biomedical applications, with an especially important role in bone reconstruction and the production of implant coating materials.

  18. Characterization of Soil Moisture Level for Rice and Maize Crops using GSM Shield and Arduino Microcontroller

    Science.gov (United States)

    Gines, G. A.; Bea, J. G.; Palaoag, T. D.

    2018-03-01

    Soil serves as a medium for plant growth. One factor that affects soil moisture is drought. Drought has been a major cause of agricultural disaster. Agricultural drought is said to occur when soil moisture is insufficient to meet crop water requirements, resulting in yield losses. This research aimed to characterize the soil moisture level for Rice and Maize crops using an Arduino and applying fuzzy logic. The system architecture for the soil moisture sensor and water pump was the basis for developing the equipment. The data gathered were characterized by applying fuzzy logic. Based on the results, applying fuzzy logic in validating the characterization of soil moisture level for Rice and Maize crops is accurate, as attested by the experts. This will help farmers in monitoring the soil moisture level of Rice and Maize crops.
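
    A hedged illustration of the fuzzy-logic step (the paper's actual membership functions, thresholds, and rule base are not given in the abstract, so everything below is hypothetical): a normalized moisture reading is mapped to degrees of membership in three linguistic levels with triangular functions, and a rule base would then act on those degrees.

      #include <stdio.h>

      /* Triangular membership: rises from a to b, falls from b to c. */
      static double tri(double x, double a, double b, double c)
      {
          if (x <= a || x >= c) return 0.0;
          return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
      }

      int main(void)
      {
          double m = 0.42;                /* normalized sensor reading, 0..1 */
          /* Hypothetical membership functions for three linguistic levels. */
          double dry   = tri(m, -0.1, 0.0, 0.4);
          double moist = tri(m,  0.2, 0.5, 0.8);
          double wet   = tri(m,  0.6, 1.0, 1.1);
          printf("dry=%.2f moist=%.2f wet=%.2f\n", dry, moist, wet);
          /* A rule base would fire on these degrees, e.g. "if dry then pump on". */
          return 0;
      }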

  19. Characterization of lipid films by an angle-interrogation surface plasmon resonance imaging device.

    Science.gov (United States)

    Liu, Linlin; Wang, Qiong; Yang, Zhong; Wang, Wangang; Hu, Ning; Luo, Hongyan; Liao, Yanjian; Zheng, Xiaolin; Yang, Jun

    2015-04-01

    Surface topographies of lipid films have important significance in the analysis of the preparation of giant unilamellar vesicles (GUVs). In order to achieve accurate, high-throughput, and rapid analysis of the surface topographies of lipid films, a homemade SPR imaging device was constructed based on the classical Kretschmann configuration and an angle-interrogation manner. A mathematical model is developed to accurately describe the shift, including the light path in different conditions and the change of the illumination point on the CCD camera; an SPR curve for each sampling point can thus be obtained with this calculation method. The experimental results show that the topographies of lipid films formed under distinct experimental conditions can be accurately characterized, and the measurement resolution of the lipid film thickness may reach 0.05 nm. Compared with existing SPRi devices, which realize detection by monitoring the change of the reflected-light intensity, this new SPRi system can track the change of the resonance angle over the entire sensing surface. Thus, it has detection accuracy as high as the traditional angle-interrogation SPR sensor, with a much wider detectable range of refractive index.

  20. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

    Britne eShabbott

    2013-11-01

    Full Text Available The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  1. Accurate Energies and Structures for Large Water Clusters Using the X3LYP Hybrid Density Functional

    OpenAIRE

    Su, Julius T.; Xu, Xin; Goddard, William A., III

    2004-01-01

    We predict structures and energies of water clusters containing up to 19 waters with X3LYP, an extended hybrid density functional designed to describe noncovalently bound systems as accurately as covalent systems. Our work establishes X3LYP as the most practical ab initio method today for calculating accurate water cluster structures and energies. We compare X3LYP/aug-cc-pVTZ energies to the most accurate theoretical values available (n = 2−6, 8), MP2 with basis set superposition error (BSSE)...

  2. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  3. Accurate numerical simulation of reaction-diffusion processes for heavy oil recovery

    Energy Technology Data Exchange (ETDEWEB)

    Govind, P.A.; Srinivasan, S. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas Univ., Austin, TX (United States)

    2008-10-15

    This study evaluated a reaction-diffusion simulation tool designed to analyze the displacement of carbon dioxide (CO{sub 2}) in a simultaneous injection of carbon dioxide and elemental sodium in a heavy oil reservoir. Sodium was used because its exothermic reaction with in situ water generates heat that reduces the oil viscosity. The process also results in the formation of sodium hydroxide, which reduces interfacial tension at the bitumen interface. A commercial simulation tool was used to model the sodium transport mechanism to the reaction interface through diffusion as well as the reaction zone's subsequent displacement. The aim of the study was to verify whether the in situ reaction was able to generate sufficient heat to reduce oil viscosity and improve the displacement of the heavy oil. The study also assessed the accuracy of the reaction front simulation tool, in which an alternate method was used to model the propagation front as a moving heat source. The sensitivity of the simulation results was then evaluated in relation to the diffusion coefficient in order to understand the scaling characteristics of the reaction-diffusion zone. A pore-scale simulation was then up-scaled to grid blocks. Results of the study showed that when sodium suspended in liquid CO{sub 2} is injected into reservoirs, it diffuses through the carrier phase and interacts with water. A random walk diffusion algorithm with reactive dissipation was implemented to more accurately characterize reaction and diffusion processes. It was concluded that the algorithm modelled physical dispersion while neglecting the effect of numerical dispersion. 10 refs., 3 tabs., 24 figs.
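
    The "random walk diffusion algorithm with reactive dissipation" can be illustrated with a deliberately minimal 1-D C sketch (all parameters are hypothetical; the cited work couples the walk to heat release and up-scales pore-scale results to grid blocks): walkers diffuse on a lattice and, on touching the reaction interface, react with a fixed probability or reflect back into the carrier phase.

      #include <stdio.h>
      #include <stdlib.h>

      /* 1-D random walk with "reactive dissipation": a walker touching the
       * interface at x = 0 reacts (is removed) with probability P_REACT,
       * otherwise it reflects back into the carrier phase. */
      #define NWALK   10000
      #define NSTEP   1000
      #define P_REACT 0.25

      int main(void)
      {
          int reacted = 0;
          srand(42);
          for (int w = 0; w < NWALK; ++w) {
              int x = 20;                      /* start away from the interface */
              for (int t = 0; t < NSTEP; ++t) {
                  x += (rand() & 1) ? 1 : -1;  /* unbiased diffusion step */
                  if (x <= 0) {
                      if ((double)rand() / RAND_MAX < P_REACT) { ++reacted; break; }
                      x = 1;                   /* reflect */
                  }
              }
          }
          printf("fraction reacted: %.3f\n", (double)reacted / NWALK);
          return 0;
      }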

  4. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    Science.gov (United States)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  5. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
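
    The third speed improvement, the interpolation-coefficient lookup table, is easy to show in miniature. Below is a hedged 1-D analogue in C (the paper builds the corresponding table for tricubic interpolation of a 3-D volume): the four Catmull-Rom cubic coefficients of every sample interval are computed once, so each later sub-voxel evaluation is a cheap polynomial evaluation rather than a full re-interpolation.

      #include <stdio.h>

      #define N 8
      static double coef[N][4];          /* per-interval cubic coefficients */

      /* Precompute Catmull-Rom coefficients once per sample interval. */
      static void build_lut(const double *f)
      {
          for (int i = 1; i + 2 < N; ++i) {
              double fm = f[i-1], f0 = f[i], f1 = f[i+1], f2 = f[i+2];
              coef[i][0] = f0;
              coef[i][1] = 0.5 * (f1 - fm);
              coef[i][2] = fm - 2.5*f0 + 2.0*f1 - 0.5*f2;
              coef[i][3] = -0.5*fm + 1.5*f0 - 1.5*f1 + 0.5*f2;
          }
      }

      /* Sub-sample evaluation is now just a cubic polynomial, 1 <= x < N-2. */
      static double interp(double x)
      {
          int i = (int)x;
          double t = x - i;
          const double *c = coef[i];
          return c[0] + t*(c[1] + t*(c[2] + t*c[3]));
      }

      int main(void)
      {
          double f[N] = {0, 1, 4, 9, 16, 25, 36, 49};  /* samples of f(x) = x^2 */
          build_lut(f);
          printf("f(2.5) ~ %.3f\n", interp(2.5));      /* exact for cubics: 6.25 */
          return 0;
      }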

  6. PV led engine characterization lab for standalone light to light systems

    DEFF Research Database (Denmark)

    Thorsteinsson, Sune; Poulsen, Peter Behrensdorff; Lindén, Johannes

    2014-01-01

    PV-powered lighting systems, light-to-light (L2L) systems, offer outdoor lighting where it is otherwise cumbersome to provide. Application of these systems at high latitudes, where the difference in day length between summer and winter is large and the solar energy is low, requires smart… dimming functions for reliable lighting. In this work we have built a laboratory to characterize these systems up to 200 Wp from "nose to tail" in great detail, to support improvement of the systems and to make accurate field performance predictions…

  7. Immunohistochemical Characterization of Canine Lymphomas

    Directory of Open Access Journals (Sweden)

    Roxana CORA

    2017-11-01

    Full Text Available Lymphomas occur by clonal expansion of lymphoid cells and have distinctive morphological and immunophenotypic features. Determination of the canine lymphoma immunophenotype is useful for accurate prognosis and further therapy. In the present study, we performed an immunohistochemical evaluation of cases of canine lymphoma diagnosed in the Department of Pathology (Faculty of Veterinary Medicine, Cluj-Napoca, Romania), in order to characterize them. The investigation included 39 dogs diagnosed with different anatomical forms of lymphoma, following necropsy analysis or assessment of biopsies. The diagnosis of lymphoma was confirmed by necropsy and histopathology (Hematoxylin-eosin stain) examinations. The collected specimens were analyzed by the immunohistochemistry technique (automatic method) using the following antibodies: CD3, CD20, CD21 and CD79a. The analyzed neoplasms were characterized as follows: about 64.10% of cases were diagnosed as B-cell lymphomas, 33.34% of cases as T-cell lymphomas, whereas 2.56% of cases were null cell type lymphomas (neither B nor T). Most of the multicentric (80%), mediastinal (60%) and primary central nervous system lymphomas (100%) had the B immunophenotype, while the majority of cutaneous (80%) and digestive (100%) lymphomas had the T immunophenotype. Immunohistochemical description of canine lymphomas can deliver major details concerning their behavior and malignancy. Additionally, vital prognosis and the efficacy of some therapeutic protocols rely on the immunohistochemical features of canine lymphoma.

  8. Dynamic and accurate assessment of acetaminophen-induced hepatotoxicity by integrated photoacoustic imaging and mechanistic biomarkers in vivo.

    Science.gov (United States)

    Brillant, Nathalie; Elmasry, Mohamed; Burton, Neal C; Rodriguez, Josep Monne; Sharkey, Jack W; Fenwick, Stephen; Poptani, Harish; Kitteringham, Neil R; Goldring, Christopher E; Kipar, Anja; Park, B Kevin; Antoine, Daniel J

    2017-10-01

    The prediction and understanding of acetaminophen (APAP)-induced liver injury (APAP-ILI) and the response to therapeutic interventions is complex. This is due in part to sensitivity and specificity limitations of currently used assessment techniques. Here we sought to determine the utility of integrating translational non-invasive photoacoustic imaging of liver function with mechanistic circulating biomarkers of hepatotoxicity and histological assessment, to facilitate a more accurate and precise characterization of APAP-ILI and the efficacy of therapeutic intervention. Perturbation of liver function and cellular viability was assessed in C57BL/6J male mice by indocyanine green (ICG) clearance (Multispectral Optoacoustic Tomography (MSOT)) and by measurement of mechanistic (miR-122, HMGB1) and established (ALT, bilirubin) circulating biomarkers in response to acetaminophen and its treatment with acetylcysteine (NAC) in vivo. We utilised a 60% partial hepatectomy model, a situation of defined hepatic functional mass loss, for comparison with the acetaminophen-induced changes. Integration of these mechanistic markers correlated with histological features of APAP hepatotoxicity in a time-dependent manner. They accurately reflected the onset of and recovery from hepatotoxicity compared to traditional biomarkers and also reported the efficacy of NAC with high sensitivity. ICG clearance kinetics correlated with histological scores for acute liver damage for APAP (i.e. 3h timepoint; r=0.90, P<0.0001) and elevations in both of the mechanistic biomarkers, miR-122 (e.g. 6h timepoint; r=0.70, P=0.005) and HMGB1 (e.g. 6h timepoint; r=0.56, P=0.04). For the first time we report the utility of this non-invasive longitudinal imaging approach to provide direct visualisation of liver function coupled with mechanistic biomarkers, in the same animal, allowing the investigation of the toxicological and pharmacological aspects of APAP-ILI and hepatic regeneration.

  9. Fast and accurate calculation of the properties of water and steam for simulation

    International Nuclear Information System (INIS)

    Szegi, Zs.; Gacs, A.

    1990-01-01

    A basic principle simulator was developed at the CRIP, Budapest, for real time simulation of the transients of WWER-440 type nuclear power plants. Its integral part is the fast and accurate calculation of the thermodynamic properties of water and steam. To eliminate successive approximations, the model system of the secondary coolant circuit requires binary forms which are known as inverse functions, continuous when crossing the saturation line, accurate and coherent for all argument combinations. A solution which reduces the computer memory and execution time demand is reported. (author) 36 refs.; 5 figs.; 3 tabs

  10. Accurate expansion of cylindrical paraxial waves for its straightforward implementation in electromagnetic scattering

    Science.gov (United States)

    Naserpour, Mahin; Zapata-Rodríguez, Carlos J.

    2018-01-01

    The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally we discuss how our procedure can be favorably implemented in scattering problems.

  11. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  12. Accurate Assessment of the Oxygen Reduction Electrocatalytic Activity of Mn/Polypyrrole Nanocomposites Based on Rotating Disk Electrode Measurements, Complemented with Multitechnique Structural Characterizations

    Science.gov (United States)

    Sánchez, Carolina Ramírez; Taurino, Antonietta; Bozzini, Benedetto

    2016-01-01

    This paper reports on the quantitative assessment of the oxygen reduction reaction (ORR) electrocatalytic activity of electrodeposited Mn/polypyrrole (PPy) nanocomposites for alkaline aqueous solutions, based on the Rotating Disk Electrode (RDE) method and accompanied by structural characterizations relevant to the establishment of structure-function relationships. The characterization of Mn/PPy films addresses the following: (i) morphology, as assessed by Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscope (AFM); (ii) local electrical conductivity, as measured by Scanning Probe Microscopy (SPM); and (iii) molecular structure, accessed by Raman Spectroscopy; these data provide the background against which the electrocatalytic activity can be rationalised. For comparison, the properties of Mn/PPy are gauged against those of graphite, PPy, and polycrystalline-Pt (poly-Pt). Since the literature lacks accepted protocols for precise catalytic activity measurement at a poly-Pt electrode in alkaline solution using the RDE methodology, we have also worked on establishing an intralaboratory benchmark by evidencing some of the time-consuming parameters which drastically affect the reliability and repeatability of the measurement. PMID:28042491

  13. Accurate Assessment of the Oxygen Reduction Electrocatalytic Activity of Mn/Polypyrrole Nanocomposites Based on Rotating Disk Electrode Measurements, Complemented with Multitechnique Structural Characterizations

    Directory of Open Access Journals (Sweden)

    Patrizia Bocchetta

    2016-01-01

    Full Text Available This paper reports on the quantitative assessment of the oxygen reduction reaction (ORR) electrocatalytic activity of electrodeposited Mn/polypyrrole (PPy) nanocomposites for alkaline aqueous solutions, based on the Rotating Disk Electrode (RDE) method and accompanied by structural characterizations relevant to the establishment of structure-function relationships. The characterization of Mn/PPy films addresses the following: (i) morphology, as assessed by Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscope (AFM); (ii) local electrical conductivity, as measured by Scanning Probe Microscopy (SPM); and (iii) molecular structure, accessed by Raman Spectroscopy; these data provide the background against which the electrocatalytic activity can be rationalised. For comparison, the properties of Mn/PPy are gauged against those of graphite, PPy, and polycrystalline-Pt (poly-Pt). Since the literature lacks accepted protocols for precise catalytic activity measurement at a poly-Pt electrode in alkaline solution using the RDE methodology, we have also worked on establishing an intralaboratory benchmark by evidencing some of the time-consuming parameters which drastically affect the reliability and repeatability of the measurement.

  14. BLE-BASED ACCURATE INDOOR LOCATION TRACKING FOR HOME AND OFFICE

    OpenAIRE

    Joonghong Park; Jaehoon Kim; Sungwon Kang

    2015-01-01

    Nowadays the use of smart mobile devices, and the accompanying need for emerging services relying on indoor location-based services (LBS), is rapidly increasing. For more accurate location tracking using Bluetooth Low Energy (BLE), this paper proposes a novel trilateration-based algorithm and presents experimental results that demonstrate its effectiveness.
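
    The abstract does not spell out the algorithm, so the following C sketch shows only the textbook trilateration baseline that such work builds on (beacon coordinates, RSSI readings, and path-loss parameters are all hypothetical): a log-distance path-loss model converts each RSSI to a range, and subtracting the first beacon's circle equation from the other two linearizes the position fix into a 2x2 system.

      #include <math.h>
      #include <stdio.h>

      /* Log-distance path loss: rssi = tx_at_1m - 10*n*log10(d). */
      static double rssi_to_dist(double rssi, double tx_at_1m, double n)
      {
          return pow(10.0, (tx_at_1m - rssi) / (10.0 * n));
      }

      int main(void)
      {
          double bx[3] = {0.0, 5.0, 0.0};            /* beacon coordinates (m) */
          double by[3] = {0.0, 0.0, 5.0};
          double rssi[3] = {-66.0, -72.0, -72.0};    /* hypothetical readings */
          double d[3];
          for (int i = 0; i < 3; ++i)
              d[i] = rssi_to_dist(rssi[i], -59.0, 2.0);  /* n = 2: free space */

          /* 2(xi-x0)x + 2(yi-y0)y = d0^2-di^2 + xi^2-x0^2 + yi^2-y0^2 */
          double a11 = 2*(bx[1]-bx[0]), a12 = 2*(by[1]-by[0]);
          double a21 = 2*(bx[2]-bx[0]), a22 = 2*(by[2]-by[0]);
          double b1 = d[0]*d[0]-d[1]*d[1] + bx[1]*bx[1]-bx[0]*bx[0]
                      + by[1]*by[1]-by[0]*by[0];
          double b2 = d[0]*d[0]-d[2]*d[2] + bx[2]*bx[2]-bx[0]*bx[0]
                      + by[2]*by[2]-by[0]*by[0];
          double det = a11*a22 - a12*a21;
          printf("position ~ (%.2f, %.2f)\n", (b1*a22 - b2*a12) / det,
                                              (a11*b2 - a21*b1) / det);
          return 0;
      }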

  15. Accurate localization of intracavitary brachytherapy applicators from 3D CT imaging studies

    International Nuclear Information System (INIS)

    Lerma, F.A.; Williamson, J.F.

    2002-01-01

    Purpose: To present an accurate method to identify the positions and orientations of intracavitary (ICT) brachytherapy applicators imaged in 3D CT scans, in support of Monte Carlo photon-transport simulations, enabling accurate dose modeling in the presence of applicator shielding and interapplicator attenuation. Materials and methods: The method consists of finding the transformation that maximizes the coincidence of the known 3D shapes of each applicator component (colpostats and tandem) with the volume defined by contours of the corresponding surface on each CT slice. We use this technique to localize Fletcher-Suit CT-compatible applicators for three cervix cancer patients using post-implant CT examinations (3 mm slice thickness and separation). Dose distributions in 1-to-1 registration with the underlying CT anatomy are derived from 3D Monte Carlo photon-transport simulations incorporating each applicator's internal geometry (source encapsulation, high-density shields, and applicator body) oriented in relation to the dose matrix according to the measured localization transformations. The precision and accuracy of our localization method are assessed using CT scans, in which the positions and orientations of dense rods and spheres (in a precision-machined phantom) were measured at various orientations relative to the gantry. Results: Using this method, we register 3D Monte Carlo dose calculations directly onto post-insertion patient CT studies. Using CT studies of a precisely machined phantom, the absolute accuracy of the method was found to be ±0.2 mm in plane, and ±0.3 mm in the axial direction, while its precision was ±0.2 mm in plane, and ±0.2 mm axially. Conclusion: We have developed a novel and accurate technique to localize intracavitary brachytherapy applicators in 3D CT imaging studies, which supports 3D dose planning involving detailed 3D Monte Carlo dose calculations, modeling source positions, applicator shielding and interapplicator attenuation

  16. The Rocky Flats Plant Waste Stream and Residue Identification and Characterization Program (WSRIC): Progress and achievements

    International Nuclear Information System (INIS)

    Ideker, V.L.

    1994-01-01

    The Waste Stream and Residue Identification and Characterization (WSRIC) Program, as described in the WSRIC Program Description, delineates the process knowledge used to identify and characterize currently generated waste from approximately 5404 waste streams originating from 576 processes in 288 buildings at the Rocky Flats Plant (RFP). Annual updates to the WSRIC documents are required by the Federal Facilities Compliance Agreement between the US Department of Energy, the Colorado Department of Health, and the Environmental Protection Agency. Accurate determination and characterization of waste is a crucial component in RFP's waste management strategy to assure compliance with Resource Conservation and Recovery Act (RCRA) storage and treatment requirements, as well as disposal acceptance criteria. The WSRIC Program was rebaselined in September 1992 and serves as the linchpin for documenting process knowledge in RFP's RCRA operating record. Enhancements to the WSRIC include strengthening the waste characterization rationale, expanding WSRIC training for waste generators, and incorporating analytical information into the WSRIC building books. These enhancements will improve credibility with the regulators and increase waste generators' understanding of the basis for credible waste characterizations.

  17. Influential Factors for Accurate Load Prediction in a Demand Response Context

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Kjærgaard, Mikkel Baun; Jørgensen, Bo Nørregaard

    2016-01-01

    Accurate prediction of a building's electricity load is crucial to respond to Demand Response events with an assessable load change. However, previous work on load prediction fails to consider a wider set of possible data sources. In this paper we study different data scenarios to map their influence… Next, the time of day that is being predicted greatly influences the prediction, which is related to the weather pattern. By presenting these results we hope to improve the modeling of building loads and algorithms for Demand Response planning.

  18. Application of dynamic pseudo fission products and actinides for accurate burnup calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E.; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Kloosterman, J.L.

    1996-09-01

    The introduction of pseudo fission products for accurate fine-group spectrum calculations during burnup is discussed. The density of the pseudo nuclides is calculated before each spectrum calculation from the actual densities and cross sections of all nuclides to be lumped into a pseudo fission product. As many actinides are also formed in the fuel during its life cycle, a pseudo actinide with a fission cross section is introduced as well. A realistic burnup calculation demonstrates that only a few fission products and actinides need to be included explicitly in a spectrum calculation. All other fission products and actinides can be accurately represented by the pseudo nuclides. (author)
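
    The lumping step described here amounts to conserving the macroscopic reaction rate of the grouped nuclides. A hedged one-group C sketch (densities and cross sections are made up; per-group lumping works the same way):

      #include <stdio.h>

      /* Number-density-weighted lumping of minor fission products into one
       * pseudo nuclide, redone before every spectrum calculation as the
       * abstract describes.  One energy group shown; values hypothetical. */
      int main(void)
      {
          double N[4]     = {1.2e-6, 4.5e-7, 8.0e-8, 3.1e-7}; /* atoms/(barn cm) */
          double sigma[4] = {12.0, 45.0, 140.0, 8.5};         /* capture, barn  */

          double n_pseudo = 0.0, reaction = 0.0;
          for (int i = 0; i < 4; ++i) {
              n_pseudo += N[i];
              reaction += N[i] * sigma[i];   /* preserve the macroscopic rate */
          }
          /* sigma_pseudo is chosen so that N_pseudo*sigma_pseudo equals the
           * summed macroscopic cross section of the lumped nuclides. */
          printf("N_pseudo = %.3e, sigma_pseudo = %.2f b\n",
                 n_pseudo, reaction / n_pseudo);
          return 0;
      }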

  19. Application of an accurate thermal hydraulics solver in VTT's reactor dynamics codes

    International Nuclear Information System (INIS)

    Rajamaeki, M.; Raety, H.; Kyrki-Rajamaeki, R.; Eskola, M.

    1998-01-01

    VTT's reactor dynamics codes are developed further and new more detailed models are created for tasks related to increased safety requirements. For thermal hydraulics calculations an accurate general flow model based on a new solution method PLIM has been developed. It has been applied in VTT's one-dimensional TRAB and three-dimensional HEXTRAN codes. Results of a demanding international boron dilution benchmark defined by VTT are given and compared against results of other codes with original or improved boron tracking. The new PLIM method not only allows the accurate modelling of a propagating boron dilution front, but also the tracking of a temperature front, which is missed by the special boron tracking models. (orig.)

  20. The MOLDY short-range molecular dynamics package

    Science.gov (United States)

    Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.

    2011-12-01

    We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy.

    Program summary:
    Program title: MOLDY
    Catalogue identifier: AEJU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 2
    No. of lines in distributed program, including test data, etc.: 382 881
    No. of bytes in distributed program, including test data, etc.: 6 705 242
    Distribution format: tar.gz
    Programming language: Fortran 95/OpenMP
    Computer: Any
    Operating system: Any
    Has the code been vectorised or parallelized?: Yes. OpenMP is required for parallel execution
    RAM: 100 MB or more
    Classification: 7.7
    Nature of problem: MOLDY addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as
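
    The synchronisation issue flagged above arises because, with Newton's third law, each pair (i,j) updates the force on both atoms, so two threads can race on the same force entry. A minimal C/OpenMP sketch follows (MOLDY itself is Fortran; the one-dimensional Lennard-Jones chain here is a simplified stand-in for its neighbour handling). Atomic updates are the simplest cure; per-thread force buffers reduced at the end are a common faster alternative.

      #include <math.h>
      #include <stdio.h>
      #ifdef _OPENMP
      #include <omp.h>
      #endif

      #define N 1000
      static double x[N], f[N];

      int main(void)
      {
          for (int i = 0; i < N; ++i) x[i] = 1.05 * i;   /* ordered 1-D chain */

          #pragma omp parallel for schedule(dynamic)
          for (int i = 0; i < N; ++i)
              for (int j = i + 1; j < N; ++j) {
                  double r = x[j] - x[i];
                  if (r > 2.5) break;                    /* short-range cutoff */
                  /* Lennard-Jones force magnitude, eps = sigma = 1. */
                  double fr = 24.0 * (2.0 / pow(r, 13) - 1.0 / pow(r, 7));
                  /* Both atoms of the pair are updated: protect each write. */
                  #pragma omp atomic
                  f[i] -= fr;
                  #pragma omp atomic
                  f[j] += fr;
              }
          printf("f[0] = %g\n", f[0]);
          return 0;
      }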

  1. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes the traditional fabrication limitations by designing heterogeneous materials to enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  2. A method of accurate determination of voltage stability margin

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology, Wroclaw (Poland); Klimek, A. [AREVA Transmission and Distribution, Stafford (United Kingdom)

    2008-07-01

    In the process of a developing power system disturbance, voltage instability at the receiving substations often contributes to deteriorating system stability, which eventually may lead to severe blackouts. The voltage stability margin at receiving substations may be used to determine measures to prevent voltage collapse, primarily by operating or blocking the transformer tap changing device, or by load shedding. The best measure of the stability margin is the actual load to source impedance ratio and its critical value, which is unity. This paper presented an accurate method of calculating the load to source impedance ratio, derived from the Thevenin equivalent circuit of the system, which leads to the calculation of the stability margin. The paper described the calculation of the load to source impedance ratio, including the supporting equations. The calculation was based on the very definition of voltage stability, which says that system stability is maintained as long as the change of power that follows an increase of admittance is positive. The testing of the stability margin assessment method was performed through simulation for a number of power network structures and simulation scenarios. Results of the simulations revealed that this method is accurate and stable for all possible events occurring downstream of the device location. 3 refs., 8 figs.
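
    A purely resistive toy version of the idea in C (real implementations work with phasors, filtering, and the power-change criterion; the two snapshots below are invented): two local (V, I) measurements taken as the load changes identify the Thevenin source, and the margin is read off the load-to-source impedance ratio, whose critical value is unity.

      #include <stdio.h>

      int main(void)
      {
          /* Two measurement snapshots taken as the load admittance rises. */
          double V1 = 0.98, I1 = 0.50;          /* per unit */
          double V2 = 0.95, I2 = 0.62;

          double Zs = (V1 - V2) / (I2 - I1);    /* source impedance estimate */
          double E  = V2 + I2 * Zs;             /* Thevenin EMF estimate */
          double Zload = V2 / I2;               /* present load impedance */

          /* Instability is approached as the ratio falls toward 1. */
          printf("Zs=%.3f  E=%.3f  Zload=%.3f  margin Zload/Zs=%.2f\n",
                 Zs, E, Zload, Zload / Zs);
          return 0;
      }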

  3. The determination of the pressure-viscosity coefficient of a lubricant through an accurate film thickness formula and accurate film thickness measurements : part 2 : high L values

    NARCIS (Netherlands)

    Leeuwen, van H.J.

    2011-01-01

    The pressure-viscosity coefficient of a traction fluid is determined by fitting calculation results to accurate film thickness measurements obtained at different speeds, loads, and temperatures. Through experiments, covering a range of 5.6

  4. Linear signal noise summer accurately determines and controls S/N ratio

    Science.gov (United States)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.

  5. High-Resolution Characterization of Intertidal Geomorphology by TLS

    Science.gov (United States)

    Guarnieri, A.; Vettore, A.; Marani, M.

    2007-12-01

    Observational fluvial geomorphology has greatly benefited in the last decades from the wide availability of digital terrain data obtained by orthophotos and by means of accurate airborne laser scanner data (LiDAR). On the contrary, the spatially-distributed study of the geomorphology of intertidal areas, such as tidal flats and marshes, remains problematic owing to the small relief characterizing such environments, often of the order of a few tens of centimetres, i.e. comparable to the accuracy of state-of-the-art LiDAR data. Here we present the results of Terrestrial Laser Scanner (TLS) acquisitions performed within a tidal marsh in the Venice lagoon. The survey was performed using a Leica HDS 3000 TLS, characterized by a large Field of View (360 deg H x 270 deg V) and a low beam divergence, and allowed the generation of both a DSM and a DTM. This is important e.g. in eco-geomorphic studies of intertidal environments, where conventional LiDAR technologies cannot easily separate first and last laser returns (because of the low vegetation height) and thus provide models of the surface as well as of the terrain. Furthermore, the DTM is shown to provide unprecedented characterizations of marsh morphology, e.g. regarding the cross-sectional properties of small-scale tidal creeks (widths of the order of 10 cm), previously observable only through conventional topographic surveys, which do not allow a fully spatially-distributed description of their morphology.

  6. Petrographic characterization to build an accurate rock model using micro-CT: Case study on low-permeable to tight turbidite sandstone from Eocene Shahejie Formation.

    Science.gov (United States)

    Munawar, Muhammad Jawad; Lin, Chengyan; Cnudde, Veerle; Bultreys, Tom; Dong, Chunmei; Zhang, Xianguo; De Boever, Wesley; Zahid, Muhammad Aleem; Wu, Yuqi

    2018-03-26

    Pore scale flow simulations heavily depend on petrographic characterization and modeling of reservoir rocks. Mineral phase segmentation and pore network modeling are crucial stages in micro-CT based rock modeling. The success of the pore network model (PNM) in predicting petrophysical properties relies on image segmentation, image resolution and, most importantly, the nature of the rock (homogeneous, complex or microporous). Pore network modeling has experienced extensive research and development during the last decade; however, the application of these models to a variety of naturally heterogeneous reservoir rocks is still a challenge. In this paper, four samples from a low permeable to tight sandstone reservoir were used to characterize their petrographic and petrophysical properties using high-resolution micro-CT imaging. The phase segmentation analysis from micro-CT images shows that 5-6% microporous regions are present in kaolinite-rich sandstone (E3 and E4), while 1.7-1.8% are present in illite-rich sandstone (E1 and E2). The pore system percolates without micropores in E1 and E2, while it does not percolate without micropores in E3 and E4. In E1 and E2, the total MICP porosity is equal to the volume percent of macropores determined from micro-CT images, which indicates that the macropores are well connected and micropores do not play any role in the non-wetting fluid (mercury) displacement process. In E3 and E4 sandstones, the volume percent of micropores is far less (almost 50%) than the total MICP porosity, which means that almost half of the pore space was not detected by the micro-CT scan. PNM behaved well in E1 and E2, where better agreement exists between PNM and MICP measurements. E3 and E4 exhibit a multiscale pore space which cannot be addressed with a single-scale PNM method; a multiscale approach is needed to characterize such complex rocks. This study provides helpful insights towards the application of existing micro-CT based petrographic characterization methodology

  7. PV LED ENGINE characterization lab for stand alone light-to-light systems

    DEFF Research Database (Denmark)

    Poulsen, Peter Behrensdorff; Thorsteinsson, Sune; Lindén, Johannes

    2015-01-01

    dimming functions for reliable lighting. A barrier to exploiting standalone solar lighting in the urban environment seems to be a lack of knowledge and of available tools for proper dimensioning. In this work the development of a powerful dimensioning tool is described and initial measurements… are presented. Furthermore, a laboratory has been built to characterize these systems up to 200 Wp from "nose to tail" in great detail, to support improvement of the systems and to make accurate field performance predictions by the dimensioning tool…

  8. Transfer and characterization of large-area CVD graphene for transparent electrode applications

    DEFF Research Database (Denmark)

    Whelan, Patrick Rebsdorf

    addresses key issues for industrial integration of large area graphene for optoelectronic devices. This is done through optimization of existing characterization methods and development of new transfer techniques. A method for accurately measuring the decoupling of graphene from copper catalysts… and the electrical properties of graphene after transfer are superior compared to the standard etching transfer method. Spatial mapping of the electrical properties of transferred graphene is performed using terahertz time-domain spectroscopy (THz-TDS). The non-contact nature of THz-TDS and the fact

  9. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    International Nuclear Information System (INIS)

    Thompson, A.P.; Swiler, L.P.; Trott, C.R.; Foiles, S.M.; Tucker, G.J.

    2015-01-01

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum

  10. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Aidan P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Multiscale Science Dept.; Swiler, Laura P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Optimization and Uncertainty Quantification Dept.; Trott, Christian R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Scalable Algorithms Dept.; Foiles, Stephen M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Materials and Data Science Dept.; Tucker, Garritt J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Materials and Data Science Dept.; Drexel Univ., Philadelphia, PA (United States). Dept. of Materials Science and Engineering

    2015-03-15

    Here, we present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.

  11. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, A.P., E-mail: athomps@sandia.gov [Multiscale Science Department, Sandia National Laboratories, PO Box 5800, MS 1322, Albuquerque, NM 87185 (United States); Swiler, L.P., E-mail: lpswile@sandia.gov [Optimization and Uncertainty Quantification Department, Sandia National Laboratories, PO Box 5800, MS 1318, Albuquerque, NM 87185 (United States); Trott, C.R., E-mail: crtrott@sandia.gov [Scalable Algorithms Department, Sandia National Laboratories, PO Box 5800, MS 1322, Albuquerque, NM 87185 (United States); Foiles, S.M., E-mail: foiles@sandia.gov [Computational Materials and Data Science Department, Sandia National Laboratories, PO Box 5800, MS 1411, Albuquerque, NM 87185 (United States); Tucker, G.J., E-mail: gtucker@coe.drexel.edu [Computational Materials and Data Science Department, Sandia National Laboratories, PO Box 5800, MS 1411, Albuquerque, NM 87185 (United States); Department of Materials Science and Engineering, Drexel University, Philadelphia, PA 19104 (United States)

    2015-03-15

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
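
    The fitting step common to the three SNAP records above reduces, in its simplest form, to ordinary least squares of energies on bispectrum components. A toy C sketch with two descriptor components and synthetic data (the production fit is weighted, includes forces and stresses, and uses many components):

      #include <stdio.h>

      /* Toy SNAP fit: configuration energies are assumed linear in the
       * bispectrum components, E = sum_k beta_k * B_k, and the coefficients
       * come from the 2x2 normal equations (B^T B) beta = B^T E. */
      int main(void)
      {
          /* Rows: configurations; columns: bispectrum components B1, B2. */
          double B[4][2] = {{1.0, 0.2}, {0.8, 0.9}, {1.3, 0.4}, {0.5, 1.1}};
          double E[4];
          for (int i = 0; i < 4; ++i)       /* synthetic truth: beta = (2, -1) */
              E[i] = 2.0 * B[i][0] - 1.0 * B[i][1];

          double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
          for (int i = 0; i < 4; ++i) {     /* accumulate B^T B and B^T E */
              a11 += B[i][0]*B[i][0]; a12 += B[i][0]*B[i][1];
              a22 += B[i][1]*B[i][1];
              b1  += B[i][0]*E[i];    b2  += B[i][1]*E[i];
          }
          double det = a11*a22 - a12*a12;
          printf("beta = (%.3f, %.3f)\n", (b1*a22 - b2*a12) / det,
                                          (a11*b2 - a12*b1) / det);
          return 0;
      }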

  12. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  13. A comparative critical analysis of modern task-parallel runtimes.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Stark, Dylan; Murphy, Richard C.

    2012-12-01

    The rise in node-level parallelism has increased interest in task-based parallel runtimes for a wide array of application areas. Applications have a wide variety of task spawning patterns which frequently change during the course of application execution, based on the algorithm or solver kernel in use. Task scheduling and load balance regimes, however, are often highly optimized for specific patterns. This paper uses four basic task spawning patterns to quantify the impact of specific scheduling policy decisions on execution time. We compare the behavior of six publicly available tasking runtimes: Intel Cilk, Intel Threading Building Blocks (TBB), Intel OpenMP, GCC OpenMP, Qthreads, and High Performance ParalleX (HPX). With the exception of Qthreads, the runtimes prove to have schedulers that are highly sensitive to application structure. No runtime is able to provide the best performance in all cases, and those that do provide the best performance in some cases, unfortunately, provide extremely poor performance when the application structure does not match the scheduler's assumptions.
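
    One of the basic spawning patterns such studies exercise is recursive binary task spawning. A self-contained C/OpenMP example (naive Fibonacci with a serial cutoff; the paper's actual benchmark kernels are not reproduced here) whose performance is dominated by exactly the scheduling policies under comparison:

      #include <stdio.h>
      #ifdef _OPENMP
      #include <omp.h>
      #endif

      /* Recursive binary task spawning: each call forks one child task. */
      static long fib(int n)
      {
          long a, b;
          if (n < 2) return n;
          if (n < 20) return fib(n - 1) + fib(n - 2);   /* serial cutoff */
          #pragma omp task shared(a)
          a = fib(n - 1);
          b = fib(n - 2);
          #pragma omp taskwait          /* a is valid only after the join */
          return a + b;
      }

      int main(void)
      {
          long r;
          #pragma omp parallel
          #pragma omp single            /* one root task, workers steal */
          r = fib(30);
          printf("fib(30) = %ld\n", r); /* expect 832040 */
          return 0;
      }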

  14. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    Science.gov (United States)

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to thick living samples, it is hampered by the lack of optical sectioning and the need to acquire many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine fluorescence lifetimes in the subnanosecond range accurately on thick multilayer samples, provided that offline processing is allowed.
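
    The speed of RLD comes from its closed form: for a mono-exponential decay sampled by two equal-width gates separated by dt, the lifetime is tau = dt / ln(D1/D2), where D1 and D2 are the two gate integrals. A C check with synthetic gate values (the dt and tau below are invented, not from the paper):

      #include <math.h>
      #include <stdio.h>

      /* Two-gate rapid lifetime determination (RLD): recover tau in closed
       * form from two gate integrals of a mono-exponential decay. */
      int main(void)
      {
          double dt = 2.0;              /* gate separation, ns (hypothetical) */
          double tau_true = 1.6;        /* ns, used only to fake the data */
          double D1 = 1000.0;
          double D2 = D1 * exp(-dt / tau_true);

          printf("recovered tau = %.3f ns\n", dt / log(D1 / D2));
          return 0;
      }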

  15. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of a photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in previously developed two-diode models. Furthermore, the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular, the model matches very accurately all the important points of the I-V curves, i.e. the peak power, short-circuit current and open circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require a simple, fast yet accurate model for the PV module. (author)
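
    For reference, the generic two-diode equation that the paper simplifies is implicit in the current I and is typically solved by Newton iteration: I = Ipv - Io1(exp((V+I*Rs)/(a1*Vt)) - 1) - Io2(exp((V+I*Rs)/(a2*Vt)) - 1) - (V+I*Rs)/Rp. A hedged C sketch follows (all parameter values are illustrative, not the paper's four-parameter variant):

      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          double Ipv = 8.0, Io1 = 1e-10, Io2 = 1e-6;   /* A (hypothetical) */
          double a1 = 1.0, a2 = 2.0, Vt = 0.0259 * 36; /* 36 cells in series */
          double Rs = 0.2, Rp = 300.0, V = 17.0;       /* ohm, ohm, volt */

          double I = Ipv;                              /* initial guess */
          for (int k = 0; k < 50; ++k) {               /* Newton iteration */
              double Vd = V + I * Rs;
              double f  = Ipv - Io1 * (exp(Vd / (a1 * Vt)) - 1.0)
                              - Io2 * (exp(Vd / (a2 * Vt)) - 1.0)
                              - Vd / Rp - I;
              double df = -Io1 * Rs / (a1 * Vt) * exp(Vd / (a1 * Vt))
                          -Io2 * Rs / (a2 * Vt) * exp(Vd / (a2 * Vt))
                          - Rs / Rp - 1.0;
              I -= f / df;
          }
          printf("I(%.1f V) = %.3f A\n", V, I);
          return 0;
      }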

  16. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and requires accurate estimation of the target's location under energy constraints. There is no existing mechanism to control and maintain these data, and the wireless communication bandwidth is very limited. Fields that use this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the gathered information can be fed back to central air conditioning and ventilation systems. In this paper, we propose a protocol based on a prediction and adaptive algorithm that reduces the number of sensor nodes needed for an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the target, and that it extends the lifetime of the network while using fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.
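
    The abstract does not specify the authors' protocol, but the core idea of prediction-based tracking can be illustrated generically: extrapolate the target's next position from recent fixes so that only sensor nodes near the predicted point need to be awakened. The sketch below is a hypothetical linear extrapolation, not the authors' algorithm.

        /* Generic prediction step for prediction-based tracking:
         * linearly extrapolate the next position from the last two
         * fixes, so only nodes near the predicted point are activated. */
        typedef struct { double x, y; } point;

        point predict_next(point prev, point curr)
        {
            point next = { 2.0 * curr.x - prev.x,   /* curr + (curr - prev) */
                           2.0 * curr.y - prev.y };
            return next;
        }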

  17. Accurate beacon positioning method for satellite-to-ground optical communication.

    Science.gov (United States)

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

    In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main factors affecting acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication and propose a new locating algorithm for the beacon, which uses the correlation coefficients obtained by curve fitting of the image data as weights. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for a light spot distorted by atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
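
    The conventional gray centroid algorithm that the paper uses as its baseline is straightforward to state: the spot position is the intensity-weighted mean of the pixel coordinates. The sketch below implements that baseline; the paper's improvement, weighting by curve-fit correlation coefficients, is not reproduced here.

        /* Conventional gray (intensity-weighted) centroid of an image:
         *   x_c = sum(x * I(x,y)) / sum(I),  y_c = sum(y * I(x,y)) / sum(I) */
        #include <stddef.h>

        void gray_centroid(const double *img, size_t w, size_t h,
                           double *xc, double *yc)
        {
            double sx = 0.0, sy = 0.0, si = 0.0;
            for (size_t y = 0; y < h; y++)
                for (size_t x = 0; x < w; x++) {
                    double v = img[y * w + x];   /* pixel intensity */
                    sx += (double)x * v;
                    sy += (double)y * v;
                    si += v;
                }
            if (si > 0.0) { *xc = sx / si; *yc = sy / si; }
        }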

  18. TWRS privatization support project waste characterization database development

    International Nuclear Information System (INIS)

    1995-11-01

    Pacific Northwest National Laboratory requested support from ICF Kaiser Hanford Company in assembling radionuclide and chemical analyte sample data and inventory estimates for fourteen Hanford underground storage tanks: 241-AN-102, -104, -105, -106, and -107; 241-AP-102, -104, and -105; 241-AW-101, -103, and -105; 241-AZ-101 and -102; and 241-C-109. Sample data were assembled for sixteen radionuclides and thirty-five chemical analytes. The characterization data were provided to Pacific Northwest National Laboratory in support of the Tank Waste Remediation Services Privatization Support Project. The purpose of this report is to present the results and document the methodology used in preparing the waste characterization information data set to support the Tank Waste Remediation Services Privatization Support Project. This report describes the methodology used in assembling the waste characterization information and how that information was validated by a panel of independent technical reviewers. Also contained in this report are the various data sets created: the master data set, a subset, and an unreviewed data set. The master data set contains waste composition information for Tanks 241-AN-102 and -107; 241-AP-102 and -105; 241-AW-101; and 241-AZ-101 and -102. The subset contains only the validated analytical sample data from the master data set. The unreviewed data set contains all collected but unreviewed sample data for Tanks 241-AN-104, -105, and -106; 241-AP-104; 241-AW-103 and -105; and 241-C-109. The methodology used to review the waste characterization information was found to be an accurate, useful way to separate invalid or questionable data from more reliable data. In the future, this methodology should be considered when validating waste characterization information.

  19. WRAP Module 1 sampling strategy and waste characterization alternatives study

    Energy Technology Data Exchange (ETDEWEB)

    Bergeson, C.L.

    1994-09-30

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  20. WRAP Module 1 sampling strategy and waste characterization alternatives study

    International Nuclear Information System (INIS)

    Bergeson, C.L.

    1994-01-01

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.