WorldWideScience

Sample records for analysing multi-threaded applications

  1. Multi-thread Parallel Speech Recognition for Mobile Applications

    Directory of Open Access Journals (Sweden)

    LOJKA Martin

    2014-05-01

    In this paper, the server-based solution of the multi-threaded large-vocabulary automatic speech recognition engine is described, along with practical application examples for Android OS and HTML5. The basic idea was to make speech recognition available to the full variety of applications for computers and especially for mobile devices. The speech recognition engine should be independent of commercial products and services (where the dictionary cannot be modified). Use of third-party services can also be a security and privacy problem in specific applications, when unsecured audio data must not be sent to uncontrolled environments (voice data transferred to servers around the globe). Using our experience with speech recognition applications, we have constructed a multi-threaded, server-based speech recognition solution with a simple application interface (API) to a speech recognition engine that can be modified to the specific needs of a particular application.

  2. SISSY: An example of a multi-threaded, networked, object-oriented database application

    International Nuclear Information System (INIS)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  3. Performance Characterization of Multi-threaded Graph Processing Applications on Intel Many-Integrated-Core Architecture

    OpenAIRE

    Liu, Xu; Chen, Langshi; Firoz, Jesun S.; Qiu, Judy; Jiang, Lei

    2017-01-01

    Intel Xeon Phi many-integrated-core (MIC) architectures usher in a new era of terascale integration. Among emerging killer applications, parallel graph processing has been a critical technique to analyze connected data. In this paper, we empirically evaluate various computing platforms, including an Intel Xeon E5 CPU, an Nvidia GeForce GTX 1070 GPU and a Xeon Phi 7210 processor codenamed Knights Landing (KNL), in the domain of parallel graph processing. We show that the KNL gains encouraging per...

  4. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2003-01-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully exploiting the potential of the SMP cluster with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining the Message Passing Interface (MPI) between clusters with OpenMP implicit parallelism within a cluster.
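
    As context for the directive-based loop parallelism the abstract describes, a minimal OpenMP sketch; the names and the Jacobi-style update are invented for illustration, not taken from the CRONOS2/MINOS source:

```cpp
// Hypothetical sketch of OpenMP loop parallelism for a flux-solver sweep.
// Compile with -fopenmp. Not the CRONOS2/MINOS code.
#include <omp.h>
#include <vector>

// One Jacobi-style relaxation sweep over spatial nodes for one energy group.
void sweep_group(std::vector<double>& flux,
                 const std::vector<double>& source,
                 double relax)
{
    const long n = static_cast<long>(flux.size());
    // Node updates are independent, so they distribute across threads.
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; ++i) {
        flux[i] = (1.0 - relax) * flux[i] + relax * source[i];
    }
}
```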

  5. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)

    2003-07-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully exploiting the potential of the SMP cluster with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining the Message Passing Interface (MPI) between clusters with OpenMP implicit parallelism within a cluster.

  6. Creating and improving multi-threaded Geant4

    CERN Document Server

    Dong, Xin; Apostolakis, John; Jarp, Sverre; Nowak, Andrzej; Asai, Makoto; Brandt, Daniel

    2012-01-01

    We document the methods used to create the multi-threaded prototype Geant4MT from a sequential version of Geant4. We cover the source-to-source transformations applied and discuss the process of verifying the correctness of the Geant4MT toolkit and applications based on it. Tools to ensure that the results of a transformed multi-threaded application are exactly equal to those of the original sequential version are under development. Stand-alone or simple applications can be adapted within 1-2 working days. Geant4MT is shown to scale linearly on an 80-core computer. In the special case of a single worker thread on one core, a 30% overhead has been observed. We explain the reasons for this and the improvements introduced to reduce this overhead.

  7. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, with the first phase online at the end of 2015 and the second phase online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  8. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, with the first phase online at the end of 2015 and the second phase online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  9. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles

    2016-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), will be delivered to its users in two phases, with the first phase online now and the second phase expected in mid-2016. Cori Phase 2 will be based on the KNL architecture and will contain over 9000 compute nodes with 96 GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a great use-case for the KNL architecture and supercomputers like Cori. Simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this presentation we will give an overview of the ATLAS simulation application with details on its multi-thr...

  10. Multi-Threaded Dense Linear Algebra Libraries for Low-Power Asymmetric Multicore Processors

    OpenAIRE

    Catalán, Sandra; Herrero, José R.; Igual, Francisco D.; Rodríguez-Sánchez, Rafael; Quintana-Ortí, Enrique S.

    2015-01-01

    Dense linear algebra libraries, such as BLAS and LAPACK, provide a relevant collection of numerical tools for many scientific and engineering applications. While there exist high-performance implementations of the BLAS (and LAPACK) functionality for many current multi-threaded architectures, the adaptation of these libraries for asymmetric multicore processors (AMPs) is still pending. In this paper we address this challenge by developing an asymmetry-aware implementation of the BLAS, based on the...

  11. Effective verification of confidentiality for multi-threaded programs

    NARCIS (Netherlands)

    Ngo, Minh Tri; Stoelinga, Mariëlle Ida Antoinette; Huisman, Marieke

    2014-01-01

    This paper studies how confidentiality properties of multi-threaded programs can be verified efficiently by a combination of newly developed and existing model checking algorithms. In particular, we study the verification of scheduler-specific observational determinism (SSOD), a property that

  12. A multi-threading approach to secure VERIFYPIN

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-10-01

    ...alongside a pin-acceptance program in a multi-threaded environment. These threads are inserted randomly on each execution of the program to create confusion for the attacker. Moreover, the research proposes an improved version of the pin...
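
    A minimal sketch of the randomized-decoy-thread idea described above; all names and workloads are invented for illustration, and this is not the CSIR implementation:

```cpp
// Illustrative only: launch dummy threads at random points around a PIN
// comparison so the power/timing profile varies between runs.
#include <atomic>
#include <random>
#include <thread>
#include <vector>

std::atomic<unsigned> sink{0};   // keeps decoy work from being optimized away

void decoy() {                   // dummy arithmetic, unrelated to the PIN
    unsigned acc = 1;
    for (int i = 0; i < 10000; ++i) acc = acc * 31u + 7u;
    sink += acc;
}

bool verify_pin(const int* guess, const int* pin, int len) {
    std::vector<std::thread> noise;
    std::mt19937 rng(std::random_device{}());
    bool ok = true;
    for (int i = 0; i < len; ++i) {
        if (rng() % 2) noise.emplace_back(decoy);  // random interleaving
        ok &= (guess[i] == pin[i]);
    }
    for (auto& t : noise) t.join();
    return ok;
}

int main() {
    int pin[4] = {1, 2, 3, 4}, guess[4] = {1, 2, 3, 4};
    return verify_pin(guess, pin, 4) ? 0 : 1;
}
```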

  13. A PREDICTABLE MULTI-THREADED MAIN-MEMORY STORAGE MANAGER

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces the design and implementation of a predictable multi-threaded main-memory storage manager (CS20), and emphasizes the database service mediator (DSM), an operation prediction model using exponential averaging. The memory manager, indexing, as well as the lock manager in CS20 are also presented briefly. CS20 has been embedded in a mobile telecommunication service system. Practice showed that DSM effectively controls system load and hence improves the real-time characteristics of data access.
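
    Exponential averaging, the prediction model named above, reduces to a one-line update; a sketch, with alpha as an assumed tuning constant:

```cpp
// Exponential averaging for operation-cost prediction: the next prediction
// blends the latest observation with the previous prediction.
double predict_next(double prev_prediction, double observation, double alpha)
{
    // alpha near 1 reacts quickly to change; alpha near 0 smooths heavily.
    return alpha * observation + (1.0 - alpha) * prev_prediction;
}
```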

  14. Geant4-MT: bringing multi-threading into Geant4 production

    International Nuclear Information System (INIS)

    Ahn, S.; Apostolakis, J.; Cosmo, G.; Nowak, A.; Asai, M.; Brandt, D.; Dotti, A.; Coopermann, G.; Dong, X.; Jun, Soon Yung

    2013-01-01

    Geant4-MT is the multi-threaded version of the Geant4 particle transport code. The key goals for the design of Geant4-MT have been (a) to reduce the memory footprint of the multi-threaded application compared to the use of separate jobs and processes; (b) to create an easy migration path for existing applications; and (c) to use many threads or cores efficiently, by scaling up to tens and potentially hundreds of workers. The first public release of a Geant4-MT prototype was made in 2011. We report on the revision of Geant4-MT for inclusion in the production-level release scheduled for the end of 2013. This has involved significant re-engineering of the prototype in order to incorporate it into the main Geant4 development line, and the porting of Geant4-MT threading code to additional platforms. In order to make the porting of applications as simple as possible, refinements addressed the needs of standalone applications. Further adaptations were created to improve the fit with the frameworks of High Energy Physics experiments. We report on performance measurements on Intel Xeon™ and AMD Opteron™ processors, and the first trials of Geant4-MT on the Intel Many Integrated Cores (MIC) architecture, in the form of the Xeon Phi™ co-processor. These indicate near-linear scaling through about 200 threads on 60 cores, when holding fixed the number of events per thread. (authors)

  15. Multi-threaded software framework development for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; Baines, John; Bold, Tomasz; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant co...

  16. Multi-threaded Software Framework Development for the ATLAS Experiment

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration; Baines, John; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant c...

  17. MT-ADRES: multi-threading on coarse-grained reconfigurable architecture

    DEFF Research Database (Denmark)

    Wu, Kehuai; Kanstein, Andreas; Madsen, Jan

    2008-01-01

    The coarse-grained reconfigurable architecture ADRES (architecture for dynamically reconfigurable embedded systems) and its compiler offer high instruction-level parallelism (ILP) to applications by means of a sparsely interconnected array of functional units and register files. As high-ILP architectures achieve only low parallelism when executing partially sequential code segments, which is also known as Amdahl's law, this article proposes to extend ADRES to MT-ADRES (multi-threaded ADRES) to also exploit thread-level parallelism. On MT-ADRES architectures, the array can be partitioned...

  18. A Multi-threaded Version of Field II

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2014-01-01

    A multi-threaded version of Field II has been developed, which automatically can use the multi-core capabilities of modern CPUs. The memory allocation routines were rewritten to minimize the number of dynamic allocations and to make pre-allocations possible for each thread. This ensures that the simulation job can be automatically partitioned and the interdependence between threads minimized. The new code has been compared to Field II version 3.22, October 27, 2013 (latest free-ware version). A 64-element 5 MHz focused array transducer was simulated. One million point scatterers randomly distributed in a plane of 20 x 50 mm (width x depth) with random Gaussian amplitudes were simulated using the command calc_scat. Dual Intel Xeon CPU E5-2630 2.60 GHz CPUs were used under Ubuntu Linux 10.02 and Matlab version 2013b. Each CPU holds 6 cores with hyper-threading, corresponding to a total of 24 hyper...

  19. Matlab enhanced multi-threaded tomography optimization sequence (MEMTOS)

    International Nuclear Information System (INIS)

    Lum, Edward S.; Pope, Chad L.

    2016-01-01

    Highlights: • Monte Carlo simulation of spent nuclear fuel assembly neutron computed tomography. • Optimized parallel calculations conducted from within the MATLAB environment. • Projection difference technique used to identify anomalies in spent nuclear fuel assemblies. - Abstract: One challenge associated with spent nuclear fuel assemblies is the lack of non-destructive analysis techniques to determine if fuel pins have been removed or replaced, or if there are significant defects associated with fuel pins deep within a fuel assembly. Neutron computed tomography is a promising technique for addressing these qualitative issues. Monte Carlo simulation of spent nuclear fuel neutron computed tomography allows inexpensive process investigation and optimization. The main purpose of this work is to provide a fully automated advanced simulation framework for the analysis of spent nuclear fuel inspection using neutron computed tomography. The simulation framework, called Matlab Enhanced Multi-Threaded Tomography Optimization Sequence (MEMTOS), not only automates the simulation process, but also generates superior tomography image results. MEMTOS is written in the MATLAB scripting language and addresses file management, parallel Monte Carlo execution, results extraction, and tomography image generation. This paper describes the mathematical basis for neutron computed tomography, the Monte Carlo technique used to simulate neutron computed tomography, and the overall tomography simulation optimization algorithm. Sequence results presented include overall simulation speed enhancement, and tomography and image results obtained for Experimental Breeder Reactor II spent fuel assemblies and light water reactor fuel assemblies. Optimization using a projection difference technique is also described.

  20. Performance improvement of developed program by using multi-thread technique

    Directory of Open Access Journals (Sweden)

    Surasak Jabal

    2015-03-01

    This research presented how to use a multi-thread programming technique to improve the performance of a program written with Windows Presentation Foundation (WPF). The Computer Assisted Instruction (CAI) software named GAME24 was selected as a case study. This study was composed of two main parts. The first part concerned the design and modification of the program structure according to the Object Oriented Programming (OOP) approach. The second part concerned coding the program using the multi-thread technique, in which the number of threads was based on the calculated Catalan number. The results showed that the multi-thread programming technique increased the performance of the program by 44%-88% compared to the single-thread technique. In addition, it was found that the number of cores in the CPU also increases the performance of the multi-threaded program proportionally.
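
    For context, the Catalan numbers mentioned above satisfy C_0 = 1 and C_{n+1} = C_n * 2(2n+1)/(n+2); a small routine (illustrative only; the study itself targets WPF, not this code):

```cpp
// Catalan numbers by the standard recurrence. For GAME24-style problems,
// C_3 = 5 counts the ways to fully parenthesize four operands.
#include <cstdint>

std::uint64_t catalan(unsigned n)
{
    std::uint64_t c = 1;                     // C_0
    for (unsigned i = 0; i < n; ++i)
        c = c * 2 * (2 * i + 1) / (i + 2);   // exact: the division is always integral
    return c;
}
```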

  1. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed to run in parallel, scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipelined with multiple threads based on groups of coding tree units (CTUs). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated under the SHVC common test conditions of the JCT-VC standardization group. The decoding performance at various bitrates, with different optimization technologies and different numbers of threads, is compared in terms of decoding speed and resource usage, including processor and memory.
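
    A toy two-stage pipeline over CTU-group indices, in the spirit of the staging described above; all names are invented, and the real decoder has more stages and SIMD kernels:

```cpp
// Two pipeline stages communicating through a queue of CTU-group indices.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> ready;             // CTU groups finished by stage 1
std::mutex m;
std::condition_variable cv;
bool done = false;

void entropy_stage(int n_groups) {
    for (int g = 0; g < n_groups; ++g) {
        // ... entropy-decode group g ...
        std::lock_guard<std::mutex> lk(m);
        ready.push(g);
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}

void reconstruct_stage() {
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !ready.empty() || done; });
        if (ready.empty()) return;     // stage 1 finished and queue drained
        int g = ready.front(); ready.pop();
        lk.unlock();
        // ... reconstruct group g ...
    }
}

int main() {
    std::thread t1(entropy_stage, 32), t2(reconstruct_stage);
    t1.join(); t2.join();
}
```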

  2. Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms

    Science.gov (United States)

    2009-05-01

    ...base pair) Watson-Crick strand pairs that bind perfectly within pairs, but poorly across pairs. A variety of DNA strand hybridization metrics... (AFRL-RI-RS-TR-2009-131, Final Technical Report, May 2009; dates covered: Jun 2008 - Feb 2009.)

  3. Scheduler-specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    NARCIS (Netherlands)

    Huisman, Marieke; Ngo, Minh Tri

    2011-01-01

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of the public variables is deterministic, i.e., independent of the private variables and the

  4. Scheduler-Specific Confidentiality for Multi-Threaded Programs and Its Logic-Based Verification

    NARCIS (Netherlands)

    Huisman, Marieke; Ngo, Minh Tri; Beckert, B.; Damiani, F.; Gurov, D.

    2012-01-01

    Observational determinism has been proposed in the literature as a way to ensure confidentiality for multi-threaded programs. Intuitively, a program is observationally deterministic if the behavior of the public variables is deterministic, i.e., independent of the private variables and the scheduling
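
    A tiny illustration, not drawn from these papers, of the property at stake: the order of two public outputs can depend on a private value through thread timing, so an observer of public behavior alone learns something about the secret:

```cpp
// The print order "A,B" vs "B,A" depends on 'secret' via thread timing,
// violating observational determinism for the public output channel.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    int secret = 42;  // private input
    std::thread a([&] {
        std::this_thread::sleep_for(std::chrono::milliseconds(secret));
        std::cout << "A\n";                      // public observable
    });
    std::thread b([] { std::cout << "B\n"; });   // public observable
    a.join(); b.join();
}
```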

  5. UTLEON3 Exploring Fine-Grain Multi-Threading in FPGAs

    CERN Document Server

    Daněk, Martin; Kohout, Lukáš; Sýkora, Jaroslav; Bartosinski, Roman

    2013-01-01

    This book describes a specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme in an efficient way onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs. Describes and documents a working SPARC v8 with fine-grain multithreading and fast context switch; provides VHDL sources for the described processor; describes a latency-tolerant framework for coupling hardware accelerators to microthreaded processor pipelines; includes programming by example in the micro-threaded assembly language.

  6. Adaptive control in multi-threaded iterated integration

    International Nuclear Information System (INIS)

    Doncker, Elise de; Yuasa, Fukuko

    2013-01-01

    In recent years we have developed a technique for the direct computation of Feynman loop-integrals, which are notorious for the occurrence of integrand singularities. Especially for handling singularities in the interior of the domain, we approximate the iterated integral using an adaptive algorithm in the coordinate directions. We present a novel multi-core parallelization scheme for adaptive multivariate integration, by assigning threads to the rule evaluations in the outer dimensions of the iterated integral. The method ensures a large parallel granularity as each function evaluation by itself comprises an integral over the lower dimensions, while the application of the threads is governed by the adaptive control in the outer level. We give computational results for a test set of 3- to 6-dimensional integrals, where several problems exhibit a loop integral behavior.
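
    A sketch of the parallelization idea: each evaluation of the outer integrand is itself an inner integral, so threads applied to the outer rule points get coarse-grained work. Fixed midpoint rules and the integrand stand in for the adaptive algorithm of the paper; all names are illustrative:

```cpp
// Threads over the outer rule evaluations of an iterated integral.
// Compile with -fopenmp.
#include <cmath>

double inner(double x) {                  // integral over y in [0,1], fixed x
    const int n = 1000; double s = 0.0, h = 1.0 / n;
    for (int j = 0; j < n; ++j) {
        double y = (j + 0.5) * h;
        s += std::exp(-x * y) * h;        // example integrand
    }
    return s;
}

double outer() {                          // integral over x in [0,1]
    const int n = 256; double s = 0.0, h = 1.0 / n;
    #pragma omp parallel for reduction(+ : s)
    for (int i = 0; i < n; ++i)
        s += inner((i + 0.5) * h) * h;    // each outer point is a big task
    return s;
}
```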

  7. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge amount of information floods into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents requires a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: • Information systems → Information retrieval; • Computing methodologies → Massively parallel and high-performance simulations.

  8. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge amount of information floods into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents requires a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded, multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: • Information systems → Information retrieval; • Computing methodologies → Massively parallel and high-performance simulations.
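
    A CPU-side sketch of partition-then-merge document inversion; the papers target a GPU/CUDA SPMD kernel, so this is only an analogue of the idea, with invented names:

```cpp
// Each thread inverts a contiguous chunk of documents into a private
// partial index; the partials are merged afterwards.
#include <algorithm>
#include <map>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using Postings = std::map<std::string, std::vector<int>>;

Postings invert_range(const std::vector<std::string>& docs, int lo, int hi) {
    Postings p;
    for (int d = lo; d < hi; ++d) {
        std::istringstream words(docs[d]);
        std::string w;
        while (words >> w) p[w].push_back(d);   // term -> doc ids
    }
    return p;
}

Postings invert_parallel(const std::vector<std::string>& docs, int nthreads) {
    std::vector<Postings> parts(nthreads);
    std::vector<std::thread> ts;
    int n = static_cast<int>(docs.size());
    int chunk = (n + nthreads - 1) / nthreads;
    for (int t = 0; t < nthreads; ++t) {
        int lo = t * chunk, hi = std::min(n, lo + chunk);
        ts.emplace_back([&, t, lo, hi] { parts[t] = invert_range(docs, lo, hi); });
    }
    for (auto& th : ts) th.join();
    Postings merged;                 // doc-id order is preserved by chunking
    for (auto& p : parts)
        for (auto& kv : p) {
            auto& v = merged[kv.first];
            v.insert(v.end(), kv.second.begin(), kv.second.end());
        }
    return merged;
}
```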

  9. Servicing a globally broadcast interrupt signal in a multi-threaded computer

    Science.gov (United States)

    Attinella, John E.; Davis, Kristan D.; Musselman, Roy G.; Satterfield, David L.

    2015-12-29

    Methods, apparatuses, and computer program products for servicing a globally broadcast interrupt signal in a multi-threaded computer comprising a plurality of processor threads. Embodiments include an interrupt controller indicating in a plurality of local interrupt status locations that a globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include a thread determining that a local interrupt status location corresponding to the thread indicates that the globally broadcast interrupt signal has been received by the interrupt controller. Embodiments also include the thread processing one or more entries in a global interrupt status bit queue based on whether global interrupt status bits associated with the globally broadcast interrupt signal are locked. Each entry in the global interrupt status bit queue corresponds to a queued global interrupt.
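
    A minimal sketch of the claimed pattern using C++ atomics; the names and sizes are invented, and this is not the patented implementation:

```cpp
// The controller sets a per-thread status flag; a thread that sees its
// flag set tries to claim queued global-interrupt entries, skipping
// entries another thread has already locked.
#include <array>
#include <atomic>

constexpr int kThreads = 4, kQueue = 8;
std::array<std::atomic<bool>, kThreads> local_status{};  // set by controller
std::array<std::atomic<bool>, kQueue>  entry_locked{};   // one per queued interrupt

void broadcast_interrupt() {                 // interrupt-controller side
    for (auto& s : local_status) s.store(true, std::memory_order_release);
}

void service(int tid) {                      // processor-thread side
    if (!local_status[tid].exchange(false)) return;      // nothing for us
    for (int e = 0; e < kQueue; ++e) {
        bool expected = false;               // claim entry e if unlocked
        if (entry_locked[e].compare_exchange_strong(expected, true)) {
            // ... handle queued global interrupt e (unlock elided) ...
        }
    }
}
```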

  10. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.

    2015-09-13

    Today's high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between processor and memory speed will further exacerbate the problem and has led the scientific community to revisit numerical software implementations to better suit the underlying memory subsystem, for performance (data reuse) as well as energy efficiency (data locality). The authors propose a novel multi-threaded wavefront diamond blocking (MWD) implementation in the context of stencil computations, which represent the core operation for seismic imaging in the oil industry. The stencil diamond formulation introduces temporal blocking for high data reuse in the upper cache levels. The wavefront optimization technique ensures data locality by allowing multiple threads to share common adjacent stencil points. Therefore, MWD is able to take up the aforementioned challenges by alleviating the cache size limitation and releasing pressure from the memory bandwidth. Performance comparisons are shown against the optimized 25-point stencil standard seismic imaging scheme using spatial and temporal blocking, and demonstrate the effectiveness of MWD.

  11. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225867; The ATLAS collaboration

    2017-01-01

    We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent ...

  12. Design, Implementation and Testing of a Tiny Multi-Threaded DNS64 Server

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-03-01

    DNS64 is going to be an important service (together with NAT64) in the upcoming years of the IPv6 transition, enabling clients having only IPv6 addresses to reach servers having only IPv4 addresses (the majority of the servers on the Internet today). This paper describes the design, implementation and functional testing of MTD64, a flexible, easy-to-use, multi-threaded DNS64 proxy published as free software under the GPLv2 license. All the theoretical background is introduced, including the DNS message format, the operation of the DNS64 plus NAT64 solution, and the construction of IPv4-embedded IPv6 addresses. Our design decisions are fully disclosed, from the high-level ones to the details. Implementation is introduced at a high level only, as the details can be found in the developer documentation. The most important parts of a thorough functional testing are included, as well as the results of a basic performance comparison with BIND.
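
    The IPv4-embedded IPv6 address construction mentioned above is mechanical for the common /96 case: per RFC 6052, the IPv4 address occupies the last four bytes, and 64:ff9b::/96 is the well-known NAT64 prefix. A small sketch:

```cpp
// Embed an IPv4 address into the low 32 bits of an IPv6 address (/96 prefix).
#include <array>
#include <cstdint>

std::array<std::uint8_t, 16> embed_ipv4(const std::array<std::uint8_t, 12>& prefix96,
                                        std::uint32_t ipv4)
{
    std::array<std::uint8_t, 16> v6{};
    for (int i = 0; i < 12; ++i) v6[i] = prefix96[i];
    v6[12] = ipv4 >> 24;         v6[13] = (ipv4 >> 16) & 0xff;
    v6[14] = (ipv4 >> 8) & 0xff; v6[15] = ipv4 & 0xff;
    return v6;
}
// Example: 64:ff9b:: + 192.0.2.1 -> 64:ff9b::c000:201
```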

  13. High Resolution Modelling of the Congo River's Multi-Threaded Main Stem Hydraulics

    Science.gov (United States)

    Carr, A. B.; Trigg, M.; Tshimanga, R.; Neal, J. C.; Borman, D.; Smith, M. W.; Bola, G.; Kabuya, P.; Mushie, C. A.; Tschumbu, C. L.

    2017-12-01

    We present the results of a summer 2017 field campaign by members of the Congo River users Hydraulics and Morphology (CRuHM) project, and a subsequent reach-scale hydraulic modelling study on the Congo's main stem. Sonar bathymetry, ADCP transects, and water surface elevation data have been collected along the Congo's heavily multi-threaded middle reach, which exhibits complex in-channel hydraulic processes that are not well understood. To model the entire basin's hydrodynamics, these in-channel hydraulic processes must be parameterised since it is not computationally feasible to represent them explicitly. Furthermore, recent research suggests that relative to other large global rivers, in-channel flows on the Congo represent a relatively large proportion of total flow through the river-floodplain system. We therefore regard sufficient representation of in-channel hydraulic processes as a Congo River hydrodynamic research priority. To enable explicit representation of in-channel hydraulics, we develop a reach-scale (70 km), high resolution hydraulic model. Simulation of flow through individual channel threads provides new information on flow depths and velocities, and will be used to inform the parameterisation of a broader basin-scale hydrodynamic model. The basin-scale model will ultimately be used to investigate floodplain fluxes, flood wave attenuation, and the impact of future hydrological change scenarios on basin hydrodynamics. This presentation will focus on the methodology we use to develop a reach-scale bathymetric DEM. The bathymetry of only a small proportion of channel threads can realistically be captured, necessitating some estimation of the bathymetry of channels not surveyed. We explore different approaches to this bathymetry estimation, and the extent to which it influences hydraulic model predictions. The CRuHM project is a consortium comprising the Universities of Kinshasa, Rhodes, Dar es Salaam, Bristol, and Leeds, and is funded by Royal

  14. Shadow-Bitcoin: Scalable Simulation via Direct Execution of Multi-Threaded Applications

    Science.gov (United States)

    2015-08-10

    ...precisely model the real network. Providing initial blockchain state: each node in the Bitcoin network typically maintains its own copy of the entire blockchain. In our model network, we begin with all the nodes "in sync" to some prior blockchain state. To reduce the storage cost, we allow the...

  15. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading.

    Science.gov (United States)

    Rahn, René; Budach, Stefan; Costanza, Pascal; Ehrhardt, Marcel; Hancox, Jonny; Reinert, Knut

    2018-05-03

    Pairwise sequence alignment is undoubtedly a central tool in many bioinformatics analyses. In this paper, we present a generically accelerated module for pairwise sequence alignments applicable to a broad range of applications. In our module, we unified the standard dynamic programming kernel used for pairwise sequence alignments and extended it with a generalized inter-sequence vectorization layout, such that many alignments can be computed simultaneously by exploiting SIMD (Single Instruction Multiple Data) instructions of modern processors. We then extended the module by adding two layers of thread-level parallelization, where we (a) distribute many independent alignments on multiple threads and (b) inherently parallelize a single alignment computation using a work-stealing approach producing a dynamic wavefront progressing along the minor diagonal. We evaluated our alignment vectorization and parallelization on different processors, including the newest Intel® Xeon® (Skylake) and Intel® Xeon Phi™ (KNL) processors, and use cases. The instruction set AVX512-BW (Byte and Word), available on Skylake processors, can genuinely improve the performance of vectorized alignments. We could run single alignments 1600 times faster on the Xeon Phi™ and 1400 times faster on the Xeon® than executing them with our previous sequential alignment module. The module is programmed in C++ using the SeqAn library (Reinert et al., 2017) and distributed with version 2.4 under the BSD license. We support SSE4, AVX2, and AVX512 instructions and included UME::SIMD, a SIMD-instruction wrapper library, to extend our module for further instruction sets. We thoroughly test all alignment components with all major C++ compilers on various platforms. rene.rahn@fu-berlin.de.
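
    A sketch of the inter-sequence layout: the same DP cell of many independent alignments is computed in lockstep, so the per-lane loop maps onto one SIMD instruction. Linear gap costs and simple match/mismatch scoring are assumed here; the actual SeqAn kernel is far more general:

```cpp
// One Needleman-Wunsch cell for kLanes independent alignments at once,
// in structure-of-arrays layout; the inner loop auto-vectorizes.
#include <algorithm>
#include <cstdint>

constexpr int kLanes = 16;  // e.g. 16 x int16 per AVX2 register

void dp_cell(const std::int16_t* diag, const std::int16_t* up,
             const std::int16_t* left,
             const char* a_chars,        // current char of sequence A per lane
             const char* b_chars,        // current char of sequence B per lane
             std::int16_t* out)
{
    for (int l = 0; l < kLanes; ++l) {
        std::int16_t sub = diag[l] + (a_chars[l] == b_chars[l] ? 2 : -1);
        std::int16_t gap = std::max<std::int16_t>(up[l] - 1, left[l] - 1);
        out[l] = std::max(sub, gap);     // best of substitution vs gap
    }
}
```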

  16. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00100895; The ATLAS collaboration; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2017-01-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying ha...

  17. AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we will report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying...

  18. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  19. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
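
    A sketch of one row of SpGEMM with a dense accumulator, the simplest of the accumulator choices the papers compare; standard CSR array names are used, and this is illustrative rather than kkSpGEMM itself:

```cpp
// Row i of C = A*B in CSR form: gather a_ik * b_kj into a dense
// accumulator, tracking which columns were touched.
#include <vector>

void spgemm_row(int i,
                const std::vector<int>& Ap, const std::vector<int>& Aj,
                const std::vector<double>& Ax,
                const std::vector<int>& Bp, const std::vector<int>& Bj,
                const std::vector<double>& Bx,
                std::vector<double>& acc,   // dense, one slot per column of B
                std::vector<char>& touched, // marks columns seen in this row
                std::vector<int>& pattern)  // nonzero columns of row i of C
{
    for (int pa = Ap[i]; pa < Ap[i + 1]; ++pa) {
        int k = Aj[pa];
        double aik = Ax[pa];
        for (int pb = Bp[k]; pb < Bp[k + 1]; ++pb) {
            int j = Bj[pb];
            if (!touched[j]) { touched[j] = 1; pattern.push_back(j); }
            acc[j] += aik * Bx[pb];         // contribution to C(i,j)
        }
    }
    // Rows of C are independent, so each thread can own a disjoint set
    // of rows with its own private accumulator.
}
```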

  20. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    Science.gov (United States)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores the information of millions of particles directly, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kB derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM's precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects: square fields and an asymmetric rectangular field. Good agreement in terms of the gamma formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam, within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high-dose-gradient regions, using the distance-to-agreement (DTA) concept, also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. As regards calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential
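
    The 3%/3 mm and 2%/3 mm criteria above are gamma-index pass criteria for comparing dose distributions; a simplified 1-D illustration (real comparisons are 3-D and normalize doses, and this is not the authors' code):

```cpp
// 1-D gamma index: gamma <= 1 at a point means some reference point lies
// within the combined dose-difference / distance-to-agreement tolerance.
#include <algorithm>
#include <cmath>
#include <vector>

double gamma_1d(double x, double dose,                 // evaluated point
                const std::vector<double>& xr,         // reference positions (mm)
                const std::vector<double>& dr,         // reference doses
                double dd_tol, double dta_tol_mm)      // e.g. 3% of Dmax, 3 mm
{
    double best = 1e30;
    for (std::size_t i = 0; i < xr.size(); ++i) {
        double dx = (x - xr[i]) / dta_tol_mm;
        double dD = (dose - dr[i]) / dd_tol;
        best = std::min(best, std::sqrt(dx * dx + dD * dD));
    }
    return best;                                       // pass if <= 1
}
```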

  1. 64k networked multi-threaded processors and their real-time application in high energy physics

    CERN Document Server

    Schneider, R; Gutfleisch, M; Gareus, R; Lesser, F; Lindenstruth, V; Reichling, C; Torralba, G

    2002-01-01

    Particle physics experiments create large data streams at high rates, ranging from kHz to MHz. In a single event, the number of created particles can easily exceed 20,000. The architecture of high-resolution tracking detectors does not allow handling of an event data stream exceeding 10 TByte/s. Since only some rare scenarios are interesting, a selection process increases the efficiency by identifying relevant events, which are processed afterwards. This trigger has to be fast enough to avoid loss of data. In the case of the ALICE experiment at CERN, the trigger is created by analyzing data from the transition radiation detector, where about 16,000 charged particles cross six independent layers. Nearly 1.2 million analog data channels are digitized at 10 MHz by 10-bit ADCs within 2 μs. On this data stream of 13 TByte/s, a trigger decision has to be made within 6 μs. (5 refs).

  2. Application of RUNTA code in flood analyses

    International Nuclear Information System (INIS)

    Perez Martin, F.; Benitez Fonzalez, F.

    1994-01-01

    Flood probability analyses carried out to date indicate the need to evaluate a large number of flood scenarios. This necessity is due to a variety of reasons, the most important of which include:
    - Large number of potential flood sources
    - Wide variety of characteristics of flood sources
    - Large possibility of flood-affected areas becoming interlinked, depending on the location of the potential flood sources
    - Diversity of flood flows from one flood source, depending on the size of the rupture and mode of operation
    - Isolation times applicable
    - Uncertainties in respect of the structural resistance of doors, penetration seals and floors
    - Applicable degrees of obstruction of the floor drainage system
    Consequently, a tool which carries out the large number of calculations usually required in flood analyses, with speed and flexibility, is considered necessary. The RUNTA code enables the range of possible scenarios to be calculated numerically, in accordance with all those parameters which, as a result of previous flood analyses, it is necessary to take into account in order to cover all the possible floods associated with each flood area.

  3. CASPER: Embedding Power Estimation and Hardware-Controlled Power Management in a Cycle-Accurate Micro-Architecture Simulation Platform for Many-Core Multi-Threading Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Arun Ravindran

    2012-02-01

    Despite the promising performance improvement observed in emerging many-core architectures in high-performance processors, high power consumption prohibitively affects their use and marketability in low-energy sectors, such as embedded processors, network processors and application-specific instruction processors (ASIPs). While most chip architects design power-efficient processors by finding an optimal power-performance balance in their design, some use sophisticated on-chip autonomous power management units, which dynamically reduce the voltage or frequencies of idle cores and hence extend battery life and reduce operating costs. For large-scale designs of many-core processors, a holistic approach integrating both these techniques at different levels of abstraction can potentially achieve maximal power savings. In this paper we present CASPER, a robust, instruction-trace-driven, cycle-accurate many-core multi-threading micro-architecture simulation platform in which we have incorporated power estimation models of a wide variety of tunable many-core micro-architectural design parameters, thus enabling processor architects to explore a sufficiently large design space and achieve power-efficient designs. Additionally, CASPER is designed to accommodate cycle-accurate models of hardware-controlled power management units, enabling architects to experiment with and evaluate different autonomous power-saving mechanisms and to study the run-time power-performance trade-offs in embedded many-core processors. We have implemented two such techniques in CASPER: Chip-wide Dynamic Voltage and Frequency Scaling, and Performance-Aware Core-Specific Frequency Scaling, which show average power savings of 35.9% and 26.2%, respectively, on a baseline 4-core SPARC-based architecture. This power-saving data accounts for the power consumption of the power management units themselves. The CASPER simulation platform also provides users with complete support of SPARCV9
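
    A back-of-envelope model behind the voltage/frequency-scaling decisions of the kind CASPER evaluates (illustrative constants only, not CASPER's calibrated models):

```cpp
// Dynamic power grows as a*C*V^2*f, so lowering V and f together yields
// super-linear power savings.
double dynamic_power(double cap_farads, double volts, double freq_hz,
                     double activity)
{
    return activity * cap_farads * volts * volts * freq_hz;  // P = a*C*V^2*f
}
// Example: halving f and dropping V by 20% scales P by 0.5 * 0.8^2 = 0.32.
```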

  4. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network, we employed residuals of the integral equation or the differential equations. This differs from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems, both methods gave satisfactory results, and the methods are considered promising for some kinds of problems. (author)
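
    The residual-based objective described above can be written schematically (our notation, not the authors'):

```latex
% For a differential operator N with N[u](x) = 0 enforced at collocation
% points x_i, the network u_w is trained on the equation residual,
% whereas conventional training minimizes output error against targets t_i:
J(w) = \sum_{i} \bigl| N[u_w](x_i) \bigr|^2
\qquad \text{vs.} \qquad
J(w) = \sum_{i} \bigl| u_w(x_i) - t_i \bigr|^2
```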

  5. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    Science.gov (United States)

    2017-03-20

    ...Kaiserslautern, Germany; Sandeep Shukla, FERMAT Lab, Electrical and Computer Engineering Department, Virginia Tech, 900 North Glebe Road... Keywords: Software Engineering, Software Producibility, Component-based software design, behavioral types, behavioral type inference, Polychronous model of... In the near future, many embedded applications, including safety-critical ones as used in avionics, automotive and mission control systems, will run on...

  6. The plant design analyser and its applications

    International Nuclear Information System (INIS)

    Whitmarsh-Everiss, M.J.

    1992-01-01

    Consideration is given to the history of computational methods for the non-linear dynamic analysis of plant behaviour. This is traced from analogue to hybrid computers. When these were phased out, simulation languages were used in batch mode, and the interactive computational capabilities were lost. These have subsequently been recovered, using mainframe computing architecture in the context of small models, with the Prototype Plant Design Analyser. Given the development of parallel processing architectures, the restriction on model size can be lifted. This capability, together with the use of advanced workstations and graphics software, has enabled an advanced interactive design environment to be developed. The system is generic and can be used, with suitable graphics development, to study the dynamics and control behaviour of any plant or system for minimum cost. Examples of past and possible future uses are identified. (author)

  7. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Tsai

    2005-01-12

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  8. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    International Nuclear Information System (INIS)

    S. Tsai

    2005-01-01

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2

  9. Nonlinear finite element analyses: advances and challenges in dental applications.

    Science.gov (United States)

    Wakabayashi, N; Ona, M; Suzuki, T; Igarashi, Y

    2008-07-01

    To discuss the development and current status of application of nonlinear finite element method (FEM) in dentistry. The literature was searched for original research articles with keywords such as nonlinear, finite element analysis, and tooth/dental/implant. References were selected manually or searched from the PUBMED and MEDLINE databases through November 2007. The nonlinear problems analyzed in FEM studies were reviewed and categorized into: (A) nonlinear simulations of the periodontal ligament (PDL), (B) plastic and viscoelastic behaviors of dental materials, (C) contact phenomena in tooth-to-tooth contact, (D) contact phenomena within prosthodontic structures, and (E) interfacial mechanics between the tooth and the restoration. The FEM in dentistry recently focused on simulation of realistic intra-oral conditions such as the nonlinear stress-strain relationship in the periodontal tissues and the contact phenomena in teeth, which could hardly be solved by the linear static model. The definition of contact area critically affects the reliability of the contact analyses, especially for implant-abutment complexes. To predict the failure risk of a bonded tooth-restoration interface, it is essential to assess the normal and shear stresses relative to the interface. The inclusion of viscoelasticity and plastic deformation to the program to account for the time-dependent, thermal sensitive, and largely deformable nature of dental materials would enhance its application. Further improvement of the nonlinear FEM solutions should be encouraged to widen the range of applications in dental and oral health science.

  10. WEBnm@: a web application for normal mode analyses of proteins

    Directory of Open Access Journals (Sweden)

    Reuter Nathalie

    2005-03-01

    Full Text Available Abstract Background Normal mode analysis (NMA) has become the method of choice to investigate the slowest motions in macromolecular systems. NMA is especially useful for large biomolecular assemblies, such as transmembrane channels or virus capsids. NMA relies on the hypothesis that the vibrational normal modes having the lowest frequencies (also named soft modes) describe the largest movements in a protein and are the ones that are functionally relevant. Results We developed a web-based server to perform normal mode calculations and different types of analyses. Starting from a structure file provided by the user in PDB format, the server calculates the normal modes and subsequently offers the user a series of automated calculations: normalized squared atomic displacements, vector field representation and animation of the first six vibrational modes. Each analysis is performed independently from the others and results can be visualized using only a web browser. No additional plug-in or software is required. For users who would like to analyze the results with their favorite software, raw results can also be downloaded. The application is available at http://www.bioinfo.no/tools/normalmodes. We present here the underlying theory, the application architecture and an illustration of its features using a large transmembrane protein as an example. Conclusion We built an efficient and modular web application for normal mode analysis of proteins. Non-specialists can easily and rapidly evaluate the degree of flexibility of multi-domain protein assemblies and characterize the large amplitude movements of their domains.
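
    The core computation behind such a server can be sketched with an elastic-network model: assemble the spring-network Hessian, diagonalize it, and keep the softest non-trivial modes. The random coordinates, cutoff and uniform spring constant below are illustrative assumptions, not WEBnm@'s actual force field.

```python
import numpy as np

# Anisotropic-network-style normal mode sketch.  Assumes the spring network
# is connected, so exactly six zero modes (rigid-body motions) appear.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(20, 3))        # 20 pseudo-atoms
cutoff, k = 6.0, 1.0

n = len(coords)
H = np.zeros((3 * n, 3 * n))                     # Hessian of the network
for i in range(n):
    for j in range(i + 1, n):
        d = coords[j] - coords[i]
        r2 = d @ d
        if r2 > cutoff**2:
            continue
        block = -k * np.outer(d, d) / r2         # off-diagonal 3x3 block
        H[3*i:3*i+3, 3*j:3*j+3] = block
        H[3*j:3*j+3, 3*i:3*i+3] = block
        H[3*i:3*i+3, 3*i:3*i+3] -= block
        H[3*j:3*j+3, 3*j:3*j+3] -= block

w, v = np.linalg.eigh(H)                         # eigenvalues ascending
soft = v[:, 6:12]                                # six softest internal modes
disp = (soft**2).reshape(n, 3, -1).sum(axis=1)   # squared displacements
print(disp / disp.sum(axis=0))                   # normalized per mode
```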

  11. A Brief overview of neutron activation analyses methodology and applications

    International Nuclear Information System (INIS)

    Ali, M.A.

    2000-01-01

    The primary objective of this talk is to present our new facility for Neutron Activation Analysis (NAA) to the scientific and industrial communities and show its possibilities. The talk therefore covers the following main items: an overview of neutron activation analysis; the special interest of fast mono-energetic neutrons; the NAA method and its sensitivities; recent scientific and industrial applications using NAA; and an illustrative example measured using our facility. What is NAA? It is a sensitive analytical technique useful for performing both qualitative and quantitative multi-element analyses in samples. Worldwide application of NAA is so widespread that an estimated several tens of thousands of samples undergo analysis each year, from almost every conceivable field of scientific or technical interest. Why NAA? For many elements and applications, NAA offers sensitivities that are sometimes superior to those attainable by other methods, on the order of the nanogram level; it is accurate and reliable; and NAA is generally recognized as the 'referee method' of choice when new procedures are being developed or when other methods yield results that do not agree. However, activation analysis at En = 14 MeV is limited by a few factors: the low value of the flux, low cross-sections of threshold reactions, short irradiation times due to finite target life, and interfering reactions and gamma-ray spectral interference.
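
    The flux and irradiation-time limits mentioned above follow from the standard activation relation A = N φ σ (1 - exp(-λ t_irr)): activity saturates after a few half-lives, so a low 14 MeV flux caps the attainable activity. A worked example with invented numbers:

```python
import numpy as np

# Induced activity for a threshold reaction at 14 MeV (illustrative values).
N = 1.0e20            # target atoms
phi = 1.0e8           # neutron flux, n/cm^2/s (low, as noted for 14 MeV)
sigma = 0.1e-24       # cross-section, cm^2 (0.1 barn)
half_life = 600.0     # product half-life, s
lam = np.log(2) / half_life

for t_irr in (60, 600, 3600):
    A = N * phi * sigma * (1 - np.exp(-lam * t_irr))
    print(f"t_irr = {t_irr:5d} s -> A = {A:6.1f} Bq (saturation 1000 Bq)")
```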

  12. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  13. Application of carbon isotope analyses in food technology

    International Nuclear Information System (INIS)

    Szanto, Zsuzsa; Svingor, E.; Futo, I.; Palcsu, L.; Molnar, M.

    2001-01-01

    ...and the analyses of the CO2 by mass spectrometry. CO2 was obtained by microcombustion (elemental analyzer NA1500NCS), trapped into an ampoule at liquid nitrogen temperature and measured on a McKinney-Nier type mass spectrometer developed in the INR-HAS (dual inlet system and triple ion collector). The range of δ13C values found in terrestrial plants varies from about -8 o/oo to about -35 o/oo, depending on their photosynthetic mechanisms. There are three major photosynthetic pathways (different ways of fixing CO2): C3 (δ13C from about -22 o/oo to about -35 o/oo), C4 (δ13C from about -8 o/oo to about -20 o/oo) and CAM (spread throughout most of the ranges of values found for C3 and C4 plants). Most of the well-established carbon SIRA procedures, which have found application in food science, are related to the substantial isotopic difference between products formed by C3 or C4 photosynthesis. The purpose of the paper is to present, through some examples, the practical applications of carbon isotope analyses in the food industry. (authors)
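
    The δ13C values quoted are per-mil deviations of the 13C/12C ratio from the VPDB standard: δ13C = (R_sample / R_standard - 1) × 1000. A minimal sketch with an invented sample ratio:

```python
# delta-13C in per mil relative to VPDB; values near -12 o/oo point to C4
# sugars (e.g., cane or corn), values near -26 o/oo to C3 plants.
R_VPDB = 0.0111802                       # 13C/12C ratio of the VPDB reference

def delta13C(R_sample):
    return (R_sample / R_VPDB - 1.0) * 1000.0

print(f"{delta13C(0.01103):.1f} o/oo")   # about -13 o/oo: C4-like signature
```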

  14. Evaluating the multi-threading countermeasure

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-12-01

    Full Text Available ...to obfuscate individuals' information from people attempting to intercept data. One of these cryptographic algorithms is the AES algorithm [1]. This algorithm has been declared the standard protocol for encrypting information by the National Institute... In attacking the AES-128 algorithm, four steps were followed: while the AES-128 algorithm was executing the encryption process, the power traces along with the corresponding input text were captured; a power leakage model was implemented where the guess of a key byte...

  15. Applications of Stochastic Analyses for Collaborative Learning and Cognitive Assessment

    National Research Council Canada - National Science Library

    Soller, Amy; Stevens, Ron

    2007-01-01

    ...Examples ranging from fields as diverse as defense analysis, cognitive science, and instruction are illustrated throughout to demonstrate the variety of applications that benefit from such stochastic...

  16. Application of principal component and factor analyses in electron spectroscopy

    International Nuclear Information System (INIS)

    Siuda, R.; Balcerowska, G.

    1998-01-01

    Fundamentals of two methods, taken from multivariate analysis and known as principal component analysis (PCA) and factor analysis (FA), are presented. Both methods are well known in chemometrics. Since 1979, when application of the methods to electron spectroscopy was reported for the first time, they became to be more and more popular in different branches of electron spectroscopy. The paper presents examples of standard applications of the method of Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), and electron energy loss spectroscopy (EELS). Advantages one can take from application of the methods, their potentialities as well as their limitations are pointed out. (author)

  17. Application of digital image correlation method for analysing crack ...

    Indian Academy of Sciences (India)

    ...concentrated strain by imitating the treatment of micro-cracks using the finite element ... water and moisture to penetrate the concrete, leading to serious rust of the ... The correlations among various grey values of digital images are analysed for ...

  18. Application of digital-image-correlation techniques in analysing ...

    Indian Academy of Sciences (India)

    Basis theory of strain analysis using the digital image correlation method ... Type 304N Stainless Steel (Modulus of Elasticity = 193 GPa, Tensile Yield ... also proves the accuracy of the qualitative analyses by using the DIC ... We thank the National Science Council of Taiwan for supporting this research through grant No. ...

  19. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
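
    The MIDAS idea can be sketched with a toy example: daily rainfall enters a fortnightly regression through a parsimonious exponential Almon weight function, so only two lag-shape parameters are estimated rather than one coefficient per day. The data, parameter values and grid-search fit below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def almon_weights(theta1, theta2, K):
    """Exponential Almon lag weights, normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(1)
days, K = 28 * 20, 14                      # 20 fortnights, 14 daily lags
rain = rng.gamma(2.0, 2.0, size=days)      # synthetic daily rainfall

w_true = almon_weights(0.2, -0.05, K)
y = np.array([3.0 + 5.0 * (w_true @ rain[t - K:t][::-1])
              + rng.normal(0, 0.1) for t in range(K, days, 14)])

# Grid search over the two Almon parameters, OLS for the linear part;
# a real application would use nonlinear least squares instead.
best = None
for t1 in np.linspace(-0.5, 0.5, 21):
    for t2 in np.linspace(-0.1, 0.0, 21):
        X = np.array([almon_weights(t1, t2, K) @ rain[t - K:t][::-1]
                      for t in range(K, days, 14)])
        A = np.column_stack([np.ones_like(X), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((A @ beta - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t1, t2, beta)
print("theta:", best[1], best[2], "intercept/slope:", np.round(best[3], 2))
```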

  20. [Application of big data analyses for musculoskeletal cell differentiation].

    Science.gov (United States)

    Imai, Yuuki

    2016-04-01

    Next-generation sequencing has strongly advanced big data analyses in the life sciences. Among the various kinds of sequencing data sets, epigenetic platforms have become an important key to clarifying questions on broad and detailed phenomena in various forms of life. This report introduces research on the identification of novel transcription factors in osteoclastogenesis using DNase-seq. Big data on musculoskeletal research will be organized by the IFMRS and is becoming increasingly crucial.

  1. Lipids: From Chemical Structures, Biosynthesis, and Analyses to Industrial Applications.

    Science.gov (United States)

    Li-Beisson, Yonghua; Nakamura, Yuki; Harwood, John

    2016-01-01

    Lipids are one of the major subcellular components and play numerous essential functions. As well as their physiological roles, oils stored in biomass are useful commodities for a variety of biotechnological applications including food, chemical feedstocks, and fuel. Due to their agronomic as well as economic and societal importance, lipids have historically been subjected to intensive study. Major current efforts aim to increase the energy density of cell biomass and/or create designer oils suitable for specific applications. This chapter covers some basic aspects of what one needs to know about lipids: definition, structure, function and metabolism; focus is also given to the development of modern lipid analytical tools and the major current engineering approaches for biotechnological applications. This introductory chapter is intended to serve as a primer for all subsequent chapters in this book outlining current developments in specific areas of lipids and their metabolism.

  2. Efficiency of insurance companies: Application of DEA and Tobit analyses

    Directory of Open Access Journals (Sweden)

    Eva Grmanová

    2017-10-01

    Full Text Available The aim of this paper is to determine the relationship between the technical efficiency and profitability of insurance companies. The profitability of insurance companies was expressed by indicators such as ROA, ROE and the size of assets. We analysed 15 commercial insurance companies in Slovakia over the period 2013-2015. Technical efficiency scores were expressed using DEA models. The relationship between the technical efficiency score and the indicators of profitability was expressed using censored regression, i.e. the Tobit regression model, and the Mann-Whitney U-test. The relationship between the technical efficiency score in the CCR and BCC models and the groups formed on the basis of the return on assets and the group formed based on the return on equity was not confirmed. A statistically significant difference between the average technical efficiency score in the CCR model in the group of insurance companies with ROA...
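
    The DEA step can be sketched as an input-oriented CCR model in multiplier form, one linear program per insurance company (decision-making unit). The data below are invented, and the subsequent Tobit regression of the scores on ROA/ROE is omitted:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs

scores = []
for o in range(len(X)):
    # Variables [u, v]: maximize u.y_o subject to v.x_o = 1 and
    # u.Y_j - v.X_j <= 0 for every unit j (efficiency never exceeds 1).
    c = np.concatenate([-Y[o], np.zeros(X.shape[1])])
    A_ub = np.hstack([Y, -X])
    A_eq = np.concatenate([np.zeros(Y.shape[1]), X[o]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)), A_eq=A_eq,
                  b_eq=[1.0], bounds=[(0, None)] * (Y.shape[1] + X.shape[1]))
    scores.append(-res.fun)
print(np.round(scores, 3))        # a score of 1.0 marks an efficient unit
```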

  3. Technology application analyses at five Department of Energy Sites

    International Nuclear Information System (INIS)

    1995-05-01

    The Hazardous Waste Remedial Actions Program (HAZWRAP), a division of Lockheed Martin Energy Systems, Inc., managing contractor for the Department of Energy (DOE) facilities in Oak Ridge, Tennessee, was tasked by the United States Air Force (USAF) through an Interagency Agreement between DOE and the USAF, to provide five Technology Application Analysis Reports to the USAF. These reports were to provide information about DOE sites that have volatile organic compounds contaminating soil or ground water and how the sites have been remediated. The sites were using either a pump-and-treat technology or an alternative to pump-and-treat. The USAF was looking at the DOE sites for lessons learned that could be applied to Department of Defense (DoD) problems in an effort to communicate throughout the government system. The five reports were part of a larger project undertaken by the USAF to look at over 30 sites. Many of the sites were DoD sites, but some were in the private sector. The five DOE projects selected to be reviewed came from three sites: the Savannah River Site (SRS), the Kansas City Site, and Lawrence Livermore National Laboratory (LLNL). SRS and LLNL provided two projects each. Both provided a standard pump-and-treat application as well as an innovative technology that is an alternative to pump-and-treat. The five reports on these sites have previously been published separately. This volume combines them to give the reader an overview of the whole project

  4. Progress for the Industry Application External Hazard Analyses Early Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Prescott, Steven [Idaho National Lab. (INL), Idaho Falls, ID (United States); Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ryan, Emerald [Idaho State Univ., Pocatello, ID (United States); Bhandari, Bishwo [Idaho State Univ., Pocatello, ID (United States); Sludern, Daniel [Idaho State Univ., Pocatello, ID (United States); Pope, Chad [Idaho State Univ., Pocatello, ID (United States); Sampath, Ram [Centroid PIC, Idaho Falls, ID (United States)

    2015-09-01

    This report describes the current progress and status related to the Industry Application #2 focusing on External Hazards. For this industry application within the Light Water Reactor Sustainability (LWRS) Program Risk-Informed Safety Margin Characterization (RISMC) R&D Pathway, we will create the Risk-Informed Margin Management (RIMM) approach to represent meaningful (i.e., realistic facility representation) event scenarios and consequences by using an advanced 3D facility representation that will evaluate external hazards such as flooding and earthquakes in order to identify, model and analyze the appropriate physics that needs to be included to determine plant vulnerabilities related to external events; manage the communication and interactions between different physics modeling and analysis technologies; and develop the computational infrastructure through tools related to plant representation, scenario depiction, and physics prediction. One of the unique aspects of the RISMC approach is how it couples probabilistic approaches (the scenario) with mechanistic phenomena representation (the physics) through simulation. This simulation-based modeling allows decision makers to focus on a variety of safety, performance, or economic metrics. In this report, we describe the evaluation of various physics toolkits related to flooding representation. Ultimately, we will be coupling the flooding representation with other events such as earthquakes in order to provide coupled physics analysis for scenarios where interactions exist.

  5. Application of four dyes in gene expression analyses by microarrays

    Directory of Open Access Journals (Sweden)

    van Schooten Frederik J

    2005-07-01

    Full Text Available Abstract Background DNA microarrays are widely used in gene expression analyses. To increase throughput and minimize costs without reducing the gene expression data obtained, we investigated whether four mRNA samples can be analyzed simultaneously by applying four different fluorescent dyes. Results Following tests for cross-talk of fluorescence signals, Alexa 488, Alexa 594, Cyanine 3 and Cyanine 5 were selected for hybridizations. For self-hybridizations, a single RNA sample was labelled with all dyes and hybridized on commercial cDNA arrays or on in-house spotted oligonucleotide arrays. Correlation coefficients for all combinations of dyes were above 0.9 on the cDNA array. On the oligonucleotide array they were above 0.8, except for combinations with Alexa 488, which were approximately 0.5. The standard deviation of expression differences for replicate spots was similar on the cDNA array for all dye combinations, but on the oligonucleotide array combinations with Alexa 488 showed a higher variation. Conclusion In conclusion, the four dyes can be used simultaneously for gene expression experiments on the tested cDNA array, but only three dyes can be used on the tested oligonucleotide array. This was confirmed by hybridizations of control with test samples, as all combinations returned similar numbers of differentially expressed genes with comparable effects on gene expression.

  6. Capacity and reliability analyses with applications to power quality

    Science.gov (United States)

    Azam, Mohammad; Tu, Fang; Shlapak, Yuri; Kirubarajan, Thiagalingam; Pattipati, Krishna R.; Karanam, Rajaiah

    2001-07-01

    The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. Currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, motor control centers). What is lacking is an information processing and decision support infrastructure to harness this voluminous information into usable operational and management knowledge, to manage the health of equipment and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the capacity and reliability analyses of power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). The real-time capacity and margin analysis helps operators to plan for additional loads and to schedule repair/replacement activities. The reliability analysis, based on a computationally efficient sum of disjoint products, enables analysts to decide the optimum levels of redundancy, aids operators in prioritizing maintenance options for a given budget, and supports monitoring the system for capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.
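
    The reliability computation can be illustrated in its simplest form. The sum of disjoint products the paper uses is a computationally efficient reformulation; for a tiny invented power-path example, plain inclusion-exclusion over minimal path sets yields the same system availability:

```python
from itertools import combinations

# Component availabilities and minimal path sets (both invented).
avail = {"utility": 0.999, "genset": 0.98, "ups": 0.995, "bus": 0.9999}
paths = [{"utility", "bus"}, {"genset", "bus"}, {"utility", "ups"}]

def up_prob(components):
    """P(all listed components up), assuming independent components."""
    p = 1.0
    for comp in components:
        p *= avail[comp]
    return p

# P(at least one path works) by inclusion-exclusion over path unions.
total = 0.0
for r in range(1, len(paths) + 1):
    for combo in combinations(paths, r):
        total += (-1) ** (r + 1) * up_prob(set().union(*combo))
print(f"system availability ~ {total:.6f}")
```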

  7. Applications of high lateral and energy resolution imaging XPS with a double hemispherical analyser based spectromicroscope

    International Nuclear Information System (INIS)

    Escher, M.; Winkler, K.; Renault, O.; Barrett, N.

    2010-01-01

    The design and applications of an instrument for imaging X-ray photoelectron spectroscopy (XPS) are reviewed. The instrument is based on a photoelectron microscope and a double hemispherical analyser whose symmetric configuration avoids the spherical aberration (α²-term) inherent in standard analysers. The analyser allows high-transmission imaging without sacrificing the lateral and energy resolution of the instrument. The importance of high transmission, especially for highest-resolution imaging XPS with monochromated laboratory X-ray sources, is outlined, and the close interrelation of energy resolution, lateral resolution and analyser transmission is illustrated. Chemical imaging applications using a monochromatic laboratory Al Kα source are shown, with a lateral resolution of 610 nm. Examples of measurements made using synchrotron and laboratory ultra-violet light show the broad field of applications, from imaging of core-level electrons with chemical shift identification to high-resolution threshold photoelectron emission microscopy (PEEM), work function imaging and band structure imaging.

  8. Application of Advanced Multi-Core Processor Technologies to Oceanographic Research

    Science.gov (United States)

    2013-09-30

    [Only fragments of this report were extracted. A flattened table of candidate embedded processors survives, listing parts across power classes: STM32 and NXP LPC series (proprietary), Microchip PIC32/dsPIC, and, in the 500 mW to 5 W class, ARM Cortex, TI OMAP, TI Sitara, Broadcom BCM2835 and FPGAs. The surviving objectives fragment reads: next-generation processor architectures (multi-core, multi-threaded) hold the ...]

  9. Non-linear finite element analyses applicable for the design of large reinforced concrete structures

    NARCIS (Netherlands)

    Engen, M; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik

    2017-01-01

    In order to make non-linear finite element analyses applicable during assessments of the ultimate load capacity or the structural reliability of large reinforced concrete structures, there is a need for an efficient solution strategy with a low modelling uncertainty. A solution strategy comprises...

  10. Sandia Transportation Technical Environmental Information Center and its application to transportation risk analyses

    International Nuclear Information System (INIS)

    Foley, J.T.; Davidson, C.A.; McClure, J.D.

    1978-01-01

    The purpose of this paper is to describe an applied research activity which is fundamental to the conduct of transportation analyses: the collection, analysis, storage, and retrieval of information on the intensities of technical environments. This paper describes the collection system which provides such a service to official researchers in transportation analysis, and the applications of this information in the area of risk analysis.

  11. Applicability of two mobile analysers for mercury in urine in small-scale gold mining areas.

    Science.gov (United States)

    Baeuml, Jennifer; Bose-O'Reilly, Stephan; Lettmeier, Beate; Maydl, Alexandra; Messerer, Katalin; Roider, Gabriele; Drasch, Gustav; Siebert, Uwe

    2011-12-01

    Mercury is still used in developing countries to extract gold from ore in small-scale gold mining areas. This is a major health hazard for people living in mining areas. The concentration of mercury in urine was analysed in different mining areas in Zimbabwe, Indonesia and Tanzania. The urine samples were first analysed by CV-AAS (cold vapour atomic absorption spectrometry) during the field projects with a mobile mercury analyser (Lumex® or Seefelder®) and secondly in a laboratory with a stationary CV-AAS mercury analyser (PerkinElmer®). Because the systems use different reduction agents (SnCl2 for the Lumex® and Seefelder® analysers, NaBH4 for the PerkinElmer®), the mobile analysers measure only inorganic mercury, whereas the stationary system measures the total mercury concentration. The aims of the study were to determine whether the results obtained in the field with the mobile equipment are comparable with the stationary reference method in the laboratory, and whether these mobile analysers can be applied in screening studies of concerned populations to select those who are exposed to critical mercury levels. Overall, the concentrations obtained with the two mobile systems were approximately 25% lower than those determined with the stationary system. Nevertheless, both mobile systems seem to be very useful for screening volunteers in the field. Moreover, regional staff may be trained on such analysers to perform screening tests themselves.

  12. Application of ASTEC V2.0 to severe accident analyses for German KONVOI type reactors

    International Nuclear Information System (INIS)

    Nowack, H.; Erdmann, W.; Reinke, N.

    2011-01-01

    The integral code ASTEC is jointly developed by IRSN (Institut de Radioprotection et de Sûreté Nucléaire, France) and GRS (Germany). Its main objective is to simulate severe accident scenarios in PWRs from the initiating event up to the release of radioactive material into the environment. This paper describes the ASTEC modelling approach and the nodalisation of a KONVOI-type PWR as an application example. Results from an integral severe accident study are presented, and shortcomings as well as advantages are outlined. In conclusion, the applicability of ASTEC V2.0 to deterministic severe accident analyses used for PSA level 2 and Severe Accident Management studies is assessed. (author)

  13. Light Water Reactor Sustainability Program Industry Application External Hazard Analyses Problem Statement

    Energy Technology Data Exchange (ETDEWEB)

    Szilard, Ronaldo Henriques [Idaho National Lab. (INL), Idaho Falls, ID (United States); Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Prescott, Steven [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kammerer, Annie [Annie Kammerer Consulting, Rye, NH (United States); Youngblood, Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pope, Chad [Idaho State Univ., Pocatello, ID (United States)

    2015-07-01

    This report addresses the Risk-Informed Margin Management Industry Application on External Events: more specifically, combined-events, seismically induced external flooding analyses for a generic nuclear power plant with a generic site soil and generic power plant systems and structures. The focus of this report is to define the problem above, set up the analysis, describe the methods to be used and the tools to be applied to each problem, and present the data analysis and validation associated with the above.

  14. Metagenomic analyses of bacteria on human hairs: a qualitative assessment for applications in forensic science.

    Science.gov (United States)

    Tridico, Silvana R; Murray, Dáithí C; Addison, Jayne; Kirkbride, Kenneth P; Bunce, Michael

    2014-01-01

    Mammalian hairs are one of the most ubiquitous types of trace evidence collected in the course of forensic investigations. However, hairs that are naturally shed or that lack roots are problematic substrates for DNA profiling; these hair types often contain insufficient nuclear DNA to yield short tandem repeat (STR) profiles. Whilst there have been a number of initial investigations evaluating the value of metagenomic analyses for forensic applications (e.g. examination of computer keyboards), there have been no metagenomic evaluations of human hairs, a substrate commonly encountered during forensic practice. This study attempts to address this forensic capability gap by conducting a qualitative assessment of the applicability of metagenomic analyses of human scalp and pubic hair. Forty-two DNA extracts obtained from human scalp and pubic hairs generated a total of 79,766 reads, yielding 39,814 reads post control and abundance filtering. The results revealed the presence of unique combinations of microbial taxa that can enable discrimination between individuals, and signature taxa indigenous to female pubic hairs. Microbial data from a single co-habiting couple added an extra dimension to the study by suggesting that metagenomic analyses might be of evidentiary value in sexual assault cases when other associative evidence is not present. Of all the data generated in this study, the next-generation sequencing (NGS) data generated from pubic hair held the most potential for forensic applications. Metagenomic analyses of human hairs may provide independent data to augment other forensic results and possibly provide an association between victims of sexual assault and offenders when other associative evidence is absent. Based on the results garnered in the present study, we believe that with further development, bacterial profiling of hair will become a valuable addition to the forensic toolkit.

  15. Physical characterization of biomass-based pyrolysis liquids. Application of standard fuel oil analyses

    Energy Technology Data Exchange (ETDEWEB)

    Oasmaa, A; Leppaemaeki, E; Koponen, P; Levander, J; Tapola, E [VTT Energy, Espoo (Finland). Energy Production Technologies

    1998-12-31

    The main purpose of the study was to test the applicability of standard fuel oil methods, developed for petroleum-based fuels, to pyrolysis liquids. In addition, research on sampling, homogeneity, stability, miscibility and corrosivity was carried out. The standard methods have been tested for several different pyrolysis liquids. Recommendations on sampling, sample size and small modifications of the standard methods are presented. In general, most of the methods can be used as such, but the accuracy of the analysis can be improved by minor modifications. Fuel oil analyses not suitable for pyrolysis liquids have been identified. Homogeneity of the liquids is the most critical factor in accurate analysis. The presence of air bubbles may disturb several analyses. Sample preheating and prefiltration should be avoided where possible: the former may cause changes in the composition and structure of the pyrolysis liquid, and the latter may remove part of the organic material along with the particles. The size of the sample should be determined on the basis of the homogeneity and the water content of the liquid. The basic analyses of the Technical Research Centre of Finland (VTT) include water, pH, solids, ash, Conradson carbon residue, heating value, CHN, density, viscosity, pour point, flash point, and stability. Additional analyses are carried out when needed. (orig.) 53 refs.

  16. Application of energy dispersive X-ray spectrometers with semiconductor detectors in radiometric analyses

    International Nuclear Information System (INIS)

    Jugelt, P.; Schieckel, M.

    1983-01-01

    Problems and possibilities of applying semiconductor detector spectrometers in radiometric analyses are described. A summary of the state of the art and the trends in device engineering and spectrum evaluation is given. Liquid-nitrogen-cooled Li-drifted Si detectors and high-purity Ge detectors are compared. Semiconductor detectors working at room temperature are under development; in this connection, CdTe and HgI2 semiconductor detectors are compared. The use of small, efficient computers in spectrometer systems stimulates the development of algorithms for spectrum analysis and for determining concentrations. Fields of application of energy-dispersive X-ray spectrometers include X-ray diffraction and X-ray macroanalysis in investigating the structure of extensive surface regions.

  17. A Fourier transform infrared trace gas and isotope analyser for atmospheric applications

    Directory of Open Access Journals (Sweden)

    D. W. T. Griffith

    2012-10-01

    Full Text Available Concern in recent decades about human impacts on Earth's climate has led to the need for improved and expanded measurement capabilities of greenhouse gases in the atmosphere. In this paper we describe in detail an in situ trace gas analyser based on Fourier Transform Infrared (FTIR) spectroscopy that is capable of simultaneous and continuous measurements of carbon dioxide (CO2), methane (CH4), carbon monoxide (CO), nitrous oxide (N2O) and 13C in CO2 in air with high precision. High accuracy is established by reference to measurements of standard reference gases. Stable water isotopes can also be measured in undried airstreams. The analyser is automated and allows unattended operation with minimal operator intervention. Precision and accuracy meet and exceed the compatibility targets set by the World Meteorological Organisation – Global Atmosphere Watch for baseline measurements in the unpolluted troposphere for all species except 13C in CO2.

    The analyser is mobile and well suited to fixed sites, tower measurements, mobile platforms and campaign-based measurements. The isotopic specificity of the optically-based technique and analysis allows its application in isotopic tracer experiments, for example in tracing variations of 13C in CO2 and 15N in N2O. We review a number of applications illustrating use of the analyser in clean air monitoring, micrometeorological flux and tower measurements, mobile measurements on a train, and soil flux chamber measurements.

  18. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    International Nuclear Information System (INIS)

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    The computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In the former study on this subject, modernization of the user interface of the DETRA code was carried out. This new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed by DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of the contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m2). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant-season modelling part. These two sequences were linked periodically, simulating the realistic practice of care-taking of domestic animals in Finland. The most important parameters were varied in this exercise. The diversification of the user interface of the DETRA code performed here seems to provide an easily...
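
    A minimal sketch of such a sensitivity run: sample the pasture-to-milk transfer parameters from the uniform and triangular distributions named above and summarize the distribution of milk activity. The one-step transfer model and every parameter value are illustrative assumptions, far simpler than DETRA's compartment structure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
deposition = 10_000.0                            # Bq/m^2 of Cs-137

yield_kg = rng.uniform(0.3, 1.2, n)              # pasture yield, kg(dw)/m^2
intake = rng.triangular(8, 12, 16, n)            # cow intake, kg(dw)/day
f_milk = rng.triangular(0.002, 0.008, 0.014, n)  # transfer factor, d/L

conc_pasture = deposition / yield_kg             # Bq/kg on pasture
conc_milk = conc_pasture * intake * f_milk       # Bq/L in milk

print(np.percentile(conc_milk, [5, 50, 95]))     # CCDF-style summary
```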

  19. Competition and stability analyses among emissions, energy, and economy: Application for Mexico

    International Nuclear Information System (INIS)

    Pao, Hsiao-Tien; Fu, Hsin-Chia

    2015-01-01

    In view of the limited natural resources on Earth, the linkage among environment, energy, and economy (3Es) has become an important perspective for sustainable development. This paper proposes to use the Lotka–Volterra model for SUstainable Development (LV-SUD) to analyse the interspecific interactions, equilibria and their stabilities among emissions, different types of energy consumption (renewable, nuclear, and fossil fuel), and real GDP, the main factors of 3Es issues. Modelling these interactions provides a useful multivariate framework for predicting outcomes. Interaction between the 3Es, namely competition, symbiosis, or predation, plays an important role in policy development to achieve a balanced use of energy resources and to strengthen the green economy. Applying LV-SUD to Mexico, an emerging-market country, the analysis shows that there is a mutualism between fossil fuel consumption and GDP; prey-predator relationships in which fossil fuel and GDP enhance the growth of emissions, but emissions inhibit the growth of the others; and commensalisms in which GDP benefits from nuclear power, and renewable power benefits from fossil fuel. It is suggested that national energy policies should remain committed to decoupling the relevance between non-clean energy and GDP, to actively developing clean energy and thereby to properly reducing fossil fuel consumption and emissions without harming economic growth. - Highlights: • LV-SUD is used to analyse the competition between environment, energy and economy (3Es). • The competitions between renewable, nuclear, and fossil energy are analysed. • Competition between the 3Es plays an important role in policy development. • LV-SUD provides a useful multivariate framework for predicting outcomes. • An application to emerging-market countries such as Mexico is presented
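
    The prey-predator pattern reported between GDP and emissions can be sketched with a two-species Lotka-Volterra system; the coefficients below are invented, not the values estimated for Mexico.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, a1, b1, a2, b2):
    gdp, emis = z
    return [gdp * (a1 - b1 * emis),    # emissions inhibit GDP growth
            emis * (-a2 + b2 * gdp)]   # GDP feeds emissions growth

sol = solve_ivp(lv, (0, 50), [1.0, 0.5], args=(0.4, 0.3, 0.2, 0.1),
                dense_output=True)
gdp, emis = sol.sol(np.linspace(0, 50, 200))
print("emissions cycle between", emis.min().round(2), "and", emis.max().round(2))
print("equilibrium: GDP =", 0.2 / 0.1, ", emissions =", round(0.4 / 0.3, 2))
```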

  20. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable. For an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. According to the required measurement range, an appropriate length of sample cell is chosen. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The instrument outputs a 4-20 mA current signal proportional to the uranium concentration, a significant step towards continuous process control and computer management.

  1. Attitudinal Analyses of Toleration and Respect, and the Problem of Institutional Applicability

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2010-01-01

    ...have a sufficiently similar meaning when applied to institutions such as the state as to individual persons? The paper presents the standard analyses and explains in what sense they are attitudinal and why the attitudinal component is necessary. It then presents the problem of institutional applicability that the attitudinal component brings about: the ascription of the requisite attitudes to institutions in general and the state in particular is problematic, since institutions arguably cannot have attitudes of the required kind. This problem is distinguished from other problems, including the problem of making sense of political toleration raised by Glen Newey, and some possible responses to the problem are considered, including Peter Jones' disaggregative response to Newey, all of which are found inadequate. The paper instead proposes that the analysis of institutional toleration and respect...

  2. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Frutuoso e Melo, P.F.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    A computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments is discussed. A detailed analysis of the explicit method is presented, including the train-level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. (Author)

  3. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S.; Frutuoso e Melo, P.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    We discuss in this paper a computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments. A detailed analysis of the explicit method is presented, including the train-level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. This code constructs and quantifies the event trees in the fashion just discussed, receiving as input the construction and quantification dependencies defined in the dependency matrix. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. This calculation is made in a pointwise fashion. Extensions of this code are being developed in order to perform uncertainty analyses on the dominant sequences and also risk importance measures of the safety systems involved. (orig.)
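
    The dependency handling both abstracts describe can be sketched by letting a branch probability depend on the states of systems earlier in the tree, which is the information a dependency matrix encodes. The two-system example and its numbers are invented:

```python
from itertools import product

systems = ["S1", "S2"]
p_fail = {"S1": lambda st: 0.01,     # independent system
          "S2": lambda st: 0.5 if st.get("S1") == "fail" else 0.02}

sequences = {}
for states in product(["ok", "fail"], repeat=len(systems)):
    st, p = {}, 1.0
    for sys_name, s in zip(systems, states):
        pf = p_fail[sys_name](st)    # branch probability given history
        p *= pf if s == "fail" else 1 - pf
        st[sys_name] = s
    sequences[states] = p

for seq, p in sorted(sequences.items(), key=lambda kv: -kv[1]):
    print(seq, f"{p:.4f}")           # dominant sequences first
```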

  4. Discussion on the applicability of entropy generation minimization to the analyses and optimizations of thermodynamic processes

    International Nuclear Information System (INIS)

    Cheng, XueTao; Liang, XinGang

    2013-01-01

    Highlights: • The applicability of entropy generation minimization is conditional. • The concept of exergy-work conversion efficiency is defined. • The concept of exergy destruction number is introduced. • Smaller exergy destruction number leads to larger exergy-work conversion efficiency. - Abstract: This work reports the analyses of some thermodynamic systems with the concepts of entropy generation, entropy generation numbers and revised entropy generation number, as well as exergy destruction number and exergy-work conversion efficiency that are proposed in this paper. The applicability of entropy generation minimization (EGM) is conditional if the optimization objective is the output power. The EGM leads to the maximum output power when the net exergy flow rate into the system is fixed, but it may not be appropriate if the net exergy flow rate into the system is not fixed. On the other hand, smaller exergy destruction number always corresponds to larger exergy-work conversion efficiency. The refrigeration cycle with the reverse Carnot engine is also analyzed in which mechanical work is input. The result shows that the EGM leads to the largest COP if the temperature of the high temperature heat source is fixed
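
    Under the simplest reading of these definitions, with the net exergy inflow fixed, the exergy-work conversion efficiency is the complement of the exergy destruction number, which makes the stated monotonic relationship immediate. A toy illustration with invented numbers:

```python
Ex_in = 100.0                        # net exergy flow into the system, kW
for Ex_dest in (40.0, 25.0, 10.0):   # exergy destroyed by irreversibility
    N_d = Ex_dest / Ex_in            # exergy destruction number
    eta = 1.0 - N_d                  # exergy-work conversion efficiency
    print(f"N_d = {N_d:.2f} -> eta = {eta:.2f}")
```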

  5. Implementing and analyzing the multi-threaded LP-inference

    Science.gov (United States)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for backward inference optimization in intelligent production-type systems. The strategy of relevant backward inference is aimed at minimizing the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. The execution of each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which allows the degree of parallelism to be increased. The authors also provide some details of the LP-structures implementation.
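
    A minimal sketch of the parallel stage just described: candidate preimages are checked concurrently against an external oracle, and the search short-circuits once the true preimage is confirmed. The names and the toy oracle are assumptions for illustration, not the authors' LP-structure implementation.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def make_checker(oracle):
    found = threading.Event()
    def check(preimage):
        if found.is_set():
            return None                  # another thread already succeeded
        if oracle(preimage):             # the costly query to the external
            found.set()                  # source (database or user dialog)
            return preimage
        return None
    return check

facts = {"b", "c"}                       # facts the external source confirms
oracle = lambda pre: pre <= facts        # a preimage is true if all facts hold
candidates = [{"a"}, {"b", "c"}, {"c", "d"}]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(make_checker(oracle), candidates))
print(next((r for r in results if r), None))   # -> {'b', 'c'}
```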

  6. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multi-threaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. The new multi-threaded ATLAS software framework, AthenaMT, has been designed from the ground up to support both the offline and online use-cases, with the aim of further harmonizing the offline and trigger algorithms. The latter is crucial both in terms of maintenance effort and to guarantee the high trigger efficiency and rejection factors needed for the next two decades of data-taking. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while...

  7. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while retaining the key aspects of trigger functionality, including regional reconstruction and early event rejection. We report on the first experience of migrating trigger algorithms to this new framework and present the next steps towards a full implementation of the ATLAS trigger within AthenaMT.

  8. A Multi-Threaded Cryptographic Pseudorandom Number Generator Test Suite

    Science.gov (United States)

    2016-09-01

    [No abstract was extracted for this record; only reference-list fragments survive, citing a SiliconANGLE article, "Android crypto PRNG flaw aided bitcoin thieves, Google releases patch" (Aug. 16, 2013), and M. Gondree's NPS POSIX thread pool library (Sep. 28, 2014).]

  9. An object-oriented multi-threaded software beamformation toolbox

    DEFF Research Database (Denmark)

    Hansen, Jens Munk; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2011-01-01

    Focusing and apodization are an essential part of signal processing in ultrasound imaging. Although the fundamental principles are simple, the dramatic increase in computational power of CPUs, GPUs, and FPGAs motivates the development of software-based beamformers, which further improve image... new beam formation strategies. It is a general 3D implementation capable of handling a multitude of focusing methods, interpolation schemes, and parametric and dynamic apodization. Despite being flexible, it is capable of exploiting parallelization on a single computer, on a cluster, or on both. On a single computer, it mimics the parallelization in a scanner containing multiple beam formers. The focusing is determined using the positions of the transducer elements, the presence of virtual sources, and the focus points. For interpolation, a number of interpolation schemes can be chosen, e.g. linear, polyno...
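
    The essential operation such a toolbox generalizes is delay-and-sum focusing. A single-focus sketch with an invented linear array, idealized point echoes, nearest-sample delays and Hann apodization:

```python
import numpy as np

c, fs = 1540.0, 100e6                        # sound speed m/s, sample rate Hz
elements = np.stack([np.linspace(-5e-3, 5e-3, 32),
                     np.zeros(32), np.zeros(32)], axis=1)
focus = np.array([0.0, 0.0, 30e-3])          # focal point at 30 mm depth

# Simulated RF data: one scatterer at the focus, band-limited point echoes.
t = np.arange(4096) / fs
rf = np.zeros((32, t.size))
for i, e in enumerate(elements):
    tof = 2 * np.linalg.norm(focus - e) / c  # round-trip time of flight
    rf[i] = np.sinc((t - tof) * 5e6)

# Delay (nearest sample), apodize and sum across the aperture.
apod = np.hanning(32)
delays = (np.linalg.norm(focus - elements, axis=1) * 2 / c * fs).astype(int)
print(sum(apod[i] * rf[i, delays[i]] for i in range(32)))  # coherent sum
```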

  10. AN MHD AVALANCHE IN A MULTI-THREADED CORONAL LOOP

    Energy Technology Data Exchange (ETDEWEB)

    Hood, A. W.; Cargill, P. J.; Tam, K. V. [School of Mathematics and Statistics, University of St Andrews, St Andrews, Fife, KY16 9SS (United Kingdom); Browning, P. K., E-mail: awh@st-andrews.ac.uk [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)

    2016-01-20

    For the first time, we demonstrate how an MHD avalanche might occur in a multithreaded coronal loop. Considering 23 non-potential magnetic threads within a loop, we use 3D MHD simulations to show that only one thread needs to be unstable in order to start an avalanche even when the others are below marginal stability. This has significant implications for coronal heating in that it provides for energy dissipation with a trigger mechanism. The instability of the unstable thread follows the evolution determined in many earlier investigations. However, once one stable thread is disrupted, it coalesces with a neighboring thread and this process disrupts other nearby threads. Coalescence with these disrupted threads then occurs leading to the disruption of yet more threads as the avalanche develops. Magnetic energy is released in discrete bursts as the surrounding stable threads are disrupted. The volume integrated heating, as a function of time, shows short spikes suggesting that the temporal form of the heating is more like that of nanoflares than of constant heating.

  11. Field Experimentation Design for Multi-Threaded Analysis

    National Research Council Canada - National Science Library

    Tackett, Gregory

    2001-01-01

    ...This report discusses the OSD definition of military utility, the decomposition and allocation of requirements, the responsibilities of organizations, and the Verification, Validation, and Accreditation (VV&A) of models, simulations, and data.

  12. Validation and application of the system code ATHLET-CD for BWR severe accident analyses

    Energy Technology Data Exchange (ETDEWEB)

    Di Marcello, Valentino, E-mail: valentino.marcello@kit.edu; Imke, Uwe; Sanchez, Victor

    2016-10-15

    Highlights: • We present the application of the system code ATHLET-CD for BWR safety analyses. • Validation of core in-vessel models is performed based on the KIT CORA experiments. • A SB-LOCA scenario is simulated on a generic German BWR plant up to vessel failure. • Different core reflooding possibilities are investigated to mitigate the accident consequences. • ATHLET-CD modelling features reflect the current state of the art of severe accident codes. - Abstract: This paper is aimed at the validation and application of the system code ATHLET-CD for the simulation of severe accident phenomena in Boiling Water Reactors (BWR). The corresponding models for core degradation behaviour, e.g. oxidation, melting and relocation of core structural components, are validated against experimental data available from the CORA-16 and -17 bundle tests. Model weaknesses are discussed along with needs for further code improvements. With the validated ATHLET-CD code, calculations are performed to assess the code's capabilities for the prediction of in-vessel late-phase core behaviour and reflooding of damaged fuel rods. For this purpose, a small-break LOCA scenario for a generic German BWR with postulated multiple failures of the safety systems was selected. In the analysis, accident management measures, represented by cold water injection into the damaged reactor core, are addressed to investigate their efficacy in avoiding or delaying the failure of the reactor pressure vessel. Results show that ATHLET-CD is applicable to the description of BWR plant behaviour, with reliable physical models and numerical methods adopted for the description of key in-vessel phenomena.

  13. A protocol for better design, application, and communication of population viability analyses.

    Science.gov (United States)

    Pe'er, Guy; Matsinos, Yiannis G; Johst, Karin; Franz, Kamila W; Turlure, Camille; Radchuk, Viktoriia; Malinowska, Agnieszka H; Curtis, Janelle M R; Naujokaitis-Lewis, Ilona; Wintle, Brendan A; Henle, Klaus

    2013-08-01

    Population viability analyses (PVAs) contribute to conservation theory, policy, and management. Most PVAs focus on single species within a given landscape and address a specific problem. This specificity often is reflected in the organization of published PVA descriptions. Many lack structure, making them difficult to understand, assess, repeat, or use for drawing generalizations across PVA studies. In an assessment comparing published PVAs and existing guidelines, we found that model selection was rarely justified; important parameters remained neglected or their implementation was described vaguely; limited details were given on parameter ranges, sensitivity analysis, and scenarios; and results were often reported too inconsistently to enable repeatability and comparability. Although many guidelines exist on how to design and implement reliable PVAs and standards exist for documenting and communicating ecological models in general, there is a lack of organized guidelines for designing, applying, and communicating PVAs that account for their diversity of structures and contents. To fill this gap, we integrated published guidelines and recommendations for PVA design and application, protocols for documenting ecological models in general and individual-based models in particular, and our collective experience in developing, applying, and reviewing PVAs. We devised a comprehensive protocol for the design, application, and communication of PVAs (DAC-PVA), which has 3 primary elements. The first defines what a useful PVA is; the second element provides a workflow for the design and application of a useful PVA and highlights important aspects that need to be considered during these processes; and the third element focuses on communication of PVAs to ensure clarity, comprehensiveness, repeatability, and comparability. Thereby, DAC-PVA should strengthen the credibility and relevance of PVAs for policy and management, and improve the capacity to generalize PVA findings

  14. CALCMIN - an EXCEL™ Visual Basic application for calculating mineral structural formulae from electron microprobe analyses

    Science.gov (United States)

    Brandelik, Andreas

    2009-07-01

    CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analysis mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries in complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.
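
    The core computation that CALCMIN automates, converting oxide weight percentages from an EMP analysis into cations per formula unit on a fixed oxygen basis, can be sketched in a few lines. The following Python sketch is not part of CALCMIN (which is written in Visual Basic); the molar masses and the 4-oxygen olivine basis are standard values, while the input analysis is purely illustrative.

        # Hedged sketch of a structural-formula calculation on a fixed oxygen basis.
        OXIDES = {
            # oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
            "SiO2": (60.08, 1, 2),
            "MgO":  (40.30, 1, 1),
            "FeO":  (71.84, 1, 1),
        }

        def structural_formula(wt_percent, oxygen_basis=4.0):
            """Cations per formula unit for a given oxygen basis (4 for olivine)."""
            mol = {ox: w / OXIDES[ox][0] for ox, w in wt_percent.items()}
            oxy = sum(m * OXIDES[ox][2] for ox, m in mol.items())
            scale = oxygen_basis / oxy              # renormalize to the oxygen basis
            return {ox: m * OXIDES[ox][1] * scale for ox, m in mol.items()}

        # Illustrative forsterite-like analysis (wt%)
        print(structural_formula({"SiO2": 41.0, "MgO": 49.0, "FeO": 9.0}))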

  15. Applications of commercial liquid scintillation counters to radon-222 and radium-226 analyses

    International Nuclear Information System (INIS)

    Gesell, T.F.; Prichard, H.M.; Haygood, J.R.

    1978-01-01

    The ubiquitous commercial liquid scintillation counter offers automatic sample processing, automatic data recording and the prospect of multiple users. With these features in mind we have explored a number of applications of liquid scintillation counters to environmental and health physics problems. One application, the analysis of radon in water, has been described elsewhere and is only briefly reviewed. A method for measuring radon in air, two methods for measuring radium in water, and a technique for leak-testing radium needles have also been investigated. An ordinary glass scintillation vial is readily converted into a miniature scintillation flask by coating the inside surface with a thin layer of ZnS:Ag phosphor. The lower limit of detection is high, about 2 pCi/L for a 1-hour count, but these flasks have proved useful in situations where a large number of samples must be taken in environments with relatively high levels of radon. One technique for the detection of radium in water uses liquid-liquid extraction to concentrate radon into an organic scintillation fluid; the other involves passing the water sample through an ion exchange resin and then sealing the resin and scintillation fluid in a vial. Both techniques offer the prospect of easy and inexpensive analyses with limits of detection at or below 0.5 pCi/L. Radium needles can be leak-tested by placing them in vials containing toluene for a few minutes, adding fluor to the toluene and counting. Preliminary data regarding these several methods are given
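
    Detection limits of the kind quoted above are governed by counting statistics; a common way to estimate a minimum detectable activity (MDA) is Currie's formula. The Python sketch below uses illustrative values for background rate, counting time, counting efficiency and sample volume, not the actual parameters of the methods described.

        import math

        def mda_pci_per_liter(bkg_cpm, count_min, efficiency, volume_l):
            """Currie-style minimum detectable activity.

            MDA = (2.71 + 4.65*sqrt(B)) / (T * eff * 2.22 * V), where B is the
            background counts accumulated in counting time T and 2.22 dpm = 1 pCi.
            """
            b_counts = bkg_cpm * count_min
            ld = 2.71 + 4.65 * math.sqrt(b_counts)     # detection limit in counts
            dpm = ld / (count_min * efficiency)        # disintegrations per minute
            return dpm / (2.22 * volume_l)

        # e.g. 5 cpm background, 100 min count, ~2.5 counts per Rn decay
        # (radon plus short-lived progeny), radon extracted from 1 L of water
        print(round(mda_pci_per_liter(5.0, 100.0, 2.5, 1.0), 2), "pCi/L")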

  16. Optimization of Finite-Differencing Kernels for Numerical Relativity Applications

    Directory of Open Access Journals (Sweden)

    Roberto Alfieri

    2018-05-01

    Full Text Available A simple optimization strategy for the computation of 3D finite-differencing kernels on many-core architectures is proposed. The 3D finite-differencing computation is split direction-by-direction and exploits two levels of parallelism: in-core vectorization and multi-threaded shared-memory parallelization. The main application of this method is to accelerate the high-order stencil computations in numerical relativity codes. Our proposed method provides substantial speedup in computations involving tensor contractions and 3D stencil calculations on different processor microarchitectures, including Intel Knights Landing.
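
    A minimal illustration of the direction-by-direction splitting, here in Python/NumPy rather than the vectorized C of the paper: each pass applies a 1D stencil along one axis, so the innermost sweep stays contiguous in memory for vectorization while an outer axis can be divided among threads. Grid size, spacing and periodic boundaries are illustrative assumptions.

        import numpy as np

        def laplacian_split(u, h):
            """Second-order 3D Laplacian computed in three 1D passes.

            np.roll wraps around, i.e. periodic boundaries are assumed."""
            out = np.zeros_like(u)
            for axis in range(3):                    # x, y, z passes
                plus = np.roll(u, -1, axis=axis)
                minus = np.roll(u, 1, axis=axis)
                out += (plus - 2.0 * u + minus) / h**2
            return out

        u = np.random.rand(64, 64, 64)
        print(laplacian_split(u, 0.1).shape)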

  17. Forensic applications of scanning electron microscopy/energy dispersive X-ray analyser in Hong Kong.

    Science.gov (United States)

    Wong, Y S

    1982-01-01

    Scanning Electron Microscopy - Energy Dispersive X-Ray Analysis (SEM/EDX) has been applied in casework for more than a year in the Forensic Division, Government Laboratory of Hong Kong. The types of samples being analysed are summarised and three cases of scientific interest are described. The first case applies SEM/EDX to characterize microscopic gold particles recovered from clothing of suspects involved in goldsmith robberies. Both elemental and morphological results obtained were used as supporting evidence. The second case describes the three types of beaded ends on fibres found in a single cloth sample. These beaded ends are different in shape and surface features and can be used as an additional parameter in fibre identification. The final case shows the application of vacuum evaporation of graphite on a document sample to reveal the area of paper which has been skillfully mechanically erased. Both the image intensity and the composition of the ink are used to differentiate between original and altered characters on the document.

  18. Application of Fast Neutron Activation Analysis for Analysing the Element Content of Airborne Particulates

    International Nuclear Information System (INIS)

    Elin Nuraini; Ngasifudin; Sunardi; Elisabeth

    2003-01-01

    Fast neutron activation analysis has been applied to determine the element content of airborne particulates. Particulate matter collected in the non-industrial traffic area of Surakarta and the heavily industrial traffic area of Karanganyar was analysed using the fast neutron activation analysis method, an elemental analysis technique whose basic principle is the radioactivity induced in samples by neutron irradiation. Qualitative analysis is based on measuring the specific gamma energies emitted by the radioactive nuclei, and quantitative analysis on measuring the intensity of each gamma peak. The qualitative results identified several elements, i.e. 51V, 200Pb, 27Al and 52Cr. The Pb level was found to be 2.21 ± 0.09 × 10⁻¹ mg/m³ in the non-industrial traffic area of Surakarta and 2.78 ± 0.11 × 10⁻¹ mg/m³ in the industrial traffic area of Karanganyar; both values exceed the threshold value of 6.0 × 10⁻² mg/m³. (author)
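
    For the quantitative step, a common implementation is the relative (comparator) method: with sample and standard irradiated and counted under identical conditions, the element concentration scales with the ratio of the gamma-peak intensities. The sketch below is a generic illustration with made-up numbers, not necessarily the procedure used in this work.

        def comparator_naa(peak_sample, peak_standard, conc_standard,
                           mass_sample, mass_standard):
            """Concentration in the sample from the gamma-peak intensity ratio,
            assuming identical irradiation, decay and counting conditions."""
            return (conc_standard * (peak_sample / peak_standard)
                    * (mass_standard / mass_sample))

        # e.g. a Pb peak 1.5x that of a standard of equal mass
        print(comparator_naa(1500.0, 1000.0, 0.15, 1.0, 1.0))   # -> 0.225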

  19. Thermodynamic, Environmental and Economic Analyses of Solar Ejector Refrigeration System Application for Cold Storage

    Directory of Open Access Journals (Sweden)

    İbrahim ÜÇGÜL

    2009-02-01

    Full Text Available Refrigeration processes are widely applied, especially in cold storage. In such plants, systems based on vapour-compression cooling cycles are the classical approach, and electrical energy is generally used for the compression. Although electricity itself causes no pollution, the fossil fuels widely used to generate it in most of the world affect nature severely; in short, these refrigeration plants pollute indirectly through the source of their electricity. An ejector refrigeration system, by contrast, can drive its compression with solar energy from a thermal source, an important renewable energy source with negligible environmental impact. The thermodynamic, environmental and economic aspects of an ejector refrigeration system working with solar energy were investigated in this study. As a pilot case, the apple cold-storage plants widely used in Isparta, the city that supplies one fifth of Turkey's apple production, were chosen. The environmental and economic advantages of applying a solar ejector refrigeration system to cold storage are demonstrated by the thermodynamic, economic and environmental analyses in this research.

  20. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr. rer. pol.; Schultmann, Frank, Prof. Dr. rer. pol.

    2015-04-01

    … Moreover, some sample data from our own applications for developed and developing countries are shown. The use of the different methodologies for the calculation of indirect losses in the field of forensic disaster analyses is also discussed. Finally, we give an outlook on the further utilization of these models, aiming at the simulation of indirect losses.

  1. Analytical solutions for recession analyses of sloping aquifers - applicability on relict rock glaciers in alpine catchments

    Science.gov (United States)

    Pauritsch, Marcus; Birk, Steffen; Hergarten, Stefan; Kellerer-Pirklbauer, Andreas; Winkler, Gerfried

    2014-05-01

    Rock glaciers as aquifer systems in alpine catchments may strongly influence the hydrological characteristics of these catchments. Thus, they have a high impact on the ecosystem and on potential natural hazards such as debris flows. Therefore, knowledge of the hydrodynamic processes, internal structure and properties of these aquifers is important for resource management and risk assessment. The investigation of such aquifers often turns out to be expensive and technically complicated because of their strongly limited accessibility. Analytical solutions for discharge recession provide a quick and easy way to estimate aquifer parameters. However, due to simplifying assumptions the validity of the interpretation is often questionable. In this study we compared results of an analytical solution of discharge recession with results based on a numerical model, in order to analyse the range of uncertainties and the applicability of the analytical method in alpine catchment areas. The research area is a 0.76 km² catchment in the Seckauer Tauern Range, Austria. The dominant aquifer in this catchment is a rock glacier, namely the Schöneben Rock Glacier. This relict rock glacier (i.e. containing no permafrost at present) covers an area of 0.11 km² and is drained by one spring at the rock glacier front. The rock glacier consists predominantly of gneissic sediments (mainly coarse-grained and blocky at the surface) and extends from 1720 to 1905 m a.s.l. Discharge of the rock glacier spring has been measured automatically since 2002. Electrical conductivity and water temperature have been monitored since 2008. An automatic weather station was installed in 2011 in the central part of the catchment. Additionally, data from geophysical surveys (refraction seismics and ground-penetrating radar) have been used to analyse the base slope and inner structure of the rock glacier. The measured data are incorporated into a numerical model implemented in MODFLOW. The numerical
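
    The simplest of the analytical recession solutions referred to above is the classical exponential (Maillet) model, Q(t) = Q0 · exp(-αt), whose recession coefficient α can be estimated by linear regression on log discharge. The sketch below uses fictitious spring-discharge data; the slope-dependent solutions examined in the study add further parameters.

        import numpy as np

        def fit_recession(t_days, q):
            """Fit Q(t) = Q0 * exp(-alpha * t); alpha (1/d) reflects aquifer storage."""
            slope, intercept = np.polyfit(t_days, np.log(q), 1)
            return -slope, np.exp(intercept)

        t = np.array([0.0, 2.0, 5.0, 10.0, 20.0])      # days since recession start
        q = np.array([12.0, 10.1, 7.9, 5.2, 2.3])      # spring discharge, L/s
        alpha, q0 = fit_recession(t, q)
        print(f"alpha = {alpha:.3f} 1/d, Q0 = {q0:.1f} L/s")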

  2. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    Science.gov (United States)

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation, or for brevity 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized
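
    As a schematic of the extrapolation involved, the sketch below fits a log-Pearson type III distribution to synthetic annual peaks by product moments (plain Bulletin 17B style, without EMA, regional skew weighting or the other estimators studied) and evaluates very-low-AEP quantiles. It assumes SciPy's pearson3 distribution, parameterized by the skew of the log10 flows.

        import numpy as np
        from scipy import stats

        def lp3_quantile(peaks, aep):
            """Quantile at a given AEP from a log-Pearson III fit by moments."""
            x = np.log10(peaks)
            g = stats.skew(x, bias=False)            # station skew of log flows
            return 10 ** stats.pearson3.ppf(1.0 - aep, g,
                                            loc=x.mean(), scale=x.std(ddof=1))

        peaks = np.array([12000., 8500., 15300., 9900., 22000., 7100.,
                          18400., 11000., 13800., 16500.])   # synthetic, cfs
        for aep in (1e-2, 1e-3, 1e-4):
            print(f"AEP {aep:g}: {lp3_quantile(peaks, aep):,.0f} cfs")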

  3. Analysing the impact of multiple stressors in aquatic biomonitoring data: A 'cookbook' with applications in R.

    Science.gov (United States)

    Feld, Christian K; Segurado, Pedro; Gutiérrez-Cánovas, Cayetano

    2016-12-15

    Multiple stressors threaten biodiversity and ecosystem integrity, imposing new challenges to ecosystem management and restoration. Ecosystem managers are required to address and mitigate the impact of multiple stressors, yet the knowledge required to disentangle multiple-stressor effects is still incomplete. Experimental studies have advanced the understanding of single and combined stressor effects, but a robust analytical framework for addressing the impact of multiple stressors based on monitoring data is still lacking. Since 2000, the monitoring of Europe's waters has produced a vast amount of biological and environmental (stressor) data on about 120,000 water bodies. For many reasons, these data are rarely exploited in the multiple-stressor context, probably because of their rather heterogeneous nature: stressors vary and are mixed with broad-scale proxies of environmental stress (e.g. land cover), missing values and zero-inflated data limit the application of statistical methods, and biological indicators are often aggregated (e.g. taxon richness) and do not respond in a stressor-specific way. Here, we present a 'cookbook' for analysing the biological response to multiple stressors using data from biomonitoring schemes. Our 'cookbook' includes guidance for the analytical process and the interpretation of results. The 'cookbook' is accompanied by scripts, which allow the user to run a stepwise analysis of his/her own data in R, an open-source language and environment for statistical computing and graphics. Using simulated and real data, we show that the recommended procedure is capable of identifying stressor hierarchy (importance) and interaction in large datasets. We recommend a minimum of 150 independent observations and a minimum stressor gradient length of 75% (of the most relevant stressor's gradient in nature) to be able to reliably rank the stressors' importance, detect relevant interactions and estimate their standardised effect size. We conclude with

  4. Applicability of entropy, entransy and exergy analyses to the optimization of the Organic Rankine Cycle

    International Nuclear Information System (INIS)

    Zhu, Yadong; Hu, Zhe; Zhou, Yaodong; Jiang, Liang; Yu, Lijun

    2014-01-01

    Graphical abstract: Fig. 3(a). Variations of the evaluation parameters with evaporation temperature in the case of prescribed hot and cold streams for R123. Fig. 3(a) indicates that among the seven parameters, the minimum entropy generation rate, exergy destruction rate, entransy efficiency and revised entropy generation number and the maximum entransy loss rate correspond to the maximum output power. However, the minimum entransy dissipation rate is not associated with the output power variation. This can be explained as follows: the entransy dissipation rate is only one part of the entransy loss rate, besides the entransy variation (work entransy), and does not account for the influence of work output on the change of entransy. - Highlights: • Theories of entropy, exergy and entransy are applied to the optimization of the ORC. • Two commonly utilized working fluids, R123 and N-pentane, are chosen for comparison. • Variable evaporation temperature, hot stream temperature and mass flow rate are considered. • 3-D coordinates are utilized to observe the global variation of parameters. • The concept of entransy loss rate is appropriate for all the cases discussed in this paper. - Abstract: Based on the theories of entropy, entransy and exergy, the concepts of entropy generation rate, revised entropy generation number, exergy destruction rate, entransy loss rate, entransy dissipation rate and entransy efficiency are applied to the optimization of the Organic Rankine Cycle. Cycles operating on R123 and N-pentane have been compared in three common cases: variable evaporation temperature, variable hot stream temperature and variable hot stream mass flow rate. The optimization goal is to produce maximum output power. Some numerical analyses and simulations are presented, and the results show that when both the hot and cold stream conditions are fixed, the entropy principle, the exergy theory, the entransy loss rate and the entransy efficiency are all applicable to the optimization of the

  5. Baseline Analyses of SIG Applications and SIG-Eligible and SIG-Awarded Schools. NCEE 2011-4019

    Science.gov (United States)

    Hurlburt, Steven; Le Floch, Kerstin Carlson; Therriault, Susan Bowles; Cole, Susan

    2011-01-01

    The Study of School Turnaround is an examination of the implementation of School Improvement Grants (SIG) authorized under Title I section 1003(g) of the "Elementary and Secondary Education Act" and supplemented by the "American Recovery and Reinvestment Act of 2009." "Baseline Analyses of SIG Applications and SIG-Eligible…

  6. Color and motion-based particle filter target tracking in a network of overlapping cameras with multi-threading and GPGPU

    Directory of Open Access Journals (Sweden)

    Jorge Francisco Madrigal Díaz

    2013-03-01

    Full Text Available This paper describes an efficient implementation of multiple-target, multiple-view tracking in video-surveillance sequences. It takes advantage of the capabilities of multi-core Central Processing Units (CPUs) and of graphical processing units under the Compute Unified Device Architecture (CUDA) framework. The principle of our algorithm is (1) in each video sequence, to track all persons of interest with independent particle filters, and (2) to fuse the tracking results of all sequences. Particle filters belong to the category of recursive Bayesian filters. They update a Monte Carlo representation of the posterior distribution over the target position and velocity. For this purpose, they combine a probabilistic motion model, i.e. prior knowledge about how targets move (e.g. constant velocity), with a likelihood model associated with the observations on targets. At this first level of single video sequences, the multi-threading library Threading Building Blocks (TBB) has been used to parallelize the processing of the per-target independent particle filters. At the higher level, we rely on General-Purpose Programming on Graphical Processing Units (GPGPU) through CUDA in order to fuse the target-tracking data collected on multiple video sequences, by solving the data association problem. Tracking results are presented on various challenging tracking datasets.
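
    The per-target recursion described above condenses into one predict-weight-resample step. The Python sketch below is sequential and generic (a constant-velocity model with a Gaussian toy likelihood standing in for the color/motion cue); in the paper each tracked person runs such a filter in its own TBB task, with CUDA reserved for the cross-camera fusion.

        import numpy as np

        rng = np.random.default_rng(0)

        def pf_step(particles, weights, likelihood, dt=1.0, noise=2.0):
            """One SIR step; particles is an (N, 4) array of [x, y, vx, vy]."""
            particles[:, :2] += particles[:, 2:] * dt          # predict (motion model)
            particles += rng.normal(0.0, noise, particles.shape)
            weights = weights * likelihood(particles)          # weight by observation
            weights /= weights.sum()
            n = len(weights)                                   # systematic resampling
            idx = np.searchsorted(np.cumsum(weights),
                                  (rng.random() + np.arange(n)) / n)
            return particles[idx], np.full(n, 1.0 / n)

        # toy likelihood: Gaussian distance to a detection at (50, 50)
        like = lambda p: np.exp(-((p[:, 0] - 50)**2 + (p[:, 1] - 50)**2) / 50.0)
        parts = rng.normal(50.0, 10.0, (500, 4))
        parts, w = pf_step(parts, np.full(500, 1 / 500), like)
        print(parts[:, :2].mean(axis=0))                       # posterior mean position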

  7. Practical applications of probabilistic structural reliability analyses to primary pressure systems of nuclear power plants

    International Nuclear Information System (INIS)

    Witt, F.J.

    1980-01-01

    Primary pressure systems of nuclear power plants are built to exacting codes and standards, with provisions for inservice inspection and repair if necessary. Analyses and experiments have demonstrated by deterministic means that very large margins exist against safety-impacting failures under normal operating and upset conditions. Probabilistic structural reliability analyses provide additional support that failures of significance are very, very remote. They may range in degree of sophistication from very simple calculations to very complex computer analyses involving highly developed mathematical techniques. The end result, however, should be consistent with the desired usage. In this paper a probabilistic structural reliability analysis is performed as a supplement to in-depth deterministic evaluations, with the primary objective of demonstrating an acceptably low probability of failure for the conditions considered. (author)
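
    At the very simple end of the spectrum mentioned above sits a stress-strength Monte Carlo estimate: sample load and resistance from assumed distributions and count the fraction of sampled states in which load exceeds resistance. The distributions and parameters below are illustrative only; realistic very-low failure probabilities call for variance-reduction techniques rather than plain sampling.

        import numpy as np

        rng = np.random.default_rng(1)

        def failure_probability(n=1_000_000):
            """Crude Monte Carlo estimate of P(load > resistance)."""
            load = rng.lognormal(np.log(100.0), 0.15, n)        # e.g. stress, MPa
            resistance = rng.lognormal(np.log(180.0), 0.10, n)  # e.g. strength, MPa
            return (load > resistance).mean()

        print(f"P_f ~ {failure_probability():.1e}")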

  8. Development of a system of computer codes for severe accident analyses and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1991-12-15

    The objective of this study is to develop a system of computer codes for postulated severe accident analyses in Nuclear Power Plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, and can extract plant-specific vulnerabilities to severe accidents as well as ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.

  9. Development of a system of computer codes for severe accident analyses and its applications

    International Nuclear Information System (INIS)

    Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan

    1991-12-01

    The objective of this study is to develop a system of computer codes for postulated severe accident analyses in Nuclear Power Plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily, and can extract plant-specific vulnerabilities to severe accidents as well as ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy

  10. Application of insights from the IREP analyses to the IREP procedures guide

    International Nuclear Information System (INIS)

    Carlson, D.D.; Murphy, J.A.; Young, J.

    1982-01-01

    One of the objectives of the Interim Reliability Evaluation Program (IREP) was to prepare a set of procedures, based on experience gained in the study, for use in future IREP-type analyses. The current analyses used a set of procedures and, over the course of the program, a concerted effort was made to develop insights which could improve these procedures. Insights have been gained into the organization and content of the procedures guide, into the performance and management of an IREP analysis, and into the methods to be used in the analysis

  11. Applications of RETRAN-3D for nuclear power plant transient analyses

    International Nuclear Information System (INIS)

    Paulsen, M.P.; Gose, G.C.; McFadden, J.H.; Agee, L.J.

    1996-01-01

    The RETRAN-3D computer program has been developed to analyze reactor events for which nonequilibrium thermodynamics, multidimensional neutron kinetics, or the presence of noncondensable gases are important items for consideration. This paper summarizes the features of RETRAN-3D and the analyses that have been performed to provide the verification and validation of the program

  12. Incorporating uncertainty regarding applicability of evidence from meta-analyses into clinical decision making.

    Science.gov (United States)

    Kriston, Levente; Meister, Ramona

    2014-03-01

    Judging applicability (relevance) of meta-analytical findings to particular clinical decision-making situations remains challenging. We aimed to describe an evidence synthesis method that accounts for possible uncertainty regarding applicability of the evidence. We conceptualized uncertainty regarding applicability of the meta-analytical estimates to a decision-making situation as the result of uncertainty regarding applicability of the findings of the trials that were included in the meta-analysis. This trial-level applicability uncertainty can be directly assessed by the decision maker and allows for the definition of trial inclusion probabilities, which can be used to perform a probabilistic meta-analysis with unequal probability resampling of trials (adaptive meta-analysis). A case study with several fictitious decision-making scenarios was performed to demonstrate the method in practice. We present options to elicit trial inclusion probabilities and perform the calculations. The result of an adaptive meta-analysis is a frequency distribution of the estimated parameters from traditional meta-analysis that provides individually tailored information according to the specific needs and uncertainty of the decision maker. The proposed method offers a direct and formalized combination of research evidence with individual clinical expertise and may aid clinicians in specific decision-making situations.
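
    A minimal sketch of the resampling scheme described above: each replicate includes every trial with its elicited applicability probability and pools the included trials by fixed-effect inverse-variance weighting, yielding a distribution of pooled estimates instead of a single number. The effects, standard errors and inclusion probabilities below are fictitious.

        import numpy as np

        rng = np.random.default_rng(2)

        def adaptive_meta(effects, ses, incl_prob, n_rep=10_000):
            """Distribution of pooled fixed-effect estimates under
            unequal-probability resampling of trials."""
            effects, ses, p = map(np.asarray, (effects, ses, incl_prob))
            w = 1.0 / ses**2                       # inverse-variance weights
            pooled = []
            for _ in range(n_rep):
                m = rng.random(len(effects)) < p   # trial inclusion draw
                if m.any():
                    pooled.append((w[m] * effects[m]).sum() / w[m].sum())
            return np.array(pooled)

        dist = adaptive_meta([0.30, 0.10, 0.45, -0.05],
                             [0.10, 0.08, 0.15, 0.12],
                             [1.0, 0.8, 0.5, 0.3])
        print(np.percentile(dist, [2.5, 50, 97.5]))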

  13. Analysing E-Services and Mobile Applications with Companied Conjoint Analysis and fMRI Technique

    OpenAIRE

    Heinonen, Jarmo

    2015-01-01

    Previous research has shown that neuromarketing and conjoint analysis have been used in many areas of consumer research to provide further understanding of consumer behaviour. Together these two methods may reveal more information about the hidden desires, expectations and restraints of consumers' brains. This paper attempts to examine these two research methods together as a combined analysis. More specifically, this study utilizes fMRI and conjoint analysis as a tool for analysing consum...

  14. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    Science.gov (United States)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the delicate nature of biological micro/nanoparticles, it is necessary to compute the critical manipulation force. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require exact modeling of cantilever stiffness, especially for dagger cantilevers, because previous models are not applicable to this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods, one of which is the PBA method. In another approach, the cantilever is divided into two sections: a triangular head section and two slanted rectangular beams. Deformations along different directions are then computed and used to obtain the stiffness values in different directions. The stiffness formulations of the dagger cantilever are needed for the present sensitivity analyses, so these formulations have been derived first and the sensitivity analyses performed afterwards. In examining the stiffness of the dagger-shaped cantilever, the micro-beam has been divided into triangular and rectangular sections, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever have been obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these types of cantilevers have been carried out. Also, the effects of different cantilevers on the dynamic behavior of nanoparticles have been studied, and the dagger-shaped cantilever has been deemed more suitable for the manipulation of biological particles.
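
    As an illustration of the Sobol approach, the sketch below computes first-order sensitivity indices for a plain rectangular cantilever, whose normal stiffness k = E·w·t³/(4L³) stands in for the dagger-cantilever formulations derived in the paper. It assumes the SALib library's Saltelli sampler; the parameter bounds are loosely silicon-like and purely illustrative.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 4,
            "names": ["E", "w", "t", "L"],     # modulus, width, thickness, length
            "bounds": [[130e9, 180e9],         # Pa
                       [20e-6, 50e-6],         # m
                       [0.5e-6, 3e-6],         # m
                       [100e-6, 300e-6]],      # m
        }

        X = saltelli.sample(problem, 1024)
        Y = X[:, 0] * X[:, 1] * X[:, 2]**3 / (4.0 * X[:, 3]**3)   # k = Ewt^3/(4L^3)
        Si = sobol.analyze(problem, Y)
        print(dict(zip(problem["names"], Si["S1"].round(3))))     # first-order indices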

  15. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    2012-08-31

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  16. Development and application of computer codes for multidimensional thermalhydraulic analyses of nuclear reactor components

    International Nuclear Information System (INIS)

    Carver, M.B.

    1983-01-01

    Components of reactor systems and related equipment are identified in which multidimensional computational thermal hydraulics can be used to advantage to assess and improve design. Models of single- and two-phase flow are reviewed, and the governing equations for multidimensional analysis are discussed. Suitable computational algorithms are introduced, and sample results from the application of particular multidimensional computer codes are given

  17. Attitudinal Analyses of Toleration and Respect and the Problem of Institutional Applicability

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2015-01-01

    of institutional application is that institutions in general and the state in particular arguably cannot have attitudes of the required kind. This problem is distinct from, and broader than, well-known problems about whether political toleration is normatively legitimate. To make sense of political toleration...

  18. Automated analyses of model-driven artifacts : obtaining insights into industrial application of MDE

    NARCIS (Netherlands)

    Mengerink, J.G.M.; Serebrenik, A.; Schiffelers, R.R.H.; van den Brand, M.G.J.

    2017-01-01

    Over the past years, there has been an increase in the application of model-driven engineering in industry. As in traditional software engineering, understanding how technologies are actually used in practice is essential for developing good tooling and decision-making processes.

  19. [Guidelines to good execution of analysis: some applications and developments. Laboratoire d'Analyses de Biologie Médicale Lecoeur].

    Science.gov (United States)

    Lecoeur, Y

    1998-01-01

    The decree concerning the Guidelines for Good Execution of Analyses (GGEA) promulgated on December 4, 1994 entered into application on January 1, 1995. The definition and necessity for the GGEA is discussed in the first part of this article. Actually, the GGEA is a revolutionary change for biology laboratories which must now work within the framework of precise guidelines. This may raise certain problems for private laboratories. The goal of the GGEA is to assure good quality analyses and thus patient care. It is designed as a positive aid for the biologist. Thus after two years of application, it is time to improve the initial text taking into account experience in the field. In the future, the official authorities and leaders in the profession will have to choose between the GGEA and official approval.

  20. Validating carbonation parameters of alkaline solid wastes via integrated thermal analyses: Principles and applications.

    Science.gov (United States)

    Pan, Shu-Yuan; Chang, E-E; Kim, Hyunook; Chen, Yi-Hung; Chiang, Pen-Chi

    2016-04-15

    Accelerated carbonation of alkaline solid wastes is an attractive method for CO2 capture and utilization. However, the evaluation criteria for CaCO3 content in solid wastes and the way thermal analysis profiles are interpreted were found to be quite different among the literature. In this investigation, integrated thermal analyses for determining carbonation parameters in basic oxygen furnace slag (BOFS) were proposed based on thermogravimetric (TG), derivative thermogravimetric (DTG), and differential scanning calorimetry (DSC) analyses. A modified method of TG-DTG interpretation was proposed that considers the consecutive weight loss of the sample between 200 and 900 °C, because the decomposition of various hydrated compounds causes variances in estimates obtained by conventional methods of TG interpretation. Different quantities of reference CaCO3 standards, carbonated BOFS samples and synthetic CaCO3/BOFS mixtures were prepared for evaluating the data quality of the modified TG-DTG interpretation, in terms of precision and accuracy. The quantitative results of the modified TG-DTG method were also validated by DSC analysis. In addition, to confirm the TG-DTG results, evolved gas analysis was performed by mass spectrometry and Fourier transform infrared spectroscopy for detection of the gaseous compounds released during heating. Furthermore, the decomposition kinetics and thermodynamics of CaCO3 in BOFS were evaluated using the Arrhenius and Kissinger equations. The proposed integrated thermal analyses for determining CaCO3 content in alkaline wastes are precise and accurate, enabling effective assessment of the CO2 capture capacity of alkaline wastes for mineral carbonation.
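
    The Kissinger evaluation mentioned above rests on one linear fit: ln(β/Tp²) plotted against 1/Tp has slope -Ea/R, with β the heating rate and Tp the DTG/DSC peak temperature. The sketch below uses invented peak temperatures, not the paper's data.

        import numpy as np

        R = 8.314  # J/(mol K)

        def kissinger_ea(beta, t_peak):
            """Activation energy from ln(beta/Tp^2) vs 1/Tp (slope = -Ea/R)."""
            beta, tp = np.asarray(beta, float), np.asarray(t_peak, float)
            slope, _ = np.polyfit(1.0 / tp, np.log(beta / tp**2), 1)
            return -slope * R                      # J/mol

        beta = [5.0, 10.0, 20.0, 40.0]             # heating rates, K/min
        tp = [1000.0, 1017.0, 1035.0, 1054.0]      # peak temperatures, K
        print(f"Ea ~ {kissinger_ea(beta, tp) / 1e3:.0f} kJ/mol")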

  1. Validating carbonation parameters of alkaline solid wastes via integrated thermal analyses: Principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Shu-Yuan [Graduate Institute of Environmental Engineering, National Taiwan University, Taipei 10673, Taiwan (China); Chang, E.-E. [Department of Biochemistry, Taipei Medical University, Taipei 110, Taiwan (China); Kim, Hyunook [Department of Environmental Engineering, University of Seoul, Seoul 130-743 (Korea, Republic of); Chen, Yi-Hung [Department of Chemical Engineering and Biotechnology, National Taipei University of Technology, Taipei 10608, Taiwan (China); Chiang, Pen-Chi, E-mail: pcchiang@ntu.edu.tw [Graduate Institute of Environmental Engineering, National Taiwan University, Taipei 10673, Taiwan (China)

    2016-04-15

    Highlights: • Key carbonation parameters of wastes are determined by integrated thermal analyses. • A modified TG-DTG interpretation is proposed, and validated by the DSC technique. • The modified TG-DTG interpretation is further verified by DTA, TG-MS and TG-FTIR. • Kinetics and thermodynamics of CaCO3 decomposition in solid wastes are determined. • Implication to maximum carbonation conversion of various solid wastes is described. - Abstract: Accelerated carbonation of alkaline solid wastes is an attractive method for CO2 capture and utilization. However, the evaluation criteria for CaCO3 content in solid wastes and the way thermal analysis profiles are interpreted were found to be quite different among the literature. In this investigation, integrated thermal analyses for determining carbonation parameters in basic oxygen furnace slag (BOFS) were proposed based on thermogravimetric (TG), derivative thermogravimetric (DTG), and differential scanning calorimetry (DSC) analyses. A modified method of TG-DTG interpretation was proposed that considers the consecutive weight loss of the sample between 200 and 900 °C, because the decomposition of various hydrated compounds causes variances in estimates obtained by conventional methods of TG interpretation. Different quantities of reference CaCO3 standards, carbonated BOFS samples and synthetic CaCO3/BOFS mixtures were prepared for evaluating the data quality of the modified TG-DTG interpretation, in terms of precision and accuracy. The quantitative results of the modified TG-DTG method were also validated by DSC analysis. In addition, to confirm the TG-DTG results, evolved gas analysis was performed by mass spectrometry and Fourier transform infrared spectroscopy for detection of the gaseous compounds released during heating. Furthermore, the decomposition kinetics and thermodynamics of CaCO3 in BOFS were evaluated using the Arrhenius and Kissinger equations. The proposed

  2. Validating carbonation parameters of alkaline solid wastes via integrated thermal analyses: Principles and applications

    International Nuclear Information System (INIS)

    Pan, Shu-Yuan; Chang, E.-E.; Kim, Hyunook; Chen, Yi-Hung; Chiang, Pen-Chi

    2016-01-01

    Highlights: • Key carbonation parameters of wastes are determined by integrated thermal analyses. • A modified TG-DTG interpretation is proposed, and validated by the DSC technique. • The modified TG-DTG interpretation is further verified by DTA, TG-MS and TG-FTIR. • Kinetics and thermodynamics of CaCO3 decomposition in solid wastes are determined. • Implication to maximum carbonation conversion of various solid wastes is described. - Abstract: Accelerated carbonation of alkaline solid wastes is an attractive method for CO2 capture and utilization. However, the evaluation criteria for CaCO3 content in solid wastes and the way thermal analysis profiles are interpreted were found to be quite different among the literature. In this investigation, integrated thermal analyses for determining carbonation parameters in basic oxygen furnace slag (BOFS) were proposed based on thermogravimetric (TG), derivative thermogravimetric (DTG), and differential scanning calorimetry (DSC) analyses. A modified method of TG-DTG interpretation was proposed that considers the consecutive weight loss of the sample between 200 and 900 °C, because the decomposition of various hydrated compounds causes variances in estimates obtained by conventional methods of TG interpretation. Different quantities of reference CaCO3 standards, carbonated BOFS samples and synthetic CaCO3/BOFS mixtures were prepared for evaluating the data quality of the modified TG-DTG interpretation, in terms of precision and accuracy. The quantitative results of the modified TG-DTG method were also validated by DSC analysis. In addition, to confirm the TG-DTG results, evolved gas analysis was performed by mass spectrometry and Fourier transform infrared spectroscopy for detection of the gaseous compounds released during heating. Furthermore, the decomposition kinetics and thermodynamics of CaCO3 in BOFS were evaluated using the Arrhenius and Kissinger equations. The proposed integrated thermal analyses for

  3. Application of the portable pavement seismic analyser (PSPA) for pavement analysis

    CSIR Research Space (South Africa)

    Steyn, WJVDM

    2007-07-01

    Full Text Available At the selected locations, the measurements were compared using a one-way ANOVA and a correlation matrix to analyse the repeatability of the PSPA data and to determine whether there was a significant difference between the three observations obtained at a specific location... behaviour from pavement layers, the PSPA measurements were conducted at 90° angles at the same location. An ANOVA test and correlation matrix were set up for each point to evaluate the isotropy of the moduli. Unfortunately, no comparative data (such as FWD...

  4. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of best-estimate 3D neutronic (PANTHER), system thermal-hydraulic (RELAP5), core sub-channel thermal-hydraulic (COBRA-3C), and fuel thermal-mechanical (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order-statistics method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod code benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
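
    For the non-parametric order-statistics route, the required number of code runs follows from Wilks' formula; the sketch below finds the smallest N by direct evaluation of the binomial tail, reproducing the familiar 59 runs for a one-sided 95%/95% first-order statement. This is a generic textbook calculation, not TE's BESUAM implementation.

        import math

        def wilks_runs(coverage=0.95, confidence=0.95, order=1):
            """Smallest N such that the order-th largest of N runs bounds the
            coverage quantile with the stated one-sided confidence."""
            n = order
            while True:
                n += 1
                # probability that fewer than `order` runs exceed the quantile
                tail = sum(math.comb(n, k) * (1 - coverage)**k * coverage**(n - k)
                           for k in range(order))
                if tail <= 1.0 - confidence:
                    return n

        print(wilks_runs())           # 59 for 95%/95%, first order
        print(wilks_runs(order=2))    # 93 when the second-largest run is used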

  5. Application of geostatistical methods to long-term safety analyses for radioactive waste repositories

    International Nuclear Information System (INIS)

    Roehlig, K.J.

    2001-01-01

    Long-term safety analyses are an important part of the design and optimisation process as well as of the licensing procedure for final repositories for radioactive waste in deep geological formations. For selected scenarios describing possible evolutions of the repository system in the post-closure phase, quantitative consequence analyses are performed. Due to the complexity of the phenomena of concern and the large timeframes under consideration, several types of uncertainties have to be taken into account. The modelling work for the far field (geosphere) surrounding or overlying the repository is based on model calculations concerning the groundwater movement and the resulting migration of radionuclides which may be released from the repository. In contrast to engineered systems, the geosphere shows a strong spatial variability of facies, materials and material properties. The paper presented here describes the first steps towards a quantitative approach for an uncertainty assessment taking this variability into account. Due to the availability of a large amount of data and information of several types, the Gorleben site (Germany) has been used for a case study in order to demonstrate the method. (orig.)

  6. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
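
    The ranking step is simple to express: for every citing paper that cites at least one "known" article, increment a score for each of its other references, then screen the candidates above a threshold. The sketch below assumes only a mapping from citing papers to their reference lists (however it was retrieved); no real Web of Science API is implied.

        from collections import Counter

        def rank_by_cocitation(known, citing_index, threshold=2):
            """Rank candidate articles by co-citation frequency with `known`."""
            scores = Counter()
            for refs in citing_index.values():
                hits = sum(1 for k in known if k in refs)
                if hits:                             # paper co-cites a known article
                    for r in refs:
                        if r not in known:
                            scores[r] += hits
            return [(a, s) for a, s in scores.most_common() if s >= threshold]

        citing = {"P1": ["A", "B", "X", "Y"],        # toy citation index
                  "P2": ["A", "X", "Z"],
                  "P3": ["B", "Y"]}
        print(rank_by_cocitation({"A", "B"}, citing))   # -> [('X', 3), ('Y', 3)]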

  7. New airtight transfer box for SEM experiments: Application to lithium and sodium metals observation and analyses.

    Science.gov (United States)

    Stephant, Nicolas; Grissa, Rabeb; Guillou, Fanch; Bretaudeau, Mickaël; Borjon-Piron, Yann; Guillet, Jacques; Moreau, Philippe

    2018-04-18

    The surface of some materials reacts very quickly on contact with air, either because it oxidizes or because it takes up humidity from the air. To allow observation of such pristine surfaces by scanning electron microscopy (SEM), we designed an airtight transfer box that keeps samples under vacuum from the place of manufacture to the SEM chamber. The box is designed to fit all models of SEM, including those equipped with an airlock chamber. The design is deliberately simple so that the box can be manufactured by a standard mechanical workshop. The transfer box can be easily opened by gravity inside the SEM and preserves the best possible vacuum inside until opening. SEM images and energy dispersive spectroscopy (EDX) analyses of metallic lithium and sodium samples are presented before and after exposure to air. X-ray Photoelectron Spectroscopy (XPS) analyses of all samples are also discussed in order to investigate the chemical environments of the detected elements.

  8. ENERGY AND ENTROPY ANALYSES OF AN EXPERIMENTAL TURBOJET ENGINE FOR TARGET DRONE APPLICATION

    Directory of Open Access Journals (Sweden)

    Onder Turan

    2016-12-01

    Full Text Available This study presents energy and entropy analyses of an experimental turbojet engine built in the Anadolu University Faculty of Aeronautics and Astronautics Test-Cell Laboratory. The laws of motion and a Brayton thermodynamic cycle model are used for this purpose. The processes (that is, compression, combustion and expansion) are simulated in P-v, T-s and h-s diagrams. Furthermore, the second law of thermodynamics is applied to the cycle model to perform the entropy analysis. The distribution of wasted and thrust power, the overall (energy-based) first-law efficiency, and the specific fuel consumption and specific thrust of the engine were calculated during the analyses as well. The results of the study also show the change of entropy in the engine components due to irreversibilities and inefficiencies. In conclusion, this study is expected to be useful for future design and research work on similar aircraft turbojets, auxiliary power units and target-drone power systems.
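
    For the first-law side of such an analysis, the air-standard Brayton benchmark is a one-line formula: thermal efficiency depends only on the pressure ratio and the heat-capacity ratio. The sketch below is that textbook ideal, not the measured engine data, which include component losses and the entropy generation discussed above.

        def brayton_efficiency(pressure_ratio, gamma=1.4):
            """Ideal air-standard Brayton cycle thermal efficiency."""
            return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

        for pr in (4, 6, 8):
            print(pr, f"{brayton_efficiency(pr):.3f}")   # e.g. 8 -> 0.448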

  9. 10-channel neutral particle energy analyser apparatus and its application to tokamak plasmas

    International Nuclear Information System (INIS)

    Takeuchi, Hiroshi; Funahashi, Akimasa; Takahashi, Koki; Shirakata, Hirofumi; Yano, Syukuro.

    1976-07-01

    A 10-channel neutral particle energy analyser apparatus for measurement of charge-exchange fast atoms emitted from a hot tokamak plasma has been constructed, to determine the ion temperature of the plasma from fewer discharge shots and to improve the accuracy of measurement. It consists of a 45-degree parallel-plate electrostatic analyser with ten ion detectors (Ceratron multipliers), a charge-stripping cell, a dry vacuum pumping system and pulse-counting circuits for data acquisition. A calibration experiment of the apparatus was made for the particle energy and the energy resolution with electron beams of 100 to 1000 eV. The transmission efficiency of particles in the energy analyser was measured with proton beams of 1, 2 and 3 keV, and the conversion efficiency for H2 gas in the charge-stripping cell was also determined with hydrogen-atom beams of 2, 3 and 4 keV. Ion temperatures of the JFT-2a and JFT-2 devices were measured with this apparatus, in order to check its usefulness and reliability and to investigate the parameter dependence of ion temperatures. It is found that an ion temperature can be measured with sufficient accuracy from six plasma shots (three shots to determine particle signals and three shots to determine background noises). The peak ion temperatures, 80 to 400 eV, are about one half to one third of the central electron temperatures. Dependence of the ion temperatures on plasma current Ip, toroidal magnetic field Bt and average electron density n̄e is investigated for Ip = 15 to 170 kA, Bt = 10 to 18 kG and n̄e = (0.8 to 1.8) × 10¹³ cm⁻³ on the JFT-2a and JFT-2 devices. It is shown that the ion temperatures are in good agreement with the scaling law by Artsimovich, Ti ∝ (Ip·Bt·n̄e·R²)^(1/3), with R the major radius of the tokamak device. (J.P.N.)

  10. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performance of nuclear power plants, and have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time-series social survey data proposed by one of the authors. The proposed method was shown to separate the above three effects more properly than traditional methods such as taking simple means by time domain. Analyses of US nuclear plant cost and performance data using the proposed method suggest that many US plants spent relatively long periods and considerable capital on modification at ages of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O&M and additional capital costs. (author)

  11. Application of the X-ray analyses on bioaccumulation and biomineralization for the specific elements

    International Nuclear Information System (INIS)

    Numako, Chiya

    2003-01-01

    Specific accumulation of elements by biological activity has a large influence on material recycling on the earth. Non-destructive characterization by X-ray spectroscopy, XRF and/or XAFS measurements with synchrotron radiation, has been applied to solid-state specimens that accumulate heavy metals at high concentration, in order to elucidate the mechanism or purpose of the phenomenon for each marine creature. It is known that a few bivalves, including giant clams, store kidney granules that specifically accumulate Mn and Zn. Two-dimensional elemental analyses by EPMA indicate zoning structures in the concentrations of Mn, Zn and Mg, with a negative correlation between Mg and (Mn + Zn) inside a granule. Fluorescence XAFS analyses of Mn and Zn in the kidney granules, performed at BL-7C, PF, KEK, show that Mn and Zn exist as divalent cations in an amorphous calcium phosphate matrix. It is considered that Mn and Zn are detoxified by substitution for Mg in amorphous calcium phosphate, which has a flexible structure. The teeth of chitons, which accumulate magnetite as the major substance of the teeth, have also been studied by SR-XRF and micro-XAFS techniques with a microbeam focused to 6 μm φ from an undulator at SPring-8, BL-39XU. In the early stage of tooth formation, Fe accumulates abruptly in the feeding plane of a tooth, while the Ca concentration remains uniform at the back of the tooth. Micro-XAFS measurements of Fe in the teeth indicate that the major material of the feeding plane is magnetite, while trivalent iron compounds appear along the junction zone of the tooth and the base membrane. There is thus a complex scheme in chiton tooth formation that creates several kinds of iron species and allocates them to different positions in a tooth. (author)

  12. Application of the X-ray analyses on bioaccumulation and biomineralization for the specific elements

    Energy Technology Data Exchange (ETDEWEB)

    Numako, Chiya [Tokushima Univ., Faculty of Integrated Arts and Sciences, Tokushima (Japan)

    2003-03-01

    Specific accumulation of elements through biological activity has a large influence on material recycling on the earth. Non-destructive characterization by X-ray spectroscopy, i.e. XRF and/or XAFS measurements with synchrotron radiation, has been applied to solid-state specimens that accumulate heavy metals at high concentration, in order to elucidate the mechanism or purpose of this phenomenon in each marine creature. It is known that a few bivalves, including giant clams, store kidney granules that specifically accumulate Mn and Zn. The results of two-dimensional elemental analyses by EPMA indicate zoning structures in the concentrations of Mn, Zn and Mg, with a negative correlation between Mg and (Mn + Zn) inside a granule. Fluorescence XAFS analyses of Mn and Zn in the kidney granules, performed at BL-7C, PF, KEK, show that Mn and Zn exist as divalent cations in an amorphous calcium phosphate matrix. It is considered that Mn and Zn are detoxified by substitution for Mg in the amorphous calcium phosphate, which has a flexible structure. The teeth of chiton, which accumulate magnetite as a major tooth substance, have also been studied by SR-XRF and micro-XAFS techniques with a microbeam focused to 6 μm φ from an undulator at SPring-8, BL-39XU. In the early stage of tooth formation there is an abrupt accumulation of Fe in the feeding plane of a tooth, while the Ca concentration remains uniform at the back of the tooth. Micro-XAFS measurements of Fe in the teeth indicate that the major material of the feeding plane is magnetite, while trivalent iron compounds appear along the junction zone between the tooth and the base membrane. There is a complex scheme in the tooth formation of chiton, which is able to create several kinds of iron species and allocate them to different positions in a tooth. (author)

  13. Exergy and exergoeconomic analyses of a supercritical CO₂ cycle for a cogeneration application

    International Nuclear Information System (INIS)

    Wang, Xurong; Yang, Yi; Zheng, Ya; Dai, Yiping

    2017-01-01

    Detailed exergy and exergoeconomic analyses are performed for a combined cogeneration cycle in which the waste heat from a recompression supercritical CO₂ Brayton cycle (sCO₂) is recovered by a transcritical CO₂ cycle (tCO₂) for generating electricity. Thermodynamic and exergoeconomic models are developed on the basis of mass and energy conservation, exergy balances and exergy cost equations. Parametric investigations are then conducted to evaluate the influence of key decision variables on the sCO₂/tCO₂ performance. Finally, the combined cycle is optimized from the viewpoint of exergoeconomics. It is found that combining the sCO₂ with a tCO₂ cycle not only enhances the energy and exergy efficiencies of the sCO₂, but also improves the cycle's exergoeconomic performance. The results show that the largest exergy destruction rate takes place in the reactor, and the components of the tCO₂ bottoming cycle have less exergy destruction. When the optimization is conducted on an exergoeconomic basis, the overall exergoeconomic factor, the total cost rate and the exergy destruction cost rate are 53.52%, 11243.15 $/h and 5225.17 $/h, respectively. The optimization study reveals that an increase in reactor outlet temperature leads to a decrease in the total cost rate and the total exergy destruction cost rate of the system. - Highlights: • Exergy and exergoeconomic analyses of a combined sCO₂/tCO₂ cycle were performed. • Exergoeconomic optimization of the sCO₂/tCO₂ cycle was presented. • The reactor had the highest exergy loss among sCO₂/tCO₂ cycle components. • The overall exergoeconomic factor was up to 53.5% for the optimum case.
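
    For reference, the component-level quantities quoted above are conventionally defined as follows (standard Bejan-Tsatsaronis notation, not taken from the paper): the exergy destruction rate of component k, its monetary cost rate, and the exergoeconomic factor are

        \dot{E}_{D,k} = \dot{E}_{F,k} - \dot{E}_{P,k}, \qquad
        \dot{C}_{D,k} = c_{F,k}\,\dot{E}_{D,k}, \qquad
        f_k = \frac{\dot{Z}_k}{\dot{Z}_k + \dot{C}_{D,k}},

    where Ė_F and Ė_P are the fuel and product exergy rates, c_F is the unit cost of fuel exergy, and Ż_k is the capital plus O&M cost rate. A high overall factor (53.52% above) indicates that investment cost, rather than the cost of exergy destruction, dominates the cost formation of the system.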

  14. Applications for skimmer coupling systems, combining simultaneous thermal analysers with mass spectrometers

    International Nuclear Information System (INIS)

    Kaisersberger, E.; Post, E.

    1998-01-01

    The sensitivity of the skimmer coupling for combining the simultaneous thermal analysis (STA) method TG-DTA/DSC and mass spectrometry (MS) is further improved by a factor of three using an automatic vacuum control device. Especially high mass numbers are detected without the common condensation problems encountered in capillary couplings, as is shown by application of the skimmer coupling to coal, CuGaSe₂ semiconductor material and polystyrene. The basic idea of the novel pulse thermal analysis (PTA) technique is demonstrated. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)

  15. Generation of anti-idiotype antibodies for application in clinical immunotherapy laboratory analyses.

    Science.gov (United States)

    Liu, Zhanqi; Panousis, Con; Smyth, Fiona E; Murphy, Roger; Wirth, Veronika; Cartwright, Glenn; Johns, Terrance G; Scott, Andrew M

    2003-08-01

    The chimeric monoclonal antibody ch806 specifically targets the tumor-associated mutant epidermal growth factor receptor (de2-7 EGFR or EGFRvIII) and is currently under investigation for its potential use in cancer therapy. The humanised monoclonal antibody hu3S193 specifically targets the Lewis Y epithelial antigen and is currently in Phase I clinical trials in patients with advanced breast, colon, and ovarian carcinomas. To assist the clinical evaluation of ch806 and hu3S193, laboratory assays are required to monitor their serum pharmacokinetics and to quantitate any immune responses to the antibodies. Mice immunized with ch806 or hu3S193 were used to generate hybridomas producing antibodies with specific binding to ch806 or hu3S193 that were competitive for antigen binding. These anti-idiotype antibodies (designated Ludwig Melbourne Hybridomas, LMH) were investigated as reagents suitable for use as positive controls for HAHA or HACA analyses and for measuring hu3S193 or ch806 in human serum. Anti-idiotypes with the ability to concurrently bind two target antibody molecules were identified, which enabled the development of highly reproducible, sensitive and specific ELISA assays for determining serum concentrations of hu3S193 and ch806, with a 3 ng/mL limit of quantitation, using LMH-3 and LMH-12, respectively. BIAcore analyses determined a high apparent binding affinity for both idiotypes: LMH-3 binding immobilised hu3S193, Ka = 4.76 × 10⁸ M⁻¹; LMH-12 binding immobilised ch806, Ka = 1.74 × 10⁹ M⁻¹. Establishment of HAHA or HACA analysis of serum samples using BIAcore was possible using LMH-3 and LMH-12 as positive controls for quantitation of immune responses to hu3S193 or ch806 in patient sera. These anti-idiotypes could also be used to study the penetrance and binding of ch806 or hu3S193 to tumor cells through immunohistochemical analysis of tumor biopsies. The generation of anti-idiotype antibodies capable of concurrently binding a target antibody on each variable

  16. Forensic application of phylogenetic analyses - Exploration of suspected HIV-1 transmission case.

    Science.gov (United States)

    Siljic, Marina; Salemovic, Dubravka; Cirkovic, Valentina; Pesic-Pavlovic, Ivana; Ranin, Jovan; Todorovic, Marija; Nikolic, Slobodan; Jevtovic, Djordje; Stanojevic, Maja

    2017-03-01

    Transmission of human immunodeficiency virus (HIV) between individuals may have important legal implications and may therefore come to require forensic investigation based upon phylogenetic analysis. In criminal trials, the results of phylogenetic analyses have been used as evidence of responsibility for HIV transmission. In Serbia, as in many countries worldwide, exposure and deliberate transmission of HIV are criminalized. We present the results of applying state-of-the-art phylogenetic analyses, based on pol and env genetic sequences, to the exploration of suspected HIV transmission among three subjects: a man and two women, with the a priori assumption that transmission proceeded from one of the women to the man. The phylogenetic methods included the relevant neighbor-joining (NJ), maximum likelihood (ML) and Bayesian methods of phylogenetic tree reconstruction and hypothesis testing, which have been shown to be the most sensitive for reconstructing epidemiological links among sexually infected individuals. An end-point limiting-dilution PCR (EPLD-PCR) assay, generating a minimum of 10 sequences per genetic region per subject, was performed to assess the HIV quasispecies distribution and to explore the direction of HIV transmission between the three subjects. Phylogenetic analysis revealed that the viral sequences from the three subjects were more closely related to each other than to other strains circulating in the same area with a similar epidemiological profile, forming a strongly supported transmission chain, which could be in favour of the a priori hypothesis that one of the women infected the man. However, in the EPLD-based phylogenetic trees for both the pol and env genetic regions, the viral sequences of one subject (the man) were paraphyletic to those of the two other subjects (the women), implying a direction of transmission opposite to the a priori assumption. The dated tree in our analysis confirmed the clustering pattern of the query sequences. Still, in the context of unsampled sequences and
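
    As an illustration of the distance-based step of such analyses, the sketch below builds a neighbor-joining tree from a multiple sequence alignment with Biopython; the input file name is a hypothetical stand-in, and real casework would add ML and Bayesian reconstruction with bootstrap or posterior support, as the paper describes.

        # Minimal neighbor-joining sketch with Biopython; "pol_alignment.fasta"
        # is a hypothetical, pre-aligned set of pol sequences.
        from Bio import AlignIO
        from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

        aln = AlignIO.read("pol_alignment.fasta", "fasta")      # aligned sequences
        dm = DistanceCalculator("identity").get_distance(aln)   # pairwise distance matrix
        tree = DistanceTreeConstructor().nj(dm)                 # neighbor-joining topology

        # Terminal names come from the FASTA headers; clustering of the three
        # subjects' quasispecies sequences is what the forensic question turns on.
        for clade in tree.get_terminals():
            print(clade.name)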

  17. Application of the CALUX bioassay for epidemiological study. Analyses of Belgian human plasma

    Energy Technology Data Exchange (ETDEWEB)

    Wouwe, N. van; Debacker, N.; Sasse, A. [Scientific Institute of Public Health, Brussels (BE)] (and others)

    2004-09-15

    The CALUX bioassay is a promising screening method for the detection of dioxin-like compounds. Its good sensitivity, low number of false negative results and good correlation with GC-HRMS TEQ values in feed and food analyses rank it among the first-line assessment methods. The small amount of sample needed, in addition to these advantages, suggests that the CALUX bioassay could be a good screening method for epidemiological studies. The Belgian epidemiological study concerning the possible effect of the dioxin incident on the body burden of the Belgian population was an opportunity to test this method against the gold standard, GC-HRMS. The first part of this abstract presents epidemiological parameters (sensitivity, specificity, …) of the CALUX bioassay, using CALUX TEQ values as estimators of the TEQ values of the 17 PCDD/Fs. The second part examines the epidemiological determinants observed for CALUX and GC-HRMS TEQ values.

  18. Applicability study of deuterium excess in bottled water life cycle analyses

    Directory of Open Access Journals (Sweden)

    Mihael Brenčič

    2014-12-01

    The paper explores the possible use of d‑excess in the investigation of bottled water. Based on the data set from Brenčič and Vreča's paper (2006, Identification of sources and production processes of bottled waters by stable hydrogen and oxygen isotope ratios), d‑excess values were statistically analysed and compared among different bottled water groups and different bottlers. The bottled water life cycle in relation to d‑excess values was also identified theoretically. Descriptive statistics and one-way ANOVA showed no significant differences among the groups. Differences were detected in the shape of the empirical distributions: groups of still and flavoured waters have similar shapes, but sparkling waters differ from the others. Two distinctive groups of bottlers could be discerned. The first group is represented by bottlers with a wide range of d‑excess (from 7.7 ‰ to 18.6 ‰, with an average of 12.0 ‰), exploiting waters originating from aquifers rich in highly mineralised groundwater and relatively high concentrations of CO₂ gas. The second group is represented by bottlers using groundwater from relatively shallow aquifers; their d‑excess values have characteristics similar to the local precipitation (from 7.8 ‰ to 14.3 ‰, with an average of 10.3 ‰). More frequent sampling and better knowledge of the production phases are needed to improve the use of the isotope fingerprint for the authentication of bottled waters.
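
    Deuterium excess is a derived quantity, defined (after Dansgaard) from the two measured stable isotope ratios; a minimal helper for computing it from δD and δ¹⁸O values (both in ‰) might look as follows.

        def d_excess(delta_d: float, delta_18o: float) -> float:
            """Deuterium excess d = δD − 8·δ¹⁸O, in per mille (Dansgaard, 1964)."""
            return delta_d - 8.0 * delta_18o

        # Example: water with δD = −60 ‰ and δ¹⁸O = −9.0 ‰ has a d-excess of 12.0 ‰,
        # matching the average reported above for the first bottler group.
        print(d_excess(-60.0, -9.0))  # 12.0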

  19. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    In the current study, Penang Island, one of several mountainous areas in Malaysia that is often subject to landslide hazard, was chosen for further investigation. A multi-criteria evaluation with a weighted spatial probability approach and a model builder was applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted spatial probability model based on their contribution to the landslide hazard. The results showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability of landslide occurrence; the total areas were 21.393 km² (11.84%) and 58.690 km² (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations, and showed a strong correlation with the locations of past landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time- and cost-effective and can be used as a reference by geological and geotechnical engineers.
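
    The core of such a weighted spatial probability model is a weighted overlay of ranked thematic rasters. The sketch below shows the idea with NumPy; the layer names, ranks and weights are hypothetical stand-ins, not values from the study.

        import numpy as np

        # Hypothetical ranked thematic layers on a common grid (ranks 1..5,
        # higher = more landslide-prone) paired with their relative weights.
        layers = {
            "elevation": (np.array([[1, 3], [5, 2]]), 0.30),
            "slope":     (np.array([[2, 4], [5, 1]]), 0.45),
            "aspect":    (np.array([[1, 2], [4, 3]]), 0.25),
        }

        # Weighted sum of ranks, normalised by the total weight, gives a relative
        # landslide-probability surface that can then be classified into
        # low / moderate / high / very high zones.
        total_w = sum(w for _, w in layers.values())
        hazard = sum(rank * w for rank, w in layers.values()) / total_w
        print(hazard)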

  20. Finite element analyses of continuous filament ties for masonry applications: final report for the Arquin Corporation.

    Energy Technology Data Exchange (ETDEWEB)

    Quinones, Armando, Sr. (Arquin Corporation, La Luz, NM); Bibeau, Tiffany A.; Ho, Clifford Kuofei

    2008-08-01

    Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).

  1. Finite element analyses of continuous filament ties for masonry applications: final report for the Arquin Corporation

    Energy Technology Data Exchange (ETDEWEB)

    Quinones, Sr., Armando [Arquin Corporation, La Luz, NM (United States); Bibeau, Tiffany A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ho, Clifford Kuofei

    2006-06-01

    Finite-element analyses were performed to simulate the response of a hypothetical masonry shear wall with and without continuous filament ties to various lateral loads. The loads represented three different scenarios: (1) 100 mph wind, (2) explosive attack, and (3) an earthquake. In addition, a static loading analysis and cost comparison were performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Results of the shear-wall loading simulations revealed that simulated walls with the continuous filament ties yielded factors of safety that were at least ten times greater than those without the ties. In the explosive attack simulation (100 psi), the simulated wall without the ties failed (minimum factor of safety was less than one), but the simulated wall with the ties yielded a minimum factor of safety greater than one. Simulations of the walls subject to lateral loads caused by 100 mph winds (0.2 psi) and seismic events with a peak ground acceleration of 1 g (0.66 psi) yielded no failures with or without the ties. Simulations of wall displacement during the seismic scenarios showed that the wall with the ties resulted in a maximum displacement that was 20% less than the wall without the ties.

  2. Application of the SPH method in nodal diffusion analyses of SFR cores

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety; Mikityuk, K. [Paul Scherrer Institut, Villigen (Switzerland)

    2016-07-01

    The current study investigated the potential of the SPH method, applied to correct the few-group XS produced by Serpent, to further improve the accuracy of the nodal diffusion solutions. The procedure for the generation of SPH-corrected few-group XS is presented in the paper. The performance of the SPH method was tested on a large oxide SFR core from the OECD/NEA SFR benchmark. The reference SFR core was modeled with the DYN3D and PARCS nodal diffusion codes using the SPH-corrected few-group XS generated by Serpent. The nodal diffusion results obtained with and without SPH correction were compared to the reference full-core Serpent MC solution. It was demonstrated that the application of the SPH method improves the accuracy of the nodal diffusion solutions, particularly for the rodded core state.
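
    For context, the SPH (superhomogenization) procedure referred to above rescales the homogenized few-group cross sections so that the low-order nodal diffusion solution reproduces the reference reaction rates. In the usual notation (a standard textbook form, not taken from the paper), the factor for group g in a node is

        \mu_g = \frac{\bar{\phi}_g^{\,\mathrm{ref}}}{\bar{\phi}_g^{\,\mathrm{hom}}}, \qquad
        \tilde{\Sigma}_{x,g} = \mu_g\,\Sigma_{x,g},

    where the homogeneous flux is recomputed and the factors updated iteratively until the corrected cross sections conserve the reference reaction rates.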

  3. Medicine and ionizing rays: a help sheet in analysing risks in radiotherapy and applicable texts

    International Nuclear Information System (INIS)

    Gauron, C.

    2007-01-01

    This document proposes a synthesis of useful knowledge for radioprotection in the case of external radiotherapy. In the first part, several aspects are considered: the concerned personnel, the course of treatment procedures, the hazards, the identification of the risk associated with ionizing radiation, the risk assessment and the exposure level determination, the strategy to control the risks (reduction of risks, technical measures concerning the installation or the personnel, teaching and information, prevention and medical monitoring), and risk control assessment. A second part indicates the various applicable legal and regulatory texts (European directives, law and decrees published by public authorities, and texts concerning the general principles in radioprotection, worker protection, specialists, medical devices, nuclear medicine and radiology)

  4. Application of chaos analyses methods on East Anatolian Fault Zone fractures

    Energy Technology Data Exchange (ETDEWEB)

    Kamışlıoğlu, Miraç, E-mail: m.kamislioglu@gmail.com; Külahcı, Fatih, E-mail: fatihkulahci@firat.edu.tr [Nuclear Physics Division, Department of Physics, Faculty of Science, Fırat University, Elazig, TR-23119 (Turkey)

    2016-06-08

    Nonlinear time series analysis techniques have broad application in the geoscience and geophysics fields. Modern nonlinear methods have provided considerable evidence for explaining seismicity phenomena. In this study, nonlinear time series analysis, fractal analysis and spectral analysis were carried out to investigate the chaotic behavior of the concentration of radon gas (²²²Rn) released during seismic events. The nonlinear time series analysis methods (Lyapunov exponent, Hurst phenomenon, correlation dimension and false nearest neighbors) were applied to the East Anatolian Fault Zone (EAFZ), Turkey, and its surroundings, with about 35,136 radon measurements for each region. The paper investigates the behavior of ²²²Rn, which is used in earthquake prediction studies.
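
    As one concrete example of the methods listed above, the Hurst exponent can be estimated by rescaled-range (R/S) analysis: the series is split into windows of length n, R/S is averaged per window size, and H is the slope of log(R/S) versus log(n). A minimal sketch (run here on synthetic data, not the radon measurements) is given below.

        import numpy as np

        def hurst_rs(x, window_sizes):
            """Estimate the Hurst exponent of a 1-D series by rescaled-range analysis."""
            rs_means = []
            for n in window_sizes:
                rs_vals = []
                for start in range(0, len(x) - n + 1, n):
                    seg = x[start:start + n]
                    dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
                    r = dev.max() - dev.min()           # range of the cumulative deviation
                    s = seg.std(ddof=1)                 # segment standard deviation
                    if s > 0:
                        rs_vals.append(r / s)
                rs_means.append(np.mean(rs_vals))
            # H is the slope of log(R/S) against log(n); H ≈ 0.5 indicates a random
            # walk, H > 0.5 persistent (long-memory) behavior, H < 0.5 anti-persistence.
            h, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
            return h

        rng = np.random.default_rng(0)
        print(hurst_rs(rng.standard_normal(4096), [16, 32, 64, 128, 256]))  # close to 0.5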

  5. Production, analyse et applications des huiles végétales en Afrique

    Directory of Open Access Journals (Sweden)

    Kapseu César

    2009-07-01

    This paper analyses the evolution of the different needs related to the food and non-food uses of conventional oil crops (palm, cotton, groundnut), of emerging and marketable oil crops (shea), as well as of domestic oil crops (Canarium, safou). Africa accounts for about 6.5% of the total world production of palm oil. In the last few years, palm oil has witnessed an evolution in Africa through the diversification of its uses, and much has also been done on the use of palm oil as bio-fuel. The production of cottonseed oil has equally witnessed changes, the major innovations being the move from press extraction followed by solvent extraction to direct extraction with pure solvents followed by neutralisation in an appropriate medium. West Africa produces about 50% of the total groundnut production in Africa. Small-scale processing of groundnuts is more popular than industrial processing, which is explained by the diverse uses of the different groundnut by-products. The most remarkable innovations concern emerging oil crops such as shea: the incorporation of 5% shea butter in chocolate formulations has given an added value to shea, and techniques have been put in place for improving the production methods and the quality of the butter. The evolution in this sector is best illustrated by the introduction of an indirect solar dryer and a vertical manual screw press. The big handicap that slows down this evolution remains the transfer of technology to rural areas. Problems with the preservation of Canarium were resolved by preserving the fruit in appropriate media and conditions. Dried safou fruits can now be found in the market, which illustrates the appropriation of the technology by small and medium-sized enterprises.

  6. Performance analyses of Z-source and quasi Z-source inverter for photovoltaic applications

    Science.gov (United States)

    Himabind, S.; Priya, T. Hari; Manjeera, Ch.

    2018-04-01

    This paper presents a comparative analysis of the Z-source and quasi Z-source inverters for renewable energy applications. Because renewable energy sources depend on external weather conditions, their output voltage and current change accordingly, which affects the performance of the traditional voltage source and current source inverters connected across them. To overcome the drawbacks of the VSI and CSI, the Z-source inverter (ZSI) and quasi Z-source inverter (QZSI) are used; they can perform multiple conversions (ac-to-dc, dc-to-ac, ac-to-ac, dc-to-dc) and can carry out both buck and boost operations by utilizing the shoot-through zero state. The QZSI is derived from the ZSI topology with a slight change in the impedance network, and it overcomes the drawbacks of the ZSI, drawing a constant current from the source. A comparative analysis between the Z-source and quasi Z-source inverters is performed through simulation in the MATLAB/Simulink environment.
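
    The buck-boost capability mentioned above comes from the shoot-through zero state. For reference (a standard textbook result for these impedance-network inverters, not derived in the abstract), with shoot-through duty ratio D₀ the peak dc-link boost factor is

        B = \frac{1}{1 - 2D_0}, \qquad 0 \le D_0 < \tfrac{1}{2},

    and the peak ac output phase voltage becomes M·B·V_dc/2 for modulation index M, which is why these inverters can step the voltage both down and up in a single stage.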

  7. Wind Energy Applications for Municipal Water Services: Opportunities, Situation Analyses, and Case Studies; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Flowers, L.; Miner-Nordstrom, L.

    2006-01-01

    As communities grow, greater demands are placed on water supplies, wastewater services, and the electricity needed to power the growing water services infrastructure. Water is also a critical resource for thermoelectric power plants. Future population growth in the United States is therefore expected to heighten competition for water resources. Many parts of the United States with increasing water stresses also have significant wind energy resources. Wind power is the fastest-growing electric generation source in the United States and is decreasing in cost to be competitive with thermoelectric generation. Wind energy can offer communities in water-stressed areas the option of economically meeting increasing energy needs without increasing demands on valuable water resources. Wind energy can also provide targeted energy production to serve critical local water-system needs. The research presented in this report describes a systematic assessment of the potential for wind power to support water utility operation, with the objective to identify promising technical applications and water utility case study opportunities. The first section describes the current situation that municipal providers face with respect to energy and water. The second section describes the progress that wind technologies have made in recent years to become a cost-effective electricity source. The third section describes the analysis employed to assess potential for wind power in support of water service providers, as well as two case studies. The report concludes with results and recommendations.

  8. Wind energy applications for municipal water services: Opportunities, situational analyses, and case studies

    Energy Technology Data Exchange (ETDEWEB)

    Flowers, L. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Miner-Nordstrom, L. [U.S. Dept. of Energy, Washington, D.C. (United States)

    2006-01-01

    As communities grow, greater demands are placed on water supplies, wastewater services, and the electricity needed to power the growing water services infrastructure. Water is also a critical resource for thermoelectric power plants. Future population growth in the United States is therefore expected to heighten competition for water resources. Especially in arid U.S. regions, communities may soon face hard choices with respect to water and electric power. Many parts of the United States with increasing water stresses also have significant wind energy resources. Wind power is the fastest-growing electric generation source in the United States and is decreasing in cost to be competitive with thermoelectric generation. Wind energy can potentially offer communities in water-stressed areas the option of economically meeting increasing energy needs without increasing demands on valuable water resources. Wind energy can also provide targeted energy production to serve critical local water-system needs. The U.S. Department of Energy (DOE) Wind Energy Technologies Program has been exploring the potential for wind power to meet growing challenges for water supply and treatment. The DOE is currently characterizing the U.S. regions that are most likely to benefit from wind-water applications and is also exploring the associated technical and policy issues associated with bringing wind energy to bear on water resource challenges.

  9. Microbial soil community analyses for forensic science: Application to a blind test.

    Science.gov (United States)

    Demanèche, Sandrine; Schauser, Leif; Dawson, Lorna; Franqueville, Laure; Simonet, Pascal

    2017-01-01

    Soil complexity, heterogeneity and transferability make soil valuable in forensic investigations, helping to obtain clues as to the origin of an unknown sample, or to compare samples from a suspect or object with samples collected at a crime scene. In a few countries, soil analysis is used in matters ranging from site verification to estimates of time since death. However, to date, the application of soil information in criminal investigations has been limited. In particular, comparing the bacterial communities in soil samples could be a useful tool for forensic science. To evaluate the relevance of this approach, a blind test was performed to determine the origin of two questioned samples (one from the mock crime scene and the other from a 50:50 mixture of the crime scene and the alibi site) compared to three control samples (soil samples from the crime scene, from a context site 25 m away from the crime scene, and from the alibi site, which was the suspect's home). Two biological methods were used, Ribosomal Intergenic Spacer Analysis (RISA) and 16S rRNA gene sequencing with Illumina MiSeq, to evaluate the discriminating power of soil bacterial communities. Both techniques discriminated well between soils from a single source, but a combination of both techniques was necessary to show that the origin was a mixture of soils. This study illustrates the potential of applying microbial ecology methodologies to soil as an evaluative forensic tool. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Applicability of the Tanaka-Johnston and Moyers mixed dentition analyses in Northeast Han Chinese.

    Science.gov (United States)

    Sherpa, Jangbu; Sah, Gopal; Rong, Zeng; Wu, Lipeng

    2015-06-01

    To assess the applicability of the Tanaka-Johnston and Moyers prediction methods in a Han ethnic group from Northeast China and to develop prediction equations for this same population. Cross-sectional study. Department of Orthodontics, School of Stomatology, Jiamusi University, Heilongjiang, China. A total of 130 subjects (65 male and 65 female) aged 16-21 years from a Han ethnic group of Northeast China were recruited from dental students and patients seeking orthodontic treatment. Ethnicity was verified by questionnaire. Mesio-distal tooth widths were measured using digital Vernier calipers. Predicted values obtained from the Tanaka-Johnston and Moyers methods for both arches were compared with the actual measured widths. Based on regression analysis, prediction equations were developed. The Tanaka-Johnston equations were not precise, except for the upper arch in males. However, the Moyers 85th percentile in the upper arch and 75th percentile in the lower arch predicted the sum precisely in males. For females, the Moyers 75th percentile predicted the sum precisely for the upper arch, but none of the Moyers percentiles was predictive in the lower arch. Neither the Tanaka-Johnston nor the Moyers method can be applied universally without question. Hence, it may be safer to develop regression equations for specific populations. Validating studies must be conducted to confirm the precision of these newly developed regression equations.
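
    For reference, the Tanaka-Johnston method being tested predicts the combined mesio-distal width of the unerupted permanent canine and premolars in one quadrant from the four mandibular incisors; a minimal helper implementing the standard equations (the 11.0 mm and 10.5 mm constants) is sketched below.

        def tanaka_johnston(sum_mand_incisors_mm: float, arch: str) -> float:
            """Predicted canine + premolar width (mm) for one quadrant.

            Standard Tanaka-Johnston rule: half the summed mesio-distal width of
            the four mandibular incisors, plus 11.0 mm for the maxillary arch or
            10.5 mm for the mandibular arch.
            """
            constant = 11.0 if arch == "maxillary" else 10.5
            return sum_mand_incisors_mm / 2.0 + constant

        print(tanaka_johnston(23.0, "maxillary"))   # 22.5 mm
        print(tanaka_johnston(23.0, "mandibular"))  # 22.0 mm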

  11. Generation and analyses of human synthetic antibody libraries and their application for protein microarrays.

    Science.gov (United States)

    Säll, Anna; Walle, Maria; Wingren, Christer; Müller, Susanne; Nyman, Tomas; Vala, Andrea; Ohlin, Mats; Borrebaeck, Carl A K; Persson, Helena

    2016-10-01

    Antibody-based proteomics offers distinct advantages in the analysis of complex samples for the discovery and validation of biomarkers associated with disease. However, its large-scale implementation requires tools and technologies that allow the development of suitable antibodies or antibody fragments in a high-throughput manner. To address this, we designed and constructed two human synthetic antibody fragment (scFv) libraries, denoted HelL-11 and HelL-13. Using phage display technology, in total 466 unique scFv antibodies specific for 114 different antigens were generated. The specificities of these antibodies were analyzed in a variety of immunochemical assays, and a subset was further evaluated for functionality in protein microarray applications. This high-throughput approach demonstrates the ability to rapidly generate a wealth of reagents not only for proteome research, but potentially also for diagnostics and therapeutics. In addition, this work provides a good example of how a synthetic approach can be used to optimize library designs. By having precise control of the diversity introduced into the antigen-binding sites, synthetic libraries offer increased understanding of how different diversity contributes to antibody binding reactivity and stability, thereby providing the key to future library optimization. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Method for analysing radium in powder samples and its application to uranium prospecting

    International Nuclear Information System (INIS)

    Gong Xinxi; Hu Minzhi.

    1987-01-01

    The decay daughters of Rn released from a powder sample (soil) in a sealed bottle are collected on a piece of copper, and the radium in the sample can then be measured by counting α-particles with an alpha meter for uranium prospecting; the technique is thus called the radium method. The method has many advantages, such as high sensitivity (the lower limit of detection for radium is 2.7 × 10⁻¹⁵ g per gram of sample), high efficiency, low cost and ease of use. On the basis of measurements of more than 700 samples taken along 20 sections in 8 deposits, the results show that the radium method is better than γ-measurement and equal to the ²¹⁰Po method in its capability to discover anomalies. The author also summarizes the anomaly intensities of the radium method, the ²¹⁰Po method and γ-measurement at the surface above deep blind ores, with or without surficial mineralization, together with the shapes of their profiles and the variation of Ra/²¹⁰Po ratios. According to these distinguishing features, uranium mineralization located at depth and/or in shallow parts can be distinguished. The combined application of the radium, ²¹⁰Po and γ-measurement methods may be regarded as one of the important methods for anomaly assessment. Based on radium measurements of 771 stream sediment samples over an area of 100 km², it is demonstrated that the radium method can be used in the uranium reconnaissance and prospecting stages.

  13. RBS and XPS analyses of the composite calcium phosphate coatings for biomedical applications

    International Nuclear Information System (INIS)

    Ide-Ektessabi, Ari; Yamaguchi, Tetsuro; Tanaka, Yoshikazu

    2005-01-01

    Calcium phosphate coatings on metallic implants are widely used for biomedical applications. Such coatings require mechanical strength, strong adhesion to the metallic implant, chemical stability and low dissolution into body fluid for stable functioning in the corrosive environment of the human body. In this study, a novel approach to improving calcium phosphate coatings is employed by adding a trace metallic element to the coatings. We focused on tooth enamel, the hardest calcium phosphate tissue in the human body. The Zn concentration increases exponentially from the interior to the surface of enamel, and as the Zn concentration increases, so does the local hardness. Our previous studies suggest that Zn influences the hardness and other properties of enamel, a calcium phosphate tissue. Calcium phosphate coatings doped with Zn were fabricated and characterized. The atomic composition and chemical state were investigated using Rutherford backscattering spectroscopy (RBS) and X-ray photoelectron spectroscopy (XPS), respectively. A scratch test was also carried out to measure the adhesion of the coatings

  14. Who's coming to dinner? Microbial phylogenetic analyses of various subsurface petroleum well environments for MEOR applications

    Energy Technology Data Exchange (ETDEWEB)

    Keeler, Sharon J.; Fallon, Robert; Jackson, Scott; Zhang, Shiping; Tomb, Jean-Francois; Miller, Mark A.; Rees, Bethany [Central Research and Development (Canada)

    2011-07-01

    This paper discusses the microbial phylogenetic analyses of various subsurface petroleum well environments for microbial-enhanced oil recovery (MEOR) applications. The objective is to add nutrients and microbes to injection water. Close to 47,000 compounds are present in petroleum, and most of them are polyaromatic hydrocarbons (PAHs). Microbes that can dominate the biomass produced and the overall bioactivity are needed, and changing the electron acceptor modifies the microbial community. Characterization of microbial diversity in production water with two independent molecular methods is shown. The geology of well systems in North America was analyzed; the analyses and the results are given. Summarizing the North Slope reservoir system phylogenetics, many genera found in association with other petroleum environments suggest that they are autochthonous, and transiently very high levels of acetate signify a mutual metabolic codependency on the amount of acetate present in the system.

  15. Application of the IEAF-2001 activation data library to activation analyses of the IFMIF high flux test module

    International Nuclear Information System (INIS)

    Fischer, U.; Wilson, P.P.H.; Leichtle, D.; Simakov, S.P.; Moellendorff, U. von; Konobeev, A.; Korovin, Yu.; Pereslavtsev, P.; Schmuck, I.

    2002-01-01

    A complete activation data library, IEAF-2001 (Intermediate Energy Activation File), has been developed in standard ENDF-6 format with neutron-induced activation cross sections for 679 target nuclides from Z=1 (hydrogen) to Z=84 (polonium) and incident neutron energies up to 150 MeV. Using the NJOY processing code, an IEAF-2001 working library has been prepared in a 256 energy group structure to enable activation analyses of the International Fusion Material Irradiation Facility (IFMIF) D-Li neutron source. This library was applied to the activation analysis of the IFMIF high flux test module using the recent Analytical and Laplacian Adaptive Radioactivity Analysis (ALARA) activation code, which is capable of handling the variety of reaction channels open in the energy domain above 20 MeV. The IEAF-2001 activation library was thus shown to be suitable for activation analyses in fusion technology and intermediate-energy applications such as the IFMIF D-Li neutron source

  16. DNA migration mechanism analyses for applications in capillary and microchip electrophoresis

    Science.gov (United States)

    Forster, Ryan E.; Hert, Daniel G.; Chiesl, Thomas N.; Fredlake, Christopher P.; Barron, Annelise E.

    2009-01-01

    In 2009, electrophoretically driven DNA separations in slab gels and capillaries have the sepia tones of an old-fashioned technology in the eyes of many, even while they remain ubiquitously used, fill a unique niche, and arguably have yet to reach their full potential. For comic relief, what is old becomes new again: agarose slab gel separations are used to prepare DNA samples for "next-gen" sequencing platforms (e.g., the Illumina and 454 machines), where dsDNA molecules within a certain size range are "cut out" of a gel and recovered for subsequent "massively parallel" pyrosequencing. In this review, we give a Barron lab perspective on how our comprehension of DNA migration mechanisms in electrophoresis has evolved, from the first reports of DNA separations by CE (∼1989) until now, 20 years later. Fused silica capillaries, and borosilicate glass and plastic microchips, quietly offer increasing capacities for fast (and even "ultra-fast"), efficient DNA separations. While the channel-by-channel scaling of both old and new electrophoresis platforms provides key flexibility, it requires each unique DNA sample to be prepared in its own micro- or nanovolume. This Achilles' heel of electrophoresis technologies left an opening through which pooled-sample, next-gen DNA sequencing technologies rushed. We shall see, over time, whether a sharpening understanding of transitions in DNA migration modes in crosslinked gels, nanogel solutions, and uncrosslinked polymer solutions will allow electrophoretic DNA analysis technologies to flower again. Microchannel electrophoresis, after a quiet period of metamorphosis, may emerge sleeker and more powerful, to claim its own important niche applications. PMID:19582705

  17. ePRO-MP: A Tool for Profiling and Optimizing Energy and Performance of Mobile Multiprocessor Applications

    Directory of Open Access Journals (Sweden)

    Wonil Choi

    2009-01-01

    For mobile multiprocessor applications, achieving high performance with low energy consumption is a challenging task, and system development tools play an important role in helping programmers meet these design requirements. In this paper, we describe one such development tool, ePRO-MP, which profiles and optimizes both the performance and the energy consumption of multi-threaded applications running on top of Linux for ARM11 MPCore-based embedded systems. One of the key features of ePRO-MP is that it can accurately estimate the energy consumption of multi-threaded applications without requiring power measurement equipment, using a regression-based energy model. We also describe another key benefit of ePRO-MP, an automatic optimization function, using two example problems. Using the automatic optimization function, ePRO-MP can achieve high performance and low power consumption without programmer intervention. Our experimental results show that ePRO-MP can improve the performance and energy consumption by 6.1% and 4.1%, respectively, over a baseline version in the co-running applications optimization example. For the producer-consumer application optimization example, ePRO-MP improves the performance and energy consumption by 60.5% and 43.3%, respectively, over a baseline version.
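
    The regression-based energy model described above estimates energy as a linear combination of performance-counter readings, with coefficients fitted offline against measured power. The sketch below illustrates the idea; the counter names and coefficient values are hypothetical stand-ins, not ePRO-MP's actual model.

        # Hypothetical per-event energy costs (nanojoules per event) obtained by
        # linear regression of measured energy against PMU counter readings.
        COEFFS_NJ = {"cycles": 0.45, "l2_misses": 12.3, "mem_accesses": 3.1}
        STATIC_MW = 120.0  # hypothetical static power term, in milliwatts

        def estimate_energy_mj(counters: dict, runtime_s: float) -> float:
            """Energy estimate in millijoules: static power * time + dynamic event costs."""
            dynamic_nj = sum(COEFFS_NJ[name] * counters.get(name, 0) for name in COEFFS_NJ)
            return STATIC_MW * runtime_s + dynamic_nj * 1e-6  # nJ -> mJ

        # Per-thread counter totals would come from perf-style sampling of each core.
        print(estimate_energy_mj({"cycles": 5e9, "l2_misses": 2e6, "mem_accesses": 1e8}, 2.0))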

  18. Hyperspectral solar-induced chlorophyll fluorescence of urban tree leaves: Analyses and applications

    Science.gov (United States)

    Van Wittenberghe, Shari

    Solar energy is the primary energy source for life on Earth; it is converted into chemical energy through photosynthesis by plants, algae and cyanobacteria, providing fuel for the organisms' activities. To dissipate excess absorbed light energy, plants emit chlorophyll (Chl) fluorescence (650-850 nm) from the same location where photosynthesis takes place; it therefore provides information on the efficiency of primary energy conversion. From this knowledge, many applications in vegetation and crop stress monitoring could be developed, a necessity for our planet under threat of a changing global climate. Even though the Chl fluorescence signal is weak against the intense background of reflected radiation, methods for retrieving solar-induced Chl fluorescence have been refined over recent years, both at the leaf and airborne scales. However, the lack of studies on solar-induced Chl fluorescence makes interpretation of the signal difficult. Within this thesis, hyperspectral upward and downward solar-induced Chl fluorescence is measured at the leaf level. The fluorescence yield (FY) is calculated, as well as different ratios characterizing the shape of the emitted Chl fluorescence. The research in this PhD dissertation illustrates the influence of several factors on the solar-induced Chl fluorescence signal. For instance, both the intensity of FY and its spectral shape in urban tree leaves can change under the influence of stress factors such as traffic air pollution, which shows how solar-induced Chl fluorescence could function as an early stress indicator for vegetation. Further, it is shown that the signal contains information on the ultrastructure of the photosynthetic apparatus. It is also proven that the leaf anatomical structure and the related light-scattering properties play a role in the partitioning between upward and downward Chl fluorescence emission. All these findings indicate how the Chl fluorescence spectrum is influenced by factors which also influence

  19. Experimental and finite element analyses of multifunctional skins for morphing wing applications

    Science.gov (United States)

    Geier, Sebastian; Kintscher, Markus; Mahrholz, Thorsten; Wierach, Peter; Monner, Hans-Peter; Wiedemann, Martin

    2016-04-01

    As a consequence of the demand for operational efficiency in the face of rising energy costs, future transport systems need to be mission-adaptive. Especially in aircraft design, the limits of lightweight construction, reduced aerodynamic drag and optimized propulsion are pushed further and further. The first two aspects can be addressed by using a morphing leading edge. Great economic advantages can be expected from gapless surfaces, which feature longer areas of laminar flow. Instead of focusing on the kinematics, which have already been published in a great number of varieties, this paper emphasizes, as the major challenge, the qualification of a multi-material layup which meets the compromise between the required stiffness, flexibility and the essential functions needed to satisfy airworthiness requirements, such as erosion shielding, impact safety, lightning protection and de-icing. The aim is to develop a gapless leading-edge device and to prepare the path towards higher technology readiness levels, resulting in an airborne application. During several national and European projects, the DLR developed a gapless smart droop nose concept whose functionality was successfully demonstrated with a two-dimensional prototype of 5 m span in low-speed (up to 50 m/s) wind tunnel tests. The basic structure is made of commercially available and certified glass-fiber reinforced plastic (GFRP, Hexcel HexPly 913). This paper presents 4-point bending tests to characterize the composite with its integrated functions. The integrity and aging/fatigue behavior of different material combinations are analyzed by experiment. It is demonstrated that the mentioned requirements, such as erosion shielding or de-icing, can only be satisfied by adding functional layers. The total thickness of the composite skin increases by more than 100% when the required functions are integrated as additional layers. This fact has a tremendous impact on the maximum strain of the outer surface if it features a complete monolithic build

  20. Comparative energy consumption analyses of an ultra high frequency induction heating system for material processing applications

    Directory of Open Access Journals (Sweden)

    Taştan, Mehmet

    2015-09-01

    This study compares the energy consumption of Ti-6Al-4V-based material processed by 900 kHz induction heating in different cases. The total power consumption and the energy consumption per sample and per amount of material have been analyzed. Experiments were conducted with a 900 kHz, 2.8 kW ultra-high-frequency induction system. Two cases are considered. In the first case, Ti-6Al-4V samples were heated up to 900 °C with the classical heating method used in industrial applications and then cooled in water; afterwards, the samples were heated to 600 °C, 650 °C and 700 °C, respectively, and a stress-relieving process was applied through natural cooling. The energy consumption of each defined process was measured. In the second case, unlike the first, five different samples were heated to various temperatures between 600 °C and 1120 °C and the energy consumption of these processes was measured, so that the effect of the temperature increase on the energy cost of each sample could be analyzed. It is seen that, for the titanium bulk materials used in the experiments and heated by ultra-high-frequency induction, an increase in temperature also increases the energy consumption; however, the rate of increase in energy consumption exceeds the rate of increase in temperature.

  1. Application des ondelettes à l'analyse de texture et à l'inspection de surface industrielle

    Science.gov (United States)

    Wolf, D.; Husson, R.

    1993-11-01

    This paper presents a method of texture analysis based on multiresolution wavelet analysis. We discuss the problem of the theoretical and experimental choice of the wavelet. Statistical modelling of the wavelet images is treated, leading us to consider their statistical distribution to be a generalized Gaussian law. An algorithm for texture classification is developed based on the variances of the different wavelet images. An industrial application of this algorithm illustrates its quality and proves its aptitude for the automation of certain tasks in industrial inspection.
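
    A minimal sketch of the feature-extraction step described above, using the PyWavelets library: decompose an image over several resolution levels and use the variance of each detail subband as a texture feature (a generalized-Gaussian fit would additionally use higher moments). The wavelet choice and level count here are illustrative, not the paper's settings.

        import numpy as np
        import pywt

        def wavelet_texture_features(img, wavelet="db2", level=3):
            """Variance of each detail subband of a 2-D wavelet decomposition."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            features = []
            for detail_level in coeffs[1:]:      # skip the approximation subband
                for subband in detail_level:     # horizontal, vertical, diagonal details
                    features.append(float(np.var(subband)))
            return features

        rng = np.random.default_rng(0)
        print(wavelet_texture_features(rng.random((64, 64))))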

  2. Methodology for accident analyses of fusion breeder blankets and its application to helium-cooled pebble bed blanket

    International Nuclear Information System (INIS)

    Panayotov, Dobromir; Grief, Andrew; Merrill, Brad J.; Humrickhouse, Paul; Trow, Martin; Dillistone, Michael; Murgatroyd, Julian T.; Owen, Simon; Poitevin, Yves; Peers, Karen; Lyons, Alex; Heaton, Adam; Scott, Richard

    2016-01-01

    Highlights: • Test Blanket Systems (TBS) DEMO breeding blankets (BB) safety demonstration. • Comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena. • Development of accident analysis specifications (AAS) via the use of phenomena identification and ranking tables (PIRT). • PIRT application to identify required physical models for BB accident analysis, code assessment and selection. • Development of MELCOR and RELAP5 TBS models. • Qualification of the models via comparison with finite element calculations, code-to-code comparisons, and sensitivity studies. - Abstract: ‘Fusion for Energy’ (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER, and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. The methodology phases are illustrated in the paper by its application to the EU HCPB TBS using both the MELCOR and RELAP5 codes.

  3. Methodology for accident analyses of fusion breeder blankets and its application to helium-cooled pebble bed blanket

    Energy Technology Data Exchange (ETDEWEB)

    Panayotov, Dobromir, E-mail: dobromir.panayotov@f4e.europa.eu [Fusion for Energy (F4E), Josep Pla, 2, Torres Diagonal Litoral B3, Barcelona E-08019 (Spain); Grief, Andrew [Amec Foster Wheeler, Booths Park, Chelford Road, Knutsford WA16 8QZ, Cheshire (United Kingdom); Merrill, Brad J.; Humrickhouse, Paul [Idaho National Laboratory, PO Box 1625, Idaho Falls, ID (United States); Trow, Martin; Dillistone, Michael; Murgatroyd, Julian T.; Owen, Simon [Amec Foster Wheeler, Booths Park, Chelford Road, Knutsford WA16 8QZ, Cheshire (United Kingdom); Poitevin, Yves [Fusion for Energy (F4E), Josep Pla, 2, Torres Diagonal Litoral B3, Barcelona E-08019 (Spain); Peers, Karen; Lyons, Alex; Heaton, Adam; Scott, Richard [Amec Foster Wheeler, Booths Park, Chelford Road, Knutsford WA16 8QZ, Cheshire (United Kingdom)

    2016-11-01

    Highlights: • Test Blanket Systems (TBS) DEMO breeding blankets (BB) safety demonstration. • Comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena. • Development of accident analysis specifications (AAS) via the use of phenomena identification and ranking tables (PIRT). • PIRT application to identify required physical models for BB accident analysis, code assessment and selection. • Development of MELCOR and RELAP5 TBS models. • Qualification of the models via comparison with finite element calculations, code-to-code comparisons, and sensitivity studies. - Abstract: ‘Fusion for Energy’ (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER, and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. The methodology phases are illustrated in the paper by its application to the EU HCPB TBS using both the MELCOR and RELAP5 codes.

  4. Overview of PAT process analysers applicable in monitoring of film coating unit operations for manufacturing of solid oral dosage forms.

    Science.gov (United States)

    Korasa, Klemen; Vrečer, Franc

    2018-01-01

    Over the last two decades, regulatory agencies have demanded a better understanding of pharmaceutical products and processes by implementing new technological approaches, such as process analytical technology (PAT). Process analysers represent a key PAT tool, which enables effective process monitoring, and thus improved process control, of medicinal product manufacturing. Process analysers applicable to pharmaceutical coating unit operations are comprehensively described in the present article. The review focuses on the monitoring of solid oral dosage forms during film coating in the two most commonly used coating systems, i.e. pan and fluid bed coaters. A brief theoretical background and a critical overview of the process analysers used for real-time or near real-time (in-, on-, at-line) monitoring of critical quality attributes of film-coated dosage forms are presented. Besides the well-recognized spectroscopic methods (NIR and Raman spectroscopy), other techniques which have made a significant breakthrough in recent years are discussed (terahertz pulsed imaging (TPI), chord length distribution (CLD) analysis, and image analysis). The last part of the review is dedicated to novel techniques with high potential to become valuable PAT tools in the future (optical coherence tomography (OCT), acoustic emission (AE), microwave resonance (MR), and laser-induced breakdown spectroscopy (LIBS)). Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Some applications of neutron activation analysis in plant biology and agronomy

    Energy Technology Data Exchange (ETDEWEB)

    Fourcy, A [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1966-06-01

    Plant materials are much less often analysed by radioactivation than biological extracts of medical interest. With the help of concrete examples, applications of neutron activation analysis to the determination of several metals (Mn, Cu, Co, Fe, Zn and K) in plant materials are proposed. Samples are activated in a swimming-pool reactor at a thermal flux of 5×10¹² n·cm⁻²·s⁻¹ for times varying between a few minutes and several days according to the element being analysed. The induced radioactivity is measured by gamma spectrometry, with radiochemical separation (Cu, Co, Fe, Zn and K) or without separation in favourable cases (Mn, Cu, K). The determinations described concern: manganese in a graminaceous plant, copper in vine treatments, cobalt, iron and zinc in an animal feed, and potassium in a radioecology experiment. (author)

  6. Ecological analyses and applications

    International Nuclear Information System (INIS)

    Brocksen, R.W.

    1977-01-01

    Progress is reported on the following: analysis of ecological impacts of construction and operation of nuclear power plants; fossil energy environmental project; ecological analysis of geothermal energy development; HUD modular integrated utility systems; expansion of uranium enrichment facilities at Portsmouth; environmental standard review plans; environmental assessment of cooling reservoirs; and analysis of fish impingement at power plants in the southeastern United States

  7. Ecological analyses and applications

    International Nuclear Information System (INIS)

    Kroodsma, R.L.; Craig, R.B.; Hildebrand, S.G.

    1978-01-01

    Progress is reported on the following: assessment of nuclear power plants; ecological analysis of uranium mining, milling, and fuel fabrication; environmental impact statements concerning uranium enrichment facilities; site evaluations for storage of radioactive wastes; ecological analysis of geothermal energy development; enhanced oil recovery; environmental monitoring plan for modular integrated utility systems; and fossil energy environmental project

  8. Exergy and energy analyses of two different types of PCM based thermal management systems for space air conditioning applications

    International Nuclear Information System (INIS)

    Tyagi, V.V.; Pandey, A.K.; Buddhi, D.; Tyagi, S.K.

    2013-01-01

    Highlights: ► Calcium chloride hexahydrate (CaCl₂·6H₂O) was used as the PCM in this study. ► Two different encapsulated systems (HDPE based panels and balls) were designed. ► The results for CaCl₂·6H₂O are very attractive for space air conditioning. ► Energy and exergy analyses for space cooling applications. - Abstract: This communication presents an experimental study of PCM based thermal management systems for space heating and cooling applications using energy and exergy analysis. Two different types of PCM based thermal management system (TMS-I and TMS-II) using calcium chloride hexahydrate as the heat carrier have been designed, fabricated and studied for space heating and cooling applications in a typical climatic zone in India. In the first experimental arrangement the charging of the PCM was carried out with an air conditioning system while discharging was carried out using an electric heater, for both thermal management systems. In the second arrangement the charging of the PCM was carried out by solar energy and the discharging by circulating the cooler ambient air during the night time. In the first experiment, TMS-I was found to be more effective than TMS-II, while the reverse was found in the second experiment, for both the charging and discharging processes and for the energetic as well as the exergetic performances.

  9. Application of Metagenomic Analyses in Dentistry as a Novel Strategy Enabling Complex Insight into Microbial Diversity of the Oral Cavity.

    Science.gov (United States)

    Burczynska, Aleksandra; Dziewit, Lukasz; Decewicz, Przemysław; Struzycka, Izabela; Wroblewska, Marta

    2017-03-30

    The composition of the oral microbiome in healthy individuals is complex and dynamic, and depends on many factors, such as anatomical location in the oral cavity, diet, oral hygiene habits and host immune responses. It is estimated at present that worldwide about 2 billion people suffer from diseases of the oral cavity, mainly periodontal disease and dental caries. Importantly, the oral microflora involved in local infections may spread and cause systemic, even life-threatening infections. In the search for etiological agents of infections in dentistry, traditional approaches are not sufficient, as about 50% of oral bacteria are not cultivable. Instead, metagenomic analyses are particularly useful for studies of the complex oral microbiome - both in healthy individuals and in patients with oral and dental diseases. In this paper we review the current and future applications of metagenomic studies in evaluation of both the composition of the oral microbiome and its potential pathogenic role in infections in dentistry.

  10. The application of fluid structure interaction techniques within finite element analyses of water-filled transport flasks

    International Nuclear Information System (INIS)

    Smith, C.; Stojko, S.

    2004-01-01

    Historically, Finite Element (FE) analyses of water-filled transport flasks and their payloads have been carried out assuming a dry environment, mainly due to a lack of robust Fluid Structure Interaction (FSI) modelling techniques. Also it has been accepted within the RAM transport industry that the presence of water would improve the impact withstand capability of dropped payloads within containers. In recent years the FE community has seen significant progress and improvement in FSI techniques. These methods have been utilised to investigate the effects of a wet environment on payload behaviour for the regulatory drop test within a recent transport licence renewal application. Fluid flow and pressure vary significantly during a wet impact and the effects on the contents become complex when water is incorporated into the flask analyses. Modelling a fluid environment within the entire flask is considered impractical; hence a good understanding of the FSI techniques and assumptions regarding fluid boundaries is required in order to create a representative FSI model. Therefore, a Verification and Validation (V and V) exercise was undertaken to underpin the FSI techniques eventually utilised. A number of problems of varying complexity have been identified to test the FSI capabilities of the explicit code LS-DYNA, which is used in the extant dry container impact analyses. RADIOSS explicit code has been used for comparison, to provide further confidence in LS-DYNA predictions. Various methods of modelling fluid are tested, and the relative advantages and limitations of each method and FSI coupling approaches are discussed. Results from the V and V problems examined provided sufficient confidence that FSI effects within containers can be accurately modelled

  11. Applicability of a Diffuse Reflectance Infrared Fourier Transform handheld spectrometer to perform in situ analyses on Cultural Heritage materials.

    Science.gov (United States)

    Arrizabalaga, Iker; Gómez-Laserna, Olivia; Aramendia, Julene; Arana, Gorka; Madariaga, Juan Manuel

    2014-08-14

    This work studies the applicability of a Diffuse Reflectance Infrared Fourier Transform handheld device to perform in situ analyses on Cultural Heritage assets. This portable diffuse reflectance spectrometer has been used to characterise and diagnose the conservation state of (a) building materials of the Guevara Palace (15th century, Segura, Basque Country, Spain) and (b) different 19th century wallpapers manufactured by the Santa Isabel factory (Vitoria-Gasteiz, Basque Country, Spain) and by the well known Dufour and Leroy manufacturers (Paris, France), all of them belonging to the Torre de los Varona Castle (Villanañe, Basque Country, Spain). In all cases, in situ measurements were carried out and also a few samples were collected and measured in the laboratory by diffuse reflectance spectroscopy (DRIFT) in order to validate the information obtained by the handheld instrument. In the analyses performed in situ, distortions in the diffuse reflectance spectra can be observed due to the presence of specular reflection, showing the inverted bands caused by the Reststrahlen effect, in particular on those IR bands with the highest absorption coefficients. This paper concludes that the results obtained in situ by a diffuse reflectance handheld device are comparable to those obtained with laboratory diffuse reflectance spectroscopy equipment and proposes a few guidelines to acquire good spectra in the field, minimising the influence caused by the specular reflection. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Application of annual ring analyses to the detection of smoke damage. I. A methodical contribution to the treatment of annual ring analyses

    Energy Technology Data Exchange (ETDEWEB)

    Vins, B

    1961-01-01

    Losses in growth of silvicultural stands caused by smoke can be measured by annual ring analysis. The method is advantageous mainly because the annual gains can be checked far into the past and thus compared with gains before the onset of the pollution. Experience gained in the Krusne Hory area of Czechoslovakia with the methodical processing of 2000 annual ring analyses is reviewed. The principal problem was that more than half the trees exposed to pollution failed to grow annual rings. At first no rings are added from the ground up to a certain height; then the defect spreads all the way into the crowns of the affected trees. This observation is of fundamental importance in the calculation of losses in growth gains due to industrial emissions because hitherto the last annual ring next to the bark was always identified with the test year, while in reality a number of annual rings might already have failed to grow due to the effects of pollution. Errors far exceeding permissible limits might have occurred in the analysis.

  13. Assessing residential building values in Spain for risk analyses - application to the landslide hazard in the Autonomous Community of Valencia

    Science.gov (United States)

    Cantarino, I.; Torrijo, F. J.; Palencia, S.; Gielen, E.

    2014-11-01

    This paper proposes a method of valuing the stock of residential buildings in Spain as the first step in assessing possible damage caused to them by natural hazards. For the purposes of the study we had access to the SIOSE (the Spanish Land Use and Cover Information System), a high-resolution land-use model, as well as to a report on the financial valuations of this type of building throughout Spain. Using dasymetric disaggregation processes and GIS techniques we developed a geolocalized method of obtaining this information, which constitutes the exposure variable in the general risk assessment formula. Overlaying this exposure layer on a hazard map then yields the risk value directly. An example of its application is given in a case study that assesses landslide risk across the entire 23,200 km² of the Valencia Autonomous Community (NUTS2), the results of which are analysed by municipal areas (LAU2) for the years 2005 and 2009.

  14. An introduction to the Omega 500 High Resolution FT-NMR and its application to biochemical analyses

    International Nuclear Information System (INIS)

    Oyabu, Matashige; Ohno, Yasushi; Fujita, Shin; Koide, Junichi; Iwata, Yosuke; Terashita, Eisaku; Masuda, Junichi

    1991-01-01

    The Omega 500 High Resolution FT-NMR was designed using the latest radio frequency (RF) and computer technologies, resulting in an instrument capable of executing many of the most advanced NMR methods. In this article, quadrature phase detection and Fourier transformation signal processing, which are basic principles of FT-NMR, are explained. Special emphasis is given to the unique NMR shell which serves as the user interface to the system and which takes advantage of the tools provided in the UNIX C environment. Each specific application program - called a 'panel' - provides for simple operation of the instrument and ready execution of the powerful data processing functions contained in the system. An overview is given of these software panels and their convenience in the execution of analyses. NMR spectroscopy has been applied to structural determinations of complex biochemicals such as proteins, nucleic acids and peptides. As an example of an Omega 500 application, the cyclic peptide gramicidin S - an antibiotic produced by a strain of Bacillus brevis - was analyzed by the DQF-COSY, HOHAHA and NOESY methods, which are typical structural determination sequences for materials of biological origin. The algorithm used for spectral interpretation is discussed. (author)
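
    The quadrature detection and Fourier transformation steps described above can be illustrated numerically. The following Python sketch is a generic illustration unrelated to the Omega 500 software; the spectral width, resonance offsets, amplitudes and decay constant are invented for the example. Quadrature (complex) detection is what allows positive and negative offsets from the carrier to be distinguished after the transform.

        import numpy as np

        # Simulate a quadrature-detected free induction decay (FID) with two
        # hypothetical resonances at +200 Hz and -350 Hz from the carrier.
        sw = 5000.0                      # spectral width (Hz), assumed
        n = 4096                         # number of complex points
        t = np.arange(n) / sw            # sampling times (s)
        fid = (1.0 * np.exp(2j * np.pi * 200.0 * t) +
               0.5 * np.exp(2j * np.pi * -350.0 * t)) * np.exp(-t / 0.3)

        # Complex Fourier transform of the FID yields the spectrum; the sign
        # of each frequency offset is preserved thanks to quadrature detection.
        spectrum = np.fft.fftshift(np.fft.fft(fid))
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / sw))
        peak = freqs[np.argmax(np.abs(spectrum))]
        print(f"strongest peak at {peak:.1f} Hz")   # ~ +200 Hz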

  15. Combined U-Pb and Lu-Hf isotope analyses by laser ablation MC-ICP-MS: methodology and applications

    Energy Technology Data Exchange (ETDEWEB)

    Matteini, Massimo; Dantas, Elton L.; Pimentel, Marcio M.; Bühn, Bernhard, E-mail: massimo@unb.br [Universidade de Brasilia (UnB), DF (Brazil). Instituto de Geociencias

    2010-06-15

    The Lutetium-Hafnium isotopic system represents one of the most innovative and powerful tools for geochronology and isotopic studies. Combined U-Pb and Lu-Hf in situ analyses on zircon by LA-MC-ICP-MS make it possible to isotopically characterize the host magma from which the zircon crystallized, furnishing significant information for sediment provenance and crustal evolution studies. In this paper we describe the Lu-Hf systematics by LA-MC-ICP-MS developed in the laboratory of Geochronology of the University of Brasilia and report the results obtained by repeated analyses of the ¹⁷⁶Hf/¹⁷⁷Hf isotopic ratio of three zircon standards: GJ-1 = 0.282022 ± 11 (n=56), Temora 2 = 0.282693 ± 14 (n=25) and UQZ = 0.282127 ± 33 (n=11). The ¹⁷⁶Hf/¹⁷⁷Hf ratio (0.282352 ± 22, n=14) of a gem-quality zircon used as in-house standard has also been characterized. As a geological application, we analyzed two complex zircons selected from a migmatitic rock from the Borborema Province, NE Brazil. On the basis of the U-Pb and Lu-Hf data, two main crystallization events have been identified in both studied zircons. An older event at ca. 2.05 Ga, recognized in the inherited cores, represents a well-characterized Paleoproterozoic magmatic event that affected the whole Borborema Province. A second crystallization event at ∼575 Ma, recognized at the rims, represents a Neoproterozoic (Brazilian) high-grade metamorphic-magmatic event. (author)

  16. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    International Nuclear Information System (INIS)

    Guambe, J.F.; Mars, J.A.; Day, J.

    2013-01-01

    Full text: Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals residing nearby. Fish scales have been proven to be applicable in monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the incorporation of contaminants into the scale structure will differ as well. There is a need for a pilot indicator of contaminants which can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  17. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    Energy Technology Data Exchange (ETDEWEB)

    Guambe, J.F. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa); Physics Department, Eduardo Mondlane Universily, PO Box 257, Maputo (Mozambique); Materials Research Department, iThemba LABS, PO Box 722, Somerset West, 7129 (South Africa); Mars, J.A. [Faculty of Health and Wellness Sciences, Cape Peninsula University of Technology, PO Box 1906, Bellville, 7535 (South Africa); Day, J. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa)

    2013-07-01

    Full text: Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals residing nearby. Fish scales have been proven to be applicable in monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the incorporation of contaminants into the scale structure will differ as well. There is a need for a pilot indicator of contaminants which can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  18. Development and application of neutron transport methods and uncertainty analyses for reactor core calculations. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, W.; Aures, A.; Bernnat, W.; and others

    2013-06-15

    This report documents the status of the research and development goals reached within the reactor safety research project RS1503 "Development and Application of Neutron Transport Methods and Uncertainty Analyses for Reactor Core Calculations" as of the 1st quarter of 2013. The superordinate goal of the project is the development, validation, and application of neutron transport methods and uncertainty analyses for reactor core calculations. These calculation methods will mainly be applied to problems related to the core behaviour of light water reactors and innovative reactor concepts. The contributions of this project towards achieving this goal are the further development, validation, and application of deterministic and stochastic calculation programmes and of methods for uncertainty and sensitivity analyses, as well as the assessment of artificial neural networks, for providing a complete nuclear calculation chain. This comprises processing nuclear basis data, creating multi-group data for diffusion and transport codes, obtaining reference solutions for stationary states with Monte Carlo codes, performing coupled 3D full core analyses in diffusion approximation and with other deterministic and also Monte Carlo transport codes, and implementing uncertainty and sensitivity analyses with the aim of propagating uncertainties through the whole calculation chain from fuel assembly, spectral and depletion calculations to coupled transient analyses. This calculation chain shall be applicable to light water reactors and also to innovative reactor concepts, and therefore has to be extensively validated with the help of benchmarks and critical experiments.

  19. Application of power spectrum, cepstrum, higher order spectrum and neural network analyses for induction motor fault diagnosis

    Science.gov (United States)

    Liang, B.; Iwnicki, S. D.; Zhao, Y.

    2013-08-01

    The power spectrum is defined as the square of the magnitude of the Fourier transform (FT) of a signal. The advantage of FT analysis is that it allows the decomposition of a signal into individual periodic frequency components and establishes the relative intensity of each component. It is the most commonly used signal processing technique today. If the same principle is applied to detect periodicity components within a Fourier spectrum itself, the process is called cepstrum analysis. Cepstrum analysis is a very useful tool for detecting families of harmonics with uniform spacing, or the families of sidebands commonly found in gearbox, bearing and engine vibration fault spectra. Higher order spectra (HOS) (also known as polyspectra) consist of higher-order moments of spectra, which are able to detect non-linear interactions between frequency components. Of the HOS, the most commonly used is the bispectrum. The bispectrum is a third-order frequency domain measure, which contains information that standard power spectral analysis techniques cannot provide. It is well known that neural networks can represent complex non-linear relationships, and therefore they are extremely useful for fault identification and classification. This paper presents an application of power spectrum, cepstrum, bispectrum and neural network analyses for fault pattern extraction in induction motors. The potential for using the power spectrum, cepstrum, bispectrum and neural network as a means of differentiating between healthy and faulty induction motor operation is examined. A series of experiments was carried out, and the relative advantages and disadvantages are discussed. It has been found that a combination of power spectrum, cepstrum and bispectrum analyses plus a neural network could be a very useful tool for condition monitoring and fault diagnosis of induction motors.
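
    The chain from power spectrum to cepstrum described above can be sketched in a few lines of Python. The vibration signal below is synthetic (a 1 kHz carrier amplitude-modulated at 50 Hz, giving uniformly spaced sidebands); it illustrates the signal-processing definitions, not data from the motor experiments.

        import numpy as np

        fs = 10_000.0                              # sampling rate (Hz), assumed
        t = np.arange(0.0, 1.0, 1.0 / fs)
        # Synthetic vibration: sidebands spaced 50 Hz around a 1 kHz carrier.
        x = (1.0 + 0.5 * np.cos(2 * np.pi * 50 * t)) * np.sin(2 * np.pi * 1000 * t)

        # Power spectrum: squared magnitude of the Fourier transform.
        X = np.fft.rfft(x * np.hanning(len(x)))
        power = np.abs(X) ** 2

        # Cepstrum: spectrum of the log power spectrum. A family of sidebands
        # with uniform 50 Hz spacing collapses into one peak at 1/50 s = 20 ms.
        cepstrum = np.abs(np.fft.irfft(np.log(power + 1e-12)))
        quefrency = np.arange(len(cepstrum)) / fs
        i = 50 + np.argmax(cepstrum[50:2000])      # skip the low-quefrency origin
        print(f"dominant quefrency ~ {quefrency[i] * 1000:.1f} ms")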

  20. Risk analyses in nuclear engineering, their value in terms of information, and their limits in terms of applicability

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1983-01-01

    This contribution first briefly explains the main pillars of the deterministic safety concept as developed in nuclear engineering, and some basic ideas on risk analyses in general. This is followed by an outline of the methodology and main purposes of risk analyses. The German Risk Study is taken as an example to discuss selected aspects with regard to information value and limits of risk analyses. The main conclusions state that risk analyses are a valuable instrument for quantitative safety evaluation, leading to a better understanding of safety problems and their prevention, and allowing a comparative assessment of various safety measures. They furthermore allow a refined evaluation of a variety of accident parameters and other impacts determining the risk emanating from accidents. The current state of the art in this sector still leaves numerous uncertainties so that risk analyses yield information for assessments rather than for definite predictions. However, the urge for quantifying the lack of knowledge leads to a better and more precise determination of the gaps still to be filled up by researchers and engineers. Thus risk analyses are a useful help in defining suitable approaches and setting up standards, showing the tasks to be fulfilled in safety research in general. (orig./HSCH) [de

  1. False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis

    DEFF Research Database (Denmark)

    Imberger, Georgina; Thorlund, Kristian; Gluud, Christian

    2016-01-01

    We included meta-analyses with a binary outcome, a negative result and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, relative risk reduction of 10% or number needed to treat of 100, and control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA, using three different approaches. RESULTS: We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses.

  2. A Microsoft-Excel-based tool for running and critically appraising network meta-analyses--an overview and application of NetMetaXL.

    Science.gov (United States)

    Brown, Stephen; Hutton, Brian; Clifford, Tammy; Coyle, Doug; Grima, Daniel; Wells, George; Cameron, Chris

    2014-09-29

    The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software package to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generation of graphical output from network meta-analyses often relies on different software packages than the analyses themselves. We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results being automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. We demonstrate the application of NetMetaXL using data from a network meta-analysis published previously which compares combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses.

  3. RIGOROUS ANALYSIS OF THICK METALLIC IRISES AND THEIR APPLICATION IN DIRECTLY COUPLED RECTANGULAR CAVITY FILTERS

    Directory of Open Access Journals (Sweden)

    M BELMEGUENAI

    2000-12-01

    Full Text Available A general technique, based on modal analysis, is used for the modelling of discontinuities and filters in rectangular waveguides. In our study, where the thickness of the metallic obstacle (thick iris) is taken into account, we first determine the scattering matrix of the iris using an analysis in symmetric and asymmetric modes; then, by applying the principle of cascading two-port networks, we obtain the filter response. Our results are compared with those published in the literature.

  4. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of a multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is a function of the foreign trade balance (on one hand) and of credit cards, i.e. the indebtedness of the population on this basis (on the other hand), in the USA from 1999 to 2008. We used an extended application model which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended to study the compatibility of the data and model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model was carried out with the Statistics PASW 17 program.
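
    As a concrete illustration of the model development process described above, the sketch below fits a two-predictor linear regression by ordinary least squares and then inspects the residuals. All numbers are fabricated stand-ins; they are not the 1999-2008 US series analysed in the paper.

        import numpy as np

        # Hypothetical yearly observations (arbitrary units): GDP as a function
        # of the foreign trade balance and household credit-card indebtedness.
        trade = np.array([-1.2, -1.5, -1.9, -2.3, -2.8, -3.1, -3.6, -4.0, -4.4, -4.7])
        debt  = np.array([ 5.0,  5.4,  5.9,  6.5,  7.2,  7.8,  8.5,  9.1,  9.8, 10.2])
        gdp   = np.array([ 9.6,  9.9, 10.1, 10.4, 10.8, 11.2, 11.6, 12.0, 12.3, 12.4])

        # Design matrix with intercept; ordinary least squares fit.
        X = np.column_stack([np.ones_like(gdp), trade, debt])
        beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)

        # Residual analysis: check compatibility of the data and model settings.
        resid = gdp - X @ beta
        r2 = 1.0 - resid.var() / gdp.var()
        print("coefficients:", np.round(beta, 3))
        print("R^2 =", round(r2, 3))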

  5. Application of pathways analyses for site performance prediction for the Gas Centrifuge Enrichment Plant and Oak Ridge Central Waste Disposal Facility

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1984-01-01

    The suitability of the Gas Centrifuge Enrichment Plant and the Oak Ridge Central Waste Disposal Facility for shallow-land burial of low-level radioactive waste is evaluated using pathways analyses. The analyses rely on conservative scenarios to describe the generation and migration of contamination and the potential human exposure to the waste. Conceptual and numerical models are developed using data from comprehensive laboratory and field investigations and are used to simulate the long-term transport of contamination to man. Conservatism is built into the analyses when assumptions concerning future events have to be made or when uncertainties concerning site or waste characteristics exist. Maximum potential doses to man are calculated and compared to the appropriate standards. The sites are found to provide adequate buffer to persons outside the DOE reservations. Conclusions concerning site capacity and site acceptability are drawn. In reaching these conclusions, some consideration is given to the uncertainties and conservatisms involved in the analyses. Analytical methods to quantitatively assess the probability of future events to occur and the sensitivity of the results to data uncertainty may prove useful in relaxing some of the conservatism built into the analyses. The applicability of such methods to pathways analyses is briefly discussed. 18 refs., 9 figs

  6. Developing and Analysing sub-10 µm Fluidic Systems with Integrated Electrodes for Pumping and Sensing in Nanotechnology Applications

    NARCIS (Netherlands)

    Heuck, F.C.A.

    2010-01-01

    In this thesis, sub-10 µm fluidic systems with integrated electrodes for pumping and sensing in nanotechnology applications were developed and analyzed. This work contributes to the development of the scanning ion pipette (SIP), a tool to investigate surface changes on the nanometer scale induced by

  7. Application of the inter-LINE PCR for the analysis of genomic rearrangements in radiation-transformed mammalian cell lines

    International Nuclear Information System (INIS)

    Leibhard, S.; Smida, J.

    1996-01-01

    Repetitive DNA sequences of the LINE family (long interspersed elements), which are widely distributed throughout the mammalian genome, can be activated or altered by exposure to ionizing radiation [1]. Through integration at new sites in the genome, alterations in the expression of genes that are involved in cell transformation and/or carcinogenesis may occur [2, 3]. A new technique - the inter-LINE PCR - has been developed in order to detect and analyse such genomic rearrangements in radiation-transformed cell lines. From the sites of transformation- or tumour-specific changes in the genome it might be possible to develop new tumour markers for diagnostic purposes. (orig.) [de

  8. Dynamic analyses, FPGA implementation and engineering applications of multi-butterfly chaotic attractors generated from generalised Sprott C system

    Science.gov (United States)

    Lai, Qiang; Zhao, Xiao-Wen; Rajagopal, Karthikeyan; Xu, Guanghui; Akgul, Akif; Guleryuz, Emre

    2018-01-01

    This paper considers the generation of multi-butterfly chaotic attractors from a generalised Sprott C system with multiple non-hyperbolic equilibria. The system is constructed by introducing to the Sprott C system an additional variable whose derivative has a switching function. It is numerically found that the system creates two-, three-, four- and five-butterfly attractors, as well as other multi-butterfly attractors. First, the dynamic analyses of the multi-butterfly chaotic attractors are presented. Secondly, the field programmable gate array implementation, electronic circuit realisation and random number generation are carried out with the multi-butterfly chaotic attractors.
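
    For readers who want a feel for the construction, the sketch below integrates the Sprott C system (dx/dt = yz, dy/dt = x - y, dz/dt = 1 - x²) extended by an auxiliary variable whose derivative is a switching function. The coupling term, switching function and parameters here are invented for illustration and are not the exact generalisation studied in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def switching(u):
            # Hypothetical sign-based switching function; its alternating
            # plateaus are what multiply the butterfly wings.
            return np.sign(np.sin(np.pi * u))

        def system(t, s):
            x, y, z, w = s
            # Sprott C core plus an illustrative auxiliary variable w;
            # NOT the exact equations of the generalised system in the paper.
            return [y * z,
                    x - y,
                    1.0 - x * x + 0.5 * w,
                    switching(x)]

        sol = solve_ivp(system, (0.0, 200.0), [0.1, 0.0, 0.0, 0.0], max_step=0.01)
        print("final state:", np.round(sol.y[:, -1], 3))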

  9. Application of carbon and hydrogen stable isotope analyses to detect exogenous citric acid in Japanese apricot liqueur.

    Science.gov (United States)

    Akamatsu, Fumikazu; Oe, Takaaki; Hashiguchi, Tomokazu; Hisatsune, Yuri; Kawao, Takafumi; Fujii, Tsutomu

    2017-08-01

    Japanese apricot liqueur manufacturers are required to control the quality and authenticity of their liqueur products. Citric acid made from corn is the main acidulant used in commercial liqueurs. In this study, we conducted spiking experiments and carbon and hydrogen stable isotope analyses to detect exogenous citric acid used as an acidulant in Japanese apricot liqueurs. Our results showed that the δ¹³C values detected exogenous citric acid originating from C4 plants but not from C3 plants. The δ²H values of citric acid decreased as the amount of citric acid added increased, whether the citric acid originated from C3 or C4 plants. Commercial liqueurs with declared added acidulant provided higher δ¹³C values and lower δ²H values than did authentic liqueurs and commercial liqueurs with no declared added acidulant. Carbon and hydrogen stable isotope analyses are suitable as routine methods for detecting exogenous citric acid in Japanese apricot liqueur. Copyright © 2017 Elsevier Ltd. All rights reserved.
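
    A standard way to interpret such δ¹³C shifts quantitatively is a two-end-member mixing calculation. The sketch below uses typical literature end-member values (roughly -27 ‰ for C3 plants and -12 ‰ for C4 plants such as corn); these are assumptions for illustration, not values measured in this study.

        def c4_fraction(delta_sample, delta_c3=-27.0, delta_c4=-12.0):
            # Two-end-member mixing: fraction of carbon from a C4 source.
            return (delta_sample - delta_c3) / (delta_c4 - delta_c3)

        # A hypothetical citric acid sample with delta-13C of -21 permil:
        print(f"{c4_fraction(-21.0):.0%} C4-derived carbon")   # 40%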

  10. Medicine and ionizing rays: a help sheet in analysing risks in intra-oral dental radiology and applicable texts

    International Nuclear Information System (INIS)

    Gauron, C.

    2009-01-01

    This document proposes a synthesis of useful knowledge for radioprotection in the case of intra-oral dental radiology. In the first part, several aspects are considered: the concerned personnel, the course of treatment procedures, the hazards, the identification of the risk associated with ionizing radiation, the risk assessment and the determination of exposure levels, the strategy to control the risks (reduction of risks, technical measures concerning the installation or the personnel, teaching and information, prevention and medical monitoring), and risk control assessment. A second part indicates the various applicable legal and regulatory texts (European directives, institutions in charge of radioprotection, general arrangements applicable to workers and patients, and regulatory texts concerning worker protection or patient protection against ionizing radiations)

  11. Application of EPR spectrometry, thermoluminescence, analyses of DNA damage and germination power for detection of irradiated foods

    International Nuclear Information System (INIS)

    Malec-Czechowska, K.; Stachowicz, W.; Dancewicz, A.M.; Szot, Z.

    1999-01-01

    The results of our own work on the detection of irradiation in various foods (meat, poultry, fish, spices, dried fruits, mushrooms, crops, fresh fruits and food additives) are presented. Techniques for detecting whether foods have been irradiated, such as EPR spectrometry, thermoluminescence (TL), DNA damage assessed by the 'comet' method, and the germination ability of grains, are discussed. The applicability of each technique to specific foodstuffs is indicated. (author)

  12. Application of avalanche photodiode for soft X-ray pulse-height analyses in the HT-7 tokamak

    CERN Document Server

    Shi Yue Jiang; Hu Li Qun; Sun Yan Jun; LiuSheng; Ling Bil

    2002-01-01

    An avalanche photodiode (APD) has been used as a soft X-ray pulse-height analysis system for the measurement of the electron temperature on the HT-7 tokamak. The experimental results obtained with the APD, despite its inferior energy resolution, show little difference compared to a conventional high energy-resolution Si(Li) detector. Both numerical analysis and experimental results prove that the APD is good enough for electron temperature measurement in tokamaks.

  13. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

    With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such kinds of workloads, mainly due to the very low spatial and temporal locality of their irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, built on top of three pillars that we believe are fundamental to support these workloads. First, we offer transparent hardware support for a Partitioned Global Address Space (PGAS) to provide a large globally-shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the latency tolerance required by accessing global memory, which potentially resides in a remote node. Finally, we devise hardware support for inter-thread synchronization across the whole global address space. We first model the performance by using an analytical model that takes into account the main architecture and application characteristics. We then describe the hardware design of the proposed custom architectural building blocks that provide support for the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture's performance scalability for different configurations of the whole system.
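
    The latency-tolerance argument above can be made concrete with a back-of-the-envelope model: if a remote PGAS access costs L cycles and a thread performs C cycles of work between accesses, a core needs roughly L/C + 1 hardware threads to stay busy. The figures below are invented for illustration, not measurements from the FPGA prototype.

        # Minimal latency-hiding estimate for a fine-grained multi-threaded node.
        remote_latency_cycles = 600.0   # assumed cost of one remote PGAS access
        work_between_accesses = 40.0    # assumed compute cycles per access

        # While one thread waits on remote memory, the others keep the core busy.
        threads_needed = remote_latency_cycles / work_between_accesses + 1
        print(f"~{threads_needed:.0f} threads per core to hide remote latency")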

  14. Application of LiDAR technology in analyses of the topography of Margum/Morava and Kulič

    Directory of Open Access Journals (Sweden)

    Ivanišević Vujadin

    2012-01-01

    Full Text Available Roman Margum and the Mediaeval town of Morava, situated on the Orašje site in Dubravica at the confluence of the Velika Morava and the Danube, could not be analysed thoroughly in the past because of the damage caused by river bed displacements and soil erosion on the one hand, and the dense vegetation growing on such moist terrain on the other. Archaeological research has so far failed to produce even a site plan. Available data on this important site are contradictory to a considerable extent, so the information obtainable from the written and cartographic sources needed to be confronted with the archaeological evidence and, especially, with the recent LiDAR scanning of the terrain, conducted within the scope of the Archaeo-Landscapes Europe Project. Among the most important plans of the confluence area are those left by Marsigli in the 18th and Kanitz in the 19th century. Felix Kanitz, the famous Balkan traveller, also provided us with a textual description of his visit to the site in 1887. Our analyses of the two plans have revealed a number of inaccuracies. Through analyses of the obtained LiDAR scans, however, the preserved area of the two settlements has been clearly demarcated, measuring 7-8 hectares, and the eastern edge of the Roman agglomeration - already presumed in the course of the 2011 excavations - was confirmed. Most probably it was the eastern rampart of the Roman fortification. Apart from this, the purpose of a canal stretching along the whole plateau, mentioned by Kanitz, has been established. Given that to the east of the canal there was the presumably Roman rampart, and to the west of it the recently excavated ruins of Roman buildings, the canal itself must have been of a more recent date. Bearing in mind the established vertical stratigraphy of the site, we conclude that it was in fact a Mediaeval defence trench. The topography of the nearby fort of Kulič has been studied as well.

  15. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Directory of Open Access Journals (Sweden)

    Adamowicz Krzysztof

    2017-03-01

    Full Text Available In the last three decades, forecasting the bankruptcy of enterprises has been an important and difficult problem, which has served as an impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are being used increasingly often. For this reason an important scientific problem is the indication of an appropriate model or group of models to prepare forecasts for a given branch of industry. Research has therefore been conducted to select a model of Multiple Discriminant Analysis (MDA) best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. The effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be labelled accidental; that model is not suited to predicting financial changes in wood sector companies in Poland.
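
    To make the discriminant-analysis idea concrete, the sketch below trains a linear discriminant classifier on fabricated financial-ratio data. It is a generic illustration of the MDA technique, not a reconstruction of any of the ten Polish models assessed in the study.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        # Fabricated ratios (liquidity, profitability, debt) for 100 firms:
        # label 1 = survived, label 0 = went bankrupt.
        healthy  = rng.normal([1.8, 0.08, 0.45], 0.25, size=(50, 3))
        bankrupt = rng.normal([0.9, -0.02, 0.75], 0.25, size=(50, 3))
        X = np.vstack([healthy, bankrupt])
        y = np.array([1] * 50 + [0] * 50)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        firm = [[1.1, 0.01, 0.70]]            # a hypothetical wood-sector firm
        print("predicted class:", lda.predict(firm)[0])
        print("discriminant coefficients:", np.round(lda.coef_[0], 2))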

  16. Use of Speech Analyses within a Mobile Application for the Assessment of Cognitive Impairment in Elderly People.

    Science.gov (United States)

    Konig, Alexandra; Satt, Aharon; Sorin, Alex; Hoory, Ran; Derreumaux, Alexandre; David, Renaud; Robert, Phillippe H

    2018-01-01

    Various types of dementia and Mild Cognitive Impairment (MCI) are manifested as irregularities in human speech and language, which have proven to be strong predictors of disease presence and progression. Therefore, automatic speech analytics provided by a mobile application may be a useful tool for providing additional indicators for the assessment and detection of early-stage dementia and MCI. 165 participants (subjects with subjective cognitive impairment (SCI), MCI patients, and Alzheimer's disease (AD) and mixed dementia (MD) patients) were recorded with a mobile application while performing several short vocal cognitive tasks during a regular consultation. These tasks included verbal fluency, picture description, counting down and a free speech task. The voice recordings were processed in two steps: in the first step, vocal markers were extracted using speech signal processing techniques; in the second, the vocal markers were tested to assess their 'power' to distinguish between SCI, MCI, AD and MD. The second step included training automatic classifiers for detecting MCI and AD, based on machine learning methods, and testing the detection accuracy. The fluency and free speech tasks obtained the highest accuracy rates in classifying AD vs. MD vs. MCI vs. SCI. Using the data, we demonstrated classification accuracy as follows: SCI vs. AD = 92%; SCI vs. MD = 92%; SCI vs. MCI = 86%; and MCI vs. AD = 86%. Our results indicate the potential value of vocal analytics and the use of a mobile application for accurate automatic differentiation between SCI, MCI and AD. This tool can provide the clinician with meaningful information for the assessment and monitoring of people with MCI and AD based on a non-invasive, simple and low-cost method. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
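
    A minimal sketch of the two-step pipeline described above - vocal-marker extraction followed by classification - is given below. The features (energy, zero-crossing rate, spectral centroid), the synthetic "recordings" and the two-class labels are placeholders, not the markers or patient data used in the study.

        import numpy as np
        from sklearn.svm import SVC

        def vocal_markers(signal, fs):
            # Toy stand-ins for vocal markers extracted from one recording.
            energy = float(np.mean(signal ** 2))
            zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
            spec = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            centroid = float(np.sum(freqs * spec) / np.sum(spec))
            return [energy, zcr, centroid]

        rng = np.random.default_rng(1)
        fs = 16_000
        # Placeholder corpus: 40 synthetic one-second "recordings";
        # label 0 = SCI-like, label 1 = MCI-like.
        X = [vocal_markers(rng.normal(0.0, 1.0 + 0.3 * lab, fs), fs)
             for lab in (0, 1) for _ in range(20)]
        y = [0] * 20 + [1] * 20

        clf = SVC(kernel="rbf").fit(X, y)
        print("training accuracy:", clf.score(X, y))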

  17. A practical approach for calculating reliable cost estimates from observational data: application to cost analyses in maternal and child health.

    Science.gov (United States)

    Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M

    2013-08-01

    Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60% of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have a very different cost-to-charge markup (CCR = 0.70) from the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges ($US14,696) grossly overestimated actual cost compared with hospital-wide ($US3,498) and department-level ($US4,986) CCR adjustments. However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs
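
    The charge-to-cost conversion at the heart of this approach is simple arithmetic once the CCRs are linked: multiply each department's charges by its cost-to-charge ratio and sum. The sketch below reuses the two CCRs quoted above (0.70 for nursery/labor/delivery, 0.29 for the hospital-wide average) with invented charge figures; the 0.25 ratio for the remaining departments is likewise an assumption.

        # Hypothetical itemized charges for one birth hospitalization (US$).
        charges = {"nursery_labor_delivery": 9000.0, "other_departments": 5000.0}

        # Department-specific CCRs; 0.70 is quoted in the abstract, 0.25 is invented.
        dept_ccr = {"nursery_labor_delivery": 0.70, "other_departments": 0.25}

        dept_cost = sum(charges[d] * dept_ccr[d] for d in charges)
        hospital_wide_cost = sum(charges.values()) * 0.29   # hospital-wide CCR

        print(f"department-level estimate: ${dept_cost:,.0f}")           # $7,550
        print(f"hospital-wide estimate:    ${hospital_wide_cost:,.0f}")  # $4,060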

  18. Development of the temperature field at the WWER-440 core outlet monitoring system and application of the data analyses methods

    International Nuclear Information System (INIS)

    Spasova, V.; Georgieva, N.; Haralampieva, Tz.

    2001-01-01

    On-line internal reactor monitoring by 216 thermocouples, located at the reactor core outlet, is carried out during power operation of WWER-440 Units 1 and 2 at Kozloduy NPP. Automatic monitoring of the technology process is performed by IB-500MA, which collects and performs initial data processing (discretisation and conversion of analogue signals into digital form). The paper also presents the results and analyses of power distribution monitoring during the past 21st and the current 22nd fuel cycles at Kozloduy NPP Unit 1, using the archiving system capacity and related software. The possibility to perform operational assessment and analysis of the power distribution in the reactor core at each point of the fuel cycle is checked by comparison of the neutron-physical calculation results with reactor coolant system parameters. The paper shows that the processing and analysis of the significant amount of data accumulated in the archive files increases the accuracy and reliability of power distribution monitoring in the reactor core at each moment of the fuel cycle of the WWER-440 reactors at Kozloduy NPP.

  19. Thermodynamic and thermoeconomic analyses of a trigeneration (TRIGEN) system with a gas-diesel engine: Part II - An application

    International Nuclear Information System (INIS)

    Balli, Ozgur; Aras, Haydar; Hepbasli, Arif

    2010-01-01

    This paper is Part 2 of a study on the thermodynamic and thermoeconomic analyses of a trigeneration (TRIGEN) system with a gas-diesel engine. Part 1 provided the thermodynamic and thermoeconomic methodologies for such a comprehensive analysis, while this paper applies the developed methodology to an actual TRIGEN system with a rated output of 6.5 MW from a gas-diesel engine installed in the Eskisehir Industry Estate Zone, Turkey. Energy and exergy efficiencies, equivalent electrical efficiency, the Public Utility Regulatory Policies Act (PURPA) efficiency, fuel energy saving ratio, fuel exergy saving ratio and other thermodynamic performance parameters are determined for the TRIGEN system. The energy, exergy, PURPA and equivalent electrical efficiencies of the entire system are found to be 58.97%, 36.13%, 45.7% and 48.53%, respectively. For the whole system and its components, exergetic cost allocations and various exergoeconomic performance parameters are calculated using an exergoeconomic analysis based on the specific exergy costing (SPECO) method. The specific unit exergetic costs of the net electrical power, the heat energy in the Factory Heating Center (FHC), the heat energy in the Painting Factory Heating (PFH) and the chilled water in the absorption chiller (ACh) produced by the TRIGEN system are found to be 45.94 US$/GJ, 29.98 US$/GJ, 42.42 US$/GJ and 167.52 US$/GJ, respectively.

  20. Phenomenological analyses and their application to the Defense Waste Processing Facility probabilistic safety analysis accident progression event tree. Revision 1

    International Nuclear Information System (INIS)

    Kalinich, D.A.; Thomas, J.K.; Gough, S.T.; Bailey, R.T.; Kearnaghan, D.P.

    1995-01-01

    In the Defense Waste Processing Facility (DWPF) Safety Analysis Reports (SARs) for the Savannah River Site (SRS), risk-based perspectives have been included per US Department of Energy (DOE) Order 5480.23. The NUREG-1150 Level 2/3 Probabilistic Risk Assessment (PRA) methodology was selected as the basis for calculating facility risk. The backbone of this methodology is the generation of an Accident Progression Event Tree (APET), which is solved using the EVNTRE computer code. To support the development of the DWPF APET, deterministic modeling of accident phenomena was necessary. From these analyses, (1) accident progressions were identified for inclusion into the APET; (2) branch point probabilities and any attendant parameters were quantified; and (3) the radionuclide releases to the environment from accidents were determined. The phenomena of interest for accident progressions included explosions, fires, a molten glass spill, and the response of the facility confinement system during such challenges. A variety of methodologies, from hand calculations to large system-model codes, were used in the evaluation of these phenomena
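
    For readers unfamiliar with event-tree quantification: an APET is evaluated by walking every path from the initiating event through the branch points, multiplying the branch probabilities along each path, and accumulating the frequency of each end state. The toy tree below (two questions with made-up probabilities, not DWPF values) shows the mechanics that codes such as EVNTRE perform at far larger scale.

        from itertools import product

        # Toy accident progression event tree: each question maps branch
        # names to conditional probabilities (illustrative numbers only).
        questions = [
            {"explosion": 0.02, "no_explosion": 0.98},
            {"confinement_fails": 0.10, "confinement_holds": 0.90},
        ]

        end_states = {}
        for path in product(*(q.items() for q in questions)):
            prob = 1.0
            names = []
            for branch, p in path:
                prob *= p
                names.append(branch)
            end_states[" / ".join(names)] = prob

        for state, prob in sorted(end_states.items(), key=lambda kv: -kv[1]):
            print(f"{state:40s} {prob:.4f}")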

  1. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Directory of Open Access Journals (Sweden)

    Kevin D. Cashman

    2017-05-01

    Full Text Available Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.
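
    The dose-response inversion behind such estimates can be sketched as follows: fit a curvilinear model of serum 25(OH)D against the natural log of total vitamin D intake, then solve for the intake at which the lower bound of the prediction interval reaches the target threshold. The data below are synthetic and the ln-intake model form is an assumption made for illustration, similar in spirit to, but not a reproduction of, the IPD meta-regression in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic winter data: total intake (ug/day) and serum 25(OH)D (nmol/L).
        intake = rng.uniform(2.0, 40.0, 300)
        serum = 10.0 + 18.0 * np.log(intake) + rng.normal(0.0, 8.0, 300)

        # Fit 25(OH)D = a + b * ln(intake) by least squares.
        A = np.column_stack([np.ones_like(intake), np.log(intake)])
        (a, b), *_ = np.linalg.lstsq(A, serum, rcond=None)
        sd = np.std(serum - A @ np.array([a, b]))

        # Intake keeping ~97.5% of the population above a 50 nmol/L threshold:
        # the lower 2.5th percentile of predicted serum must reach the target.
        target = 50.0
        intake_needed = np.exp((target + 1.96 * sd - a) / b)
        print(f"estimated intake: {intake_needed:.1f} ug/day")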

  2. Potential applicability of fuzzy set theory to analyses of containment response and uncertainty for postulated severe accidents

    International Nuclear Information System (INIS)

    Chun, M.H.; Ahn, K.I.

    1991-01-01

    An important issue faced by contemporary risk analysts of nuclear power plants is how to deal with the uncertainties that arise in each phase of a probabilistic risk assessment. The major uncertainty addressed in this paper is the one that arises in the accident-progression event trees (APETs), which treat the physical processes affecting the core after an initiating event occurs. Recent advances in the theory of fuzzy sets make it possible to analyze the uncertainty related to the complex physical phenomena that may occur during a severe accident of a nuclear power plant by means of fuzzy set or possibility concepts. The main purpose of this paper is to present the results of an assessment of the potential applicability of fuzzy set theory to the uncertainty analysis of APETs, as a possible alternative to the procedure used in the most recent risk assessments.

  3. Micro-CT and FE-SEM enamel analyses of calcium-based agent application after bleaching.

    Science.gov (United States)

    Gomes, Mauricio Neves; Rodrigues, Flávia Pires; Silikas, Nick; Francci, Carlos Eduardo

    2018-03-01

    The objective of the present study is to evaluate the effects of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) on bleached enamel. A bleaching agent (35% hydrogen peroxide) was applied, 4 × 8 min, on premolar teeth (n = 8). A CPP-ACP paste was then applied for 7 days. Prior to and after treatment, microtomography images were obtained and 3D regions of interest (ROIs) were selected from the outer enamel, extending to a depth of 110.2 μm. The CT parameters structure thickness (St.Th), structure separation (St.Sp), and fragmentation index (Fr.I.) were calculated for each ROI. Data were submitted to paired t tests at a 95% confidence level. The samples were evaluated at 3000× to 100,000× magnification. Quantitative analysis of the enamel mineral content was also determined by SEM EDX. There was a significant increase in structure thickness and calcium content. The phosphorus content increased after bleaching. There was also a decreased separation and fragmentation index in the outer enamel to a depth of 56.2 μm (p < 0.05). Spaces between the hydroxyapatite crystals appeared around the enamel prisms 7 days after the CPP-ACP application. The application of CPP-ACP provides a compact structure on the enamel's outer surface within 7 days, due to calcium deposition. CT parameters appear to be a useful tool for future mineralization and remineralization studies. CPP-ACP neutralizes adverse effects on the enamel surface when applied during the week after bleaching and minimizes side effects of the bleaching treatment by producing a more compact structure.

  4. X-ray diffraction and infrared spectroscopy analyses on the crystallinity of engineered biological hydroxyapatite for medical application

    Science.gov (United States)

    Poralan, G. M., Jr.; Gambe, J. E.; Alcantara, E. M.; Vequizo, R. M.

    2015-06-01

    Biological hydroxyapatite (BHAp) derived from thermally-treated fish bones was successfully produced. However, the obtained biological HAp was amorphous, making it unfavorable for medical application. Consequently, this research engineers the crystallinity of BHAp powders by the addition of CaCO3 and investigates the degree of crystallinity using XRD and IR spectroscopy. By XRD, the HAp powders with [Ca]/[P] ratios of 1.42, 1.46, 1.61 and 1.93 have degrees of crystallinity of 58.08, 72.13, 85.79 and 75.85%, and crystal sizes of 0.67, 0.74, 0.75 and 0.72 nm, respectively. The degree of crystallinity and the crystal size of the obtained calcium-deficient biological HAp powders increase as the [Ca]/[P] ratio approaches the stoichiometric ratio through the addition of CaCO3 as a source of Ca2+ ions. These results show the possibility of engineering the crystallinity and crystal size of biological HAp by the addition of CaCO3. Moreover, the splitting factor of the PO4 vibration matches the % crystallinity determined by XRD. Also, the area of the phosphate-substitution site of the PO4 vibration shows a linear relationship (R² = 0.994) with the crystal size calculated from XRD. It is worth noting that the crystallinity of the biological HAp with [Ca]/[P] ratios of 1.42 and 1.48 falls near the 60-70% range of highly resorbable HAp used in medical applications.
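
    The abstract does not state how crystal size was derived, but the standard route from XRD line broadening is the Scherrer equation, D = K·λ / (β·cos θ). The sketch below evaluates it with illustrative values (Cu Kα radiation and an arbitrarily chosen peak width), purely to show the calculation.

        import math

        def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
            # Scherrer equation: D = K * lambda / (beta * cos(theta)),
            # with beta the peak FWHM in radians and theta half the 2-theta angle.
            beta = math.radians(fwhm_deg)
            theta = math.radians(two_theta_deg / 2.0)
            return k * wavelength_nm / (beta * math.cos(theta))

        # Illustrative: Cu K-alpha (0.15406 nm), HAp (211) reflection near 31.8 deg.
        print(f"D = {scherrer_size(0.15406, 0.45, 31.8):.1f} nm")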

  5. New MRI technologies. Diffusion MRI and its application to functional neuroimaging and analyses of white matter integrity

    International Nuclear Information System (INIS)

    Kobayashi, Tetsuo

    2010-01-01

    Described are the technological aspects of MR diffusion-weighted imaging (MR-DWI), the principles of its measurement, and its application to imaging cerebral function and to aiding the quantitative diagnosis of brain diseases. The author explains the principles of the MR imaging process; the diffusion properties of water molecules, MR-DWI based on them, and DW-fMRI of the brain; MR diffusion tensor imaging (MR-DTI), its analysis and color display, and the tracking of white matter nerve fibers; the analysis of white matter lesions by such tracking; and a new tracking method at the chiasm of nerve fascicles. The usual fMRI reflects blood oxygen level dependent (BOLD) signals, whereas the recently prominent DW-fMRI reflects the volume changes of nerve cells concomitant with nerve activation, accompanied by apparent changes of the water diffusion coefficients inside and outside cells; these occur faster than BOLD signals, resulting in higher temporal and spatial resolution. However, DWI requires a high static magnetic field intensity such as 3 T. MR-DTI acquires the anisotropic diffusion of water molecules using the MR-DWI technique with 6 or more motion-probing gradients, making it possible to track the running directions of nerve fibers and capillary vessels; it is proposed as a useful means of specific fiber tracking in the white matter when displayed in 3 colors exhibiting the directions right/left (x axis, red), anterior/posterior (y, green) and upper/lower (z, blue). Recently, MR-DWI and MR-DTI have been found usable for pathogenic studies of brain diseases such as dementia. Tensor anisotropy is apparently lowered at the chiasm of nerve fascicles, a cause of tracking error, for which the authors have developed a new method using the similarity of directional vectors, rather than tensors, before and behind the chiasm. As these examples show, MRI technology continues to advance. (T.T.)
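
    Two of the DTI quantities mentioned above have compact closed forms: fractional anisotropy from the tensor eigenvalues, and the conventional RGB direction colour from the principal eigenvector. The sketch below uses an invented diffusion tensor purely for illustration.

```python
# Illustrative DTI computation: fractional anisotropy (FA) and the
# conventional direction colour. The tensor values are invented.
import numpy as np

D = np.array([[1.6, 0.1, 0.0],      # toy diffusion tensor, 10^-3 mm^2/s
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.3]])
evals, evecs = np.linalg.eigh(D)
l1, l2, l3 = evals[::-1]            # eigenvalues, sorted descending
md = evals.mean()                   # mean diffusivity
fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
             / (l1**2 + l2**2 + l3**2))

e1 = np.abs(evecs[:, -1])           # principal direction -> |x|, |y|, |z|
rgb = e1 / e1.max()                 # red = left/right, green = A/P, blue = S/I
print(f"FA = {fa:.2f}, direction colour (R,G,B) = {np.round(rgb, 2)}")
```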

  6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Science.gov (United States)

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from differing approaches to data analysis. Modelling strategies for pooling individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake-serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of the vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in the response of serum 25(OH)D to vitamin D intake. PMID:28481259
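
    The IPD approach contrasted with aggregate-data meta-regression above can be caricatured as a random-intercept model on pooled individual observations. The sketch below uses simulated data and assumed column names; the requirement calculation, solving for the intake at which roughly 97.5% of individuals exceed a target 25(OH)D, is a simplified stand-in for the study's modelling.

```python
# Minimal sketch of an IPD-level dose-response meta-regression:
# individual observations pooled across trials, random intercept per
# trial. Data and column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, trials = 400, 7
df = pd.DataFrame({
    "trial": rng.integers(0, trials, n),
    "intake": rng.uniform(0, 30, n),   # vitamin D intake, ug/day
})
trial_effect = rng.normal(0, 4, trials)[df["trial"]]
df["s25ohd"] = 30 + 1.2 * df["intake"] + trial_effect + rng.normal(0, 8, n)

# Random-intercept model: serum 25(OH)D as a function of intake.
model = smf.mixedlm("s25ohd ~ intake", df, groups=df["trial"]).fit()
print(model.params)

# Intake so that ~97.5% of individuals exceed a 50 nmol/L target:
# solve prediction - 1.96 * residual SD = 50 (ignores between-trial
# variance, which a full analysis would include).
resid_sd = np.sqrt(model.scale)
b0, b1 = model.params["Intercept"], model.params["intake"]
intake_req = (50 + 1.96 * resid_sd - b0) / b1
print(f"estimated intake requirement: {intake_req:.1f} ug/day")
```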

  7. Application of artificial neural network method to exergy and energy analyses of fluidized bed dryer for potato cubes

    International Nuclear Information System (INIS)

    Azadbakht, Mohsen; Aghili, Hajar; Ziaratban, Armin; Torshizi, Mohammad Vahedi

    2017-01-01

    Drying of the samples was performed at inlet temperatures of 45, 50, and 55 °C, air velocities of 3.2, 6.8, and 9.1 m s⁻¹, and bed depths of 1.5, 2.2, and 3 cm. The effects of these parameters on energy utilization, energy efficiency, energy utilization ratio, exergy loss, and exergy efficiency were evaluated. Furthermore, an artificial neural network (ANN) was employed to predict the energy and exergy parameters, and the thermodynamic drying process was simulated using the ANN created. A network was constructed from learning algorithms and transfer functions that could predict, with good accuracy, the exergy and energy parameters related to the drying process. The results revealed that energy utilization, efficiency, and utilization ratio increased with increasing air velocity and bed depth; energy utilization and efficiency also increased with increasing temperature, whereas the energy utilization ratio decreased as temperature rose. It was also found that exergy loss and efficiency increased with increasing air velocity, temperature, and bed depth. Finally, the results of the statistical analyses indicated that neural networks can be utilized in intelligent drying processes, which have a large share of energy utilization in the food industry. - Highlights: • Energy utilization increased by increasing temperature, air velocity and depth of the bed. • Exergy loss increased with increasing the air velocity, temperature and depth of the bed. • Prediction by a trained neural network is faster than usual mathematical models. • ANN is a suitable method to predict the energy and exergy in various driers.
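
    A minimal stand-in for the ANN described above is a small feed-forward regressor mapping (temperature, air velocity, bed depth) to an energy or exergy quantity. The training data and response function here are synthetic; only the input ranges follow the abstract.

```python
# Hedged sketch of the ANN idea: a small feed-forward network trained
# on synthetic placeholder data, not the study's measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform([45, 3.2, 1.5], [55, 9.1, 3.0], size=(200, 3))  # T, v, depth
# Invented response standing in for, e.g., energy utilization.
y = 0.05 * X[:, 0] + 0.3 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.1, 200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=1),
)
model.fit(X, y)
print(model.predict([[50.0, 6.8, 2.2]]))  # mid-level drying conditions
```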

  8. The applicability of detailed process for neutron resonance absorption to neutronics analyses in LWR next generation fuels to extend burnup

    International Nuclear Information System (INIS)

    Kameyama, Takanori; Nauchi, Yasushi

    2004-01-01

    Neutronics analyses with detailed processing of neutron resonance absorption in LWR next-generation UOX and MOX fuels with extended burnup were performed based on neutron transport and burnup calculations. In the detailed processing, an ultra-fine-energy nuclear library and collision probabilities between neutrons and U and Pu nuclides (actinide nuclides) are utilized for two-dimensional geometry. In the usual simple processing (narrow resonance approximation), shielding factors and compensation equations for neutron resonance absorption are utilized. The results of the detailed and simple processing were compared to clarify where the detailed processing is needed. The two processing methods produced a difference in the neutron multiplication factor of 0.5% at the beginning of irradiation, but the difference became smaller as burnup increased and was not significant at high burnup. The nuclide compositions of the fuel rods for the main actinide nuclides differed little between the two methods, except for Cm isotopes, since the neutron absorption rate of 244Cm differed. The detailed processing is needed to evaluate the neutron emission rate in spent fuels. In the fuel assemblies, the distributions of rod power rates differed by less than 0.5%, and the peak fuel rod rates were almost the same for the two methods at the beginning of irradiation, when the peak rate is largest during the irradiation. The simple processing is therefore adequate for safety evaluation based on the peak rod power rate. The difference in local power densities in fuel pellets became larger as burnup increased, since the neutron absorption rate of 238U in the peripheral region of the pellets differed significantly between the two methods. The detailed processing is needed to evaluate fuel behavior at high burnup. (author)

  9. On-site phytoremediation applicability assessment in Alur Ilmu, Universiti Kebangsaan Malaysia based on spatial and pollution removal analyses.

    Science.gov (United States)

    Mahmud, Mohd Hafiyyan; Lee, Khai Ern; Goh, Thian Lai

    2017-10-01

    The present paper aims to assess phytoremediation performance based on the pollution removal efficiency of the highly polluted region of the Alur Ilmu urban river, for its applicability to on-site treatment. Thirteen stations along Alur Ilmu were selected to produce thematic maps through spatial distribution analysis based on six water quality parameters of Malaysia's Water Quality Index (WQI) for dry and rainy seasons. The maps generated were used to identify the highly polluted region for the phytoremediation applicability assessment. Four free-floating plants were tested in treating water samples from the highly polluted region under three different conditions, namely controlled, aerated and normal treatments. The selected free-floating plants were water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes), rose water lettuce (Pistia sp.) and pennywort (Centella asiatica). The results showed that Alur Ilmu was more polluted during the dry season than the rainy season based on the water quality analysis. During the dry season, four parameters were marked as polluted along Alur Ilmu, namely dissolved oxygen (DO), 4.72 mg/L (class III); ammoniacal nitrogen (NH3-N), 0.85 mg/L (class IV); total suspended solids (TSS), 402 mg/L (class V) and biological oxygen demand (BOD), 3.89 mg/L (class III), whereas two parameters were classed as polluted during the rainy season, namely total suspended solids (TSS), 571 mg/L (class V) and biological oxygen demand (BOD), 4.01 mg/L (class III). The thematic maps generated from spatial distribution analysis using the Kriging gridding method showed that the most highly polluted region was recorded at station AL 5. Hence, water samples were taken from this station for the pollution removal analysis. All the free-floating plants were able to reduce TSS and COD in less than 14 days. However, water hyacinth showed the least detrimental effect from the phytoremediation process compared to the other free-floating plants, thus making it a suitable candidate for on-site treatment.
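
    The removal-efficiency figure underlying such phytoremediation assessments is a simple percent reduction; a toy calculation, with invented concentrations, might look like this:

```python
# Simple sketch of a pollutant-removal-efficiency calculation: percent
# reduction of a water-quality parameter over a treatment period.
# Values are illustrative, not the study's measurements.
def removal_efficiency(initial: float, final: float) -> float:
    """Percent removal of a pollutant concentration."""
    return 100.0 * (initial - final) / initial

tss_day0, tss_day14 = 402.0, 55.0   # TSS in mg/L, hypothetical
print(f"TSS removal: {removal_efficiency(tss_day0, tss_day14):.1f} %")
```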

  10. Applications of Micro-CT scanning in medicine and dentistry: Microstructural analyses of a Wistar Rat mandible and a urinary tract stone

    Science.gov (United States)

    Latief, F. D. E.; Sari, D. S.; Fitri, L. A.

    2017-08-01

    High-resolution tomographic imaging by means of X-ray micro-computed tomography (μCT) has been widely utilized for morphological evaluations in dentistry and medicine. The use of μCT follows a standard procedure: image acquisition, reconstruction, processing, evaluation using image analysis, and reporting of results. This paper discusses methods of μCT using a specific scanning device, the Bruker SkyScan 1173 High Energy Micro-CT. We present a description of the general workflow, information on terminology for the measured parameters and corresponding units, and further analyses that can potentially be conducted with this technology. Brief qualitative and quantitative analyses, including basic image processing (VOI selection and thresholding) and measurement of several morphometrical variables (total VOI volume, object volume, percentage of total volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, total porosity), were conducted on two samples, the mandible of a Wistar rat and a urinary tract stone, to illustrate the abilities of this device and its accompanying software package. The results of these analyses for both samples are reported, along with a discussion of the types of analyses that are possible using digital images obtained with a μCT scanning device, paying particular attention to non-diagnostic ex vivo research applications.
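
    The basic processing steps named above, VOI selection, thresholding and simple morphometry, can be sketched on a synthetic volume. This only illustrates the quantities involved, not the scanner software's workflow, and the voxel size is an assumption.

```python
# Rough sketch of VOI selection, thresholding and simple morphometry
# on a synthetic 3-D volume standing in for a micro-CT scan.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
volume = rng.normal(100, 20, size=(64, 64, 64))   # stand-in CT volume
volume[16:48, 16:48, 16:48] += 120                # a dense "object" region

voi = volume[8:56, 8:56, 8:56]                    # select a volume of interest
mask = voi > threshold_otsu(voi)                  # global Otsu threshold

voxel_volume_mm3 = 0.01 ** 3                      # assumed 10 um voxels
object_volume = mask.sum() * voxel_volume_mm3
total_volume = mask.size * voxel_volume_mm3
print(f"object volume: {object_volume:.4f} mm^3 "
      f"({100 * object_volume / total_volume:.1f}% of VOI)")
```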

  11. Assessing residential buildings value in Spain for risk analyses. Application to the landslide hazard in the Autonomous Community of Valencia

    Science.gov (United States)

    Cantarino, I.; Torrijo, F. J.; Palencia, S.; Gielen, E.

    2014-05-01

    This paper proposes a method of valuing the stock of residential buildings in Spain as the first step in assessing possible damage caused to them by natural hazards. For the purposes of the study we had access to the SIOSE (the Spanish Land Use and Cover Information System), a high-resolution land-use model, as well as to a report on the financial valuation of this type of building throughout Spain. Using dasymetric disaggregation processes and GIS techniques, we developed a geolocalized method of obtaining this information, which constitutes the exposure variable in the general risk assessment formula. If hazard maps and risk assessment methods - the other variables - are available, the risk value can easily be obtained. An example of its application is given in a case study that assesses landslide risk over the entire 23 200 km2 of the Valencia Autonomous Community (NUT2), the results of which are analyzed by municipal areas (LAU2) for the years 2005 and 2009.
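
    Dasymetric disaggregation in its simplest area-weighted form fits in a few lines; the polygon identifiers and values below are invented for illustration.

```python
# Toy sketch of dasymetric disaggregation: a municipal total is
# redistributed to land-use polygons in proportion to residential area.
# Names and numbers are invented.
municipal_value = 1_250_000_000.0   # total residential building value (EUR)

# (polygon id, residential area in m^2) from a land-use model such as SIOSE
polygons = [("A", 120_000.0), ("B", 45_000.0), ("C", 300_000.0)]
total_area = sum(area for _, area in polygons)

for pid, area in polygons:
    share = area / total_area
    print(f"polygon {pid}: {share * municipal_value:,.0f} EUR")
```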

  12. A methodology for evaluating weighting functions using MCNP and its application to PWR ex-core analyses

    International Nuclear Information System (INIS)

    Pecchia, Marco; Vasiliev, Alexander; Ferroukhi, Hakim; Pautz, Andreas

    2017-01-01

    Highlights: • Evaluation of neutron source importance for a given tally. • Assessment of ex-core detector response plus its uncertainty. • Direct use of neutron track evaluated by a Monte Carlo neutron transport code. - Abstract: The ex-core neutron detectors are commonly used to control reactor power in light water reactors. Therefore, it is relevant to understand the importance of a neutron source to the ex-core detectors response. In mathematical terms, this information is conveniently represented by the so called weighting functions. A new methodology based on the MCNP code for evaluating the weighting functions starting from the neutron history database is presented in this work. A simultaneous evaluation of the weighting functions in a user-given Cartesian coverage mesh is the main advantage of the method. The capability to generate weighting functions simultaneously in both spatial and energy ranges is the innovative part of this work. Then, an interpolation tool complements the methodology, allowing the generation of weighting functions up to the pin-by-pin fuel segment, where a direct evaluation is not possible due to low statistical precision. A comparison to reference results provides a verification of the methodology. Finally, an application to investigate the role of ex-core detectors spatial location and core burnup for a Swiss nuclear power plant is provided.
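
    One way to picture the mesh-based evaluation described above: bin the source sites of neutrons that score in the detector on a Cartesian mesh and normalize by the source distribution. The toy detection model below is an assumption for illustration, not the paper's MCNP-based procedure.

```python
# Conceptual sketch: an unnormalized spatial weighting function from
# binned neutron source sites. Track data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
xyz = rng.uniform(-150, 150, size=(n, 3))   # source sites, cm
# Toy model: detection probability decays with distance from a detector
# placed at x = +150 cm.
score = rng.random(n) < np.exp(-np.abs(xyz[:, 0] - 150) / 60)

edges = [np.linspace(-150, 150, 31)] * 3    # 30x30x30 Cartesian mesh
hits, _ = np.histogramdd(xyz[score], bins=edges)
src, _ = np.histogramdd(xyz, bins=edges)

with np.errstate(invalid="ignore", divide="ignore"):
    weighting = np.where(src > 0, hits / src, 0.0)  # per-cell importance
print(weighting.max(), weighting.mean())
```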

  13. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better, providing new details on the properties and morphology of the ice pack across basin scales. For example, the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  14. On the applicability of probabilistic analyses to assess the structural reliability of materials and components for solid-oxide fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lara-Curzio, Edgar [ORNL; Radovic, Miladin [Texas A& M University; Luttrell, Claire R [ORNL

    2016-01-01

    The applicability of probabilistic analyses to assess the structural reliability of materials and components for solid-oxide fuel cells (SOFC) is investigated by measuring the failure rate of Ni-YSZ when subjected to a temperature gradient and comparing it with that predicted using the Ceramics Analysis and Reliability Evaluation of Structures (CARES) code. A temperature gradient was chosen to induce stresses because temperature gradients resulting from gas flow patterns generate stresses during SOFC operation that are likely to control the structural reliability of cell components. The magnitude of the predicted failure rate was found to be comparable to that determined experimentally, which suggests that such probabilistic analyses are appropriate for predicting the structural reliability of materials and components for SOFCs. Considerations for performing more comprehensive studies are discussed.
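
    The probabilistic idea behind reliability codes of this kind can be illustrated with the generic two-parameter Weibull failure probability for a brittle component under stress; the modulus and scale values below are invented, and this is not the CARES implementation.

```python
# Generic Weibull sketch of brittle-failure probability under stress.
# Parameter values are invented placeholders.
import numpy as np

def weibull_failure_probability(stress_mpa, m=10.0, sigma0_mpa=150.0):
    """P_f = 1 - exp(-(sigma/sigma0)^m) for a unit effective volume."""
    return 1.0 - np.exp(-(np.asarray(stress_mpa) / sigma0_mpa) ** m)

for s in (80.0, 120.0, 150.0):
    print(f"stress {s:5.1f} MPa -> P_f = {weibull_failure_probability(s):.3f}")
```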

  15. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
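
    The Hausman comparison has a compact generic form: a quadratic form in the difference of two estimates, referred to a chi-squared distribution. Below is a sketch under placeholder inputs, not diffusion-model estimates.

```python
# Generic Hausman-type misspecification statistic:
# H = (b1 - b2)' [Cov1 - Cov2]^{-1} (b1 - b2), df = len(b1).
# Inputs are invented placeholders.
import numpy as np
from scipy import stats

def hausman(b1, b2, cov1, cov2):
    """Return the Hausman statistic and its chi-squared p-value."""
    d = b1 - b2
    h = d @ np.linalg.inv(cov1 - cov2) @ d
    return h, stats.chi2.sf(h, df=len(d))

b_eff = np.array([0.52, 1.10])        # efficient estimator
b_rob = np.array([0.55, 1.18])        # robust/consistent estimator
cov_eff = np.diag([0.002, 0.004])
cov_rob = np.diag([0.003, 0.006])
print(hausman(b_rob, b_eff, cov_rob, cov_eff))
```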

  16. A review of modern advances in analyses and applications of single-phase natural circulation loop in nuclear thermal hydraulics

    International Nuclear Information System (INIS)

    Basu, Dipankar N.; Bhattacharyya, Souvik; Das, P.K.

    2014-01-01

    Highlights: • Comprehensive review of state-of-the-art on single-phase natural circulation loops. • Detailed discussion on growth in solar thermal system and nuclear thermal hydraulics. • Systematic development in scaling methodologies for fabrication of test facilities. • Importance of numerical modeling schemes for stability assessment using 1-D codes. • Appraisal of current trend of research and possible future directions. - Abstract: A comprehensive review of single-phase natural circulation loops (NCLs) is presented here. Relevant literature reported since the latter part of the 1980s has been meticulously surveyed, with occasional obligatory reference to a few pioneering studies originating prior to that period, summarizing the key observations and the present trend of research. Development of the concept of buoyancy-induced flow is discussed, with an introduction to flow initiation in an NCL due to instability. Detailed discussion of modern advancements in important application areas like solar thermal systems and nuclear thermal hydraulics is presented, with separate analyses for various reactor designs working on natural circulation. Identification of scaling criteria for designing lab-scale experimental facilities has gone through a series of modifications. A systematic analysis of the same is presented, considering the state-of-the-art knowledge base. Different approaches have been followed for modeling single-phase NCLs, including simplified Lorenz systems mostly for toroidal loops, 1-D computational modeling for both steady-state and stability characterization, and 3-D commercial system codes for better flow visualization. A methodical review of the relevant studies is presented following a systematic approach, to assess the gradual progression in understanding of the practical system. A brief appraisal of current research interests is reported, including the use of nanofluids for fluid property augmentation and marine reactors subjected to rolling waves.

  18. Permission-based separation logic for multi-threaded Java programs

    NARCIS (Netherlands)

    Amighi, A.; Haack, Christian; Huisman, Marieke; Hurlin, C.

    This paper presents a program logic for reasoning about multithreaded Java-like programs with concurrency primitives such as dynamic thread creation, thread joining and reentrant object monitors. The logic is based on concurrent separation logic. It is the first detailed adaptation of concurrent separation logic to a Java-like language.

  19. Investigating multi-thread utilization as a software defence mechanism against side channel attacks

    CSIR Research Space (South Africa)

    Frieslaar, Ibraheem

    2016-11-01

    Full Text Available This research investigates the use of additional threads to leak out information at critical points in the cryptographic algorithm and confuse the attacker. It demonstrates that this approach is capable of outperforming the known countermeasures of hiding and shuffling in terms of preventing the secret information from being recovered.

  20. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is currently observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of ~ 200 Hz for an event size of ~ 1.5 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the results into different raw data files according to the trigger decision. The data files are subsequently moved to the central mass storage facility at CERN. The system currently in production has been commissioned in 2007 and has been working smoothly since then. It is however based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size that is foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limi...

  1. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment observes proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1 in 2011. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted average rate of ~ 400 Hz for an event size of ~1.2 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the data into different raw files according to the trigger decision. The system currently in production is based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limits the possibility of performing additional CPU-intensive tasks. Therefore, a novel design able to exploit the full power of multi-core architecture is needed. The main challen...

  2. LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework

    NARCIS (Netherlands)

    Bezemer, M.M.; Wilterdink, R.J.W.; Welch, Peter H.; Sampson, Adam T.; Pedersen, Jan B.; Kerridge, Jon M.; Broenink, Johannes F.; Barnes, Frederick R.M.

    Modern embedded systems have multiple cores available. The CTC++ library is not able to make use of these cores, so a new framework is required to control the robotic setups in our lab. This paper first looks into the available frameworks and compares them to the requirements for controlling the robotic setups in our lab.

  3. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will increase significantly with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPUs required, and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.

  4. Qualitative and Quantitative Information Flow Analysis for Multi-threaded Programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today’s information-based society, guaranteeing information security plays an important role in all aspects of life: governments, military, companies, financial information systems, web-based services etc. With the existence of Internet, Google, and shared-information networks, it is easier than

  5. Qualitative and quantitative information flow analysis for multi-thread programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today's information-based society, guaranteeing information security plays an important role in all aspects of life: communication between citizens and governments, military, companies, financial information systems, web-based services etc. With the increasing popularity of computer systems with

  7. FODEM: A Multi-Threaded Research and Development Method for Educational Technology

    Science.gov (United States)

    Suhonen, Jarkko; de Villiers, M. Ruth; Sutinen, Erkki

    2012-01-01

    Formative development method (FODEM) is a multithreaded design approach originally created to support the design and development of various types of educational technology innovations, such as learning tools and online study programmes. The threaded and agile structure of the approach provides flexibility to the design process. Intensive…

  8. Towards Fast Reverse Time Migration Kernels using Multi-threaded Wavefront Diamond Tiling

    KAUST Repository

    Malas, T.; Hager, G.; Ltaief, Hatem; Keyes, David E.

    2015-01-01

    Today’s high-end multicore systems are characterized by a deep memory hierarchy, i.e., several levels of local and shared caches, with limited size and bandwidth per core. The ever-increasing gap between the processor and memory speed will further

  9. Hardware Support for Fine-Grain Multi-Threading in LEON3

    Czech Academy of Sciences Publication Activity Database

    Daněk, Martin; Kafka, Leoš; Kohout, Lukáš; Sýkora, Jaroslav

    2011-01-01

    Roč. 4, č. 1 (2011), s. 27-34 ISSN 1844-9689 R&D Projects: GA MŠk 7E08013 Grant - others:European Commission(BE) FP7-ICT-215216 Keywords : multithreading * microthreading * SPARC * microarchitecture * FPGA Subject RIV: JC - Computer Hardware ; Software http://library.utia.cas.cz/separaty/2011/ZS/danek-0380861.pdf

  10. System, methods and apparatus for program optimization for multi-threaded processor architectures

    Science.gov (United States)

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.

  11. Hardware based redundant multi-threading inside a GPU for improved reliability

    Science.gov (United States)

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.

  12. Chroni - an Android Application for Geochronologists to Access Archived Sample Analyses from the NSF-Funded Geochron.Org Data Repository.

    Science.gov (United States)

    Nettles, J. J.; Bowring, J. F.

    2014-12-01

    NSF requires data management plans as part of funding proposals and geochronologists, among other scientists, are archiving their data and results to the public cloud archives managed by the NSF-funded Integrated Earth Data Applications, or IEDA. GeoChron is a database for geochronology housed within IEDA. The software application U-Pb_Redux developed at the Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES.org) at the College of Charleston provides seamless connectivity to GeoChron for uranium-lead (U-Pb) geochronologists to automatically upload and retrieve their data and results. U-Pb_Redux also manages publication-quality documents including report tables and graphs. CHRONI is a lightweight mobile application for Android devices that provides easy access to these archived data and results. With CHRONI, U-Pb geochronologists can view archived data and analyses downloaded from the Geochron database, or any other location, in a customizable format. CHRONI uses the same extensible markup language (XML) schema and documents used by U-Pb_Redux and GeoChron. Report Settings are special XML files that can be customized in U-Pb_Redux, stored in the cloud, and then accessed and used in CHRONI to create the same customized data display on the mobile device. In addition to providing geologists effortless and mobile access to archived data and analyses, CHRONI allows users to manage their GeoChron credentials, quickly download private and public files via a specified IEDA International Geo Sample Number (IGSN) or URL, and view specialized graphics associated with particular IGSNs. Future versions of CHRONI will be developed to support iOS compatible devices. CHRONI is an open source project under the Apache 2 license and is hosted at https://github.com/CIRDLES/CHRONI. We encourage community participation in its continued development.

  13. Microscopic analysis of the optoelectronic properties of semiconductor gain media for laser applications; Mikroskopische Analyse optoelektronischer Eigenschaften von Halbleiterverstaerkungsmedien fuer Laseranwendungen

    Energy Technology Data Exchange (ETDEWEB)

    Bueckers, Christina

    2010-12-03

    A microscopic many-particle theory is applied to model a wide range of semiconductor laser gain materials. The fundamental understanding of the gain medium and the underlying carrier interaction processes allows for the quantitative prediction of the optoelectronic properties governing laser performance. Detailed theory-experiment comparisons are shown for a variety of structures, demonstrating the application capabilities of the theoretical approach. The microscopically calculated material properties, in particular absorption, optical gain, luminescence and the intrinsic carrier losses due to radiative and Auger recombination, constitute the critical input for analysing and designing laser structures. On this basis, important system features such as laser wavelength or threshold behaviour become predictable. However, the theory is also used in a diagnostic fashion, e.g. to extract otherwise poorly known structural parameters. Thus, novel concepts for the optimisation of laser designs may be developed with regard to the requirements of specific applications. Moreover, the approach allows for the systematic exploration and assessment of completely novel material systems and their application potential. (orig.)

  14. Spectral components of laser Doppler flowmetry signals recorded in healthy and type 1 diabetic subjects at rest and during a local and progressive cutaneous pressure application: scalogram analyses

    International Nuclear Information System (INIS)

    Humeau, Anne; Koitka, Audrey; Abraham, Pierre; Saumet, Jean-Louis; L'Huillier, Jean-Pierre

    2004-01-01

    A significant transient increase in laser Doppler flowmetry (LDF) signals is observed in response to a local and progressive cutaneous pressure application in healthy subjects. This reflex may be impaired in diabetic patients. This work presents a signal-processing approach that helps clarify the phenomenon. Scalogram analyses of LDF signals recorded at rest and during a local and progressive cutaneous pressure application are performed on healthy and type 1 diabetic subjects. Three frequency bands, corresponding to myogenic, neurogenic and endothelial-related metabolic activities, are studied. The results show that, at rest, the scalogram energy of each frequency band is significantly lower for diabetic patients than for healthy subjects, but the scalogram relative energies do not show any statistical difference between the two groups. Moreover, the neurogenic and endothelial-related metabolic activities are significantly higher during the progressive pressure than at rest, in healthy and diabetic subjects. However, the relative contribution of the endothelial-related metabolic activity is significantly higher during the progressive pressure than at rest, in the interval 200-400 s following the beginning of the pressure application, but only for healthy subjects. These results may improve knowledge of cutaneous microvascular responses to injuries or local pressures initiating diabetic complications.
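
    The band-energy computation behind a scalogram analysis can be sketched with a continuous wavelet transform; the synthetic signal, band limits and Morlet wavelet below are illustrative assumptions, not the recording protocol of the study.

```python
# Hedged sketch of scalogram band energies for an LDF-like signal.
import numpy as np
import pywt

fs = 20.0                                   # assumed sampling rate, Hz
t = np.arange(0, 120, 1 / fs)               # two-minute record
signal = (np.sin(2 * np.pi * 0.10 * t)      # ~myogenic component
          + 0.5 * np.sin(2 * np.pi * 0.03 * t)   # ~neurogenic component
          + 0.3 * np.sin(2 * np.pi * 0.01 * t))  # ~metabolic component

# Scales chosen so the Morlet pseudo-frequencies span ~0.005-0.16 Hz.
scales = np.geomspace(100, 3300, 120)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
power = np.abs(coefs) ** 2                  # the scalogram

bands = {"metabolic": (0.005, 0.02),        # approximate conventions, Hz
         "neurogenic": (0.02, 0.06),
         "myogenic": (0.06, 0.15)}
total = power.sum()
for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs < hi)
    print(f"{name}: relative scalogram energy = {power[sel].sum() / total:.2f}")
```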

  16. Radiation applications in art and archaeometry: X-ray fluorescence applications to archaeometry. Possibility of obtaining non-destructive quantitative analyses

    International Nuclear Information System (INIS)

    Milazzo, Mario

    2004-01-01

    The possibility of obtaining quantitative XRF analyses in archaeometric applications is considered for the following cases: examination of metallic objects with irregular surfaces (coins, for instance); metallic objects with a natural or artificial patina on the surface; and glass or ceramic samples, for which the problems for quantitative analysis arise from the non-detectability of low-Z matrix elements. The fundamental parameter method for quantitative XRF analysis is based on a numerical procedure involving the relative values of XRF line intensities. As a consequence, it can also be applied to experimental XRF spectra obtained from metallic objects if the correction for the irregular shape consists only in introducing a constant factor that does not affect the relative XRF intensity values; this is in fact possible under not very restrictive conditions for the experimental set-up. The fineness of coins with a superficial patina can be evaluated by resorting to measurements of the Rayleigh-to-Compton scattering intensity ratio at an incident energy higher than that of the characteristic X-rays. For glasses and ceramics, measurement of the Compton-scattered intensity of the exciting radiation and the use of a proper scaling law make it possible to evaluate the matrix absorption coefficients at all characteristic X-ray line energies.
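
    The Rayleigh-to-Compton idea lends itself to a simple calibration sketch: fit the measured R/C ratio against the fineness of reference alloys, then invert for an unknown coin. All numbers below are invented and the linear form is an assumption for illustration.

```python
# Toy calibration relating a Rayleigh/Compton intensity ratio to coin
# fineness. Reference values are invented placeholders.
import numpy as np

# Reference alloys: fineness (per mille) and measured R/C ratio.
fineness = np.array([500.0, 700.0, 850.0, 900.0, 958.0])
rc_ratio = np.array([0.42, 0.55, 0.66, 0.70, 0.75])

coeffs = np.polyfit(rc_ratio, fineness, deg=1)   # linear calibration fit
unknown_rc = 0.62                                # ratio measured on a coin
print(f"estimated fineness: {np.polyval(coeffs, unknown_rc):.0f} per mille")
```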

  17. Application of wavelet analysis to the nuclear phase space study; Application de l'analyse en ondelettes a l'etude de l'espace des phases nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Jouault, B. [Nantes Univ., 44 (France)

    1996-11-22

    The objective of this thesis is to present a methodology, based on the projection methods used in statistical physics and on the wavelet approach, which allows various classes of information to be obtained. A coherent modelling was elaborated, as the tools used for generating and solving the evolution equations, expressed in terms of pertinent variables, are based on common concepts. The scale-separation property of the wavelet analysis allows an approximation hierarchy based on the geometrical structure of phase space to be defined. This structuring of information offers the opportunity to solve the evolution equations with various degrees of precision by controlling the information loss and avoiding sampling methods of the Monte Carlo type. The application of this methodology to the case of heavy-ion collisions requires an entirely numerical treatment of the density-matrix evolution equation. This implies a very precise level of description in order to take into account the important dissipation effects occurring in intermediate-energy nuclear dynamics. A less expensive solution was adopted by using analytically expressed wavelets, which also entailed testing the validity of the model by comparing its results with analytical solutions. This model takes into account the structure of the system wave functions, thus conserving the microscopical information. The present methodology can also be applied to other energy domains, provided the nuclear systems are subject to transient non-steady-state regimes. Wavelet analysis has been used extensively in the field of signal processing, particularly to extract a physical signal from background, and also in the field of turbulence phenomena. 152 refs.

  18. Patterns of use and impact of standardised MedDRA query analyses on the safety evaluation and review of new drug and biologics license applications.

    Science.gov (United States)

    Chang, Lin-Chau; Mahmood, Riaz; Qureshi, Samina; Breder, Christopher D

    2017-01-01

    Standardised MedDRA Queries (SMQs) have been developed since the early 2000s and used by academia, industry, public health, and government sectors for detecting safety signals in adverse event safety databases. The purpose of the present study is to characterize how SMQs are used and their impact on safety analyses for New Drug Application (NDA) and Biologics License Application (BLA) submissions to the United States Food and Drug Administration (USFDA). We used the PharmaPendium database to capture SMQ use in the Summary Basis of Approvals (SBoAs) of drugs and biologics approved by the USFDA. Characteristics of the drugs and the SMQ use were employed to evaluate the role of SMQ safety analyses in regulatory decisions and the veracity of signals they revealed. A comprehensive search of the SBoAs yielded 184 regulatory submissions approved from 2006 to 2015. Search strategies more frequently utilized restrictive searches with "narrow terms" to enhance specificity over strategies using "broad terms" to increase sensitivity, while some involved modification of search terms. A majority (59%) of 1290 searches used descriptive statistics; however, inferential statistics were utilized in 35% of them. Commentary from reviewers and supervisory staff suggested that a small yet notable percentage (18%) of the 1290 searches supported regulatory decisions. The searches with regulatory impact were found in 73 submissions (40% of the submissions investigated). Most searches with regulatory implications (75% of 227 searches) described how the searches were confirmed, indicating prudence in the decision-making process. SMQs have an increasing role in the presentation and review of safety analyses for NDAs/BLAs and their regulatory reviews. This study suggests that SMQs are best used as a screening process, with descriptive statistics, description of SMQ modifications, and systematic verification of cases, which is crucial for drawing regulatory conclusions.

  20. Sliding window analyses for optimal selection of mini-barcodes, and application to 454-pyrosequencing for specimen identification from degraded DNA.

    Science.gov (United States)

    Boyer, Stephane; Brown, Samuel D J; Collins, Rupert A; Cruickshank, Robert H; Lefort, Marie-Caroline; Malumbres-Olarte, Jagoba; Wratten, Stephen D

    2012-01-01

    DNA barcoding remains a challenge when applied to diet analyses, ancient DNA studies, environmental DNA samples and, more generally, in any cases where DNA samples have not been adequately preserved. Because the size of the commonly used barcoding marker (COI) is over 600 base pairs (bp), amplification fails when the DNA molecule is degraded into smaller fragments. However, relevant information for specimen identification may not be evenly distributed along the barcoding region, and a shorter target can be sufficient for identification purposes. This study proposes a new, widely applicable method to compare the performance of all potential 'mini-barcodes' for a given molecular marker and to objectively select the shortest and most informative one. Our method is based on a sliding window analysis implemented in the new R package SPIDER (Species IDentity and Evolution in R). This method is applicable to any taxon and any molecular marker. Here, it was tested on earthworm DNA that had been degraded through digestion by carnivorous landsnails. A 100 bp region of 16S rDNA was selected as the shortest informative fragment (mini-barcode) required for accurate specimen identification. Corresponding primers were designed and used to amplify degraded earthworm (prey) DNA from 46 landsnail (predator) faeces using 454-pyrosequencing. This led to the detection of 18 earthworm species in the diet of the snail. We encourage molecular ecologists to use this method to objectively select the most informative region of the gene they aim to amplify from degraded DNA. The method and tools provided here, can be particularly useful (1) when dealing with degraded DNA for which only small fragments can be amplified, (2) for cases where no consensus has yet been reached on the appropriate barcode gene, or (3) to allow direct analysis of short reads derived from massively parallel sequencing without the need for bioinformatic consolidation.
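
    The study's sliding-window analysis is implemented in the R package SPIDER; the fragment below re-expresses the core idea in Python on a toy alignment, scoring each fixed-width window by its number of variable sites. The sequences and the scoring rule are invented stand-ins.

```python
# Toy sliding-window search for the most informative fixed-width
# window along an alignment. Not the SPIDER implementation.
aln = [  # toy alignment, one sequence per taxon
    "ACGTACGTTACGGATTACCGTAGGCTAACGT",
    "ACGTACGATACGGATTACCGTAGCCTAACGT",
    "ACGAACGTTACGGCTTACCGTAGGCTTACGT",
]

def variable_sites(seqs, start, width):
    """Number of polymorphic columns in the window [start, start+width)."""
    return sum(len({s[i] for s in seqs}) > 1
               for i in range(start, start + width))

width = 10
scores = [(variable_sites(aln, i, width), i)
          for i in range(len(aln[0]) - width + 1)]
best_score, best_start = max(scores)
print(f"best {width}-bp window starts at {best_start} "
      f"({best_score} variable sites)")
```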

  2. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
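
    As a rough illustration of the two approaches (a minimal NumPy sketch on invented paired measurements, not the authors' analysis code): Bland-Altman summarises the paired differences as a bias with 95% limits of agreement, while Deming regression estimates slope and intercept while allowing error in both methods (error-variance ratio lam, here 1):

        import numpy as np

        def bland_altman(x, y):
            d = y - x                        # paired differences
            bias = d.mean()
            half_width = 1.96 * d.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        def deming(x, y, lam=1.0):           # lam = ratio of error variances
            mx, my = x.mean(), y.mean()
            sxx = ((x - mx) ** 2).sum()
            syy = ((y - my) ** 2).sum()
            sxy = ((x - mx) * (y - my)).sum()
            slope = ((syy - lam * sxx
                      + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2))
                     / (2 * sxy))
            return slope, my - slope * mx    # slope, intercept

        x = np.array([10.0, 20.0, 30.0, 40.0])  # invented comparator values
        y = np.array([11.0, 19.5, 31.0, 41.5])  # invented test-assay values
        print(bland_altman(x, y), deming(x, y))

    A slope different from 1 signals proportional error and a non-zero intercept signals constant error, which is precisely the information R² alone does not provide.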

  3. Application of the inter-LINE PCR for the analysis of genomic rearrangements in radiation-transformed mammalian cell lines; Anwendung der Inter-Line PCR zur Analyse von genomischen Veraenderungen in strahlentransformierten Saeugerzellinien

    Energy Technology Data Exchange (ETDEWEB)

    Leibhard, S.; Smida, J. [Muenchen Univ. (Germany). Strahlenbiologisches Inst.; Eckardt-Schupp, F.; Hieber, L. [GSF-Inst. fuer Strahlenbiologie, Oberschleissheim (Germany)

    1996-12-31

    Repetitive DNA sequences of the LINE family (long interspersed elements), which are widely distributed across the mammalian genome, can be activated or altered by exposure to ionizing radiation [1]. Integration at new sites in the genome can alter the expression of genes involved in cell transformation and/or carcinogenesis [2, 3]. A new technique - the inter-LINE PCR - has been developed in order to detect and analyse such genomic rearrangements in radiation-transformed cell lines. From the sites of transformation- or tumour-specific changes in the genome it might be possible to develop new tumour markers for diagnostic purposes. (orig.) [Deutsch] Repetitive DNA-Sequenzen der LINE-Familie, die weit verbreitet im Genom von Saeugerzellen vorkommen, koennen durch Exposition mit ionisierender Strahlung aktiviert und veraendert werden [1]. Durch eine Neu- bzw. Reintegration an anderen Positionen im Genom kann es zu bedeutenden Veraenderungen im Genom der Zelle kommen. Die Expression von Genen, die bei den Prozessen der Zelltransformation bzw. der Karzinogenese beteiligt sind, kann dadurch veraendert werden [2, 3]. Mithilfe der von uns entwickelten Inter-LINE PCR und der anschliessenden Analyse der veraenderten Produktmuster nach gelelektrophoretischer Auftrennung koennen solche 'genomic rearrangements' unter Beteiligung von LINE-Elementen untersucht und naeher charakterisiert werden. Durch Klonierung und Sequenzierung transformations- bzw. tumorspezifischer PCR-Produkte sollte es moeglich sein Tumormarker fuer diagnostische Zwecke zu entwickeln. Die Methode wurde fuer die Analyse von Zellen des Syrischen Hamster aufgebaut, sie ist jedoch universell fuer alle Saeuger anwendbar. (orig.)

  4. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is an advantage compared with single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  5. Determination of correction coefficients for quantitative analysis by mass spectrometry. Application to uranium impurities analysis; Recherche des coefficients de correction permettant l'analyse quantitative par spectrometrie de masse. Application a l'analyse d'impuretes dans l'uranium

    Energy Technology Data Exchange (ETDEWEB)

    Billon, J P [Commissariat a l' Energie Atomique, Bruyeres-le-Chatel (France). Centre d' Etudes

    1970-07-01

    Some of the basic principles of spark-source mass spectrometry are recalled, and it is shown that, provided certain precautions are taken, the method can be used for quantitative analysis. Assuming a time-constant relation between the analysed solid sample and the ion beam it produces, we first determined experimental relative sensitivity factors for impurities in uranium matrices. As the first practical results are in fairly good agreement with a simple theory of the ionization yield in the spark source, the direct application of theoretically derived relative sensitivity factors to uranium matrices was then investigated. (author) [French] Apres avoir rappele quelques principes fondamentaux regissant la spectrometrie de masse a etincelles, nous avons montre que moyennant un certain nombre de precautions, il etait possible d'utiliser cette methode en analyse quantitative. Ayant admis qu'il existait une relation constante dans le temps entre l'echantillon solide analyse et le faisceau ionique qui en est issu, nous avons d'abord entrepris de determiner des coefficients de correction experimentaux pour des matrices d'uranium. Les premiers resultats pratiques semblant en accord avec une theorie simple relative au rendement d'ionisation dans la source a etincelles, nous avons etudie la possibilite d'appliquer directement les coefficients theoriques ainsi definis, l'application etant toujours faite sur des matrices d'uranium. (auteur)
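
    The correction step itself reduces to dividing each apparent concentration by an element-specific relative sensitivity factor (RSF); a minimal sketch with invented values:

        # Spark-source MS: corrected concentration = apparent value / RSF.
        rsf = {"Fe": 1.8, "Ni": 1.3, "B": 0.7}             # invented RSFs
        apparent_ppm = {"Fe": 36.0, "Ni": 13.0, "B": 2.1}  # ion-beam estimates
        corrected = {el: round(c / rsf[el], 2) for el, c in apparent_ppm.items()}
        print(corrected)   # approximately {'Fe': 20.0, 'Ni': 10.0, 'B': 3.0}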

  6. Application of the results of pipe stress analyses into fracture mechanics defect analyses for welds of nuclear piping components; Uebernahme der Ergebnisse von Rohrsystemanalysen (Spannungsanalysen) fuer bruchmechanische Fehlerbewertungen fuer Schweissnaehte an Rohrleitungsbauteilen in kerntechnischen Anlagen

    Energy Technology Data Exchange (ETDEWEB)

    Dittmar, S.; Neubrech, G.E.; Wernicke, R. [TUeV Nord SysTec GmbH und Co.KG (Germany); Rieck, D. [IGN Ingenieurgesellschaft Nord mbH und Co.KG (Germany)

    2008-07-01

    For the fracture mechanical assessment of postulated or detected crack-like defects in welds of piping systems it is necessary to know the stresses in the un-cracked component normal to the crack plane. Results of piping stress analyses may be used if these are evaluated for the locations of the welds in the piping system. Using stress enhancement factors (stress indices, stress factors), the needed stress components are calculated from the component-specific sectional loads (forces and moments). For this procedure the tabulated stress enhancement factors given in the standards (ASME Code, German KTA regulations) for the determination and limitation of the effective stresses are not always directly adequate for the calculation of the stress component normal to the crack plane. The contribution shows fundamental possibilities and validity limits for adopting the results of piping system analyses for the fracture mechanical evaluation of axial and circumferential defects in welded joints, with special emphasis on typical piping system components (straight pipe, elbow, pipe fitting, T-joint). It is intended to contribute to a standardized, code-compliant and task-appropriate use of piping system analysis results for fracture mechanics failure assessment. [German] Fuer die bruchmechanische Bewertung von postulierten oder bei der wiederkehrenden zerstoerungsfreien Pruefung detektierten rissartigen Fehlern in Schweissnaehten von Rohrsystemen werden die Spannungen in der ungerissenen Bauteilwand senkrecht zur Rissebene benoetigt. Hierfuer koennen die Ergebnisse von Rohrsystemanalysen (Spannungsanalysen) genutzt werden, wenn sie fuer die Orte der Schweissnaehte im Rohrsystem ausgewertet werden. Mit Hilfe von Spannungserhoehungsfaktoren (Spannungsindizes, Spannungsbeiwerten) werden aus den komponentenweise berechneten Schnittlasten (Kraefte und Momente) die benoetigten Spannungskomponenten berechnet. Dabei sind jedoch die in den Regelwerken (ASME
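
    For a circumferential defect in a straight pipe, the calculation chain described here (sectional loads, stress indices, stress component normal to the crack plane) reduces to something like the following sketch; the indices B1 and B2, the pipe dimensions and the loads are illustrative placeholders, not tabulated code values:

        import math

        def stress_normal_to_circ_crack(F_ax, M_bend, D_o, t, B1=1.0, B2=1.0):
            """Axial stress from sectional loads via stress indices B1, B2."""
            D_i = D_o - 2 * t
            A = math.pi / 4 * (D_o ** 2 - D_i ** 2)         # metal cross-section
            Z = math.pi / 32 * (D_o ** 4 - D_i ** 4) / D_o  # section modulus
            return B1 * F_ax / A + B2 * M_bend / Z

        # invented loads: 100 kN axial force, 20 kN*m bending moment
        sigma = stress_normal_to_circ_crack(100e3, 20e3, D_o=0.2191, t=0.010)
        print(sigma / 1e6, "MPa")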

  7. Use of 10¹² ohm current amplifiers in Sr and Nd isotope analyses by TIMS for application to sub-nanogram samples

    NARCIS (Netherlands)

    Koornneef, J.M.; Bouman, C.; Schwieters, J.B.; Davies, G.R.

    2013-01-01

    We have investigated the use of current amplifiers equipped with 10¹² ohm feedback resistors in thermal ionisation mass spectrometry (TIMS) analyses of sub-nanogram sample aliquots for Nd and Sr isotope ratios. The results of analyses using the 10¹² ohm resistors were compared to those obtained

  8. Measurement and analysis of high energy radiation through activation detectors. Application in dosimetry; Sur la mesure et l'analyse des rayonnements de haute energie par detecteurs a activation. Application a la dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Sklavenitis, L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1967-10-15

    This work is concerned with the possibility of measuring and analysing, by means of activation detectors, the radiation fluences within objects of small volume submitted to a high-energy proton beam. The first part, devoted to the establishment of the analysis method, comprises a detailed study of the nature and energy spectra of the radiation as well as of the various dosimetry methods. In order to select a group of detectors, high-energy nuclear reactions were systematically studied and for some of them cross sections were measured or calculated: for example the cross section of the reaction {sup 11}B (p,n) {sup 11}C between 150 and 3000 MeV and of the reaction {sup 34}S (p,2pn) {sup 32}P between 50 and 3000 MeV. The second part concerns the application of the aforementioned analysis to the radiation within a tissue-equivalent phantom irradiated by 3 GeV protons. This analysis is sufficiently detailed to allow the reconstruction of the absorbed doses, the dose equivalent and, contingent on a better knowledge of the dose due to heavy particles, the quality factors. It also allowed the evolution of the various dosimetric data to be followed as a function of depth inside the phantom, and calculations already performed by other researchers to be verified. The comparison of the measured doses and the corresponding detector activities revealed the possibility that some detectors could give directly the absorbed dose, or even the dose equivalent, by a simple activity measurement. (author) [French] Le travail porte sur la possibilite de mesure et d'analyse, a l'aide de detecteurs a activation, des fluences de rayonnements a l'interieur d'un objet de petit volume soumis a un faisceau de protons de tres haute energie. La premiere partie, consacree a la mise au point de la methode d'analyse des fluences, comporte une etude detaillee de la nature des rayonnements et de leurs spectres energetiques ainsi que des differentes methodes de dosimetrie. Pour arriver au choix d'un groupe de
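
    The basic relation behind activation-detector measurements links the end-of-irradiation activity to the fluence rate through a known cross section, A = N·σ·φ·(1 − exp(−λ·t_irr)); a minimal sketch with invented numbers, using the 11B(p,n)11C reaction mentioned above:

        import math

        def fluence_rate(activity_bq, n_atoms, sigma_cm2, lam, t_irr_s):
            # A = N * sigma * phi * (1 - exp(-lambda * t)), solved for phi
            return activity_bq / (n_atoms * sigma_cm2
                                  * (1 - math.exp(-lam * t_irr_s)))

        lam = math.log(2) / (20.4 * 60)        # 11C half-life: 20.4 min
        phi = fluence_rate(activity_bq=1.0e3,  # invented measured activity
                           n_atoms=1.0e21,     # invented 11B atoms in detector
                           sigma_cm2=30e-27,   # invented ~30 mb cross section
                           lam=lam, t_irr_s=3600)
        print(phi, "protons / (cm^2 s)")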

  9. Possibilities of gas-phase radio-chromatography application to permanent-gas analysis; Possibilites de la radiochromatographie en phase gazeuse applications a l'analyse des gaz permanents

    Energy Technology Data Exchange (ETDEWEB)

    Dupuis, M C; Charrier, G; Alba, C; Massimino, D [Commissariat a l' Energie Atomique, Bruyeres-le-Chatel (France). Centre d' Etudes

    1970-07-01

    The gas-phase radio-chromatography technique has been applied to the rapid analysis of permanent gases (H{sub 2}, O{sub 2}, N{sub 2}, A, Kr, Xe, CO, CH{sub 4}) labelled with radioactive indicators ({sup 3}H, {sup 37}A, {sup 85}Kr, {sup 133}Xe). After calibration, the components of such a mixture can be identified and their concentrations measured in less than two hours, using a sample volume of from 0.1 to 10 cm{sup 3}. The minimum detectable activity is of the order of 10{sup -4} {mu}C for each radioactive isotope. The measurements are reproducible to about 2 to 3 per cent. This work has been mainly concerned with the influence of parameters affecting the response of the radioactivity detector (ionization chamber or gas flow proportional counter). The method has very numerous applications both theoretically, for the study of chromatographic phenomena under ideal conditions (infinitesimal concentrations made possible by the use of radioactive tracers), and practically, for rapid and accurate radiochemical analysis of radioactive gas mixtures. (authors) [French] La technique de radiochromatographie en phase gazeuse est appliquee a l'analyse rapide de gaz permanents (H{sub 2}, O{sub 2}, N{sub 2}, A, Kr, Xe, CO, CH{sub 4}) marques par des indicateurs radioactifs ({sup 3}H, {sup 37}A, {sup 85}Kr, {sup 133}Xe). Apres etalonnage, l'identification et la mesure des concentrations des constituants d'un tel melange requierent moins de deux heures, sur un volume d'echantillon de 0.1 a 10 cm{sup 3}. L'activite minimum detectable est de l'ordre de 10{sup -4} {mu}C pour chaque isotope radioactif. La reproductibilite des mesures est de l'ordre de 2 a 3 pour cent. L'etude porte principalement sur l'influence des parametres affectant la reponse du detecteur de radioactivite (chambre d'ionisation, ou compteur proportionnel a circulation). La methode est extremement fertile en applications tant sur le plan theorique pour l'etude du phenomene chromatographique dans les conditions

  10. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented on. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis are often much smaller than those needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution targets not only experienced self-funded experimentalists but also theorists and master-level students who might wish to carry out such an analysis.

  11. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management
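
    The structure of such a farm LP-model can be shown in miniature with scipy (all activities, coefficients and bounds below are invented; the real model has many more of each):

        from scipy.optimize import linprog

        # choose hectares of grass (x1) and maize (x2); maximize gross margin
        # subject to a land constraint and an environmental (N-surplus) cap
        res = linprog(c=[-1200, -900],          # linprog minimizes, so negate
                      A_ub=[[1, 1],             # x1 + x2 <= 50 ha of land
                            [80, 40]],          # N surplus <= 3000 kg
                      b_ub=[50, 3000],
                      bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)                  # optimal plan and gross margin

    Tightening the environmental bound and re-solving is how the effect of a policy measure on the economic indicator is traced.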

  12. [Results transferability on RXL, ARX, X-Pand, BN2 (Dade Behring) and modular DP (Roche Diagnostics) analysers: application to component assays of fibrotest and Actitest].

    Science.gov (United States)

    Imbert-Bismut, F; Messous, D; Raoult, A; Poynard, T; Bertrand, J J; Marie, P A; Louis, V; Audy, C; Thouy, J M; Hainque, B; Piton, A

    2005-01-01

    The follow-up of patients with chronic liver diseases and the data from multicentric clinical studies are affected by the variability of assay results for the same parameter between different laboratories. Today, the main objective in clinical chemistry throughout the world is to harmonise assay results between laboratories after confirmation of their traceability, in relation to defined reference systems. In this context, the purpose of our study was to verify the homogeneity of haptoglobin, apolipoprotein A1, total bilirubin, GGT activity and ALAT activity results, which are combined in Fibrotest and Actitest, between Dimension analysers RXL, ARX and X-PAND (Dade Behring). Moreover, we verified the transferability of Fibrotest and Actitest results between the RXL and either the BN2 (haptoglobin and apolipoprotein A1) or the Modular DP (total bilirubin, GGT and ALAT activity concentrations). Serum samples from 150 hospitalised patients were analysed on the different analysers. Specific protein assays were calibrated using solutions standardised against reference material on Dimension and BN2 analysers. Total bilirubin assays were performed by a diazo reaction on Dimension and Modular DP analysers. The GGT and ALAT activity measurements on the Dimension analysers were performed in accordance with the reference methods defined by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). On the Modular, enzyme activity measurements were performed according to the Szasz method (L-gamma-glutamyl-4-nitroanilide as substrate) modified by Persijn and van der Slik (L-gamma-glutamyl-3-carboxy-4-nitroanilide as substrate) for GGT, and according to the IFCC specifications for ALAT. The methods of enzymatic activity measurement were calibrated on the Modular only. Liver fibrosis and necroinflammatory activity indices were determined using calculation algorithms, after having adjusted each component's result of Fibrotest and

  13. Applicability of RELAP5-3D for Thermal-Hydraulic Analyses of a Sodium-Cooled Actinide Burner Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    C. B. Davis

    2006-07-01

    The Actinide Burner Test Reactor (ABTR) is envisioned as a sodium-cooled, fast reactor that will burn the actinides generated in light water reactors to reduce nuclear waste and ease proliferation concerns. The RELAP5-3D computer code is being considered as the thermal-hydraulic system code to support the development of the ABTR. An evaluation was performed to determine the applicability of RELAP5-3D for the analysis of a sodium-cooled fast reactor. The applicability evaluation consisted of several steps, including identifying the important transients and phenomena expected in the ABTR, identifying the models and correlations that affect the code’s calculation of the important phenomena, and evaluating the applicability of the important models and correlations for calculating the important phenomena expected in the ABTR. The applicability evaluation identified code improvements and additional models needed to simulate the ABTR. The accuracy of the calculated thermodynamic and transport properties for sodium was also evaluated.

  14. MARTe framework; a middle-ware for real-time applications development

    International Nuclear Information System (INIS)

    Neto, A.; Alves, D.; Carvalho, B.B.; Carvalho, P.J.; Fernandes, H.; Valcarcel, D.F.; Sartori, F.; Barbalace, A.; Manduchi, G.; Boncagni, L.; Tommasi, G. de; McCullen, P.; Stephen, A.; Vitelli, R.; Zabeo, L.

    2012-01-01

    The Multi-threaded Application Real-Time executor (MARTe) is a C++ framework that provides a development environment for the design and deployment of real-time applications, e.g. control systems. The kernel of MARTe comprises a set of data-driven independent blocks, connected using a shared bus. This modular design enforces a clear boundary between algorithms, hardware interaction and system configuration. The architecture, being multi-platform, facilitates the test and commissioning of new systems, enabling the execution of plant models in offline environments and with the hardware-in-the-loop, whilst also providing a set of non-intrusive introspection and logging facilities. Furthermore, applications can be developed in non real-time environments and deployed in a real-time operating system, using exactly the same code and configuration data. The framework is already being used in several fusion experiments, with control cycles ranging from 50 microseconds to 10 milliseconds exhibiting jitters of less than 2%, using VxWorks®, RTAI or Linux. Codes can also be developed and executed in Microsoft Windows® and Solaris®. This paper discusses the main design concepts of MARTe, in particular the architectural choices which enabled the combination of real-time accuracy, performance and robustness with complex and modular data driven applications. (authors)
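
    MARTe itself is C++, but its central idea, independent data-driven blocks that exchange named signals over a shared bus and are executed in sequence by the real-time loop, can be sketched schematically in Python (block and signal names are invented):

        class Gain:
            def __init__(self, src, dst, k):
                self.src, self.dst, self.k = src, dst, k
            def execute(self, bus):
                bus[self.dst] = self.k * bus[self.src]

        class Saturate:
            def __init__(self, src, dst, lim):
                self.src, self.dst, self.lim = src, dst, lim
            def execute(self, bus):
                bus[self.dst] = max(-self.lim, min(self.lim, bus[self.src]))

        bus = {"measurement": 0.42}                 # the shared data bus
        blocks = [Gain("measurement", "raw_cmd", k=10.0),
                  Saturate("raw_cmd", "actuator_cmd", lim=3.0)]
        for _cycle in range(3):                     # stand-in real-time loop
            for b in blocks:
                b.execute(bus)
        print(bus["actuator_cmd"])                  # 3.0 (saturated)

    Because blocks know only signal names, the same configuration can drive a plant model offline or real hardware, which is the portability described above.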

  15. Gas-cooled reactor commercialization study: introduction scenario and commercialization analyses for process heat applications. Final report, July 8, 1977--November 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report identifies and presents an introduction scenario which can lead to the operation of High Temperature Gas Cooled Reactor demonstration plants for combined process heat and electric power generation applications, and presents a commercialization analysis relevant to the organizational and management plans which could implement a development program.

  16. Application and further development of models for the final repository safety analyses on the clearance of radioactive materials for disposal. Final report; Anwendung und Weiterentwicklung von Modellen fuer Endlagersicherheitsanalysen auf die Freigabe radioaktiver Stoffe zur Deponierung. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Larue, Juergen; Seher, Holger; Weiss, Dietmar

    2014-08-15

    The project on the application and further development of models for final repository safety analyses to the clearance of radioactive materials for disposal is aimed at studying long-term safety, in terms of radiation exposure for different scenarios, using repository-specific simulation programs. In particular, it investigates whether the 10 microsievert criterion can be met when human intrusion scenarios are taken into account. The report covers the following issues: selection and identification of models and codes and the definition of boundary conditions; applicability of conventional repository models for long-term safety analyses; modeling results for the pollutant release and transport and calculation of radiation exposure; determination of the radiation exposure.

  17. Application of the Random Forest method to analyse epidemiological and phenotypic characteristics of Salmonella 4,[5],12:i:- and Salmonella Typhimurium strains

    DEFF Research Database (Denmark)

    Barco, L.; Mancin, M.; Ruffa, M.

    2012-01-01

    in Italy, particularly as far as veterinary isolates are concerned. For this reason, a data set of 877 strains isolated in the north-east of Italy from foodstuffs, animals and environment was analysed during 2005-2010. The Random Forests (RF) method was used to identify the most important epidemiological...... and phenotypic variables to show the difference between the two serovars. Both descriptive analysis and RF revealed that S. 4,[5],12:i:- is less heterogeneous than S. Typhimurium. RF highlighted that phage type was the most important variable to differentiate the two serovars. The most common phage types...
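
    The variable-importance step at the heart of this analysis looks roughly as follows with scikit-learn (the stand-in data and feature encodings are invented):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.integers(0, 5, size=(300, 3))   # encoded phage type, source, year
        y = (X[:, 0] > 2).astype(int)           # serovar label driven by 'phage type'
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        for name, imp in zip(["phage_type", "source", "year"],
                             rf.feature_importances_):
            print(f"{name}: {imp:.2f}")         # phage_type should dominate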

  1. Development of a fully automated open-column chemical-separation system—COLUMNSPIDER—and its application to Sr-Nd-Pb isotope analyses of igneous rock samples

    Science.gov (United States)

    Miyazaki, Takashi; Vaglarov, Bogdan Stefanov; Takei, Masakazu; Suzuki, Masahiro; Suzuki, Hiroaki; Ohsawa, Kouzou; Chang, Qing; Takahashi, Toshiro; Hirahara, Yuka; Hanyu, Takeshi; Kimura, Jun-Ichi; Tatsumi, Yoshiyuki

    A fully automated open-column resin-bed chemical-separation system, named COLUMNSPIDER, has been developed. The system consists of a programmable micropipetting robot that dispenses chemical reagents and sample solutions into an open-column resin bed for elemental separation. After the initial set up of resin columns, chemical reagents, and beakers for the separated chemical components, all separation procedures are automated. As many as ten samples can be eluted in parallel in a single automated run. Many separation procedures, such as radiogenic isotope ratio analyses for Sr and Nd, involve the use of multiple column separations with different resin columns, chemical reagents, and beakers of various volumes. COLUMNSPIDER completes these separations using multiple runs. Programmable functions, including the positioning of the micropipetter, reagent volume, and elution time, enable flexible operation. Optimized movements for solution take-up and high-efficiency column flushing allow the system to perform as precisely as when carried out manually by a skilled operator. Procedural blanks, examined for COLUMNSPIDER separations of Sr, Nd, and Pb, are low and negligible. The measured Sr, Nd, and Pb isotope ratios for JB-2 and Nd isotope ratios for JB-3 and BCR-2 rock standards all fall within the ranges reported previously in high-accuracy analyses. COLUMNSPIDER is a versatile tool for the efficient elemental separation of igneous rock samples, a process that is both labor intensive and time consuming.

  2. Applications of quaternary stratigraphic, soil-geomorphic, and quantitative geomorphic analyses to the evaluation of tectonic activity and landscape evolution in the Upper Coastal Plain, South Carolina

    International Nuclear Information System (INIS)

    Hanson, K.L.; Bullard, T.F.; Wit, M.W. de; Stieve, A.L.

    1993-01-01

    Geomorphic analyses combined with mapping of fluvial terraces and upland geomorphic surfaces provide new approaches and data for evaluating the Quaternary activity of post-Cretaceous faults that are recognized in subsurface data at the Savannah River Site in the Upper Coastal Plain of southwestern South Carolina. Analyses of longitudinal stream and terrace profiles, regional slope maps, and drainage basin morphometry indicate long-term uplift and southeast tilt of the site region. Preliminary results of drainage basin characterization suggest an apparent rejuvenation of drainages along the trace of the Pen Branch fault (a Tertiary reactivated reverse fault that initiated as a basin-margin normal fault along the northern boundary of the Triassic Dunbarton Basin). This apparent rejuvenation of drainages may be the result of nontectonic geomorphic processes or of local tectonic uplift and tilting within a framework of regional uplift. Longitudinal profiles of fluvial terrace surfaces that are laterally continuous across the projected surface trace of the Pen Branch fault show no obvious evidence of warping or faulting within a resolution of ∼3 m. This, combined with the estimated age of the terrace surfaces (350 ka to 1 Ma), indicates that if the Pen Branch fault is active, the Pleistocene rate of slip is very low (0.002 to 0.009 mm/yr).

  3. Some stream waters of the Western United States, with chapters on sediment carried by the Rio Grande and the industrial application of water analyses

    Science.gov (United States)

    Stabler, Herman

    1911-01-01

    A systematic study of the waters likely to be utilized on the Reclamation Service projects was made in order to determine the influence of the salinity of the waters on the growth of vegetation and the effect of suspended matter in silting canals and reservoirs. The work was begun early in 1905, under the direction of Thomas H. Means, engineer, and was continued during 1906 and until May, 1907, under the direction of W. H. Heileman, engineer. The analyses were made in a laboratory established at quarters provided by the University of California at Berkeley, Cal., by C. H. Stone, P. L. McCreary, F. M. Eaton, O. J. Hawley, W. C. Riddell, F. T. Berry, H. A. Burns, J. H. Hampson, J. A. Pearce, and M. Vaygouny, the greater part of the work being that of the first five named. C. H. Stone was chemist in charge at the beginning of the investigations and is chiefly responsible for the plan of the analytical work and the methods of analysis. The results of the investigations were prepared for publication under instructions from F. H. Newell, Director of the United States Reclamation Service, by Herman Stabler, assistant engineer, who assembled and checked the analyses, compiled the accompanying stream-flow data from records of the United States Geological Survey, and computed daily discharge of suspended matter and dissolved solids, under the supervision of D. W. Murphy, engineer in charge of Washington office engineering.

  4. Application of multivariate statistical analyses in the interpretation of geochemical behaviour of uranium in phosphatic rocks in the Red Sea, Nile Valley and Western Desert, Egypt

    International Nuclear Information System (INIS)

    El-Arabi, A.M.Abd El-Gabar M.; Khalifa, Ibrahim H.

    2002-01-01

    Factor and cluster analyses as well as the Pearson correlation coefficient have been applied to geochemical data obtained from phosphorite and phosphatic rocks of the Duwi Formation exposed at the Red Sea coast, Nile Valley and Western Desert. Sixty-six samples from a total of 71 collected samples were analysed for SiO₂, TiO₂, Al₂O₃, Fe₂O₃, CaO, MgO, Na₂O, K₂O, P₂O₅, Sr, U and Pb by XRF, and their mineral constituents were determined by the use of XRD techniques. In addition, the natural radioactivity of the phosphatic samples due to their uranium, thorium and potassium contents was measured by gamma-spectrometry. The uranium content in the phosphate rocks with P₂O₅ >15% (average of 106.6 ppm) is higher than in rocks with P₂O₅ <15%; the uranium content varies with P₂O₅ and CaO, whereas it is not related to changes in SiO₂, TiO₂, Al₂O₃, Fe₂O₃, MgO, Na₂O and K₂O concentrations. Factor analysis and the Pearson correlation coefficient revealed that uranium behaves geochemically in different ways in the phosphatic sediments and phosphorites of the Red Sea, Nile Valley and Western Desert. In the Red Sea and Western Desert phosphorites, uranium occurs mainly in the oxidized U⁶⁺ state, where it seems to be fixed by the phosphate ion, forming secondary uranium phosphate minerals such as phosphuranylite. In the Nile Valley phosphorites, ionic substitution of Ca²⁺ by U⁴⁺ is the main controlling factor in the concentration of uranium in phosphate rocks. Moreover, fixation of U⁶⁺ by the phosphate ion and adsorption of uranium on phosphate minerals play subordinate roles

  5. Application of Probabilistic Multiple-Bias Analyses to a Cohort- and a Case-Control Study on the Association between Pandemrix™ and Narcolepsy.

    Directory of Open Access Journals (Sweden)

    Kaatje Bollaerts

    Full Text Available An increase in narcolepsy cases was observed in Finland and Sweden towards the end of the 2009 H1N1 influenza pandemic. Preliminary observational studies suggested a temporal link with the pandemic influenza vaccine Pandemrix™, leading to a number of additional studies across Europe. Given the public health urgency, these studies used readily available retrospective data from various sources. The potential for bias in such settings was generally acknowledged. Although generally advocated by key opinion leaders and international health authorities, no systematic quantitative assessment of the potential joint impact of biases was undertaken in any of these studies. We applied bias-level multiple-bias analyses to two of the published narcolepsy studies: a pediatric cohort study from Finland and a case-control study from France. In particular, we developed Monte Carlo simulation models to evaluate a potential cascade of biases, including confounding by age, by indication and by natural H1N1 infection, selection bias, and disease and exposure misclassification. All bias parameters were evidence-based to the extent possible. Given the assumptions used for confounding, selection bias and misclassification, the Finnish rate ratio of 13.78 (95% CI: 5.72-28.11) reduced to a median value of 6.06 (2.5th-97.5th percentile: 2.49-15.1) and the French odds ratio of 5.43 (95% CI: 2.6-10.08) to 1.85 (2.5th-97.5th percentile: 0.85-4.08). We illustrate multiple-bias analyses using two studies on the Pandemrix™-narcolepsy association and advocate their use to better understand the robustness of study findings. Based on our multiple-bias models, the observed Pandemrix™-narcolepsy association consistently persists in the Finnish study. For the French study, the results of our multiple-bias models were inconclusive.
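
    The mechanics of such a probabilistic multiple-bias analysis can be illustrated schematically in Python; this is not the authors' model, and the priors and correction formulas below are deliberately simplified, invented stand-ins:

        import numpy as np

        rng = np.random.default_rng(1)
        rr_obs = 13.78                     # observed rate ratio (Finnish study)
        adjusted = []
        for _ in range(10_000):
            rr_conf = rng.lognormal(np.log(1.5), 0.2)  # invented confounding prior
            sens = rng.beta(40, 4)                     # exposure sensitivity prior
            spec = rng.beta(80, 2)                     # exposure specificity prior
            rr = rr_obs / rr_conf                      # strip confounding
            rr = 1 + (rr - 1) * sens * spec            # schematic shrink for
            adjusted.append(rr)                        # imperfect classification
        print(np.percentile(adjusted, [2.5, 50, 97.5]))

    If the 2.5th percentile of the bias-adjusted distribution stays above 1, the association is robust to the modelled biases, the pattern reported here for the Finnish study.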

  6. Cost Analyses after a single intervention using a computer application (DIAGETHER in the treatment of diabetic patients admitted to a third level hospital

    Directory of Open Access Journals (Sweden)

    César Carballo Cardona

    2018-01-01

    Full Text Available Goals: To quantify the savings that could be made through the hospital-wide implementation of a computer application (DIAGETHER®) which advises on the treatment of hyperglycemia in diabetic patients admitted to a third-level hospital from the emergency department. Methods: A multicenter interventional study was designed, with patients included in two arms: one received the conventional treatment prescribed by the physician, the other the treatment indicated by the computer application DIAGETHER®. The days of hospitalization were recorded in both intervention arms. Results: A total of 183 patients were included; 86 received treatment guided by the computer application and 97 received conventional treatment. The mean blood glucose level on the first day of admission was 178.56 (59.53) in the GLIKAL group, compared to 212.93 (62.23) in the conventional group (p<0.001), and on the second day 173.86 (58.86) versus 196.37 (66.60) (p=0.017). There was no difference in the frequency of hypoglycemia reported in each group (p=0.555). A reduction in mean stay was observed in patients treated with DIAGETHER. The days of admission were 7 (2-39) for the GLIKAL group and 10 (2-53) for the PCH group (p<0.001). Conclusions: Given the volume of diabetic patients admitted to the hospital, the use of the computer tool (DIAGETHER®) could reduce hospitalization by 26,147 days per year (14,134 patients with a 1.85-day reduction in stay), which would generate a saving of 8,811,842 euros per year (based on the cost per stay/day of a diabetic patient).

  7. Qualification des logiciels numériques. Application à un logiciel d'analyse de la combustion dans les moteurs à allumage commandé Qualification of Numerical Software. Application to a Software for Analysing Combustion in Spark-Ignition Engines

    Directory of Open Access Journals (Sweden)

    Vignes J.

    2006-11-01

    After recalling the weaknesses of floating-point arithmetic and highlighting the serious consequences they may have on the results obtained, we describe a probabilistic approach to the analysis of round-off errors, the CESTAC method (Contrôle et Estimation STochastique des Arrondis de Calculs), from the standpoint of both its theoretical bases and its practical implementation. This method has given rise to a new arithmetic, called stochastic arithmetic, the principal properties of which are summed up. Likewise, a probabilistic approach estimating the influence of data errors is described. A software package called CADNA (Control of Accuracy and Debugging for Numerical Applications), able to automatically implement stochastic arithmetic in any Fortran program, is described in this paper. When used in programs implementing the three classes of numerical computing methods (finite, iterative and approximate methods), it can detect numerical instabilities, control branchings and provide the accuracy of the results, taking into account the propagation of round-off errors and data errors. It is an efficient tool for validating the results of numerical software. The second part is devoted to the use of the CADNA software for qualifying the simulation software ANALCO (ANALyse de COmbustion), which analyses combustion in spark-ignition engines. After a description of the model of the phenomenon being analyzed and the derivation of the mathematical model, the ANALCO simulation software is described. The results obtained with ANALCO without CADNA reveal a disagreement between the simulation results and the experimental results. The use of the CADNA software eliminates the numerical instabilities, controls the execution of the program and demonstrates that the disagreement between the simulation results and the results observed is due only to numerical problems. Likewise, the CADNA software brings out both the validity range of the model in the light of the data errors and the data to which the mathematical model is most sensitive. From this
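
    The CESTAC idea, running the same computation a few times under randomly perturbed rounding and reading the number of significant digits off the spread of the results, can be emulated crudely in Python (the perturbation and the digit estimate below are simplified stand-ins for CADNA's stochastic arithmetic):

        import math, random

        def p(x, ulp=2 ** -52):
            # emulate random rounding: perturb the last bit's worth of x
            return x * (1 + random.choice((-1.0, 1.0)) * ulp)

        def cestac(f, runs=3):
            samples = [f() for _ in range(runs)]
            mean = sum(samples) / runs
            sigma = math.sqrt(sum((s - mean) ** 2 for s in samples)
                              / (runs - 1))
            # crude estimate of significant decimal digits; sigma == 0
            # means all runs agreed (rare for unstable computations)
            digits = math.log10(abs(mean) / sigma) if sigma else 15.0
            return mean, max(0.0, digits)

        def unstable():
            big, x = 1.0e16, math.pi
            return (p(x) + p(big)) - p(big)   # catastrophic cancellation

        print(cestac(unstable))   # few (if any) digits survive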

  8. Millifluidic droplet analyser for microbiology

    NARCIS (Netherlands)

    Baraban, L.; Bertholle, F.; Salverda, M.L.M.; Bremond, N.; Panizza, P.; Baudry, J.; Visser, de J.A.G.M.; Bibette, J.

    2011-01-01

    We present a novel millifluidic droplet analyser (MDA) for precisely monitoring the dynamics of microbial populations over multiple generations in numerous (≈10³) aqueous emulsion droplets (100 nL). As a first application, we measure the growth rate of a bacterial strain and determine the minimal

  9. Analyses of PWR spent fuel composition using SCALE and SWAT code systems to find correction factors for criticality safety applications adopting burnup credit

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Suyama, Kenya; Mochizuki, Hiroki; Okuno, Hiroshi; Nomura, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    The isotopic composition calculations were performed for 26 spent fuel samples from the Obrigheim PWR reactor and 55 spent fuel samples from 7 PWR reactors using the SAS2H module of the SCALE4.4 code system with 27, 44 and 238 group cross-section libraries and the SWAT code system with the 107 group cross-section library. For the analyses of samples from the Obrigheim PWR reactor, geometrical models were constructed for each of SCALE4.4/SAS2H and SWAT. For the analyses of samples from the 7 PWR reactors, the geometrical model already adopted in SCALE/SAS2H was directly converted to the model of SWAT. The four kinds of calculation results were compared with the measured data. For convenience, the ratio of the measured to calculated values was used as a parameter; when the ratio is less than unity the calculation overestimates the measurement, and the closer the ratio is to unity, the better the agreement. For many nuclides important to burnup credit criticality safety evaluation, the four methods applied in this study showed good agreement with measurements in general. More precise observations showed, however: (1) ratios below unity were found for Pu-239 and -241 for 16 selected samples out of the 26 samples from the Obrigheim reactor (10 samples were deselected because their burnups were determined with the Cs-137 non-destructive method, which is less reliable than the Nd-148 method used for the remaining 16 samples); (2) ratios above unity were found for Am-241 and Cm-242 for both the 16 and the 55 samples; (3) ratios above unity were found for Sm-149 for the 55 samples; (4) SWAT generally gave larger ratios than SAS2H, with some exceptions. Based on the measured-to-calculated ratios for the 71 samples of a combined set comprising the 16 selected samples and the 55 samples, correction factors to be multiplied onto the calculated isotopic compositions were generated for a conservative estimate of the neutron multiplication factor
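
    The final step, turning measured-to-calculated (M/C) ratios into nuclide-wise correction factors that are multiplied onto the calculated compositions, is simple to sketch (the ratios, the percentile choice and the inventory below are invented; choosing a statistically defensible bound is the substance of the report):

        import numpy as np

        # invented M/C ratios per nuclide across the sample set
        mc = {"Pu-239": np.array([0.95, 0.97, 0.93, 0.96]),
              "Am-241": np.array([1.10, 1.15, 1.08, 1.12])}

        # one conservative choice: a bounding percentile of the M/C spread
        # (the direction of conservatism depends on whether the nuclide is
        # fissile or neutron-absorbing)
        factors = {n: float(np.percentile(r, 95)) for n, r in mc.items()}

        calc_grams = {"Pu-239": 120.0, "Am-241": 3.0}   # invented inventory
        corrected = {n: g * factors[n] for n, g in calc_grams.items()}
        print(factors, corrected)   # corrected inventory feeds the k_eff run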

  10. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈10³⁰ for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
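
    The construction of T can be reproduced on a toy scale (a four-state system with two invented deterministic members of the network class):

        import numpy as np

        # each class member is a deterministic map: state -> next state
        maps = [{0: 1, 1: 3, 2: 0, 3: 3},
                {0: 1, 1: 2, 2: 0, 3: 3}]

        T = np.zeros((4, 4))
        for m in maps:                         # superpose member dynamics
            for s, s_next in m.items():
                T[s, s_next] += 1.0 / len(maps)

        safe = np.where(T > 0, T, 1.0)         # avoid log(0)
        H = -(T * np.log2(safe)).sum(axis=1)   # row-wise Shannon entropy
        print(T)
        print(H)   # state 1 carries 1 bit: the two members disagree there

    High-entropy rows mark the transitions on which the class members disagree most, i.e. the measurements that would best narrow down the network structure.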

  11. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.
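
    The statistical treatment can be imitated with a generic clustering step (invented two-phase spot data; the paper's actual statistical tools are more advanced): cluster means estimate phase compositions, cluster sizes estimate phase abundances, and the overall mean recovers the bulk chemistry:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        # invented microprobe spot analyses (wt% CaO, SiO2) around two phases
        spots = np.vstack([rng.normal([71, 25], 0.5, size=(60, 2)),
                           rng.normal([63, 32], 0.5, size=(40, 2))])

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spots)
        for k in range(2):
            members = spots[km.labels_ == k]
            print(f"phase {k}: mean composition {members.mean(axis=0).round(1)},"
                  f" abundance {len(members) / len(spots):.0%}")
        print("bulk chemistry:", spots.mean(axis=0).round(1))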

  12. Application of cluster and discriminant analyses to diagnose lithological heterogeneity of the parent material according to its particle-size distribution

    Science.gov (United States)

    Giniyatullin, K. G.; Valeeva, A. A.; Smirnova, E. V.

    2017-08-01

    Particle-size distribution in soddy-podzolic and light gray forest soils of the Botanical Garden of Kazan Federal University has been studied. The cluster analysis of data on the samples from genetic soil horizons attests to the lithological heterogeneity of the profiles of all the studied soils. It is probable that they are developed from the two-layered sediments with the upper colluvial layer underlain by the alluvial layer. According to the discriminant analysis, the major contribution to the discrimination of colluvial and alluvial layers is that of the fraction >0.25 mm. The results of canonical analysis show that there is only one significant discriminant function that separates alluvial and colluvial sediments on the investigated territory. The discriminant function correlates with the contents of fractions 0.05-0.01, 0.25-0.05, and >0.25 mm. Classification functions making it possible to distinguish between alluvial and colluvial sediments have been calculated. Statistical assessment of particle-size distribution data obtained for the plow horizons on ten plowed fields within the garden indicates that this horizon is formed from colluvial sediments. We conclude that the contents of separate fractions and their ratios cannot be used as a universal criterion of the lithological heterogeneity. However, adequate combination of the cluster and discriminant analyses makes it possible to give a comprehensive assessment of the lithology of soil samples from data on the contents of sand and silt fractions, which considerably increases the information value and reliability of the results.
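
    The discriminant step can be sketched with scikit-learn (all fraction contents are invented): fit a linear discriminant on samples of known origin, inspect the single canonical axis, and classify new horizons:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)
        # invented % contents of fractions (>0.25, 0.25-0.05, 0.05-0.01 mm)
        alluvial = rng.normal([22, 40, 18], 3, size=(30, 3))
        colluvial = rng.normal([8, 30, 30], 3, size=(30, 3))
        X = np.vstack([alluvial, colluvial])
        y = np.array([0] * 30 + [1] * 30)       # 0 = alluvial, 1 = colluvial

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print(lda.scalings_.ravel())            # weights of the canonical axis
        print(lda.predict([[20, 38, 20]]))      # classify a plow-horizon sample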

  13. The Development of Protein Microarrays and Their Applications in DNA-Protein and Protein-Protein Interaction Analyses of Arabidopsis Transcription Factors

    Science.gov (United States)

    Gong, Wei; He, Kun; Covington, Mike; Dinesh-Kumar, S. P.; Snyder, Michael; Harmer, Stacey L.; Zhu, Yu-Xian; Deng, Xing Wang

    2009-01-01

    We used our collection of Arabidopsis transcription factor (TF) ORFeome clones to construct protein microarrays containing as many as 802 TF proteins. These protein microarrays were used for both protein-DNA and protein-protein interaction analyses. For protein-DNA interaction studies, we examined AP2/ERF family TFs and their cognate cis-elements. By careful comparison of the DNA-binding specificity of 13 TFs on the protein microarray with previous non-microarray data, we showed that protein microarrays provide an efficient and high throughput tool for genome-wide analysis of TF-DNA interactions. This microarray protein-DNA interaction analysis allowed us to derive a comprehensive view of DNA-binding profiles of AP2/ERF family proteins in Arabidopsis. It also revealed four TFs that bound the EE (evening element) and had the expected phased gene expression under clock-regulation, thus providing a basis for further functional analysis of their roles in clock regulation of gene expression. We also developed procedures for detecting protein interactions using this TF protein microarray and discovered four novel partners that interact with HY5, which can be validated by yeast two-hybrid assays. Thus, plant TF protein microarrays offer an attractive high-throughput alternative to traditional techniques for TF functional characterization on a global scale. PMID:19802365

  14. Development of proton-induced x-ray emission techniques with application to multielement analyses of human autopsy tissues and obsidian artifacts

    International Nuclear Information System (INIS)

    Nielson, K.K.

    1975-01-01

    A method of trace element analysis using proton-induced x-ray emission (PIXE) techniques with energy dispersive x-ray detection methods is described. Data were processed using the computer program ANALEX. PIXE analysis methods were applied to the analysis of liver, spleen, aorta, kidney medulla, kidney cortex, abdominal fat, pancreas, and hair from autopsies of Pima Indians. Tissues were freeze dried and low temperature ashed before analysis. Concentrations were tabulated for K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Pb, Se, Br, Rb, Sr, Cd, and Cs and examined for significant differences related to diabetes. Concentrations of Ca and Sr in aorta, Fe and Rb in spleen and Mn in liver had different patterns in diabetics than in nondiabetics. High Cs concentrations were also observed in the kidneys of two subjects who died of renal disorders. Analyses by atomic absorption and PIXE methods were compared. PIXE methods were also applied to elemental analysis of obsidian artifacts from Campeche, Mexico. Based on K, Ba, Mn, Fe, Rb, Sr and Zr concentrations, the artifacts were related to several Guatemalan sources. (Diss. Abstr. Int., B)

  16. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  17. Performance Analysis of MTD64, our Tiny Multi-Threaded DNS64 Server Implementation: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-07-01

    In this paper, the performance of MTD64 is measured and compared to that of the industry-standard BIND in order to check the correctness of the design concepts of MTD64, especially the use of a new thread for each request. For the performance measurements, our earlier proposed dns64perf program is enhanced as dns64perf2, which is also documented in this paper. We found that MTD64 seriously outperformed BIND, and hence our design principles may be useful for the design of a high-performance, production-class DNS64 server. As an additional test, we have also examined the effect of dynamic CPU frequency scaling on the performance of the implementations.
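
    The design principle under test, one new thread per request, is a generic server pattern. A minimal Python sketch of thread-per-request UDP handling follows; it is not MTD64's own code, and the port and echo handler are placeholders:

        # Thread-per-request UDP server sketch (the pattern MTD64 applies to
        # DNS64). socketserver.ThreadingUDPServer spawns one thread per datagram.
        import socketserver

        class EchoHandler(socketserver.BaseRequestHandler):
            def handle(self):
                data, sock = self.request          # for UDP: (payload, socket)
                # A real DNS64 server would parse the query and synthesize a
                # reply; here we just echo the payload back to the client.
                sock.sendto(data, self.client_address)

        if __name__ == "__main__":
            # Placeholder address/port; Ctrl-C stops the server.
            with socketserver.ThreadingUDPServer(("127.0.0.1", 5353), EchoHandler) as srv:
                srv.serve_forever()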

  18. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    performance difficult. Likewise, a demonstration of the magnitude of conservatisms in the dose estimates that result from conservative inputs is difficult to determine. To respond to these issues, the DOE explored the significance of uncertainties and the magnitude of conservatisms in the SSPA Volumes 1 and 2 (BSC 2001 [DIRS 155950]; BSC 2001 [DIRS 154659]). The three main goals of this report are: (1) To briefly summarize and consolidate the discussion of much of the work that has been done over the past few years to evaluate, clarify, and improve the representation of uncertainties in the TSPA and performance projections for a potential repository. This report does not contain any new analyses of those uncertainties, but it summarizes in one place the main findings of that work. (2) To develop a strategy for how uncertainties may be handled in the TSPA and supporting analyses and models to support a License Application, should the site be recommended. It should be noted that the strategy outlined in this report is based on current information available to DOE. The strategy may be modified pending receipt of additional pertinent information, such as the Yucca Mountain Review Plan. (3) To discuss issues related to communication about uncertainties, and propose some approaches the DOE may use in the future to improve how it communicates uncertainty in its models and performance assessments to decision-makers and to technical audiences

  19. High-Precision In Situ 87Sr/86Sr Analyses through Microsampling on Solid Samples: Applications to Earth and Life Sciences

    Directory of Open Access Journals (Sweden)

    Sara Di Salvo

    2018-01-01

    An analytical protocol for high-precision, in situ microscale isotopic investigations is presented here, which combines the use of a high-performing mechanical microsampling device and high-precision TIMS measurements on micro-Sr samples, allowing for excellent results both in accuracy and precision. The present paper is a detailed methodological description of the whole analytical procedure from sampling to elemental purification and Sr-isotope measurements. The method offers the potential to attain isotope data at the microscale on a wide range of solid materials with the use of minimally invasive sampling. In addition, we present three significant case studies for geological and life sciences, as examples of the various applications of microscale 87Sr/86Sr isotope ratios, concerning (i) the pre-eruptive mechanisms triggering recent eruptions at Nisyros volcano (Greece), (ii) the dynamics involved with the initial magma ascent during Eyjafjallajökull volcano's (Iceland) 2010 eruption, which are usually related to the precursory signals of the eruption, and (iii) the environmental context of a MIS 3 cave bear, Ursus spelaeus. The studied cases show the robustness of the methods, which can also be applied in other areas, such as cultural heritage, archaeology, petrology, and forensic sciences.

  20. Application of 1013 ohm Faraday cup current amplifiers for boron isotopic analyses by solution mode and laser ablation multicollector inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Lloyd, Nicholas S; Sadekov, Aleksey Yu; Misra, Sambuddha

    2018-01-15

    Boron isotope ratios (δ¹¹B values) are used as a proxy for seawater paleo-pH, amongst several other applications. The analytical precision can be limited by the detection of low-intensity ion beams from limited sample amounts. High-gain amplifiers offer improvements in signal/noise ratio and can be used to increase measurement precision and reduce sample amounts. 10¹³ ohm amplifier technology has previously been applied to several radiogenic systems, but has thus far not been applied to non-traditional stable isotopes. Here we apply 10¹³ ohm amplifier technology for the measurement of boron isotope ratios using solution mode MC-ICP-MS and laser ablation mode (LA-)MC-ICP-MS techniques. Precision is shown for reference materials as well as for low-volume foraminifera samples. The baseline uncertainty for a 0.1 pA ¹⁰B⁺ ion beam is reduced. 10¹³ ohm amplifier technology is demonstrated to offer advantages for the determination of δ¹¹B values by both MC-ICP-MS and LA-MC-ICP-MS for small samples of biogenic carbonates, such as foraminifera shells. 10¹³ ohm amplifier technology will also be of benefit to other non-traditional stable isotope measurements. Copyright © 2017 John Wiley & Sons, Ltd.
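
    The gain from larger feedback resistors can be checked from Johnson-Nyquist noise alone: the signal voltage scales with R while the thermal noise voltage scales with √R. A back-of-the-envelope sketch with assumed temperature and bandwidth (values not taken from the paper):

        # Johnson-Nyquist estimate of why 1e13 ohm beats 1e11 ohm for tiny beams.
        from math import sqrt

        kB, T = 1.380649e-23, 300.0      # J/K; amplifier temperature (assumed)
        bandwidth = 1.0                  # Hz; effective bandwidth (assumed)
        beam = 0.1e-12                   # A; a 0.1 pA ion beam, as cited above

        for R in (1e11, 1e13):
            v_signal = beam * R                          # V = I * R
            v_noise = sqrt(4 * kB * T * R * bandwidth)   # thermal noise voltage
            print(f"R = {R:.0e} ohm: S/N ~ {v_signal / v_noise:.0f}")
        # S/N improves by sqrt(1e13 / 1e11) = 10x with the higher-gain amplifier.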

  1. Generation of a predicted protein database from EST data and application to iTRAQ analyses in grape (Vitis vinifera cv. Cabernet Sauvignon) berries at ripening initiation

    Science.gov (United States)

    Lücker, Joost; Laszczak, Mario; Smith, Derek; Lund, Steven T

    2009-01-01

    Background iTRAQ is a proteomics technique that uses isobaric tags for relative and absolute quantitation of tryptic peptides. In proteomics experiments, the detection and high confidence annotation of proteins and the significance of corresponding expression differences can depend on the quality and the species specificity of the tryptic peptide map database used for analysis of the data. For species for which finished genome sequence data are not available, identification of proteins relies on similarity to proteins from other species using comprehensive peptide map databases such as the MSDB. Results We were interested in characterizing ripening initiation ('veraison') in grape berries at the protein level in order to better define the molecular control of this important process for grape growers and wine makers. We developed a bioinformatic pipeline for processing EST data in order to produce a predicted tryptic peptide database specifically targeted to the wine grape cultivar, Vitis vinifera cv. Cabernet Sauvignon, and lacking truncated N- and C-terminal fragments. By searching iTRAQ MS/MS data generated from berry exocarp and mesocarp samples at ripening initiation, we determined that implementation of the custom database afforded a large improvement in high confidence peptide annotation in comparison to the MSDB. We used iTRAQ MS/MS in conjunction with custom peptide db searches to quantitatively characterize several important pathway components for berry ripening previously described at the transcriptional level and confirmed expression patterns for these at the protein level. Conclusion We determined that a predicted peptide database for MS/MS applications can be derived from EST data using advanced clustering and trimming approaches and successfully implemented for quantitative proteome profiling. Quantitative shotgun proteome profiling holds great promise for characterizing biological processes such as fruit ripening initiation and may be further
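
    The heart of such a pipeline, predicting tryptic peptides from translated sequences, reduces to the standard trypsin rule: cleave C-terminal to K or R unless the next residue is P. A minimal sketch (Python 3.7+ for zero-width splits; the sequence is invented, and no EST clustering or trimming is shown):

        # In-silico tryptic digest: cleave after K/R unless followed by P.
        import re

        def tryptic_peptides(protein: str, min_len: int = 6):
            # Zero-width split after K or R not followed by P (trypsin rule).
            peptides = re.split(r"(?<=[KR])(?!P)", protein)
            return [p for p in peptides if len(p) >= min_len]

        if __name__ == "__main__":
            seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"  # invented
            for pep in tryptic_peptides(seq):
                print(pep)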

  3. Simultaneous spectral and temporal analyses of kinetic energies in nonequilibrium systems: theory and application to vibrational relaxation of O-D stretch mode of HOD in water.

    Science.gov (United States)

    Jeon, Jonggu; Lim, Joon Hyung; Kim, Seongheun; Kim, Heejae; Cho, Minhaeng

    2015-05-28

    A time series of kinetic energies (KE) from classical molecular dynamics (MD) simulation contains fundamental information on system dynamics. It can also be analyzed in the frequency domain through Fourier transformation (FT) of velocity correlation functions, providing the energy content of different spectral regions. By limiting the FT time span, we have previously shown that spectral resolution of KE evolution is possible in nonequilibrium situations [Jeon and Cho, J. Chem. Phys. 2011, 135, 214504]. In this paper, we refine the method by employing the concept of instantaneous power spectra, extending it to reflect an instantaneous time-correlation of velocities with those in the future as well as with those in the past, and present a new method to obtain the instantaneous spectral density of KE (iKESD). This approach enables the simultaneous spectral and temporal resolution of KE with unlimited time precision. We discuss the formal and novel properties of the new iKESD approaches and how to optimize computational methods and determine parameters for practical applications. The method is specifically applied to the nonequilibrium MD simulation of vibrational relaxation of the OD stretch mode in a hydrated HOD molecule by employing a hybrid quantum mechanical/molecular mechanical (QM/MM) potential. We directly compare the computational results with the OD band population relaxation time profiles extracted from the IR pump-probe measurements for 5% HOD in water. The calculated iKESD yields an OD bond relaxation time scale ∼30% larger than the experimental value, and this decay is largely frequency-independent if the classical anharmonicity is accounted for. From the integrated iKESD over intra- and intermolecular bands, the major energy transfer pathways were found to involve the HOD bending mode in the sub-ps range, then the internal modes of the solvent until 5 ps after excitation, and eventually the solvent intermolecular modes. Also, strong hydrogen
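
    In the same spirit as the iKESD idea, though not the authors' exact formulation, a short-time power spectrum of a velocity trace (by Wiener-Khinchin, equivalent to Fourier-transforming its autocorrelation) resolves kinetic energy in both frequency and time. A toy sketch on a synthetic chirped signal, assuming NumPy only:

        # Toy time-resolved KE spectrum: windowed power spectra of a synthetic,
        # frequency-chirped 1-D "velocity" trace (not a real MD trajectory).
        import numpy as np

        dt = 1.0e-15                          # 1 fs timestep (assumed)
        t = np.arange(50_000) * dt
        # Chirp: instantaneous frequency sweeps 70 -> 100 THz over the run.
        phase = 2 * np.pi * (70e12 * t + 0.5 * (30e12 / t[-1]) * t ** 2)
        v = np.cos(phase)

        window = 2048                         # ~2 ps analysis window
        for start in range(0, len(v) - window, 10_000):
            seg = v[start:start + window] * np.hanning(window)
            power = np.abs(np.fft.rfft(seg)) ** 2   # ~ KE spectral density
            f = np.fft.rfftfreq(window, dt)
            print(f"t = {start * dt * 1e12:5.1f} ps, "
                  f"peak at {f[np.argmax(power)] / 1e12:5.1f} THz")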

  4. Validation of an UHPLC-MS/MS Method for Screening of Antimicrobial Residues in Eggs and Their Application to Analyses of Eggs from Laying Hens Subjected to Pharmacological Treatment

    Directory of Open Access Journals (Sweden)

    Letícia Gomes Magnago Caldeira

    2017-01-01

    A multiresidue method by UHPLC-MS/MS was optimized and validated for the screening and semiquantitative detection of antimicrobial residues from the tetracycline, aminoglycoside, quinolone, lincosamide, β-lactam, sulfonamide, and macrolide families in eggs. A qualitative approach was used to ensure adequate sensitivity to detect residues at the level of interest, defined as the maximum residue limit (MRL), or less. The applicability of the method was assessed by analyzing egg samples from hens that had been subjected to pharmacological treatment with neomycin, enrofloxacin, lincomycin, oxytetracycline, and doxycycline during five days and after discontinuation of medication (10 days). The method was adequate for screening all studied analytes in eggs, since the performance parameters ensured a false-compliant rate below or equal to 5%, except for flumequine. In the analyses of eggs from laying hens subjected to pharmacological treatment, all antimicrobial residues were detected throughout the experimental period, even after discontinuation of medication, except for neomycin, demonstrating the applicability of the method for analyses of antimicrobial residues in eggs.

  5. JaSTA-2: Second version of the Java Superposition T-matrix Application

    Science.gov (United States)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2) to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses double-precision superposition T-matrix codes for multi-sphere clusters in random orientation, developed by Mackowski and Mischenko (1996). The new version offers two options as part of the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.
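
    The multiple-wavelength option amounts to fanning independent per-wavelength runs out over cores. A generic Python sketch of the pattern (JaSTA-2 itself is Java; scattering_at is a toy stand-in for a T-matrix run):

        # Fan a per-wavelength computation out over a worker pool, as the
        # multi-wavelength mode does over processor cores (pattern sketch only).
        from concurrent.futures import ThreadPoolExecutor

        def scattering_at(wavelength_um: float) -> float:
            # Stand-in for a T-matrix run; returns a dummy "cross-section".
            # (CPU-bound pure-Python work is GIL-limited; real numeric kernels
            # in native code, or a ProcessPoolExecutor, parallelize fully.)
            return 1.0 / wavelength_um ** 4        # toy Rayleigh-like scaling

        wavelengths = [0.45, 0.55, 0.65, 0.85, 1.25, 2.20]   # micrometres (example)

        with ThreadPoolExecutor() as pool:
            for wl, result in zip(wavelengths, pool.map(scattering_at, wavelengths)):
                print(f"{wl:.2f} um -> {result:.3f}")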

  6. Technical center for transportation analyses

    International Nuclear Information System (INIS)

    Foley, J.T.

    1978-01-01

    A description is presented of an information search/retrieval/research activity of Sandia Laboratories which provides technical environmental information which may be used in transportation risk analyses, environmental impact statements, development of design and test criteria for packaging of energy materials, and transportation mode research studies. General activities described are: (1) history of center development; (2) environmental information storage/retrieval system; (3) information searches; (4) data needs identification; and (5) field data acquisition system and applications

  7. Application of neutron activation analysis to the study of impurities in molybdenum, tungsten and nuclear graphite; Application de l'analyse par activation neutronique a l'etude des impuretes dans le molybdene, le tungstene et le graphite nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Pinte, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1967-10-15

    A neutron activation method is described for the analysis of a maximum number of foreign elements in molybdenum, tungsten and graphite. The various elements are isolated using a systematic separation programme; the elements are subsequently analysed qualitatively and quantitatively using γ-spectrometry. By this method, 27 elements are determined in molybdenum and tungsten, and 20 elements in graphite, to which can be added the elements already determined routinely: V, Mn, Si, P, S, Cl and 14 rare earths. (author)

  8. Applications of Historical Analyses in Combat Modelling

    Science.gov (United States)

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic ...

  9. Mass spectrometer introduction line: application to the analysis of impurities in uranium hexafluoride; Ligne d'introduction pour spectrometre de masse: application a l'analyse des impuretes contenues dans l'UF{sub 6}

    Energy Technology Data Exchange (ETDEWEB)

    Besson, M. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires. Departement de physico-chimie, service des isotopes stables, section de spectrometrie de masse

    1967-01-01

    The continuous mass-spectrometric analysis of impurities in UF{sub 6} is possible industrially if certain conditions imposed by the nature of the gas are respected. The gas introduction line arriving at the spectrometer's source makes it possible to fix the flow-rate, to control the inlet pressure and to selectively destroy the gas containing the impurities. The operational conditions for the line are defined and a description is given of the theoretical and experimental study of the various elements of which it is composed, i.e. the leak valve, the flow-meter, the chemical trap and the servo-mechanism making it possible to regulate and control the gas flow. The dynamic characteristics of the line's various components and the performance of the equipment in the case of the analyses considered are given. (author)

  11. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standards used for implementing the service-oriented applications. By doing so, we will be able to not only reason about applications at different levels of abstraction, but also to build a bridge between the views of researchers on formal methods and developers in industry. We apply our approach to the financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols.

  12. Development of the Veritas plot and its application in cardiac surgery: an evidence-synthesis graphic tool for the clinician to assess multiple meta-analyses reporting on a common outcome.

    Science.gov (United States)

    Panesar, Sukhmeet S; Rao, Christopher; Vecht, Joshua A; Mirza, Saqeb B; Netuveli, Gopalakrishnan; Morris, Richard; Rosenthal, Joe; Darzi, Ara; Athanasiou, Thanos

    2009-10-01

    Meta-analyses may be prone to generating misleading results because of a paucity of experimental studies (especially in surgery); publication bias; and heterogeneity in study design, intervention and the patient population of included studies. When investigating a specific clinical or scientific question on which several relevant meta-analyses may have been published, value judgments must be applied to determine which analysis represents the most robust evidence. These value judgments should be specifically acknowledged. We designed the Veritas plot to explicitly explore important elements of quality and to facilitate decision-making by highlighting specific areas in which meta-analyses are found to be deficient. Furthermore, as a graphic tool, it may be more intuitive than when similar data are presented in a tabular or text format. The Veritas plot is an adaptation of the radar plot, a graphic tool for the description of multiattribute data. Key elements of meta-analytical quality such as heterogeneity, publication bias and study design are assessed. Existing qualitative methods such as the Assessment of Multiple Systematic Reviews (AMSTAR) tool have been incorporated in addition to important considerations when interpreting surgical meta-analyses such as the year of publication and population characteristics. To demonstrate the potential of the Veritas plot to inform clinical practice, we apply the Veritas plot to the meta-analytical literature comparing the incidence of 30-day stroke in off-pump coronary artery bypass surgery and conventional coronary artery bypass surgery. We demonstrate that a visually stimulating and practical evidence-synthesis tool can direct the clinician and scientist to a particular meta-analytical study to inform clinical practice. The Veritas plot is also cumulative and allowed us to assess the quality of evidence over time. We have presented a practical graphic application for scientists and clinicians to identify and interpret
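
    Since the Veritas plot adapts the radar plot, its skeleton is easy to sketch with matplotlib; the axis names and scores below are invented placeholders, not AMSTAR results:

        # Radar-plot sketch of the multiattribute display the Veritas plot
        # adapts (axis names and scores are invented for illustration).
        import numpy as np
        import matplotlib.pyplot as plt

        axes = ["heterogeneity", "publication bias", "study design",
                "AMSTAR score", "recency", "population match"]
        scores = [0.7, 0.5, 0.8, 0.6, 0.9, 0.4]      # hypothetical meta-analysis

        angles = np.linspace(0, 2 * np.pi, len(axes), endpoint=False).tolist()
        scores_closed = scores + scores[:1]          # close the polygon
        angles_closed = angles + angles[:1]

        ax = plt.subplot(polar=True)
        ax.plot(angles_closed, scores_closed)
        ax.fill(angles_closed, scores_closed, alpha=0.25)
        ax.set_xticks(angles)
        ax.set_xticklabels(axes)
        plt.savefig("veritas_sketch.png")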

  13. Binary analysis: 1. part: definitions and treatment of binary functions; 2. part: applications and functions of trans-coding; Analyse binaire: 1ere partie: definitions et traitements des fonctions binaires; 2eme partie: applications et fonctions de transcodage

    Energy Technology Data Exchange (ETDEWEB)

    Vallee, R L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1968-07-01

    The study of binary sets under their mathematical aspects constitutes the subject matter of binary analysis, the purpose of which is to develop methods that are at once simple, rigorous and practical, intended for technicians, engineers and all those directly concerned with digital information processing. This rapidly expanding discipline already tends to play an essential, if not decisive, role in nuclear electronics as well as in several other research areas. (authors)

  14. The economic analysis of power market architectures: application to real-time market design; L' analyse economique des architectures de marche electrique: application au market design du temps reel

    Energy Technology Data Exchange (ETDEWEB)

    Saguan, M

    2007-04-15

    This work contributes to the economic analysis of power market architectures. A modular framework is used to separate problems of market design into different modules. The work's goal is to study real-time market design. A two-stage market equilibrium model is used to analyse the two main real-time designs: the 'market' and the 'mechanism' (with penalty). Numerical simulations show that the design applied in real time is not neutral with respect to the sequence of energy markets and the dynamics of competition. Designs using penalties (mechanisms) cause distortions and inefficiencies and can create barriers to entry. The size of the distortions is determined by the temporal position of the gate that closes the forward markets. This model has also allowed us to show the key role of real-time integration between zones and the importance of good harmonization between the real-time designs of each zone. (author)

  15. NOAA's National Snow Analyses

    Science.gov (United States)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km² spatial resolution to a 1 km² resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km² spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km² spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational, products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
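
    The Newtonian nudging used for assimilation relaxes a model state toward an observation at a chosen timescale. A one-variable sketch with arbitrary values (not NOHRSC's operational settings):

        # Newtonian nudging: relax a modeled snow water equivalent (SWE) toward
        # an observation by adding G * (obs - model) each timestep (toy values).
        model_swe = 120.0      # mm, model state (invented)
        obs_swe = 135.0        # mm, assimilated observation (invented)
        G = 1.0 / 6.0          # 1/h, nudging coefficient (~6 h timescale, assumed)
        dt = 1.0               # h, model timestep

        for hour in range(12):
            model_swe += G * (obs_swe - model_swe) * dt
            print(f"t = {hour + 1:2d} h, SWE = {model_swe:6.2f} mm")
        # The state converges smoothly to the observation instead of jumping,
        # which is why approximate balance is maintained in the model.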

  16. Análise comparativa da aplicação do programa Seis Sigma em processos de manufatura e serviços Comparative analyses of Six-Sigma program application in manufacturing and services process

    Directory of Open Access Journals (Sweden)

    Luis Ricardo Galvani

    2013-01-01

    The Six Sigma program has its roots in the manufacturing field, but it can also be applied to service processes. However, this application has been done in a more modest manner, with the participation of fewer companies and, as a consequence, fewer published cases, projects, and papers. This paper presents a comparative analysis of Six Sigma program application in manufacturing and service processes by means of a literature review, comparative analyses of Six Sigma projects experienced by the author, and case studies with companies that apply the program in the manufacturing and services fields. Even with the sample restriction, the results show strong indications of significant differences whose better understanding can lead to better results in service applications.

  17. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture

  18. Argon activation analysis, application to dating by the potassium-argon method; Analyse par activation de l'argon. Application a la datation par la methode potassium-argon

    Energy Technology Data Exchange (ETDEWEB)

    Dumesnil, P. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1967-07-01

    Activation analysis using radiogenic argon-40 has been applied to rock-dating by the K-Ar method. The argon is extracted from the sample, purified, activated to saturation in a flux of 2×10¹³ neutrons·cm⁻²·s⁻¹ and measured by gamma spectrometry. The sensitivity obtained is such that it is possible to measure amounts of argon corresponding to ages of only a few thousand years. However, since it has not been possible to measure the contamination of radiogenic argon by atmospheric argon with any accuracy, the practical measurable age limit is of the order of 1 Ma. The method has been applied to basalts from the Mont-Dore region. The results obtained are in fairly good agreement with geological, stratigraphic and paleomagnetic data. (author)
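
    Given measured radiogenic ⁴⁰Ar and ⁴⁰K, the age follows from the standard K-Ar equation. A sketch using the conventional decay constants of Steiger and Jäger (1977); the isotope ratio is invented:

        # K-Ar age from the standard decay equation:
        #   t = (1/lam) * ln(1 + (lam/lam_ec) * Ar40_rad / K40)
        # Decay constants after Steiger & Jaeger (1977); the ratio is invented.
        from math import log

        lam = 5.543e-10        # total 40K decay constant, 1/yr
        lam_ec = 0.581e-10     # branch to 40Ar (electron capture), 1/yr

        def k_ar_age(ar40_rad_over_k40: float) -> float:
            return (1.0 / lam) * log(1.0 + (lam / lam_ec) * ar40_rad_over_k40)

        # A very young basalt: tiny radiogenic 40Ar/40K molar ratio (hypothetical),
        # giving an age on the few-thousand-year scale mentioned in the text.
        print(f"age ~ {k_ar_age(6e-8):.0f} yr")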

  20. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practised and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach. Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  1. Risico-analyse brandstofpontons

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks posed by fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. It is assumed that the pontoon is located in a

  2. Fast multichannel analyser

    Energy Technology Data Exchange (ETDEWEB)

    Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

    1982-10-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10⁷ s⁻¹ has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.
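
    Functionally, such an MCA histograms ADC codes: an 8-bit converter addresses 256 channels, each incremented once per sample. A software sketch of that data path (timing and hardware aside; the input data are simulated):

        # Software model of a 256-channel multichannel analyser: every 8-bit
        # ADC sample increments one bin of a histogramming memory.
        import random

        memory = [0] * 256                      # 256 channels of count capacity

        def acquire(n_samples: int) -> None:
            for _ in range(n_samples):
                # Fake ADC data: a Gaussian "peak", clamped to the 8-bit range.
                code = min(255, max(0, int(random.gauss(128, 20))))
                memory[code] += 1               # the histogramming-RAM step

        acquire(100_000)
        peak = max(range(256), key=memory.__getitem__)
        print(f"peak channel {peak} with {memory[peak]} counts")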

  4. CFD analyses in regulatory practice

    International Nuclear Information System (INIS)

    Bloemeling, F.; Pandazis, P.; Schaffrath, A.

    2012-01-01

    Numerical software is used in nuclear regulatory procedures for many problems in the fields of neutron physics, structural mechanics, thermal hydraulics etc. Among other things, the software is employed in dimensioning and designing systems and components and in simulating transients and accidents. In nuclear technology, analyses of this kind must meet strict requirements. Computational Fluid Dynamics (CFD) codes were developed for computing multidimensional flow processes of the type occurring in reactor cooling systems or in containments. Extensive experience has been accumulated by now in selected single-phase flow phenomena. At the present time, there is a need for development and validation with respect to the simulation of multi-phase and multi-component flows. As insufficient input by the user can lead to faulty results, the validity of the results and an assessment of uncertainties are guaranteed only through consistent application of so-called Best Practice Guidelines. The authors present the possibilities now available to CFD analyses in nuclear regulatory practice. This includes a discussion of the fundamental requirements to be met by numerical software, especially the demands upon computational analysis made by nuclear rules and regulations. In conclusion, 2 examples are presented of applications of CFD analysis to nuclear problems: Determining deboration in the condenser reflux mode of operation, and protection of the reactor pressure vessel (RPV) against brittle failure. (orig.)

  5. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  6. Application of wildfire simulation models for risk analysis

    Science.gov (United States)

    Ager, A.; Finney, M.

    2009-04-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of fires and generate burn probability and intensity maps over large areas (10,000 - 2,000,000 ha). The MTT algorithm is parallelized for multi-threaded processing and is imbedded in a number of research and applied fire modeling applications. High performance computers (e.g., 32-way 64 bit SMP) are typically used for MTT simulations, although the algorithm is also implemented in the 32 bit desktop FlamMap3 program (www.fire.org). Extensive testing has shown that this algorithm can replicate large fire boundaries in the heterogeneous landscapes that typify much of the wildlands in the western U.S. In this paper, we describe the application of the MTT algorithm to understand spatial patterns of burn probability (BP), and to analyze wildfire risk to key human and ecological values. The work is focused on a federally-managed 2,000,000 ha landscape in the central interior region of Oregon State, USA. The fire-prone study area encompasses a wide array of topography and fuel types and a number of highly valued resources that are susceptible to fire. We quantitatively defined risk as the product of the probability of a fire and the resulting consequence. Burn probabilities at specific intensity classes were estimated for each 100 x 100 m pixel by simulating 100,000 wildfires under burn conditions that replicated recent severe wildfire events that occurred under conditions where fire suppression was generally ineffective (97th percentile, August weather). We repeated the simulation under milder weather (70th percentile, August weather) to replicate a "wildland fire use scenario" where suppression is minimized to
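
    The burn-probability output of such simulations is conceptually simple: for each pixel, BP is the fraction of simulated fires that burned it. A toy Monte Carlo sketch on a small grid, with random circular footprints standing in for MTT fire spread:

        # Toy burn-probability map: BP(pixel) = burns(pixel) / n_fires.
        # Random circular fire footprints stand in for minimum-travel-time
        # fire spread; everything here is illustrative, not the MTT algorithm.
        import random

        N, n_fires = 50, 2000                   # grid size and number of fires
        burns = [[0] * N for _ in range(N)]

        for _ in range(n_fires):
            cx, cy = random.uniform(0, N), random.uniform(0, N)   # ignition point
            r = random.uniform(1, 8)                              # fire "size"
            for y in range(N):
                for x in range(N):
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                        burns[y][x] += 1

        bp = [[b / n_fires for b in row] for row in burns]
        print(f"max burn probability: {max(max(row) for row in bp):.3f}")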

  7. Study of a micro-sublimation apparatus with removal of the vapours by pumping; application to the analysis of fluorinated products (1963); Etude d'un appareillage de microsublimation avec entrainement des vapeurs par pompage et application a l'analyse des produits fluores (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Delvalle, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    Micro-sublimation analysis presents definite advantages both from the qualitative and quantitative points of view. An automatic micro-sublimation analysis apparatus has been developed for the analysis of fluorinated products (ClF₃, HF, UF₆, etc.) but this is only one particular application of a method which has a far wider field of possible applications. We give first the most favorable conditions for the operation of such an apparatus. These conditions are the use of a detector which is linear and independent of the nature of the gas, the flow of the sublimed vapours in the conditions of molecular flow, and finally a reproducible and linear re-heating of the separating trap. The apparatus thus built has the advantage of yielding any analysis without prior calibration. It also makes possible the easy identification of an unknown product by the determination of its vapour pressure curve and its molecular weight. The analysis of fluorinated products with this apparatus has shown that the experimental results agree well with what is expected. (author)

  8. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with ¹⁴C being the most commonly analysed radioisotope - presently about 35% of the available beam time on ANTARES is used for ¹⁴C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  10. Analyses of MHD instabilities

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    1985-01-01

    In this article, analyses of the MHD stabilities which govern the global behavior of a fusion plasma are described from the viewpoint of numerical computation. First, we describe the high-accuracy calculation of the MHD equilibrium and then the analysis of the linear MHD instability. The former is the basis of the stability analysis and the latter is closely related to the limiting beta value, which is a very important theoretical issue in tokamak research. To attain a stable tokamak plasma with good confinement properties it is necessary to control or suppress disruptive instabilities. Next, we describe the nonlinear MHD instabilities which are related to disruption phenomena. Lastly, we describe the vectorization of the MHD codes. The above MHD codes for fusion plasma analyses are relatively simple though very time-consuming, and the parts of the codes which need a lot of CPU time are concentrated in a small portion of the codes; moreover, the codes are usually used by their own developers, which makes it comparatively easy to attain a high performance ratio on a vector processor. (author)

  11. Applied mediation analyses

    DEFF Research Database (Denmark)

    Lange, Theis; Hansen, Kim Wadt; Sørensen, Rikke

    2017-01-01

    In recent years, mediation analysis has emerged as a powerful tool to disentangle causal pathways from an exposure/treatment to clinically relevant outcomes. Mediation analysis has been applied in scientific fields as diverse as labour market relations and randomized clinical trials of heart disease treatments. In parallel to these applications, the underlying mathematical theory and computer tools have been refined. This combined review and tutorial will introduce the reader to modern mediation analysis including: the mathematical framework; required assumptions; and software implementation...

  12. A simple beam analyser

    International Nuclear Information System (INIS)

    Lemarchand, G.

    1977-01-01

    (ee'p) experiments make it possible to measure the missing-energy distribution as well as the momentum distribution of the extracted proton in the nucleus versus the missing energy. Such experiments are presently conducted on SACLAY's A.L.S. 300 Linac. Electrons and protons are respectively analysed by two spectrometers and detected in their focal planes. Counting rates are usually low and include time coincidences and accidentals. The signal-to-noise ratio depends on the physics of the experiment and the resolution of the coincidence; it is therefore mandatory to obtain a beam current distribution that is as flat as possible. The use of new technologies has made it possible to monitor the behavior of the beam pulse in real time and to determine when the duty cycle can be considered good with respect to a numerical basis

  13. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  14. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  15. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    When prosecuting crimes, the main question to answer is often who had a motive and the possibility to commit the crime. When investigating cyber crimes, the question of possibility is often hard to answer, as in a networked system almost any location can be accessed from almost anywhere. The most common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker's actions usually will be logged as permissible, standard actions - if they are logged at all. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set

  16. Sapiness–sentiment analyser

    Directory of Open Access Journals (Sweden)

    Jánosi-Rancz Katalin Tünde

    2015-12-01

    Full Text Available In our ever-evolving world, the importance of social networks is greater now than ever. The purpose of this paper is to develop a sentiment analyser for the Hungarian language, which we can then use to analyse any text and conduct further experiments. One such experiment is an application which can interface with social networks and run sentiment analysis on the logged-in user's friends' posts and comments, while the other experiment is the use of sentiment analysis to visualize the evolution of relationships between characters in a text.

  17. Seismic fragility analyses

    International Nuclear Information System (INIS)

    Kostov, Marin

    2000-01-01

    In the last two decades an increasing number of probabilistic seismic risk assessments have been performed. The basic ideas of the procedure for performing a Probabilistic Safety Analysis (PSA) of critical structures (NUREG/CR-2300, 1983) can also be used for normal industrial and residential buildings, dams and other structures. The general formulation of the risk assessment procedure applied in this investigation is presented in Franzini, et al., 1984. The probability of failure of a structure for an expected lifetime (for example 50 years) can be obtained from the annual frequency of failure β_E, determined by the relation β_E = ∫ [dβ(x)/dx] P(f|x) dx, where β(x) is the annual frequency of exceedance of load level x (for example, x may be peak ground acceleration) and P(f|x) is the conditional probability of structure failure at a given seismic load level x. The problem leads to the assessment of the seismic hazard β(x) and the fragility P(f|x). The seismic hazard curves are obtained by probabilistic seismic hazard analysis. The fragility curves are obtained after the response of the structure is described probabilistically and its capacity and the associated uncertainties are assessed. Finally the fragility curves are combined with the seismic loading to estimate the frequency of failure for each critical scenario. The frequency of failure due to a seismic event is represented by the scenario with the highest frequency. The tools usually applied for probabilistic safety analyses of critical structures can relatively easily be adopted for ordinary structures. The key problems are the seismic hazard definition and the fragility analyses. The fragility can be derived either by scaling procedures or by direct generation. Both approaches are presented in the paper. After the seismic risk (in terms of failure probability) is assessed, there are several approaches for risk reduction. Generally the methods can be classified in two groups. The
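
    The convolution above is straightforward to evaluate numerically once a hazard curve β(x) and a fragility P(f|x) are chosen. The sketch below assumes an illustrative power-law hazard and a lognormal fragility (a common parametrisation); all numbers are invented for the demo.

    ```python
    import numpy as np
    from scipy import stats, integrate

    # Hazard curve: annual frequency of exceeding PGA x (illustrative power law).
    def beta(x):
        return 1e-3 * (x / 0.1) ** -2.0

    # Lognormal fragility with median capacity Am and log-standard deviation bc.
    Am, bc = 0.6, 0.4
    def fragility(x):
        return stats.norm.cdf(np.log(x / Am) / bc)

    x = np.linspace(0.05, 2.0, 2000)                 # PGA grid in g
    dbeta_dx = np.gradient(beta(x), x)               # negative: beta decreases
    beta_E = integrate.trapezoid(-dbeta_dx * fragility(x), x)
    p_50yr = 1 - np.exp(-beta_E * 50)                # failure prob. in 50 years
    print(f"annual failure frequency {beta_E:.2e}, 50-year probability {p_50yr:.2%}")
    ```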

  18. Website-analyse

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2009-01-01

    The website is increasingly the preferred medium for information searching, company presentation, e-commerce, entertainment, teaching and social contact. In step with this growing diversity of communication activities on the net, more focus has been placed on optimising the design and ... planning of the functional and content-related aspects of websites. There is a large body of theory and method books specialising in the technical issues of interaction and navigation, as well as in the linguistic content of websites. The Danish HCI (Human Computer Interaction) ... or dead ends when he/she visits the site. Studies in the design and analysis of the visual and aesthetic aspects of the planning and use of websites have, however, only to a limited extent received reflective treatment. That is the background for this chapter, which opens with a review of aesthetics ...

  19. A channel profile analyser

    International Nuclear Information System (INIS)

    Gobbur, S.G.

    1983-01-01

    It is well understood that, due to the wide-band noise present in a nuclear analog-to-digital converter, events at the boundaries of adjacent channels are shared. It is a difficult and laborious process to determine exactly the shape of the channels at the boundaries. A simple scheme has been developed for the direct display of the channel shape of any type of ADC on a cathode ray oscilloscope display. This has been accomplished by sequentially incrementing the reference voltage of a precision pulse generator by a fraction of a channel and storing the ADC data in alternate memory locations of a multichannel pulse height analyser. Alternate channels are needed because of the sharing at the channel boundaries. In the flat region of the profile, alternate memory locations hold channels with zero counts and channels with full-scale counts. At the boundaries, all memory locations will have counts. The resulting shape is a direct display of the channel boundaries. (orig.)
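
    The measurement idea is easy to reproduce in simulation: step a noiseless pulser across channel boundaries and watch how input noise shares the counts between adjacent codes. The ADC model and noise width below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def adc(v, n_pulses=1000, noise=0.3):
        """Ideal unit-width ADC preceded by wide-band noise, so pulses near a
        channel boundary are shared between adjacent channels."""
        return np.floor(v + rng.normal(0, noise, n_pulses)).astype(int)

    # Step a precision pulser in 1/10-channel increments and record, at each
    # step, the fraction of pulses landing in channel 12: the resulting curve
    # traces the channel profile, including the smeared boundaries.
    for v in np.arange(11.0, 13.01, 0.1):
        share = (adc(v) == 12).mean()
        print(f"pulser at {v:5.1f} ch |{'#' * int(40 * share):<40}|")
    ```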

  20. Methodological challenges in carbohydrate analyses

    Directory of Open Access Journals (Sweden)

    Mary Beth Hall

    2007-07-01

    may have to accept that there are currently no acceptable, widely applicable analyses that meet these criteria, and seek to fill this analytical gap. Overall, there are good possibilities for accomplishing the nutritional partitioning of carbohydrates. But to ensure that this is done well, as a discipline, we need to continue our discussions on the basis for partitioning, on what fractions are actually significant, and rigorously evaluate proposed analyses to verify that they are analytically sound.

  1. Applicability of ICRP principles for safety analysis of radioactive waste geological storage; Etude de l'applicabilite des principes de la CIPR a l'analyse de surete du stockage geologique des dechets radioactifs

    Energy Technology Data Exchange (ETDEWEB)

    Lombard, J; Hubert, P; Pages, P

    1987-07-01

    ...storage options) must be determined by a comparative process aimed at selecting the level that leads to the best compromise between protection expenditure and residual risks. These recommendations, particularly the second, depart from the safety analysis principles followed up to now in France and abroad; this study considers the potential advantages and drawbacks of introducing this second principle. To this end it draws on various analyses carried out at the international level which incorporate these principles at least partially. This review shows that the principles are applicable, but that their implementation requires a reorientation of the assessment models, in particular to move away from overly conservative assumptions. Their implementation also requires more information than a deterministic safety approach, since the consequences and probabilities of all possible events must be estimated, and this for each of the storage options to be compared. Assessments of this type are, however, more instructive and can therefore provide fruitful decision support. Finally, this approach permits a better dialogue between the various decision-making bodies and, more generally, between the parties to the debate, insofar as it provides a common language for discussion. It may be noted that this desire for clarification is expressed whatever safety approach is adopted (deterministic or probabilistic), as shown by the questions raised about the meaning of the 'central scenario' of the KBS study. Moreover, these principles will probably be followed internationally in order, among other things, to harmonise the management of radiation protection in its various components. (author)

  2. Molecular ecological network analyses.

    Science.gov (United States)

    Deng, Ye; Jiang, Yi-Huei; Yang, Yunfeng; He, Zhili; Luo, Feng; Zhou, Jizhong

    2012-05-30

    Understanding the interactions among different species within a community and their responses to environmental changes is a central goal in ecology. However, defining the network structure in a microbial community is very challenging due to the extremely high diversity and as-yet uncultivated status of its members. Although recent advances in metagenomic technologies, such as high-throughput sequencing and functional gene arrays, provide revolutionary tools for analyzing microbial community structure, it is still difficult to examine network interactions in a microbial community based on high-throughput metagenomics data. Here, we describe a novel mathematical and bioinformatics framework to construct ecological association networks named molecular ecological networks (MENs) through Random Matrix Theory (RMT)-based methods. Compared to other network construction methods, this approach is remarkable in that the network is automatically defined and robust to noise, thus providing excellent solutions to several common issues associated with high-throughput metagenomics data. We applied it to determine the network structure of microbial communities subjected to long-term experimental warming based on pyrosequencing data of 16S rRNA genes. We showed that the constructed MENs under both warming and unwarming conditions exhibited the topological features of scale-free, small-world and modular networks, which were consistent with previously described molecular ecological networks. Eigengene analysis indicated that the eigengenes represented the module profiles relatively well. In consistency with many other studies, several major environmental traits including temperature and soil pH were found to be important in determining network interactions in the microbial communities examined. To facilitate its application by the scientific community, all these methods and statistical tools have been integrated into a comprehensive Molecular Ecological Network Analysis Pipeline (MENAP), which is open
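
    The RMT idea can be caricatured in a few lines: threshold the OTU correlation matrix and keep the smallest threshold at which the nearest-neighbour spacing of its eigenvalues looks Poisson (the RMT signature of a non-random, modular system) rather than Wigner-like. The toy data, the crude unfolding and the KS test below are illustrative simplifications of the published pipeline.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    abund = rng.lognormal(size=(40, 30))        # toy: 40 OTUs x 30 samples
    corr = np.corrcoef(abund)

    def nnsd_poisson_pvalue(corr, threshold):
        """Threshold |r|, then KS-test the unfolded eigenvalue spacings
        against the exponential (Poisson) law."""
        A = np.where(np.abs(corr) >= threshold, corr, 0.0)
        ev = np.unique(np.round(np.linalg.eigvalsh(A), 8))
        spacing = np.diff(ev)
        if spacing.size < 5:                    # too sparse to test
            return float("nan")
        spacing = spacing / spacing.mean()      # crude unfolding
        return stats.kstest(spacing, "expon").pvalue

    # Scan thresholds; a high p-value means the spacings are Poisson-like.
    for thr in np.arange(0.50, 0.96, 0.05):
        print(f"threshold {thr:.2f}: KS p-value {nnsd_poisson_pvalue(corr, thr):.3f}")
    ```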

  3. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed
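
    As a concrete instance of the maximum-entropy step mentioned above: if only the prior mean of a non-negative parameter is known, the maximum-entropy prior is exponential. The discretised numerical check below (support, mean and grid are arbitrary choices for the demo) recovers a truncated-exponential shape.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Max-entropy prior on [0, 10] for a parameter whose prior mean is 2.0;
    # the continuous-support analogue is p(x) = (1/m) exp(-x/m).
    x = np.linspace(0.0, 10.0, 80)
    m = 2.0

    def neg_entropy(p):
        return np.sum(p * np.log(p + 1e-12))    # minimise -H(p)

    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},  # normalisation
            {"type": "eq", "fun": lambda p: p @ x - m})      # mean constraint
    res = minimize(neg_entropy, np.full(x.size, 1 / x.size),
                   bounds=[(0.0, 1.0)] * x.size, constraints=cons,
                   options={"maxiter": 500})
    p = res.x
    # Slope of log p is ~ -1/m for an exponential prior (up to truncation).
    print("log p drops per unit x:", np.polyfit(x, np.log(p + 1e-12), 1)[0])
    ```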

  4. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    for NPP V-1 Bohunice and on a review of the impact of the modelling of selected components on the results of the calculation safety analysis (a sensitivity study for NPP Mochovce). In 2001 UJD joined a new European project, Alternative Approaches to the Safety Performance Indicators. The project is aimed at collecting information and determining approaches and recommendations for the implementation of risk-oriented indicators, identification of the impact of the safety culture level and organisational culture on safety, and application of the indicators to the needs of regulators and operators. Within the PHARE project, UJD participated in the task focused on severe accident mitigation for nuclear power plants with VVER-440/V213 units. The main results of the analyses of nuclear power plant responses to severe accidents were summarised and the state of their analytical base performed in the past was evaluated within the project. Possible severe accident mitigation and preventive measures were proposed and their applicability to nuclear power plants with VVER-440/V213 was investigated. The obtained results will be used in the assessment activities and accident management of UJD. UJD has also been involved in the EVITA project, which is part of the 5th EC Framework Programme. The project aims at validation of the European computer code ASTEC, dedicated to severe accident modelling. In 2001 the ASTEC computer code was tested on different platforms. The results of the testing are summarised in the technical report of the EC issued in September 2001. Further activities within this project were focused on performing analyses of selected accident scenarios and comparing the obtained results with analyses realised with the help of other computer codes. The work on the project will continue in 2002. In 2001 groundwork on establishing the Centre for Nuclear Safety in Central and Eastern Europe (CENS), the seat of which is going to be in Bratislava, continued. The

  5. German (GRS) approach to accident analysis (part I). German licensing basis for accident analyses. Applicants accident analyses in second part license for Konvoi-plants. Appendix 1. Assessor accident analyses in second part license for Konvoi-plants. Appendix 2. Reference list of DBA to be considered in the safety status analysis of a PSR. Appendix 3a. Reference list of special very rare and BDB plant conditions to be considered in the safety status analysis of a PSE. Appendix 3b

    International Nuclear Information System (INIS)

    Velkov, K.

    2002-01-01

    Appendix 1: The Safety Analysis Report (S.A.R.) is presented in three handbooks: the ECC Handbook (LOCA), the Plant Dynamics Handbook (transients incl. ATWS) and the Core Design Handbook. The first, conceived as a living handbook, provides the basis for design, a catalogue of transients, specifications and licensing; it covers LOCA in the primary system, core damage analysis, descriptions of the codes, and descriptions of essential plant data and code input data. The second provides the basis for design, commissioning and operation, together with a catalogue of transients, specifications and licensing, as well as specified operation, disturbed operation, incidents, non-LOCA events, SS-procedures and code descriptions. The third covers the reactivity balance and reactivity coefficients, the efficiency of the shutdown systems, calculation of the burn-up cycle, power density distribution and critical boron concentration, and the codes used, i.e. the SAV79A standard analysis methodology including FASER for nuclear data generation and MEDIUM and PANBOX for static and transient core calculations. Appendix 2: The three TUEV (Technical Inspection Agencies) responsible for the three individual plants of type KONVOI (TUEV Bayern for ISAR-2, TUEV Hanover for KKE, TUEV Stuttgart for GKN-2) and GRS performed the safety assessment. TUEV Bayern covered disturbance and failure of the secondary heat sink without loss of coolant (failure of the main heat sink, erroneous operation of valves in the MS and FW systems, failure of MFW supply), long-term LONOP, and selected SBLOCA analyses. TUEV Hanover covered disturbances due to failure of MCPs, short-term LONOP, damage to SG tubes incl. SGTR, and selected LOCA analyses (blowdown phase of LBLOCA). TUEV Stuttgart covered breaks and leaks in the MS and FW systems with and without leaks in SG tubes. GRS covered ATWS, sub-cooling transients due to disturbances on the secondary side, initial and boundary conditions for transients with opening of the pressurizer valves with and without stuck-open, most of the

  6. An Application of Multithreaded Data Mining in Educational Leadership Research

    OpenAIRE

    Fikis, David; Wang, Yinying; Bowers, Alex

    2015-01-01

    This study aims to apply high-performance computing to educational leadership research. Specifically, we applied an array of data acquisition and analytical techniques to the field of educational leadership research, including text data mining, probabilistic topic modeling, and the use of software (CasperJS, GNU utilities, R, etc.) as well as hardware (the VELA batch computer and the multi-threaded data mining environment).
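
    Probabilistic topic modeling, the analytical core named above, is typically done with Latent Dirichlet Allocation. A minimal sketch (in Python with scikit-learn rather than the R tooling the study used; the four-document corpus is invented):

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "principal leadership improves school climate and teacher retention",
        "district budget allocation and school finance policy reform",
        "teacher professional development and instructional leadership",
        "state policy shapes district funding and accountability",
    ]

    # Bag-of-words counts, then a small LDA fit (2 topics for the toy corpus).
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        print(f"topic {k}:", [terms[i] for i in comp.argsort()[-5:][::-1]])
    ```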

  7. Micromechanical Analyses of Sturzstroms

    Science.gov (United States)

    Imre, Bernd; Laue, Jan; Springman, Sarah M.

    2010-05-01

    have been made observable and reproducible within a physical and a distinct element numerical modelling environment (DEM). A deterministic fractal analytical comminution model (Sammis et al., 1987; Steacy and Sammis, 1991) serves as the link between field evidence gained from the deposits of natural sturzstroms, the physical model within the ETH Geotechnical Drum Centrifuge (Springman et al., 2001) and the numerical model PFC-3D (Cundall and Strack, 1979; Itasca, 2005). This approach allowed the effects of dynamic fragmentation within sturzstroms to be studied at true (macro) scale within the distinct element model, while at the same time allowing for a micro-mechanical, distinct-particle-based and cyclic description of fragmentation, without losing significant computational efficiency. These experiments indicate rock mass and boundary conditions which allow an alternating fragmenting and dilating dispersive regime to evolve and to be sustained long enough to replicate the spreading and run-out of sturzstroms. The fragmenting spreading model supported here is able to explain the run-out of a dry granular flow beyond the travel distance predicted by a Coulomb frictional sliding model, without resorting to explanations by mechanics that can only be valid under certain specific boundary conditions. The implications derived suggest that a sturzstrom, because of its strong relation to internal fractal fragmentation and other inertial effects, constitutes a landslide category of its own. Its mechanics differ significantly from all other gravity-driven mass flows. This proposition does not exclude the possible appearance of frictionites, Toma hills or suspension flows etc., but it considers them as secondary features. The application of a fractal comminution model to describe natural and experimental sturzstrom deposits turned out to be a useful tool for sturzstrom research. Implemented within the DEM, it allows simulating the key features of sturzstrom successfully and

  8. Theory, analysis and applications of the operation of the superconducting transformer supplying a direct current to a non-dissipative superconducting charge circuit; Theorie, analyse et applications du fonctionnement du transformateur supraconducteur alimentant en courant continu un circuit de charge supraconducteur non dissipatif

    Energy Technology Data Exchange (ETDEWEB)

    Sole, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1967-07-01

    The author derives the very simple equations governing the operation of a transformer with superconducting windings supplying direct current to a non-dissipative superconducting charge circuit. An analysis of the various possible modes of operation with direct or slowly varying current raises the problem of the magnetic core. The study leads to a conclusion which a priori might be surprising: the elimination of the magnetic core and the use of a superconducting primary. An example of a possible realization of such a transformer is given as an indication, and the present prospects for different applications are considered. (author)

  9. Cost/benefit analyses of environmental impact

    International Nuclear Information System (INIS)

    Goldman, M.I.

    1974-01-01

    Various aspects of cost-benefit analyses are considered. Some topics discussed are: regulations of the National Environmental Policy Act (NEPA); statement of AEC policy and procedures for implementation of NEPA; Calvert Cliffs decision; AEC Regulatory Guide; application of risk-benefit analysis to nuclear power; application of the as low as practicable (ALAP) rule to radiation discharges; thermal discharge restrictions proposed by EPA under the 1972 Amendment to the Water Pollution Control Act; estimates of somatic and genetic insult per unit population exposure; occupational exposure; EPA Point Source Guidelines for Discharges from Steam Electric Power Plants; and costs of closed-cycle cooling using cooling towers. (U.S.)

  10. Thermal and stress analyses with ANSYS program

    International Nuclear Information System (INIS)

    Kanoo, Iwao; Kawaguchi, Osamu; Asakura, Junichi.

    1975-03-01

    Some analyses of heat conduction and elastic/inelastic stresses, carried out in the Power Reactor and Nuclear Fuel Development Corporation (PNC) in fiscal 1973 using the ANSYS (Engineering Analysis System) program, are summarized. Chapter I explains the present state of the structural analysis programs available for FBR (fast breeder reactor) work in PNC. Chapter II is a brief description of the current status of ANSYS. Chapter III presents 8 examples of steady-state and transient thermal analyses for fast-reactor plant components, and chapter IV presents 5 examples of inelastic structural analysis. With advances in the finite element method, its applications in design studies should expand progressively in the future. The present report, it is hoped, will serve as a reference for similar analyses and at the same time help readers understand the deformation and strain behavior of structures. (Mori, K.)

  11. Kinematic gait analyses in healthy Golden Retrievers

    OpenAIRE

    Silva, Gabriela C.A.; Cardoso, Mariana Trés; Gaiad, Thais P.; Brolio, Marina P.; Oliveira, Vanessa C.; Assis Neto, Antonio; Martins, Daniele S.; Ambrósio, Carlos E.

    2014-01-01

    Kinematic analysis concerns the relative movement between rigid bodies and finds application in gait analysis and other body movements; the interpretation of its data when changes occur determines the choice of treatment to be instituted. The objective of this study was to standardize the gait of healthy Golden Retriever dogs in order to assist in the diagnosis and treatment of musculoskeletal disorders. We used a kinematic analysis system to analyse the gait of seven female Golden Retriever dogs,...

  12. Evaluation of periodic safety status analyses

    International Nuclear Information System (INIS)

    Faber, C.; Staub, G.

    1997-01-01

    In order to carry out the evaluation of safety status analyses by the safety assessor within the periodic safety reviews of nuclear power plants, safety-goal-oriented requirements have been formulated together with complementary evaluation criteria. Their application in an interdisciplinary cooperation covering the subject areas involved facilitates a complete safety-goal-oriented assessment of the plant status. The procedure is outlined briefly by an example for the safety goal 'reactivity control' for BWRs. (orig.)

  13. Analysing harmonic motions with an iPhone’s magnetometer

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-05-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone’s (or iPad’s) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone’s magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone’s screen using the Sensor Kinetics application. Data from this application was analysed with Eureqa software to establish the equation of the harmonic motion. Analyses show that the use of an iPhone’s magnetometer to analyse harmonic motion is a practical and effective method for small oscillations and frequencies less than 15-20 Hz.
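
    The analysis step (fitting an equation to the logged magnetometer samples, which the authors did with Eureqa) can be sketched with an ordinary damped-sinusoid least-squares fit; the synthetic data below stand in for a real sensor log:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic stand-in for magnetometer samples logged by the phone (uT).
    t = np.linspace(0, 5, 500)
    rng = np.random.default_rng(0)
    B = 42 + 8 * np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.5 * t) \
        + rng.normal(0, 0.3, t.size)

    def damped(t, B0, A, lam, f, phi):
        """Offset plus exponentially damped cosine: small-oscillation model."""
        return B0 + A * np.exp(-lam * t) * np.cos(2 * np.pi * f * t + phi)

    # A frequency guess from the dominant FFT peak keeps the fit robust.
    spec = np.abs(np.fft.rfft(B - B.mean()))
    f0 = np.fft.rfftfreq(t.size, t[1] - t[0])[np.argmax(spec)]
    popt, _ = curve_fit(damped, t, B, p0=(B.mean(), np.ptp(B) / 2, 0.1, f0, 0.0))
    print(f"fitted frequency {popt[3]:.2f} Hz, damping {popt[2]:.2f} 1/s")
    ```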

  14. SWPS3 – fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2

    Directory of Open Access Journals (Sweden)

    Krähenbühl Philipp

    2008-10-01

    Full Text Available Background: We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performance with several other implementations. Findings: Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. Conclusion: The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences.

  15. SWPS3 – fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2

    Science.gov (United States)

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-01-01

    Background: We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performance with several other implementations. Findings: Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. Conclusion: The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences. PMID:18959793

  16. SWPS3 - fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2.

    Science.gov (United States)

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-10-29

    We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performance with several other implementations. Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences.
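
    For readers unfamiliar with the recurrence these three records refer to, here is a plain scalar Smith-Waterman scorer with linear gap penalties; the GCUPS figures above count exactly these cell updates, and swps3 vectorises this inner loop across SSE2/SPE lanes. The scoring parameters below are arbitrary.

    ```python
    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Best local alignment score of strings a and b (linear gap cost).
        Each H[i, j] assignment is one 'cell update' of the GCUPS metric."""
        H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i, j] = max(0,                     # local: never below zero
                              H[i - 1, j - 1] + s,   # (mis)match
                              H[i - 1, j] + gap,     # gap in b
                              H[i, j - 1] + gap)     # gap in a
                best = max(best, H[i, j])
        return best

    print(smith_waterman("TGTTACGG", "GGTTGACTA"))
    ```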

  17. Structural connectivity allows for multi-threading during rest: the structure of the cortex leads to efficient alternation between resting state exploratory behavior and default mode processing.

    Science.gov (United States)

    Senden, Mario; Goebel, Rainer; Deco, Gustavo

    2012-05-01

    Despite the absence of stimulation or task conditions the cortex exhibits highly structured spatio-temporal activity patterns. These patterns are known as resting state networks (RSNs) and emerge as low-frequency fluctuations during rest. We are interested in the relationship between the structural connectivity of the cortex and the fluctuations exhibited during resting conditions. We are especially interested in the effect of the degree of connectivity on resting state dynamics, as the default mode network (DMN) is highly connected. We find in experimental resting fMRI data that the DMN is the functional network that is most frequently active and active for the longest time. In large-scale computational simulations of the cortex based on the corresponding underlying DTI/DSI-based neuroanatomical connectivity matrix, we additionally find a strong correlation between the mean degree of functional networks and the proportion of time they are active. By artificially modifying different types of neuroanatomical connectivity matrices in the model, we were able to demonstrate that only models based on structural connectivity containing hubs give rise to this relationship. We conclude that, during rest, the cortex alternates efficiently between explorations of its externally oriented functional repertoire and internally oriented processing as a consequence of the DMN's high degree of connectivity. Copyright © 2012 Elsevier Inc. All rights reserved.
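
    The pivotal manipulation above is the presence or absence of hubs in the structural graph. A toy comparison (invented sizes, standard networkx generators, not the study's connectomes) shows how a preferential-attachment graph acquires the high-degree hub nodes that an equally dense random graph lacks:

    ```python
    import networkx as nx

    # Scale-free (hub-containing) vs homogeneous random graph, equal density.
    ba = nx.barabasi_albert_graph(200, 3, seed=1)   # preferential attachment
    er = nx.gnm_random_graph(200, ba.number_of_edges(), seed=1)

    for name, g in [("scale-free", ba), ("random", er)]:
        degs = sorted((d for _, d in g.degree()), reverse=True)
        print(f"{name:>10}: max degree {degs[0]}, top-5 degrees {degs[:5]}")
    ```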

  18. Bench top and portable mineral analysers, borehole core analysers and in situ borehole logging

    International Nuclear Information System (INIS)

    Howarth, W.J.; Watt, J.S.

    1982-01-01

    Bench top and portable mineral analysers are usually based on balanced filter techniques using scintillation detectors or on low resolution proportional detectors. The application of radioisotope x-ray techniques to in situ borehole logging is increasing, and is particularly suited for logging for tin and higher atomic number elements

  19. Hydrogeologic characterization and evolution of the 'excavation damaged zone' by statistical analyses of pressure signals: application to galleries excavated at the clay-stone sites of Mont Terri (Ga98) and Tournemire (Ga03)

    International Nuclear Information System (INIS)

    Fatmi, H.; Ababou, R.; Matray, J.M.; Joly, C.

    2010-01-01

    Document available in extended abstract form only. This paper presents methods for the statistical analysis and interpretation of hydrogeological signals in clayey formations, e.g., pore water pressure and atmospheric pressure. The purpose of these analyses is to characterize the hydraulic behaviour of this type of formation in the case of a deep repository of Mid-Level/High-Level and Long-lived radioactive wastes, and to study the evolution of the geologic formation and its EDZ (Excavation Damaged Zone) during the excavation of galleries. We focus on galleries Ga98 and Ga03 at the sites of Mont Terri (Jura, Switzerland) and Tournemire (Aveyron, France), through data collected in the BPP-1 and PH2 boreholes, respectively. The Mont Terri site, crossing the Aalenian Opalinus clay-stone, is an underground laboratory managed by an international consortium, namely the Mont Terri project (Switzerland). The Tournemire site, crossing the Toarcian clay-stone, is an underground research facility managed by IRSN (France). We have analysed pore water and atmospheric pressure signals at these sites, sometimes in correlation with other data. The methods of analysis are based on the theory of stationary random signals (correlation functions, Fourier spectra, transfer functions, envelopes) and on multi-resolution wavelet analysis (adapted to non-stationary and evolutionary signals). These methods are also combined with filtering techniques, and they can be used for single signals as well as for pairs of signals (cross-analyses). The objective of this work is to exploit the pressure measurements in selected boreholes from the two compacted clay sites in order to: evaluate the phenomena affecting the measurements (earth tides, barometric pressure, ...); estimate hydraulic properties (specific storage, ...) of the clay-stones prior to excavation works and compare them with those estimated by pulse or slug tests on shorter time scales; and analyze the effects of drift excavation on pore pressures
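
    One of the cross-analyses named above, the transfer function between atmospheric and pore pressure, can be sketched with standard spectral estimators; the synthetic series and the efficiency value of 0.6 are invented for the demo:

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs = 24.0                                    # hourly samples -> 24 per day
    t = np.arange(0, 60, 1 / fs)                 # 60 days

    # Synthetic stand-ins: a 2 cycle/day barometric line plus noise, and a
    # pore pressure responding with efficiency ~0.6 plus drift and noise.
    baro = 5 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1, t.size)
    pore = 0.6 * baro + 0.01 * t + rng.normal(0, 0.5, t.size)

    f, Pxy = signal.csd(baro, pore, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(baro, fs=fs, nperseg=1024)
    H = Pxy / Pxx                                # transfer function estimate
    k = np.argmin(np.abs(f - 2.0))               # inspect the 2 cpd line
    print(f"barometric efficiency near 2 cpd: {abs(H[k]):.2f}")
    ```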

  20. Application of multi-criteria methods to compare different solutions of supplying buildings in electricity from photovoltaic systems

    Directory of Open Access Journals (Sweden)

    Mendecka Barbara

    2016-01-01

    Full Text Available Nowadays, the technologies of electricity generation in distributed systems are usually associated with Renewable Energy Sources (RES). The choice of the construction site depends mainly on the availability of the power system. However, energy planning, especially in the case of RES, is a complex process involving multiple and often conflicting objectives. The complexity of the selection of an electricity system is typically addressed with the use of multi-criteria tools, involving all of the considered criteria and also different methods of their aggregation. The result is a final ranking of the available alternatives. This paper describes the application of a multi-criteria decision tool for the comparative analysis of alternative options of PV technology for electricity production. Four decision variants are considered, covering different constructions of solar farms (static and movable structures) and different configurations of the individual installation (off-grid and on-grid). The construction of any new source of electricity generation, including PV, is a multi-threaded and multi-dimensional decision problem. The criteria used in the analysis combine economic, environmental and social issues. The first criterion considered is the Net Present Value (NPV), which determines the economic viability of the project. The second criterion, the thermo-ecological cost (TEC), connects energy and environmental issues. Finally, the Land Use (LU) is considered as a social criterion. As the aggregation function, the Weighted Sum Method (WSM) is used. The sensitivity analysis of the criteria weights was performed with the use of a novel method involving Monte Carlo simulation and a method of data reconciliation.
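
    A minimal sketch of the aggregation and weight-sensitivity scheme described above: min-max normalise the criteria (flipping TEC and LU, which are costs), aggregate with a weighted sum, and sample random weights Monte Carlo style to see how often each variant ranks first. All scores are invented placeholders, not the paper's data.

    ```python
    import numpy as np

    alts = ["static off-grid", "static on-grid", "tracking off-grid", "tracking on-grid"]
    # Columns: NPV (benefit), TEC (cost), LU (cost); illustrative values only.
    raw = np.array([[1.0, 0.8, 0.5],
                    [1.4, 0.8, 0.5],
                    [1.2, 0.9, 0.7],
                    [1.6, 0.9, 0.7]])
    benefit = np.array([True, False, False])

    # Min-max normalise, flipping cost criteria so larger is always better.
    norm = (raw - raw.min(0)) / (raw.max(0) - raw.min(0))
    norm[:, ~benefit] = 1 - norm[:, ~benefit]

    rng = np.random.default_rng(0)
    wins = np.zeros(len(alts))
    for _ in range(10_000):                     # Monte Carlo over weights
        w = rng.dirichlet(np.ones(3))           # random weights summing to 1
        wins[np.argmax(norm @ w)] += 1          # WSM score: norm @ w
    for name, share in zip(alts, wins / wins.sum()):
        print(f"{name:>18}: ranked first in {share:.1%} of weight draws")
    ```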

  1. Méthodologie d’analyse des signaux et caractérisation hydrogéologique : application aux chroniques de données obtenues aux laboratoires souterrains du Mont Terri, Tournemire et Meuse/Haute-Marne

    OpenAIRE

    Fatmi, Hassane

    2009-01-01

    This report presents methods for the pre-processing, statistical analysis and interpretation of hydrogeological time series from low-permeability formations (argillites), in the context of studies on the deep storage of radioactive waste. The time series analysed are the pore pressure and the atmospheric pressure, in relation to various phenomena (earth tides, barometric effect, progress of the gallery excavation). The pre-processing steps allow...

  2. Soil analyses by ICP-MS (Review)

    International Nuclear Information System (INIS)

    Yamasaki, Shin-ichi

    2000-01-01

    Soil analyses by inductively coupled plasma mass spectrometry (ICP-MS) are reviewed. The first half of the paper is devoted to the development of techniques applicable to soil analyses, where diverse analytical parameters are carefully evaluated. However, the choice of soil samples is somewhat arbitrary, and only a limited number of samples (mostly reference materials) are examined. In the second half, efforts are mostly concentrated on the introduction of reports, where a large number of samples and/or very precious samples have been analyzed. Although the analytical techniques used in these reports are not necessarily novel, valuable information concerning such topics as background levels of elements in soils, chemical forms of elements in soils and behavior of elements in soil ecosystems and the environment can be obtained. The major topics discussed are total elemental analysis, analysis of radionuclides with long half-lives, speciation, leaching techniques, and isotope ratio measurements. (author)

  3. Sorption analyses in materials science: selected oxides

    International Nuclear Information System (INIS)

    Fuller, E.L. Jr.; Condon, J.B.; Eager, M.H.; Jones, L.L.

    1981-01-01

    Physical adsorption studies have been shown to be extremely valuable in studying the chemistry and structure of dispersed materials. Many processes rely on the access to the large amount of surface made available by the high degree of dispersion. Conversely, there are many applications where consolidation of the dispersed solids is required. Several systems (silica gel, alumina catalysts, mineralogic alumino-silicates, and yttrium oxide plasters) have been studied to show the type and amount of chemical and structural information that can be obtained. Some review of current theories is given and additional concepts are developed based on statistical and thermodynamic arguments. The results are applied to sorption data to show that detailed sorption analyses are extremely useful and can provide valuable information that is difficult to obtain by any other means. Considerable emphasis has been placed on data analyses and interpretation of a nonclassical nature to show the potential of such studies, which is often neither recognized nor utilized

  4. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly suited to the customer's needs and requirements is indispensable for a company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book defines this method precisely, as well as its fields of application. It describes the best-performing methods in terms of product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key book for optimising product design processes within a company. -- Key ideas, by Business Digest

  5. Educational and Scientific Applications of Climate Model Diagnostic Analyzer

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.

    2016-12-01

    Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threading computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As CMDA infrastructure and technology have matured, we have developed the educational and scientific applications of CMDA. Educationally, CMDA supported the summer school of the JPL Center for Climate Sciences for three years since 2014. In the summer school, the students work on group research projects where CMDA provide datasets and analysis tools. Each student is assigned to a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA is developed to keep track of students' usages of CMDA, and to recommend datasets and analysis tools for their research topic. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case developed is described and listed in terms of a scientific goal, datasets used, the analysis tools used, scientific results discovered from the use case, an analysis result such as output plots and data files, and a link to the exact analysis service call with all the input arguments filled. For example, one science use case is the evaluation of NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is Difference Plot Service of

  6. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

    Representative sampling and adequate sample preparation are key factors for successful performance of further steps in foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses. It is well known that neither wrong sampling nor wrong sample treatment can be corrected afterwards. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics. This article is protected by copyright. All rights reserved.

  7. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, by contrast, failed to determine the quaternary mixture components simultaneously and was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF and PAR, and of CAF, PAR and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
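
    The CWT-PLS pipeline reduces to: transform each spectrum at one or more wavelet scales, then regress concentrations on the coefficients. A compact sketch with PyWavelets and scikit-learn, using an invented two-band synthetic data set (wavelet, scale and band positions are arbitrary choices):

    ```python
    import numpy as np
    import pywt
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wl = np.linspace(200, 400, 256)              # wavelength grid (nm)

    def spectrum(c):
        """Two strongly overlapped Gaussian absorption bands."""
        return (c[0] * np.exp(-((wl - 270) / 15) ** 2)
                + c[1] * np.exp(-((wl - 285) / 18) ** 2))

    C = rng.uniform(0.1, 1.0, size=(25, 2))      # calibration concentrations
    X = np.array([spectrum(c) for c in C]) + rng.normal(0, 1e-3, (25, wl.size))

    # CWT pre-processing: one Mexican-hat scale per spectrum.
    Xw = np.array([pywt.cwt(x, scales=[8], wavelet="mexh")[0][0] for x in X])

    pls = PLSRegression(n_components=2).fit(Xw, C)
    print("R^2 on the calibration set:", pls.score(Xw, C))
    ```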

  8. Advancing the application of systems thinking in health: analysing the contextual and social network factors influencing the use of sustainability indicators in a health system--a comparative study in Nepal and Somaliland.

    Science.gov (United States)

    Blanchet, Karl; Palmer, Jennifer; Palanchowke, Raju; Boggs, Dorothy; Jama, Ali; Girois, Susan

    2014-08-26

    Health systems strengthening is becoming a key component of development agendas for low-income countries worldwide. Systems thinking emphasizes the role of diverse stakeholders in designing solutions to system problems, including sustainability. The objective of this paper is to compare the definition and use of sustainability indicators developed through the Sustainability Analysis Process in two rehabilitation sectors, one in Nepal and one in Somaliland, and analyse the contextual factors (including the characteristics of system stakeholder networks) influencing the use of sustainability data. Using the Sustainability Analysis Process, participants collectively clarified the boundaries of their respective systems, defined sustainability, and identified sustainability indicators. Baseline indicator data was gathered, where possible, and then researched again 2 years later. As part of the exercise, system stakeholder networks were mapped at baseline and at the 2-year follow-up. We compared stakeholder networks and interrelationships with baseline and 2-year progress toward self-defined sustainability goals. Using in-depth interviews and observations, additional contextual factors affecting the use of sustainability data were identified. Differences in the selection of sustainability indicators selected by local stakeholders from Nepal and Somaliland reflected differences in the governance and structure of the present rehabilitation system. At 2 years, differences in the structure of social networks were more marked. In Nepal, the system stakeholder network had become more dense and decentralized. Financial support by an international organization facilitated advancement toward self-identified sustainability goals. In Somaliland, the small, centralised stakeholder network suffered a critical rupture between the system's two main information brokers due to competing priorities and withdrawal of international support to one of these. Progress toward self

  9. Descriptive Analyses of Mechanical Systems

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    2003-01-01

    Foreword: Product analysis and technology analysis can be carried out with a broad socio-technical aim, in order to understand cultural, sociological, design-related, business-related and many other conditions. One sub-area of this is the systemic analysis and description of products and systems. The present compendium...

  10. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Full Text Available Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of those criteria, for reasoning in different settings. This leads to incomparable results. Moreover it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  11. Analysing Children's Drawings: Applied Imagination

    Science.gov (United States)

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  12. Impact analyses after pipe rupture

    International Nuclear Information System (INIS)

    Chun, R.C.; Chuang, T.Y.

    1983-01-01

    Two of the French pipe whip experiments are reproduced with the computer code WIPS. The WIPS results are in good agreement with the experimental data and the French computer code TEDEL. This justifies the use of its pipe element in conjunction with its U-bar element in a simplified method of impact analyses

  13. Analyser of sweeping electron beam

    International Nuclear Information System (INIS)

    Strasser, A.

    1993-01-01

    The electron beam analyser has an array of conductors that can be positioned in the field of the sweeping beam, an electronic signal treatment system for the analysis of the signals generated in the conductors by the incident electrons and a display for the different characteristics of the electron beam

  14. Study of failure criterion applicable to elastic-plastic finite element analyses of wall-thinned pipes subjected to multi-axial loading. Case for groove type flaw under combined internal pressure and bending loading

    International Nuclear Information System (INIS)

    Mori, Kosuke; Meshii, Toshiyuki

    2015-01-01

    In this paper, a failure criterion applicable to large-strain finite element analysis (FEA) results was studied to predict the limit bending load M_c of groove-shaped wall-thinned pipes that experienced cracking under combined internal pressure and bending load. In our previous studies, Meshii and Ito (2012) considered that cracking of pipes with a groove-shaped flaw (small axial length δ_z in Fig. 1) was due to plastic instability at the wall-thinned section and proposed the Domain Collapse Criterion (DCC). The DCC could predict M_c for cracking at small δ_z by comparing the von Mises stress σ_Mises with the true tensile strength σ_B. Because the discrepancy in the prediction of M_c in the case of cracking was within 15%, it was considered that the predictability could be improved further. Thus, in this work, an attempt was made to improve the accuracy of the M_c prediction from the perspective that the multi-axial stress state might affect this plastic instability at the wall-thinned section. As a result of examining various failure criteria based on multi-axial stress, it was confirmed that the limit bending load of a groove-flawed pipe that experienced cracking in experiments (hereafter, 'flawed pipe that experienced cracking') could be predicted within 5% accuracy by applying Hill's plastic instability onset criterion (Hill, 1952) to the outer surface of the crack penetration section. The accuracy of the predicted limit bending load was thus improved from the DCC's within 15% to within 5%. (author)

  15. Data Reduction of Laser Ablation Split-Stream (LASS) Analyses Using Newly Developed Features Within Iolite: With Applications to Lu-Hf + U-Pb in Detrital Zircon and Sm-Nd +U-Pb in Igneous Monazite

    Science.gov (United States)

    Fisher, Christopher M.; Paton, Chad; Pearson, D. Graham; Sarkar, Chiranjeeb; Luo, Yan; Tersmette, Daniel B.; Chacko, Thomas

    2017-12-01

    A robust platform to view and integrate multiple data sets collected simultaneously is required to realize the utility and potential of the Laser Ablation Split-Stream (LASS) method. This capability, until now, has been unavailable, and practitioners have had to laboriously process each data set separately, making it challenging to take full advantage of the benefits of LASS. We describe a new program for handling multiple mass spectrometric data sets collected simultaneously, designed specifically for the LASS technique, by which a laser aerosol is split into two or more separate "streams" to be measured on separate mass spectrometers. New features within Iolite (https://iolite-software.com) enable the capability of loading, synchronizing, viewing, and reducing two or more data sets acquired simultaneously, as multiple DRSs (data reduction schemes) can be run concurrently. While this version of Iolite accommodates any combination of simultaneously collected mass spectrometer data, we demonstrate the utility using case studies in which the U-Pb and Lu-Hf isotope compositions of zircon, and the U-Pb and Sm-Nd isotope compositions of monazite, were analyzed simultaneously in crystals showing complex isotopic zonation. These studies demonstrate the importance of being able to view and integrate simultaneously acquired data sets, especially for samples with complicated zoning and decoupled isotope systematics, in order to extract accurate and geologically meaningful isotopic and compositional data. This contribution provides instructions and examples for handling simultaneously collected laser ablation data. An instructional video is also provided. The updated Iolite software will help to fully develop the applications of both LASS and multi-instrument mass spectrometric measurement capabilities.

  16. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a mostly manual assembly technology for a roller-bearing assembly process, executed in a big company with integrated bearing manufacturing processes. In these analyses the delay (work) sampling technique has been used to identify and classify all bearing assemblers' activities, in order to obtain information about the share of the 480-minute working day that workers devote to each activity. The study shows some ways to increase the process productivity without supplementary investments, and also indicates that process automation could be the solution for gaining maximum productivity.
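
    The arithmetic behind delay (work) sampling is simple: the share of random observations that catch a worker in activity k estimates the share of the shift spent on k, with a binomial confidence interval. The observation count and observed share below are invented:

    ```python
    import math

    N = 400            # random observations over many shifts (assumed)
    p_hat = 0.30       # share of observations showing, e.g., "fetch parts"
    shift = 480        # minutes per working day

    minutes = p_hat * shift
    # 95% normal-approximation half-width for a binomial proportion.
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)
    print(f"{minutes:.0f} min/shift, +/- {half * shift:.0f} min (95% CI)")
    ```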

  17. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L. A.; Gilbert, Tom; Hofreiter, Michael

    2013-01-01

    The analysis of ancient DNA is playing an increasingly important role in conservation genetic, phylogenetic and population genetic analyses, as it allows extinct species to be incorporated into DNA sequence trees and adds time depth to population genetics studies. For many years, these types of DNA analyses (whether using modern or ancient DNA) were largely restricted to the analysis of short fragments of the mitochondrial genome. However, due to many technological advances during the past decade, a growing number of studies have explored the power of complete mitochondrial genome sequences. Such analyses have yielded major progress with regard to both the phylogenetic positions of extinct species and the resolution of population genetics questions in both extinct and extant species.

  18. Recriticality analyses for CAPRA cores

    International Nuclear Information System (INIS)

    Maschek, W.; Thiem, D.

    1995-01-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material, as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  19. Recriticality analyses for CAPRA cores

    Energy Technology Data Exchange (ETDEWEB)

    Maschek, W.; Thiem, D.

    1995-08-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material, as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  20. Methodology of cost benefit analyses

    International Nuclear Information System (INIS)

    Patrik, M.; Babic, P.

    2000-10-01

    The report addresses financial aspects of proposed investments and other steps which are intended to contribute to nuclear safety. The aim is to provide introductory insight into the procedures and potential of cost-benefit analyses as a routine guide when making decisions on costly provisions as one of the tools to assess whether a particular provision is reasonable. The topic is applied to the nuclear power sector. (P.A.)
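
    At its core, the screening such a report describes reduces to discounting the monetised safety benefits of a provision and comparing them with its cost. A minimal sketch with invented numbers:

    ```python
    # Minimal NPV screen for a safety provision (all figures invented).
    cost = 2.0e6               # upfront cost at t = 0
    annual_benefit = 3.0e5     # monetised annual risk reduction
    rate, years = 0.05, 20     # discount rate and provision lifetime

    npv = -cost + sum(annual_benefit / (1 + rate) ** t
                      for t in range(1, years + 1))
    print(f"NPV = {npv:,.0f} -> {'worthwhile' if npv > 0 else 'not worthwhile'}")
    ```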

  1. Analysing the Wrongness of Killing

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2014-01-01

    This article provides an in-depth analysis of the wrongness of killing by comparing different versions of three influential views: the traditional view that killing is always wrong; the liberal view that killing is wrong if and only if the victim does not want to be killed; and Don Marquis' future...... of value account of the wrongness of killing. In particular, I illustrate the advantages that a basic version of the liberal view and a basic version of the future of value account have over competing alternatives. Still, ultimately none of the views analysed here are satisfactory; but the different...

  2. Theorising and Analysing Academic Labour

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2018-01-01

    Full Text Available The aim of this article is to contextualise universities historically within capitalism and to analyse academic labour and the deployment of digital media theoretically and critically. It argues that the post-war expansion of the university can be considered as medium and outcome of informational capitalism and as a dialectical development of social achievement and advanced commodification. The article strives to identify the class position of academic workers, introduces the distinction between academic work and labour, discusses the connection between academic, information and cultural work, and suggests a broad definition of university labour. It presents a theoretical model of working conditions that helps to systematically analyse the academic labour process and to provide an overview of working conditions at universities. The paper furthermore argues for the need to consider the development of education technologies as a dialectics of continuity and discontinuity, discusses the changing nature of the forces and relations of production, and the impact on the working conditions of academics in the digital university. Based on Erik Olin Wright’s inclusive approach of social transformation, the article concludes with the need to bring together anarchist, social democratic and revolutionary strategies for establishing a socialist university in a commons-based information society.

  3. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....

  4. Application of tailings flow analyses to field conditions

    International Nuclear Information System (INIS)

    Bryant, S.M.

    1983-01-01

    Catastrophic failures of tailings impoundments, in which liquefied tailings flow over substantial distances, pose severe hazards to the health and safety of people in downstream areas, and have a potential for economic and environmental devastation. The purpose of this study, an extension of prior investigations, was to develop procedures to measure Bingham flow parameters for mine tailings. In addition, the analytical procedures developed by Lucia (1981) and Jeyapalan (1980) for predicting the consequences of tailings flow failures were evaluated and applied to the Tenmile Tailings Pond at Climax, Colorado. Revisions in the simplified equilibrium procedure, developed by Lucia (1981), make it more compatible with infinite slope solutions. Jeyapalan's model was evaluated using a simple rheological analogy, and it appears there are some numerical difficulties with the operation of the computer program TFLOW used to model the displacements and velocities of flow slides. Comparable flow distances can be determined using either model if the flow volume used in the simplified equilibrium procedure is estimated properly. When both analytical procedures were applied to the Tenmile Pond, it was concluded there was no potential for a flow slide at the site

  5. Analysing and Enriching Focused Semantic Web Archives for Parliament Applications

    Directory of Open Access Journals (Sweden)

    Elena Demidova

    2014-07-01

    Full Text Available The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide crucial background information, like reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.

  6. Imaging data analyses for hazardous waste applications. Final report

    International Nuclear Information System (INIS)

    David, N.; Ginsberg, I.W.

    1995-12-01

    The paper presents some examples of the use of remote sensing products for characterization of hazardous waste sites. The sites are located at the Los Alamos National Laboratory (LANL) where materials associated with past weapons testing are buried. Problems of interest include delineation of strata for soil sampling, detection and delineation of buried trenches containing contaminants, seepage from capped areas and old septic drain fields, and location of faults and fractures relative to hazardous waste areas. Merging of site map and other geographic information with imagery was found by site managers to produce useful products. Merging of hydrographic and soil contaminant data aided soil sampling strategists. Overlays of suspected trench on multispectral and thermal images showed correlation between image signatures and trenches. Overlays of engineering drawings on recent and historical photos showed error in trench location and extent. A thermal image showed warm anomalies suspected to be areas of water seepage through an asphalt cap. Overlays of engineering drawings on multispectral and thermal images showed correlation between image signatures and drain fields. Analysis of aerial photography and spectral signatures of faults/fractures improved geologic maps of mixed waste areas

  7. Design and analyses of porous concrete for safety applications

    NARCIS (Netherlands)

    Agar Ozbek, A.S.

    2016-01-01

    In an explosion taking place close to or inside a concrete structure, apart from the dangers of the explosive itself, the hazard due to the large debris originating from the concrete structure is an important threat. Protective structures with high probability to experience such extreme loadings

  8. Analysing Use of High Privileges in Android Applications

    OpenAIRE

    Meng, Huasong

    2018-01-01

    The number of Android smartphone and tablet users has experienced a rapid growth in the past few years and it raises users' awareness on the privacy and security of their mobile devices. The features of openness and extensibility make Android unique, attractive and competitive but meanwhile vulnerable to malicious attack. There are lots of users rooting their Android devices for some useful functions, which are not originally provided to developers and users, such as backup and taking screens...

  9. Application of Pareto's Method in Analysing Postal Service Quality

    Directory of Open Access Journals (Sweden)

    Elizabeta Kovač-Striko

    2012-10-01

    Full Text Available The basic aim of control in postal traffic is to ensure high-quality services for customers. The paper presents the analysis of quality control in collecting postal items, based on the data obtained during the control performed by the internal Control Service in the postal centre for international traffic Zagreb. The paper also offers some measures for the improvement of the quality of services.
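
    Pareto analysis itself reduces to ranking failure causes by frequency and flagging the "vital few" that account for roughly 80% of occurrences. A minimal sketch with hypothetical defect categories (not the paper's data):

        # Hypothetical defect counts from quality control of collected postal items
        defects = {
            "wrong postage": 120, "illegible address": 85, "damaged packaging": 40,
            "missing customs form": 25, "misrouted": 15, "other": 15,
        }

        total = sum(defects.values())
        cumulative = 0.0
        for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
            cumulative += count / total
            flag = "  <- vital few" if cumulative <= 0.80 else ""
            print(f"{cause:22s} {count:5d} {cumulative:6.1%}{flag}")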

  10. Late neolithic pottery standardization: Application of statistical analyses

    Directory of Open Access Journals (Sweden)

    Vuković Jasna

    2011-01-01

    Full Text Available This paper defines the notion of standardization, presents the methodological approach to the analysis, points to the problems and limitations arising in the examination of materials from archaeological excavations, and presents the results of the analysis of the coefficients of variation of metric parameters of the Late Neolithic vessels recovered at the sites of Vinča and Motel Slatina. [Project of the Ministry of Science of the Republic of Serbia, no. 177012: Society, the spiritual and material culture and communications in prehistory and early history of the Balkans]
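
    The workhorse statistic here is the coefficient of variation, CV = standard deviation / mean; lower CVs of metric attributes indicate more standardized production. A minimal sketch with hypothetical measurements (the vessel data are illustrative, not from Vinča or Motel Slatina):

        import statistics

        def cv(values):
            # Coefficient of variation: sample standard deviation over the mean
            return statistics.stdev(values) / statistics.mean(values)

        # Hypothetical rim diameters (cm) for one Late Neolithic vessel class
        rim_diameters = [18.2, 18.9, 17.8, 19.1, 18.5, 18.0, 18.7]
        print(f"CV = {cv(rim_diameters):.1%}")  # a low CV suggests standardized output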

  11. Eigenvalue-dependent neutron energy spectra: Definitions, analyses, and applications

    International Nuclear Information System (INIS)

    Cacuci, D.G.; Ronen, Y.; Shayer, Z.; Wagschal, J.J.; Yeivin, Y.

    1982-01-01

    A general qualitative analysis of spectral effects that arise from solving the kappa-, α-, γ-, and sigma-eigenvalue formulations of the neutron transport equation for nuclear systems that deviate (to first order) from criticality is presented. Hierarchies of neutron spectra softness are established and expressed concisely in terms of the newly introduced spatially dependent local spectral indices for the core and for the reflector. It is shown that each hierarchy is preserved, regardless of the nature of the specific physical mechanism that causes the system to deviate from criticality. Qualitative conclusions regarding the general behavior of the spectrum-dependent integral spectral indices and ICRs corresponding to the kappa-, α-, γ-, and sigma-eigenvalue formalisms are also presented. By defining spectral indices separately for the core and for the reflector, it is possible to account for the characteristics of neutron spectra in both the core and the reflector. The distinctions between the spectra in the core and in the reflector could not have been accounted for by using a single type of spectral index (e.g., a spectral index for the entire system or a spectral index solely for the core)

  12. Severe accident recriticality analyses (SARA)

    DEFF Research Database (Denmark)

    Frid, W.; Højerup, C.F.; Lindholm, I.

    2001-01-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies...... with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both super-prompt power bursts and quasi steady-state power......, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g(-1), was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s(-1). In most cases, however, the predicted energy deposition was smaller, below...

  13. Hydrogen Analyses in the EPR

    International Nuclear Information System (INIS)

    Worapittayaporn, S.; Eyink, J.; Movahed, M.

    2008-01-01

    In severe accidents with core melting, large amounts of hydrogen may be released into the containment. The EPR provides a combustible gas control system to prevent hydrogen combustion modes with the potential to challenge the containment integrity due to excessive pressure and temperature loads. This paper outlines the approach for the verification of the effectiveness and efficiency of this system. Specifically, the justification follows a multi-step approach. It involves the deployment of integral codes, lumped parameter containment codes and CFD codes, and the use of the sigma criterion, which provides the link to the broad experimental data base for flame acceleration (FA) and deflagration to detonation transition (DDT). The procedure is illustrated with an example. The analyses performed show that hydrogen combustion does not at any time lead to pressure or temperature loads that threaten the containment integrity of the EPR. (authors)

  14. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
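
    The generic propagation step behind such a plan can be sketched as follows: sample the uncertain inputs, run the model, summarize the spread of the output, and rank the inputs by their correlation with it. The model form, distributions and parameter names below are placeholders, not the HEDR models:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        # Placeholder dose model: dose = release * dispersion / distance
        release = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # uncertain source term
        dispersion = rng.uniform(0.5, 1.5, size=n)             # uncertain transport factor
        distance = rng.normal(10.0, 1.0, size=n)               # uncertain receptor distance
        dose = release * dispersion / distance

        lo, hi = np.percentile(dose, [2.5, 97.5])
        print(f"dose: median={np.median(dose):.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
        for name, x in [("release", release), ("dispersion", dispersion), ("distance", distance)]:
            r = np.corrcoef(x, dose)[0, 1]   # crude sensitivity ranking
            print(f"sensitivity to {name:10s}: r = {r:+.2f}")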

  15. The hemispherical deflector analyser revisited

    Energy Technology Data Exchange (ETDEWEB)

    Benis, E.P. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece)], E-mail: benis@iesl.forth.gr; Zouros, T.J.M. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece); Department of Physics, University of Crete, P.O. Box 2208, 71003 Heraklion, Crete (Greece)

    2008-04-15

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R{sub 0} and the nominal value of the potential V(R{sub 0}) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD.

  16. The hemispherical deflector analyser revisited

    International Nuclear Information System (INIS)

    Benis, E.P.; Zouros, T.J.M.

    2008-01-01

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R 0 and the nominal value of the potential V(R 0 ) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD

  17. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    Full Text Available This paper draws a parallel between two ways of measuring financial performance: the first uses data offered by accounting and lays emphasis on maximizing profit, while the second aims at creating value. The traditional approach to performance is based on indicators drawn from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective shifts from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.
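
    Of the value-based indicators listed, EVA has the simplest arithmetic: EVA = NOPAT - WACC x invested capital, i.e. the profit left after charging for all capital employed. A small sketch with invented figures:

        def eva(nopat, wacc, invested_capital):
            # Economic Value Added: operating profit minus the cost of the capital used
            return nopat - wacc * invested_capital

        # Hypothetical firm: 12 M NOPAT, 9% cost of capital, 100 M invested capital
        print(f"EVA = {eva(nopat=12.0, wacc=0.09, invested_capital=100.0):.1f} M")
        # An accounting view (ROI = 12%) already looks fine; the positive EVA (+3.0 M)
        # confirms value creation beyond the 9% capital charge.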

  18. Reliability and safety analyses under fuzziness

    International Nuclear Information System (INIS)

    Onisawa, T.; Kacprzyk, J.

    1995-01-01

    Fuzzy theory, for example possibility theory, is compatible with probability theory. What has been shown so far is that probability theory need not be replaced by fuzzy theory, but rather that the former works much better in applications if it is combined with the latter. In fact, it is said that there are two essential uncertainties in the field of reliability and safety analyses: one is probabilistic uncertainty, which is more relevant for mechanical systems and the natural environment; the other is fuzziness (imprecision) caused by the presence of human beings in systems. Classical probability theory alone is therefore not sufficient to deal with uncertainties in humanistic systems. In such a context this collection of works marks a milestone in the debate between probability theory and fuzzy theory. The volume covers fault analysis, lifetime analysis, reliability, quality control, safety analysis and risk analysis. (orig./DG). 106 figs

  19. Thermal hydraulic reactor safety analyses and experiments

    International Nuclear Information System (INIS)

    Holmstroem, H.; Eerikaeinen, L.; Kervinen, T.; Kilpi, K.; Mattila, L.; Miettinen, J.; Yrjoelae, V.

    1989-04-01

    The report introduces the results of the thermal hydraulic reactor safety research performed in the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT) during the years 1972-1987. Practical applications, i.e. analyses for the safety authorities and power companies, are also presented. The emphasis is on describing the state-of-the-art know-how. The report describes VTT's most important computer codes, both those of foreign origin and those developed at VTT, and their assessment work; VTT's own experimental research; as well as international experimental projects and other forms of cooperation VTT has participated in. Appendix 8 contains a comprehensive list of the most important publications and technical reports produced. They present the content and results of the research in detail. (orig.)

  20. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    Silva, W R; Zanardi, M C; Formiga, J K S; Cabette, R E S; Stuchi, T J

    2013-01-01

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by Andoyer's variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem, which makes it possible to verify whether they remain stable under the influence of the higher-order terms of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined and regions around these points have been established by variations in the orbital inclination and in the spacecraft principal moment of inertia. The present analysis can directly contribute to the maintenance of the spacecraft's attitude

  1. Analyser Framework to Verify Software Components

    Directory of Open Access Journals (Sweden)

    Rolf Andreas Rasenack

    2009-01-01

    Full Text Available Today it is important for software companies to build software systems in a short time-interval, to reduce costs and to maintain a good market position. Therefore well organized and systematic development approaches are required. Reusing software components, which are well tested, can be a good way to develop software applications effectively. The reuse of software components is less expensive and less time consuming than development from scratch. But it is dangerous to think that software components can be combined without any problems. Software components themselves are well tested, of course, but when they are composed together problems may still occur. Most problems are caused by interaction and communication. To avoid such errors, a framework has to be developed for analysing software components; this framework determines the compatibility of corresponding software components. The promising approach discussed here presents a novel technique for analysing software components by applying an Abstract Syntax Language Tree (ASLT). A supportive environment will be designed that checks the compatibility of black-box software components. This article is concerned with the question of how coupled software components can be verified using an analyser framework, and determines the usage of the ASLT. Black-box software components and the Abstract Syntax Language Tree are the basis for developing the proposed framework and are discussed here to provide the background knowledge. The practical implementation of this framework is discussed and results are shown using a test environment.
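
    The ASLT is the article's own construct, but the flavour of the approach can be sketched with Python's built-in ast module: parse a component's source into a syntax tree, extract its public interface, and check it against what a consumer expects. Everything below (the names, and the reduction of "interface" to argument lists) is an illustrative assumption, not the paper's implementation:

        import ast

        def public_functions(source):
            # Map each top-level function name to its argument names (a crude interface)
            tree = ast.parse(source)
            return {
                node.name: [a.arg for a in node.args.args]
                for node in tree.body
                if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")
            }

        provider = "def send(payload, timeout):\n    pass\n"
        consumer_needs = {"send": ["payload", "timeout"]}

        offered = public_functions(provider)
        compatible = all(offered.get(name) == args for name, args in consumer_needs.items())
        print("components compatible:", compatible)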

  2. Mediation Analyses in the Real World

    DEFF Research Database (Denmark)

    Lange, Theis; Starkopf, Liis

    2016-01-01

    The paper by Nguyen et al.1 published in this issue of Epidemiology presents a comparison of the recently suggested inverse odds ratio approach for addressing mediation and a more conventional Baron and Kenny-inspired method. Interestingly, the comparison is not done through a discussion of restr...... it simultaneously ensures that the comparison is based on properties, which matter in actual applications, and makes the comparison accessible for a broader audience. In a wider context, the choice to stay close to real-life problems mirrors a general trend within the literature on mediation analysis, namely to put...... applications using the inverse odds ratio approach, as it simply has not had enough time to move from theoretical concept to published applied paper, we do expect to be able to judge the willingness of authors and journals to employ the causal inference-based approach to mediation analyses. Our hope...

  3. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox in that it is unknown a priori which theory is applicable before it is applied. Often theories are assumed to be correct when certain criteria are met although they may not validate the approach used. We have analysed the Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For the conditions where the probe radius is the same order of magnitude as the Debye length, the gradient expected for orbital-motion limited (OML) is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from the radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theories applied without knowledge of the electron temperature, or separation of the ion and electron contributions. Only the value of the space potential is necessary to determine the applicable theory.

  4. Proteins analysed as virtual knots

    Science.gov (United States)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.

  5. Digital image analyser for autoradiography

    International Nuclear Information System (INIS)

    Muth, R.A.; Plotnick, J.

    1985-01-01

    The most critical parameter in quantitative autoradiography for assay of tissue concentrations of tracers is the ability to obtain precise and accurate measurements of optical density of the images. Existing high precision systems for image analysis, rotating drum densitometers, are expensive, suffer from mechanical problems and are slow. More moderately priced and reliable video camera based systems are available, but their outputs generally do not have the uniformity and stability necessary for high resolution quantitative autoradiography. The authors have designed and constructed an image analyser optimized for quantitative single and multiple tracer autoradiography which the authors refer to as a memory-mapped charge-coupled device scanner (MM-CCD). The input is from a linear array of CCDs which is used to optically scan the autoradiograph. Images are digitized into 512 x 512 picture elements with 256 gray levels and the data is stored in buffer video memory in less than two seconds. Images can then be transferred to RAM memory by direct memory-mapping for further processing. Arterial blood curve data and optical density-calibrated standards data can be entered and the optical density images can be converted automatically to tracer concentration or functional images. In double tracer studies, images produced from both exposures can be stored and processed in RAM to yield ''pure'' individual tracer concentration or functional images. Any processed image can be transmitted back to the buffer memory to be viewed on a monitor and processed for region of interest analysis
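
    The quantity everything hinges on is optical density, OD = log10(I0/I). Assuming, for illustration only, that digitized gray levels are proportional to transmitted intensity and that a clear film region reads 255, the conversion looks like:

        import math

        I0 = 255.0  # gray level of an unexposed (clear) film region: incident intensity proxy

        def optical_density(gray_level):
            # OD = log10(I0 / I); clamp to avoid log of zero for fully opaque pixels
            return math.log10(I0 / max(gray_level, 1e-6))

        for g in (255, 128, 25, 3):
            print(f"gray {g:3d} -> OD {optical_density(g):.2f}")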

  6. PC based uranium enrichment analyser

    International Nuclear Information System (INIS)

    Madan, V.K.; Gopalakrishana, K.R.; Bairi, B.R.

    1991-01-01

    It is important to measure the enrichment of unirradiated nuclear fuel elements during production as a quality control measure. An IBM PC based system has recently been tested for enrichment measurements for Nuclear Fuel Complex (NFC), Hyderabad. As required by NFC, the system is easy to calibrate. It is easy to switch the system from measuring the enrichment of fuel elements to that of pellets, and the data and results are stored automatically. The system uses an IBM PC plug-in card to acquire data. The card incorporates programmable interval timers (8253-5). The counter/timer devices are accessed via I/O-mapped I/O. A novel algorithm has been incorporated to make the system more reliable. The application software has been written in BASIC. (author). 9 refs., 1 fig

  7. An Appraisal of Analytical Methods for Plutonium and their Applications to the Analysis of Nuclear Materials; Evaluation des Methodes Analytiques de Dosage du Plutonium et de Leur Application a l'Analyse des Matieres Nucleaires; Otsenka analiticheskikh metodov opredeleniya plutoniya i ikh primenenie dlya analiza yadernykh materialov; Metodos Analiticos de Determinacion del Plutonio y su Empleo en el Analisis de Materiales Nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Milner, G. W.C.; Phillips, G. [Atomic Energy Research Establishment, Harwell, Berks. (United Kingdom)

    1966-02-15

    programmes for new nuclear fuels. (author) [French] Several methods exist for determining the plutonium content of nuclear materials. For quantities of the order of a milligram, the usable methods are differential spectrophotometry based on the colour of Pu(III), gravimetry based on PuO{sub 2}, gamma counting, and reduction/oxidation methods comprising potentiometric and amperometric titrations and constant-potential coulometry. For quantities of the order of a microgram, alpha counting, isotope dilution and polarographic methods are to be used. Some methods are better suited than others to particular types of sample, and the analyst seeking the best results faces a difficult choice. The authors set out the advantages and disadvantages of the methods cited, as these have emerged from experience gained over the years at the Atomic Energy Research Establishment, and discuss their accuracy, precision and sensitivity, as well as other characteristics of particular interest. Some methods cannot be used unless the plutonium has first been separated, to some extent, from the other constituents of the sample; the paper comments on the experience gained with the anion exchange and reversed-phase chromatography procedures used for this purpose, with particular emphasis on the suitability of this approach for radioactive samples. The authors also examine the many problems that arose during the (successful) application of these methods to the analysis of plutonium alloys, ceramics and cermets in various combinations containing uranium, thorium, iron, chromium, molybdenum, cerium and cobalt. They describe the difficulties of dissolving the samples and of reducing the plutonium to the required valence state, as well as the advantages

  8. Security and Privacy Analyses of Internet of Things Toys

    OpenAIRE

    Chu, Gordon; Apthorpe, Noah; Feamster, Nick

    2018-01-01

    This paper investigates the security and privacy of Internet-connected children's smart toys through case studies of three commercially-available products. We conduct network and application vulnerability analyses of each toy using static and dynamic analysis techniques, including application binary decompilation and network monitoring. We discover several publicly undisclosed vulnerabilities that violate the Children's Online Privacy Protection Rule (COPPA) as well as the toys' individual pr...

  9. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B{sub 4}C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time-window unborated water from ECCS systems will start to reflood the partly control rod free core. Recriticality might take place for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both superprompt power bursts and quasi steady-state power generation - for the studied range of parameters, i. e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core the power densities are high which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  10. Severe accident recriticality analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. E-mail: wiktor.frid@ski.se; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H

    2001-11-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies, the following issues have been investigated in the SARA project: (1) the energy deposition in the fuel during super-prompt power burst; (2) the quasi steady-state reactor power following the initial power burst; and (3) containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality--both super-prompt power bursts and quasi steady-state power generation--for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg s{sup -1} injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g{sup -1}, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s{sup -1}. In most cases, however, the predicted energy deposition was smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated

  11. Severe accident recriticality analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H.

    2001-01-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies, the following issues have been investigated in the SARA project: (1) the energy deposition in the fuel during super-prompt power burst; (2) the quasi steady-state reactor power following the initial power burst; and (3) containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality--both super-prompt power bursts and quasi steady-state power generation--for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg s -1 injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g -1 , was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s -1 . In most cases, however, the predicted energy deposition was smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated quasi steady

  12. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Puska, E.K.; Nilsson, Lars; Sjoevall, H.

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B 4 C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time-window unborated water from ECCS systems will start to reflood the partly control rod free core. Recriticality might take place for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both superprompt power bursts and quasi steady-state power generation - for the studied range of parameters, i. e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core the power densities are high which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  13. Portable memory consistency for software managed distributed memory in many-core SoC

    NARCIS (Netherlands)

    Rutgers, J.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Porting software to different platforms can require modifications of the application. One of the issues is that the targeted hardware supports another memory consistency model. As a consequence, the completion order of reads and writes in a multi-threaded application can change, which may result in
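
    The classic symptom of a changed memory consistency model is the flag/data handshake: with plain variables, a consumer may observe the flag before the producer's data write becomes visible. A portable Python sketch of the pattern, using a threading.Event to impose the ordering a bare boolean flag does not guarantee (the scenario is illustrative, not taken from the paper):

        import threading

        data = 0
        ready = threading.Event()  # supplies the publish/observe ordering a plain flag lacks

        def producer():
            global data
            data = 42      # write the payload first
            ready.set()    # publish: writes before set() are visible after wait()

        def consumer(results):
            ready.wait()           # block until the producer has published
            results.append(data)   # guaranteed to observe data == 42

        results = []
        threads = [threading.Thread(target=producer),
                   threading.Thread(target=consumer, args=(results,))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        assert results == [42]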

  14. Zamak samples analyses using EDXRF

    Energy Technology Data Exchange (ETDEWEB)

    Assis, J.T. de; Lima, I.; Monin, V., E-mail: joaquim@iprj.uerj.b, E-mail: inaya@iprj.uerj.b, E-mail: monin@iprj.uerj.b [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Engenharia Mecanica e Energia; Anjos, M. dos; Lopes, R.T., E-mail: ricardo@lin.ufrj.b [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Instrumentacao Nuclear; Alves, H., E-mail: marcelin@uerj.b, E-mail: haimon.dlafis@gmail.co [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica. Dept. de Fisica Aplicada e Termodinamica

    2009-07-01

    Zamak is a family of alloys with a zinc base metal and alloying elements of aluminium, magnesium and copper. Among all non-ferrous metal alloys, Zamak is one of the most widely applied, owing to its physical and mechanical properties and the ease with which it can be electrodeposited. It has good resistance to corrosion, traction, shock and wear. Its low melting point (approximately 400 deg C) allows greater durability of the mold, allowing greater production of melted series parts. Zamak can be used in several kinds of areas, for example to produce residential and industrial locks, construction and carpentry components, refrigerator hinges and so on. It is observed that in some cases the quality of these products is not very good. The problem may lie in the quality of the Zamak alloy purchased by the industries. One possible technique that can be used to investigate the quality of these alloys is Energy Dispersive X-ray fluorescence. In this paper we present results for eight samples of Zamak alloy obtained by this technique; it was possible to classify the Zamak alloys and to verify some irregularities in them. (author)

  15. Zamak samples analyses using EDXRF

    International Nuclear Information System (INIS)

    Assis, J.T. de; Lima, I.; Monin, V.; Anjos, M. dos; Lopes, R.T.; Alves, H.

    2009-01-01

    Zamak is a family of alloys with a zinc base metal and alloying elements of aluminium, magnesium and copper. Among all non-ferrous metal alloys, Zamak is one of the most widely applied, owing to its physical and mechanical properties and the ease with which it can be electrodeposited. It has good resistance to corrosion, traction, shock and wear. Its low melting point (approximately 400 deg C) allows greater durability of the mold, allowing greater production of melted series parts. Zamak can be used in several kinds of areas, for example to produce residential and industrial locks, construction and carpentry components, refrigerator hinges and so on. It is observed that in some cases the quality of these products is not very good. The problem may lie in the quality of the Zamak alloy purchased by the industries. One possible technique that can be used to investigate the quality of these alloys is Energy Dispersive X-ray fluorescence. In this paper we present results for eight samples of Zamak alloy obtained by this technique; it was possible to classify the Zamak alloys and to verify some irregularities in them. (author)

  16. Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses

    Science.gov (United States)

    Harper, Sam; Ruder, Eric; Roman, Henry A.; Geggel, Amelia; Nweke, Onyemaechi; Payne-Sturges, Devon; Levy, Jonathan I.

    2013-01-01

    Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative measures of health inequality in other settings, and these measures may be applicable to environmental regulatory analyses. In this paper, we provide information to assist policy decision makers in determining the viability of using measures of health inequality in the context of environmental regulatory analyses. We conclude that quantification of the distribution of inequalities in health outcomes across social groups of concern, considering both within-group and between-group comparisons, would be consistent with both the structure of regulatory analysis and the core definition of environmental justice. Appropriate application of inequality indicators requires thorough characterization of the baseline distribution of exposures and risks, leveraging data generally available within regulatory analyses. Multiple inequality indicators may be applicable to regulatory analyses, and the choice among indicators should be based on explicit value judgments regarding the dimensions of environmental justice of greatest interest. PMID:23999551
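
    One widely used family of such measures, the Theil index, splits cleanly into within-group and between-group components, which matches the paper's call to consider both comparisons. A sketch with invented exposure data (the decomposition formula is standard; the data and group labels are hypothetical):

        import numpy as np

        def theil(x):
            # Theil T index of inequality for positive values
            r = np.asarray(x, dtype=float) / np.mean(x)
            return float(np.mean(r * np.log(r)))

        def theil_decomposition(x, groups):
            # Total Theil T = within-group part + between-group part
            x, groups = np.asarray(x, dtype=float), np.asarray(groups)
            mu, n = x.mean(), len(x)
            within = between = 0.0
            for g in np.unique(groups):
                xg = x[groups == g]
                share = (len(xg) / n) * (xg.mean() / mu)  # group's share of total exposure
                within += share * theil(xg)
                between += share * np.log(xg.mean() / mu)
            return within, between

        # Hypothetical exposures for two demographic groups
        exposure = np.array([1.0, 1.2, 0.9, 2.5, 3.0, 2.8])
        group = np.array(["A", "A", "A", "B", "B", "B"])
        w, b = theil_decomposition(exposure, group)
        print(f"within={w:.4f} between={b:.4f} total={theil(exposure):.4f}")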

  17. Laser Beam Caustic Measurement with Focal Spot Analyser

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove; Gong, Hui; Bagger, Claus

    2005-01-01

    In industrial applications of high power CO2-lasers the caustic characteristics of the laser beam have great effects on the performance of the lasers. A well-defined, highly intense focused spot is essential for reliable production results. This paper presents a focal spot analyser that is developed...

  18. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  19. Scanning electron microscopy and micro-analyses

    International Nuclear Information System (INIS)

    Brisset, F.; Repoux, L.; Ruste, J.; Grillon, F.; Robaut, F.

    2008-01-01

    Scanning electron microscopy (SEM) and the related micro-analyses are used in a wide variety of domains, from academic environments to industrial ones. The overall theoretical bases, the main technical characteristics, and complementary information about practical usage and maintenance are developed in this book. High-vacuum and controlled-vacuum electron microscopes are thoroughly presented, as well as the latest generation of EDS (energy dispersive spectrometer) and WDS (wavelength dispersive spectrometer) micro-analysers. Besides these main topics, other analysis or observation techniques are approached, such as EBSD (electron backscattering diffraction), 3-D imaging, FIB (focussed ion beams), Monte-Carlo simulations, in-situ tests, etc. This book, in French, is the only one that treats this subject in such an exhaustive way. It is the fully updated version of a previous edition of 1979. It gathers the lectures given in 2006 at the summer school of Saint Martin d'Heres (France). Content: 1 - electron-matter interactions; 2 - characteristic X-radiation, Bremsstrahlung; 3 - electron guns in SEM; 4 - elements of electronic optics; 5 - vacuum techniques; 6 - detectors used in SEM; 7 - image formation and optimization in SEM; 7a - SEM practical instructions for use; 8 - controlled pressure microscopy; 8a - applications; 9 - energy selection X-spectrometers (energy dispersive spectrometers - EDS); 9a - EDS analysis; 9b - X-EDS mapping; 10 - technological aspects of WDS; 11 - processing of EDS and WDS spectra; 12 - X-microanalysis quantifying methods; 12a - quantitative WDS microanalysis of very light elements; 13 - statistics: precision and detection limits in microanalysis; 14 - analysis of stratified samples; 15 - crystallography applied to EBSD; 16 - EBSD: history, principle and applications; 16a - EBSD analysis; 17 - Monte Carlo simulation; 18 - insulating samples in SEM and X-ray microanalysis; 18a - insulating

  20. Pathway analyses implicate glial cells in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Laramie E Duncan

    Full Text Available The quest to understand the neurobiology of schizophrenia and bipolar disorder is ongoing, with multiple lines of evidence indicating abnormalities of glia, mitochondria, and glutamate in both disorders. Despite high heritability estimates of 81% for schizophrenia and 75% for bipolar disorder, compelling links between findings from neurobiological studies and findings from large-scale genetic analyses are only beginning to emerge. Ten publicly available gene sets (pathways) related to glia, mitochondria, and glutamate were tested for association to schizophrenia and bipolar disorder using MAGENTA as the primary analysis method. To determine the robustness of associations, secondary analyses were performed with ALIGATOR, INRICH, and Set Screen. Data from the Psychiatric Genomics Consortium (PGC) were used for all analyses. There were 1,068,286 SNP-level p-values for schizophrenia (9,394 cases/12,462 controls), and 2,088,878 SNP-level p-values for bipolar disorder (7,481 cases/9,250 controls). The Glia-Oligodendrocyte pathway was associated with schizophrenia, after correction for multiple tests, according to the primary analysis (MAGENTA p = 0.0005, 75% requirement for individual gene significance) and also achieved nominal levels of significance with INRICH (p = 0.0057) and ALIGATOR (p = 0.022). For bipolar disorder, Set Screen yielded nominally and method-wide significant associations to all three glial pathways, with the strongest association to the Glia-Astrocyte pathway (p = 0.002). Consistent with findings of white matter abnormalities in schizophrenia by other methods of study, the Glia-Oligodendrocyte pathway was associated with schizophrenia in our genomic study. These findings suggest that the abnormalities of myelination observed in schizophrenia are at least in part due to inherited factors, contrasted with the alternative of purely environmental causes (e.g. medication effects or lifestyle). While not the primary purpose of our study...

  1. The role of CFD computer analyses in hydrogen safety management

    International Nuclear Information System (INIS)

    Komen, E.M.J; Visser, D.C; Roelofs, F.; Te Lintelo, J.G.T

    2014-01-01

    The risks of hydrogen release and combustion during a severe accident in a light water reactor have attracted considerable attention after the Fukushima accident in Japan. Reliable computer analyses are needed for the optimal design of hydrogen mitigation systems, like e.g. passive autocatalytic recombiners (PARs), and for the assessment of the associated residual risk of hydrogen combustion. Traditionally, so-called Lumped Parameter (LP) computer codes are used for these purposes. In the last decade, significant progress has been made in the development, validation, and application of more detailed, three-dimensional Computational Fluid Dynamics (CFD) simulations for hydrogen safety analyses. The objective of the current paper is to address the following questions: - When are CFD computer analyses needed as a complement to the traditional LP code analyses for hydrogen safety management? - What is the validation status of the CFD computer codes for hydrogen distribution, mitigation, and combustion analyses? - Can CFD computer analyses nowadays be executed in a practical and reliable way for full-scale containments? The validation status and reliability of CFD code simulations are illustrated by validation analyses performed for experiments executed in the PANDA, THAI, and ENACCEF facilities. (authors)

  2. Summary of Electric Distribution System Analyses with a focus on DERs

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yingying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Homer, Juliet S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McDermott, Thomas E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Coddington, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mather, Barry [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-04-28

    The purpose of this document is to summarize types of electric distribution system analyses along with their application and relative maturity. Particular emphasis is placed on analyses associated with distributed energy resources (DERs). Analyses are separated into the categories of power flow, power quality, fault analysis, dynamic analysis and market analysis. Studies associated with DERs are called out in a separate section.

  3. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyse wind speed and energy (a renewable and environmentally friendly energy source). Solar and wind are main sources of energy that allow farmers to transfer the kinetic energy captured by a wind mill to pumping water, drying crops, heating green houses, rural electrification, or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study set out to initiate the data-gathering process for wavelet analyses of different scale effects and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37° 50′ N, longitude 30° 33′ E, at a height of 1200 m above mean sea level on a hill near the Süleyman Demirel University campus. Ten-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) were recorded by a data logger between July 2001 and February 2002. Wind speed values ranged from 0 m/s to 54 m/s. The annual mean speed value is 4.5 m/s at the 10 m level. Prevalent wind
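
    To make the idea of "different scale effects" concrete, the sketch below runs a minimal multi-level Haar wavelet decomposition on a short wind-speed series. The sample values are invented, and the Haar transform is only the simplest member of the wavelet family used in such studies.

        /* Minimal multi-level Haar wavelet decomposition of a wind-speed
         * series (m/s); the sample values are invented for illustration.
         * Each pass replaces the first half of the working range with
         * pairwise averages (coarse signal) and the second half with
         * pairwise differences (detail at that scale). */
        #include <stdio.h>

        static void haar(double *x, int n) {
            double tmp[64];
            for (int len = n; len > 1; len /= 2) {
                for (int i = 0; i < len / 2; i++) {
                    tmp[i]         = (x[2*i] + x[2*i+1]) / 2.0; /* average */
                    tmp[len/2 + i] = (x[2*i] - x[2*i+1]) / 2.0; /* detail  */
                }
                for (int i = 0; i < len; i++) x[i] = tmp[i];
            }
        }

        int main(void) {
            double v[8] = {4.1, 4.9, 5.4, 4.6, 3.8, 4.2, 6.0, 5.8}; /* 10-min means */
            haar(v, 8);
            printf("overall mean: %.2f m/s\n", v[0]);
            printf("details, coarse to fine:");
            for (int i = 1; i < 8; i++) printf(" %.2f", v[i]);
            printf("\n");
            return 0;
        }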

  4. Contribution to the study of influences in emission spectrography on solutions. Application to a general analysis method for stainless steels (1961); Contribution a l'etude des influences en spectographie d'emission sur solution. Application a une methode generale d'analyse des aciers inoxydables (1961)

    Energy Technology Data Exchange (ETDEWEB)

    Baudin, G [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1961-11-15

    In order to establish a general method of analysis of stainless steels by means of spark spectroscopy on solutions, a systematic study has been made of the factors involved. Variations in the acidity of the solutions, or in the ratio of the concentrations of two acids at constant pH, lead to a displacement of the calibration curve. Simple relations have been established between the concentration of the extraneous elements and the effects produced for the constituents Fe, Ti, Ni, Cr, Mn; a general method using correction charts is proposed for steels containing only these elements. The interactions in the case of the elements Mo, Nb, Ta and W were more complex, so their simultaneous separation was studied with the help of ion-exchange resins. A general method of analysis is proposed for stainless steels. (author)

  5. Improving word coverage using unsupervised morphological analyser

    Indian Academy of Sciences (India)

    To enable a computer to process information in human languages, ... An unsupervised morphological analyser (UMA) would learn how to analyse a language just by looking ... result for English, but they performed remarkably worse for Finnish and Turkish.

  6. Techniques for Analysing Problems in Engineering Projects

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe

    1998-01-01

    Description of how a CPM network can be used for analysing complex problems in engineering projects.

  7. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  8. Design basis event consequence analyses for the Yucca Mountain project

    International Nuclear Information System (INIS)

    Orvis, D.D.; Haas, M.N.; Martin, J.H.

    1997-01-01

    Design basis event (DBE) definition and analysis is an ongoing and integrated activity among the design and analysis groups of the Yucca Mountain Project (YMP). DBE's are those that potentially lead to breach of the waste package and waste form (e.g., spent fuel rods) with consequent release of radionuclides to the environment. A Preliminary Hazards Analysis (PHA) provided a systematic screening of external and internal events that were candidate DBE's that will be subjected to analyses for radiological consequences. As preparation, pilot consequence analyses for the repository subsurface and surface facilities have been performed to define the methodology, data requirements, and applicable regulatory limits

  9. Improvements in the stopping power library libdEdx and release of the web GUI dedx.au.dk

    DEFF Research Database (Denmark)

    Toftegaard, Jakob; Lühr, Armin; Sobolevsky, Nikolai

    2014-01-01

    is programmed in the language C to provide broad portability and high performance. A clean API provides full access to the underlying functions and thread safety in multi-threaded applications. The possibility to define arbitrary materials complements the list of predefined ICRU materials. Furthermore, we...
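
    The thread-safety claim above amounts to a usage guarantee: once the tables are set up, lookups can run concurrently from many threads. The sketch below illustrates that pattern with POSIX threads; the table values and the sp_eval() helper are hypothetical stand-ins, not the actual libdEdx API.

        /* Sketch of the usage pattern that thread safety in a lookup library
         * permits: many threads reading one immutable table concurrently.
         * The table contents and sp_eval() are hypothetical stand-ins, not
         * the real libdEdx API. Compile with: cc sketch.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        #define NPTS 5
        static const double energy[NPTS] = {1, 2, 5, 10, 20};       /* MeV      */
        static const double dedx[NPTS]   = {26, 16, 8.6, 4.6, 2.8}; /* made up  */

        /* Pure function over read-only data: safe to call from any thread. */
        static double sp_eval(double e) {
            for (int i = 0; i < NPTS - 1; i++)
                if (e <= energy[i + 1]) {            /* linear interpolation */
                    double t = (e - energy[i]) / (energy[i + 1] - energy[i]);
                    return dedx[i] + t * (dedx[i + 1] - dedx[i]);
                }
            return dedx[NPTS - 1];
        }

        static void *worker(void *arg) {
            double e = *(double *)arg;
            printf("S(%.1f MeV) = %.2f\n", e, sp_eval(e));
            return NULL;
        }

        int main(void) {
            pthread_t th[3];
            double e[3] = {1.5, 4.0, 12.0};
            for (int i = 0; i < 3; i++) pthread_create(&th[i], NULL, worker, &e[i]);
            for (int i = 0; i < 3; i++) pthread_join(th[i], NULL);
            return 0;
        }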

  10. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    Full Text Available In the testing of drugs prior to marketing and in their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; however, today it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing the mobile phase polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation line is connected to specific and sensitive detector systems, such as spectrofluorimeters, diode-array detectors and electrochemical detectors, as well as hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of the therapy of a disease.1 The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. It may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  11. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In the testing of drugs prior to marketing and in their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; however, today it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing the mobile phase polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation line is connected to specific and sensitive detector systems, such as spectrofluorimeters, diode-array detectors and electrochemical detectors, as well as hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of the therapy of a disease.1) The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. It may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  12. Construction of a bi-parametric analysis system. Application to the study of the decay of the 16.11 MeV level of ¹²C; Realisation d'un ensemble d'analyse biparametrique. Application a l'etude de la desintegration du niveau de 16,11 MeV du ¹²C

    Energy Technology Data Exchange (ETDEWEB)

    Engelhardt, H D [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1966-11-15

    A biparametric analysis system has been assembled to measure α-α spectra in coincidence (resolution 2τ = 10 ns). The correlations in energy and angle of the α-particles emitted from the reaction ¹¹B(p,α) have been studied using 163 keV protons produced by a Van de Graaff accelerator. Evidence has been obtained for the sequential decay of ¹²C* (16.11 MeV) via the 0⁺ and 2⁺ states of ⁸Be. Contributions from the ⁸Be* (4⁺) level or from the simultaneous break-up of ¹²C* (16.11 MeV) into three α-particles cannot be excluded. (author)

  13. The application of β-ray excitation fluorescence to the measurement of the thickness of deposits and to analysis; Applications de la fluorescence excitee au moyen des rayons β a la mesure des epaisseurs des depots et a l'analyse

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Seibel, G [Institut de Recherches de la Siderurgie Francaise (IRSID), 78 - Saint-Germain-en-Laye (France)

    1961-07-01

    The principles of the method are first outlined and the instrumentation used is described. The different types of radiation detectors are the subject of a detailed study. As sources of β-radiation, ⁹⁰(Sr + Y) and ¹⁴⁷Pm were used. Great care was taken to eliminate back-diffused electrons by deflecting them with a strong permanent magnet. The method was applied to the measurement of the thickness of deposits of Cr, Zn, Sn, Cd and Cu on iron, as well as Zn, Cr, Ag and Au on copper, and the results obtained are discussed. An attempt was made to use β-X-ray fluorescence for the analysis of minerals, iron ore and glass and for the routine control of Si-Mn, Si-Ca, Fe-Mn and Fe-W alloys. Finally, the method of β-X-ray fluorescence is compared with normal X-ray fluorescence and possibilities for further development are cited. (author)

  14. Application of multivariable analysis methods to the quantitative detection of gas by tin dioxide micro-sensors; Application des methodes d'analyse multivariables a la detection quantitative de gaz par microcapteurs a base de dioxyde d'etain

    Energy Technology Data Exchange (ETDEWEB)

    Perdreau, N.

    2000-01-17

    The electric conductivity of tin dioxide depends on the temperature of the material and on the nature and concentration of the surrounding gases. This work shows that the treatment of the electric conductance signals of one sensor by multivariable analysis methods allows the concentrations of binary or ternary mixtures of ethanol (0-80 ppm), carbon monoxide (0-300 ppm) and methane (0-1000 ppm) to be determined. Part of this study consisted of the design and implementation of an automatic test bench for acquiring the electric conductance of four sensors under thermal and gas cycles. It also revealed some disturbing effects (humidity, ...) on the measurement. Two sensor fabrication techniques were used to obtain conductances (as functions of temperature) that are distinct for each gas, reproducible across sensors, and sufficiently stable with time to allow the signals to be exploited by multivariable analysis methods (tin dioxide in the form of thin layers obtained by reactive evaporation, or in the form of sintered powder bars). Finally, it is shown that the quantitative determination of gases by the application of chemometric methods is possible, although the relation between the electric conductances on the one hand and the temperatures and concentrations on the other is nonlinear. Moreover, modelling with the 'Partial Least Squares' (PLS) method together with a pretreatment yields performance comparable to that obtained with neural networks. (O.M.)
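
    In the idealized linear limit, multivariable calibration reduces to inverting a sensitivity matrix that maps gas concentrations to conductance changes. The sketch below solves such a 3x3 system; the matrix and signals are invented, and the real responses, being nonlinear, required PLS rather than this direct inversion.

        /* Idealized linear-mixture calibration: solve K*c = g for the gas
         * concentrations c given conductance-change signals g. The real
         * work uses PLS because the sensor response is nonlinear; the
         * sensitivity matrix K and signals g below are invented. */
        #include <stdio.h>

        #define N 3  /* ethanol, CO, methane */

        int main(void) {
            double K[N][N] = {{0.80, 0.20, 0.05},   /* sensor sensitivities */
                              {0.10, 0.60, 0.15},
                              {0.05, 0.25, 0.40}};
            double g[N] = {35.0, 95.0, 120.0};      /* measured signals     */

            /* Gaussian elimination with partial pivoting. */
            for (int k = 0; k < N; k++) {
                int p = k;
                for (int i = k + 1; i < N; i++)
                    if ((K[i][k] < 0 ? -K[i][k] : K[i][k]) >
                        (K[p][k] < 0 ? -K[p][k] : K[p][k])) p = i;
                for (int j = 0; j < N; j++) {
                    double t = K[k][j]; K[k][j] = K[p][j]; K[p][j] = t;
                }
                { double t = g[k]; g[k] = g[p]; g[p] = t; }
                for (int i = k + 1; i < N; i++) {
                    double f = K[i][k] / K[k][k];
                    for (int j = k; j < N; j++) K[i][j] -= f * K[k][j];
                    g[i] -= f * g[k];
                }
            }
            double c[N];
            for (int i = N - 1; i >= 0; i--) {      /* back substitution */
                c[i] = g[i];
                for (int j = i + 1; j < N; j++) c[i] -= K[i][j] * c[j];
                c[i] /= K[i][i];
            }
            printf("ethanol %.1f ppm, CO %.1f ppm, CH4 %.1f ppm\n",
                   c[0], c[1], c[2]);
            return 0;
        }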

  15. Finite element analyses of a linear-accelerator electron gun

    Science.gov (United States)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  16. Finite element analyses of a linear-accelerator electron gun

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  17. Finite element analyses of a linear-accelerator electron gun

    International Nuclear Information System (INIS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator

  18. Quality assurance requirements for the computer software and safety analyses

    International Nuclear Information System (INIS)

    Husarecek, J.

    1992-01-01

    The requirements placed on the development, procurement, maintenance, and application of software for the creation or processing of data during the design, construction, operation, repair, maintenance and safety-related upgrading of nuclear power plants are given. The verification and validation processes are highlighted, and the requirements placed on the software documentation are outlined. The general quality assurance principles applied to safety analyses are characterized. (J.B.). 1 ref

  19. Freefem++ in THM analyses of KBS-3 deposition hole

    International Nuclear Information System (INIS)

    Lempinen, A.

    2006-12-01

    The applicability of Freefem++ as software for the thermo-hydro-mechanical analysis of the KBS-3V deposition hole was evaluated. Freefem++ is software for multiphysical simulations with the finite element method. A set of previously performed analyses was successfully repeated with Freefem++. The only significant problem was imposing unique values for variables at the canister surface. This problem can be circumvented with an iterative method, and it can possibly be solved later, since Freefem++ is open-source software. (orig.)

  20. Fracture analyses of WWER reactor pressure vessels

    International Nuclear Information System (INIS)

    Sievers, J.; Liu, X.

    1997-01-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab

  1. Fracture analyses of WWER reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, J.; Liu, X. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    1997-09-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab.

  2. Analyse quantitative détaillée des distillats moyens par couplage CG/SM. Application à l'étude des schémas réactionnels du procédé d'hydrotraitement; Quantitative Analysis of Middle Distillates by GC/MS Coupling. Application to Hydrotreatment Process Mechanisms

    Directory of Open Access Journals (Sweden)

    Fafet A.

    2006-11-01

    Full Text Available Detailed analysis of middle distillates is an essential step in understanding the reaction mechanisms and the kinetics of certain refining processes such as hydrotreatment. A new method has been developed that combines gas chromatography/mass spectrometry (GC/MS) coupling with quantitative analysis by chemical family using mass spectrometry. Gas chromatography, performed on an apolar column, separates the compounds present in the gas oil in distillation order, while mass spectrometry quantifies the chemical families per interval of carbon-atom number or boiling point. The method thus gives access to the distribution by carbon number of each chemical family (alkanes, cycloalkanes, aromatic hydrocarbons with one or several rings, sulfur-containing aromatic hydrocarbons). It has been validated and applied to a hydrotreatment feedstock and product. A detailed analysis of middle distillates is essential for understanding the reaction mechanism and for studying the kinetics of refining processes such as hydrotreatment. In fact, when we see the complexity of saturated and aromatic hydrocarbon mixtures appearing in gas oil, we realize that it is necessary to have a very detailed analysis of those cuts to understand the mechanisms involved in refining processes and to be able to describe their kinetics. Each gas oil has a very different composition and therefore a specific reactivity. That is why we have tried to develop predictive kinetic models to avoid experimenting in pilot plants, which is very expensive. But, even if all the compounds of a gasoline (PI-200°C) have now been identified and quantified using gas chromatography (1), such is not the case for heavier cuts. Only an overall characterization can be made, by chemical family. The techniques employed are, for example, HPLC (3,4 or

  3. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive). Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  4. An MDE Approach for Modular Program Analyses

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Aksit, Mehmet; Rensink, Arend

    Program analyses are an important tool to check if a system fulfills its specification. A typical implementation strategy for program analyses is to use an imperative, general-purpose language like Java, and access the program to be analyzed through libraries that offer an API for reading, writing

  5. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  6. Diversity of primary care systems analysed.

    NARCIS (Netherlands)

    Kringos, D.; Boerma, W.; Bourgueil, Y.; Cartier, T.; Dedeu, T.; Hasvold, T.; Hutchinson, A.; Lember, M.; Oleszczyk, M.; Pavlick, D.R.

    2015-01-01

    This chapter analyses differences between countries and explains why countries differ regarding the structure and process of primary care. The components of primary care strength that are used in the analyses are health policy-making, workforce development and in the care process itself (see Fig.

  7. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    This presentation shows figures concerning analyses of inelastic effects in pipework, as follows: comparison of experimental and calculated simplified-analysis results for free-end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of a straight pipe under combined bending and torsion; and results of fatigue tests of pipe bends

  8. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
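
    At the core of every Boolean viewshed is a line-of-sight test between the observer and each target cell. The sketch below shows a minimal version over an invented 8x8 elevation grid, using nearest-neighbour sampling along the ray; production GIS tools refine this with proper interpolation, Earth curvature, and refraction corrections.

        /* Minimal Boolean line-of-sight test over a raster surface model:
         * a target cell is visible if the slope from the observer to every
         * intermediate sample stays below the slope to the target itself.
         * The 8x8 elevation grid and observer height are invented. */
        #include <stdio.h>

        #define N 8
        static const double z[N][N] = { /* elevations, metres (invented) */
            {10,10,10,10,10,10,10,10},
            {10,11,11,12,12,11,11,10},
            {10,11,13,14,14,13,11,10},
            {10,12,14,20,20,14,12,10},
            {10,12,14,20,20,14,12,10},
            {10,11,13,14,14,13,11,10},
            {10,11,11,12,12,11,11,10},
            {10,10,10,10,10,10,10,10}};

        static int visible(int ox, int oy, double oh, int tx, int ty) {
            double ze = z[oy][ox] + oh;                  /* observer eye level */
            int steps = 4 * N;                           /* dense ray sampling */
            double dt = z[ty][tx] - ze;                  /* rise to target     */
            for (int s = 1; s < steps; s++) {
                double f = (double)s / steps;
                int x = (int)(ox + f * (tx - ox) + 0.5); /* nearest neighbour  */
                int y = (int)(oy + f * (ty - oy) + 0.5);
                if (x == tx && y == ty) break;
                if (z[y][x] - ze > f * dt) return 0;     /* terrain blocks ray */
            }
            return 1;
        }

        int main(void) {
            /* Observer in one corner, 1.7 m above ground; two targets. */
            printf("peak (3,3):  %s\n", visible(0, 0, 1.7, 3, 3) ? "visible" : "hidden");
            printf("far corner:  %s\n", visible(0, 0, 1.7, 7, 7) ? "visible" : "hidden");
            return 0;
        }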

  9. SU-F-BRD-02: Application of ARCHER-RT -- A GPU-Based Monte Carlo Dose Engine for Radiation Therapy -- to Tomotherapy and Patient-Independent IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Su, L.; Du, X.; Liu, T.; Xu, X. [Rensselaer Polytechnic Institute, Troy, NY (United States); Yang, Y.; Bednarz, B. [University of Wisconsin - Madison, Madison, Wisconsin (United States); Sterpin, E. [Universite catholique de Louvain, Brussels, Brussels (Belgium)

    2014-06-15

    Purpose: As a module of ARCHER -- Accelerated Radiation-transport Computations in Heterogeneous EnviRonments -- ARCHER-RT is designed for RadioTherapy (RT) dose calculation. This paper describes the application of ARCHER-RT to patient-dependent TomoTherapy and patient-independent IMRT. It also conducts a 'fair' comparison of different GPUs and a multicore CPU. Methods: The source input used for patient-dependent TomoTherapy is a phase space file (PSF) generated from the optimized plan. For patient-independent IMRT, the open-field PSF is used for different cases. The intensity modulation is simulated by a fluence map. The GEANT4 code is used as the benchmark. DVH and gamma index tests are employed to evaluate the accuracy of the ARCHER-RT code. Some previous studies reported misleading speedups by comparing GPU code with serial CPU code. To perform a fairer comparison, we write multi-thread code with OpenMP to fully exploit the computing potential of the CPU. The hardware involved in this study comprises a 6-core Intel E5-2620 CPU, 6 NVIDIA M2090 GPUs, a K20 GPU and a K40 GPU. Results: Dosimetric results from ARCHER-RT and GEANT4 show good agreement. The 2%/2mm gamma test pass rates for different clinical cases are 97.2% to 99.7%. A single M2090 GPU needs 50~79 seconds for the simulation to achieve a statistical error of 1% in the PTV. The K40 card is about 1.7~1.8 times faster than the M2090 card. Using 6 M2090 cards, the simulation can be finished in about 10 seconds. For comparison, the Intel E5-2620 needs 507~879 seconds for the same simulation. Conclusion: We successfully applied ARCHER-RT to TomoTherapy and patient-independent IMRT, and conducted a fair comparison between GPU and CPU performance. The ARCHER-RT code is both accurate and efficient and may be used towards clinical applications.
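
    The 'fair comparison' point above hinges on giving the CPU a genuinely parallel baseline. Assuming the dose kernel parallelizes over independent particle histories, the pattern looks like the OpenMP sketch below -- a toy free-path tally, not the ARCHER-RT kernel -- with one RNG stream per thread to avoid contention.

        /* Pattern behind a "fair" multicore CPU baseline: parallelise the
         * Monte Carlo history loop with OpenMP and give each thread its own
         * RNG stream, so all cores are loaded. The kernel below just tallies
         * exponential free paths, a stand-in for real particle transport.
         * Compile with: cc -fopenmp mc.c -lm */
        #include <math.h>
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const long nhist = 10000000;
            const double mu = 0.2;          /* attenuation coefficient, 1/cm */
            double tally = 0.0;

            #pragma omp parallel reduction(+:tally)
            {
                unsigned seed = 1234u + 97u * (unsigned)omp_get_thread_num();
                #pragma omp for
                for (long i = 0; i < nhist; i++) {
                    double u = (rand_r(&seed) + 1.0) / ((double)RAND_MAX + 2.0);
                    tally += -log(u) / mu;  /* sampled free path, cm */
                }
            }
            printf("mean free path: %.4f cm (analytic %.4f)\n",
                   tally / nhist, 1.0 / mu);
            return 0;
        }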

  10. Level II Ergonomic Analyses, Dover AFB, DE

    Science.gov (United States)

    1999-02-01

    IERA-RS-BR-TR-1999-0002, United States Air Force IERA: Level II Ergonomic Analyses, Dover AFB, DE. Andrew Marcotte, Marilyn Joyce (The Joyce...). Contents: 1.0 Introduction; 1.1 Purpose of the Level II Ergonomic Analyses; 1.2 Approach; 1.2.1 Initial Shop Selection and Administration of the

  11. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  12. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    This paper discusses the practice of cost-benefit analyses of transportation infrastructure investment projects from the meta-theoretical perspective of critical realism. Such analyses are based on a number of untenable ontological assumptions about social value, human nature and the natural......-to-pay investigations. Accepting the ontological and epistemological assumptions of cost-benefit analysis involves an implicit acceptance of the ethical and political values favoured by these assumptions. Cost-benefit analyses of transportation investment projects tend to neglect long-term environmental consequences...

  13. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  14. 7 CFR 94.102 - Analyses available.

    Science.gov (United States)

    2010-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... glycol, SLS, and zeolex. There are also tests for starch, total sugars, sugar profile, whey, standard...

  15. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Science.gov (United States)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  16. Analyse of Maintenance Cost in ST

    CERN Document Server

    Jenssen, B W

    2001-01-01

    An analysis has been carried out in ST concerning the total costs for the division. Even though the target was the maintenance costs in ST, the overall budget has been analysed. This has been done since there is a close relation between investment & consolidation and the required level of maintenance. The purpose of the analysis was to focus on maintenance cost in ST as a ratio of total maintenance costs over the replacement value of the equipment, and to make some comparisons with other industries and laboratories. Families of equipment have been defined and their corresponding ratios calculated. This first approach gives us some "quantitative" measurements. This analysis should be combined with performance indicators (more "qualitative" measurements) that tell us how well we are performing. This will help us in defending our budget and making better priorities, and we will satisfy the requirements of our external auditors.
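
    The headline metric here is simple arithmetic, as the sketch below makes explicit; the equipment families and cost figures in it are invented placeholders.

        /* The ratio discussed above, computed per equipment family:
         * annual maintenance cost divided by equipment replacement value.
         * The families and figures are invented placeholders. */
        #include <stdio.h>

        int main(void) {
            const char *family[] = {"cooling", "electrical", "handling"};
            double maint[]  = {0.9, 2.1, 0.4};    /* MCHF per year (invented) */
            double replac[] = {30.0, 52.0, 16.0}; /* MCHF (invented)          */

            for (int i = 0; i < 3; i++)
                printf("%-10s ratio = %.1f%%\n",
                       family[i], 100.0 * maint[i] / replac[i]);
            return 0;
        }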

  17. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  18. Safety analyses for reprocessing and waste processing

    International Nuclear Information System (INIS)

    1983-03-01

    Presentation of an incident analysis of process steps of the RP, simplified considerations concerning safety, and safety analyses of the storage and solidification facilities of the RP. A release tree method is developed and tested. An incident analysis of process steps, the evaluation of the SRL study and safety analyses of the storage and solidification facilities of the RP are performed in particular. (DG)

  19. Risk analyses of nuclear power plants

    International Nuclear Information System (INIS)

    Jehee, J.N.T.; Seebregts, A.J.

    1991-02-01

    Probabilistic risk analyses of nuclear power plants are carried out by systematically analyzing the possible consequences of a broad spectrum of causes of accidents. The risk can be expressed as the probabilities of meltdown, radioactive releases, or harmful effects on the environment. Following risk policies for chemical installations, as expressed in the mandatory External Safety Reports (EVRs) or, e.g., the publication "How to deal with risks", probabilistic risk analyses are required for nuclear power plants

  20. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  1. A review of bioinformatic methods for forensic DNA analyses.

    Science.gov (United States)

    Liu, Yao-Yuan; Harbison, SallyAnn

    2018-03-01

    Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used.

  2. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  3. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  4. Physiological and enzymatic analyses of pineapple subjected to ionizing radiation

    International Nuclear Information System (INIS)

    Silva, Josenilda Maria da; Silva, Juliana Pizarro; Spoto, Marta Helena Fillet

    2007-01-01

    The physiological and enzymatic post-harvest characteristics of the pineapple cultivar Smooth Cayenne were evaluated after the fruits were gamma-irradiated with doses of 100 and 150 Gy and stored for 10, 20 and 30 days at 12 °C (±1) and a relative humidity of 85% (±5). Physiological and enzymatic analyses were made for each storage period to evaluate the alterations resulting from the application of ionizing radiation. Control specimens showed higher values of soluble pectins, total pectins, reducing sugars, sucrose and total sugars, and lower values of polyphenoloxidase and polygalacturonase enzyme activities. All the analyses indicated that storage time is a significant influencing factor. The 100 Gy dose and 20-day storage period gave the best results from the standpoint of maturation and conservation of fruit quality. (author)

  5. Process for carrying out analyses based on concurrent reactions

    Energy Technology Data Exchange (ETDEWEB)

    Glover, J S; Shepherd, B P

    1980-01-03

    The invention refers to a process for carrying out analyses based on concurrent (competitive) reactions. A portion of the compound to be analysed, together with a standard quantity of the same compound in labelled form, undergoes a common reaction with a standard quantity of a reagent, which must be less than the sum of the two portions of the reacting compound. After a forced phase change (precipitation, absorption, etc.), the labelled reaction product and the labelled unreacted compound resulting from the competition are separated in a tube (e.g. by centrifuging), and the radioactivity of the two phases in contact is measured separately. The shielded measuring device developed for this purpose, suitable for centrifuge tubes of known dimensions, is also included in the patent claims. As an example of the applications of the method, the insulin concentration of a defined serum is measured (radioimmunoassay).
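
    Under the simplifying assumption that labelled and unlabelled molecules react identically, the competitive principle can be written as a one-line mass balance; this idealized form is an illustration, not the patent's calibration procedure:

        \[
          \frac{B^{*}}{B^{*} + F^{*}} \;=\; \frac{R}{S^{*} + U}
          \qquad\Longrightarrow\qquad
          U \;=\; R\,\frac{B^{*} + F^{*}}{B^{*}} \;-\; S^{*}
        \]

    where S* is the labelled standard quantity, U the unknown unlabelled quantity, R < S* + U the limiting reagent quantity, and B* and F* the measured activities of the bound and free phases. In practice a measured standard curve replaces this ideal relation.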

  6. Report of analyses for light hydrocarbons in ground water

    International Nuclear Information System (INIS)

    Dromgoole, E.L.

    1982-04-01

    This report contains on microfiche the results of analyses for methane, ethane, propane, and butane in 11,659 ground water samples collected in 47 western and three eastern 1° x 2° quadrangles of the National Topographic Map Series (Figures 1 and 2), along with a brief description of the analytical technique used and some simple, descriptive statistics. The ground water samples were collected as part of the National Uranium Resource Evaluation (NURE) hydrogeochemical and stream sediment reconnaissance. Further information on the ground water samples can be obtained by consulting the NURE data reports for the individual quadrangles. This information includes (1) measurements characterizing water samples (pH, conductivity, and alkalinity), (2) physical measurements, where applicable (water temperature, well description, and other measurements), and (3) elemental analyses

  7. The moral economy of austerity: analysing UK welfare reform.

    Science.gov (United States)

    Morris, Lydia

    2016-03-01

    This paper notes the contemporary emergence of 'morality' in both sociological argument and political rhetoric, and analyses its significance in relation to ongoing UK welfare reforms. It revisits the idea of 'moral economy' and identifies two strands in its contemporary application; that all economies depend on an internal moral schema, and that some external moral evaluation is desirable. UK welfare reform is analysed as an example of the former, with reference to three distinct orientations advanced in the work of Freeden (1996), Laclau (2014), and Lockwood (1996). In this light, the paper then considers challenges to the reform agenda, drawn from third sector and other public sources. It outlines the forms of argument present in these challenges, based respectively on rationality, legality, and morality, which together provide a basis for evaluation of the welfare reforms and for an alternative 'moral economy'.

  8. Mass separated neutral particle energy analyser

    International Nuclear Information System (INIS)

    Takeuchi, Hiroshi; Matsuda, Toshiaki; Miura, Yukitoshi; Shiho, Makoto; Maeda, Hikosuke; Hashimoto, Kiyoshi; Hayashi, Kazuo.

    1983-09-01

    A mass separated neutral particle energy analyser which could simultaneously measure hydrogen and deuterium atoms emitted from a tokamak plasma was constructed. The analyser was calibrated for energy and mass separation in the energy range from 0.4 keV to 9 keV. In order to investigate the behavior of deuterons and protons in the JFT-2 tokamak plasma heated with ion cyclotron waves and neutral beam injection, this analyser was installed on the JFT-2 tokamak. It was found that the energy spectrum could be determined with sufficient accuracy. The ion temperature and the ratio of deuteron to proton density obtained from the energy spectrum were in good agreement with the values deduced from the Doppler broadening of the Ti XIV line and from the line intensities of Hα and Dα, respectively. (author)

  9. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques for investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were employed in these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  10. Periodic safety analyses; Les essais periodiques

    Energy Technology Data Exchange (ETDEWEB)

    Gouffon, A.; Zermizoglou, R.

    1990-12-01

    The IAEA Safety Guide 50-SG-S8, devoted to 'Safety Aspects of Foundations of Nuclear Power Plants', indicates that the operator of an NPP should establish a program of inspection of safe operation during construction, start-up and the service life of the plant, in order to obtain the data needed for estimating the lifetime of structures and components. At the same time, the program should ensure that the safety margins remain appropriate. Periodic safety analyses are an important part of the safety inspection program. Periodic safety analysis is a method for testing the whole safety system, or a part of it, against precise criteria. Periodic safety analyses are not meant for the qualification of plant components. Separate analyses are devoted to start-up, qualification of components and materials, and aging. All these analyses are described in this presentation. The last chapter describes the experience obtained for PWR-900 and PWR-1300 units from 1986-1989.

  11. A Simple, Reliable Precision Time Analyser

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, B. V.; Nargundkar, V. R.; Subbarao, K.; Kamath, M. S.; Eligar, S. K. [Atomic Energy Establishment Trombay, Bombay (India)

    1966-06-15

    A 30-channel time analyser is described. The time analyser was designed and built for pulsed neutron research but can be applied to other uses. Most of the logic is performed by means of ferrite memory core and transistor switching circuits. This leads to great versatility, low power consumption, extreme reliability and low cost. The analyser described provides channel widths from 10 µs to 10 ms; arbitrarily wider channels are easily obtainable. It can handle counting rates up to 2000 counts/min in each channel with less than 1% dead time loss. There is a provision for an initial delay equal to 100 channel widths. An input pulse de-randomizer unit using tunnel diodes ensures exactly equal channel widths. A brief description of the principles involved in the core switching circuitry is given. The core-transistor transfer loop is compared with the usual core-diode loops and is shown to be more versatile and better adapted to the making of a time analyser. The circuits derived from the basic loop are described. These include the scale of ten, the frequency dividers and the delay generator. The current drivers developed for driving the cores are described. The crystal-controlled clock which controls the width of the time channels and synchronizes the operation of the various circuits is described. The detector pulse derandomizer unit using tunnel diodes is described. The scheme of the time analyser is then described, showing how the various circuits can be integrated to form a versatile time analyser. (author)
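
    In software terms, the analyser's channel logic reduces to binning event arrival times after the initial delay, as in the sketch below; the event times are invented, while the 30 channels, 10 µs width, and 100-channel-width delay follow the figures quoted above.

        /* Sorting event times into the analyser's 30 channels, using the
         * parameters quoted above: channel width W (here 10 us) and an
         * initial delay of 100 channel widths. Event times are invented. */
        #include <stdio.h>

        #define NCH 30

        int main(void) {
            const double W = 10.0;          /* channel width, microseconds */
            const double delay = 100.0 * W; /* initial delay               */
            long hist[NCH] = {0};
            double events[] = {1004.0, 1012.0, 1155.0, 1296.0, 950.0, 1299.9};

            for (int i = 0; i < 6; i++) {
                double t = events[i] - delay;   /* time after the delay     */
                if (t < 0.0) continue;          /* before the first channel */
                int ch = (int)(t / W);
                if (ch < NCH) hist[ch]++;       /* else after the last one  */
            }
            for (int ch = 0; ch < NCH; ch++)
                if (hist[ch]) printf("channel %2d: %ld counts\n", ch, hist[ch]);
            return 0;
        }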

  12. Analysing Old Testament poetry: Basic issues in contemporary exegesis

    Directory of Open Access Journals (Sweden)

    G. T. M. Prinsloo

    1991-08-01

    Full Text Available The wealth of publications on matters relating to Old Testament poetry is witness to the fact that this subject has become a focal point in Old Testament studies. In this paper, an overview of contemporary publications is given. The basic issues, both on the level of poetic theory and practical application, are pointed out. A tendency towards a comprehensive literary approach is definitely present and should be encouraged. Only when a poem is analysed on all levels and by all means, will the richness of its meaning be appreciated.

  13. SCALE Graphical Developments for Improved Criticality Safety Analyses

    International Nuclear Information System (INIS)

    Barnett, D.L.; Bowman, S.M.; Horwedel, J.E.; Petrie, L.M.

    1999-01-01

    New computer graphic developments at Oak Ridge National Laboratory (ORNL) are being used to provide visualization of criticality safety models and calculational results, as well as tools for criticality safety analysis input preparation. The purpose of this paper is to present the status of current development efforts to continue to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system. Applications for criticality safety analysis in the areas of 3-D model visualization, input preparation and execution via a graphical user interface (GUI), and two-dimensional (2-D) plotting of results are discussed

  14. Fundamental data analyses for measurement control

    International Nuclear Information System (INIS)

    Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.

    1987-02-01

    A set of measurement control data analyses was selected for use by analysts responsible for maintaining the measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems, possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
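
    A bias control chart of the kind described here can be set up in a few lines: estimate the mean and standard deviation from historical data, draw ±3σ limits, and flag points outside them. The sketch below uses invented numbers and omits the supplementary statistical tests.

        /* Sketch of the bias control chart idea: control limits are set from
         * historical calibration data at +/- 3 standard deviations about the
         * mean, and new measurements outside the limits raise an alarm.
         * All numbers are invented. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double hist[] = {0.02, -0.01, 0.00, 0.03, -0.02, 0.01, 0.00, -0.03};
            int n = 8;

            double mean = 0.0, var = 0.0;
            for (int i = 0; i < n; i++) mean += hist[i];
            mean /= n;
            for (int i = 0; i < n; i++) var += (hist[i] - mean) * (hist[i] - mean);
            double sd = sqrt(var / (n - 1));

            double lo = mean - 3.0 * sd, hi = mean + 3.0 * sd;
            printf("control limits: [%.3f, %.3f]\n", lo, hi);

            double new_pts[] = {0.01, 0.09, -0.02};
            for (int i = 0; i < 3; i++)
                printf("measurement %.3f: %s\n", new_pts[i],
                       (new_pts[i] < lo || new_pts[i] > hi)
                           ? "OUT OF CONTROL" : "in control");
            return 0;
        }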

  15. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across...... national traditions. The framework has two main components, an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated...

  16. Power System Oscillatory Behaviors: Sources, Characteristics, & Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Follum, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dosiek, Luke A. [Union College, Schenectady, NY (United States); Pierre, John W. [Univ. of Wyoming, Laramie, WY (United States)

    2017-05-17

    This document is intended to provide a broad overview of the sources, characteristics, and analyses of natural and forced oscillatory behaviors in power systems. These aspects are necessarily linked. Oscillations appear in measurements with distinguishing characteristics derived from the oscillation’s source. These characteristics determine which analysis methods can be appropriately applied, and the results from these analyses can only be interpreted correctly with an understanding of the oscillation’s origin. To describe oscillations both at their source within a physical power system and within measurements, a perspective from the boundary between power system and signal processing theory has been adopted.

  17. Value and cost analyses for solar thermal-storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Luft, W.; Copeland, R.J.

    1983-04-01

    Value and cost data for thermal energy storage are presented for solar thermal central receiver systems for which thermal energy storage appears to be attractive. Both solar thermal electric power and industrial process heat applications are evaluated. The value of storage is based on the cost for fossil fuel and solar thermal collector systems in 1990. The costing uses a standard lifetime methodology with the storage capacity as a parameter. Both value and costs are functions of storage capacity. However, the value function depends on the application. Value/cost analyses for first-generation storage concepts for five central receiver systems (molten salt, water/steam, organic fluid, air, and liquid metal) established the reference against which new systems were compared. Some promising second-generation energy storage concepts have been identified, and some more advanced concepts have also been evaluated.

  18. RBMK-LOCA-Analyses with the ATHLET-Code

    Energy Technology Data Exchange (ETDEWEB)

    Petry, A. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Kurfuerstendamm, Berlin (Germany); Domoradov, A.; Finjakin, A. [Research and Development Institute of Power Engineering, Moscow (Russian Federation)

    1995-09-01

    The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes to the Russian-designed RBMK reactor. One part of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented, and the main assumptions of the ATHLET RBMK model are discussed. As an example of the application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper closes with an overview of some of the problems posed by, and the experience gained in, applying Western best-estimate codes to RBMK calculations.

  19. Application of SNMP on CATV

    Science.gov (United States)

    Huang, Hong-bin; Liu, Wei-ping; Chen, Shun-er; Zheng, Liming

    2005-02-01

    A new type of CATV network management system, built around a general-purpose MCU and supporting SNMP, is proposed in this paper. From both the hardware and the software point of view, the function and implementation of every module in the system, including physical-layer communication, protocol processing, and data processing, are analyzed. In our design, the management system uses an IP MAN as the data transmission channel, and every controlled object in the management structure has an SNMP agent. The SNMP agent developed has four functional modules: a physical-layer communication module, a protocol-processing module, an internal data-processing module, and an MIB management module. In the paper, the structure and function of every module are designed and demonstrated, and the related hardware circuits, software flow, and experimental results are presented. Furthermore, by introducing an RTOS into the software, the MCU firmware can manage multiple threads, such as driving the fast Ethernet controller, TCP/IP processing, and serial-port signal monitoring, which greatly improves CPU efficiency.
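
    The thread layout sketched in the abstract can be outlined in a few lines. The Python skeleton below (the queue-based coupling and module names are illustrative assumptions, not the paper's MCU firmware) mimics an agent whose protocol-processing and serial-monitoring threads run concurrently:

        import queue
        import threading
        import time

        events = queue.Queue()  # stands in for the internal data-process module

        def serial_monitor():
            """Stand-in for the serial-port signal-monitoring thread."""
            while True:
                time.sleep(1.0)                 # assumed poll interval
                events.put(("serial", "status sample"))

        def protocol_handler():
            """Stand-in for the SNMP/TCP-IP protocol-processing thread."""
            while True:
                source, payload = events.get()  # block until another module reports
                print(f"agent would answer an SNMP query using {source}: {payload}")

        for worker in (serial_monitor, protocol_handler):
            threading.Thread(target=worker, daemon=True).start()
        time.sleep(3)  # let the demo threads exchange a few events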

  20. Application of annual ring analyses to the determination of smoke damage. II. Contribution to the evaluation of annual ring analyses

    Energy Technology Data Exchange (ETDEWEB)

    Vins, B

    1962-01-01

    The most emission-endangered forested areas of Czechoslovakia are the Krusne Hory and Decinsky Sneznik regions. The condition of the forest cover today is such that not only the productivity of the forests but also their hydrological and ecological functions are in jeopardy. Measurements by annual ring analysis were made on trees at 102 selected experimental sites to determine the decrease in growth gain due to air pollution. The decrease in growth gains was first noted in 1952. Up to 1958 this decrease was differentiated according to the degree of damage caused in different areas: forests in the Chomutov area exhibited 30% damage, while forests in the Most and Teplice areas exhibited 90% damage. In the Decinsky Sneznik area the drop in growth gain was already noted in 1947, and from then on the annual growth impairment was about 15%. From 1953 on, this area suffered growth gain damage of the same magnitude as the Krusne Hory area. During 1954-1958, the growth gain decreases in four areas exposed to pollution of different severity were 23, 37, 40, and 70% respectively, compared to the normal growth gains of trees grown in an unpolluted environment.

  1. 10 CFR 61.13 - Technical analyses.

    Science.gov (United States)

    2010-01-01

    ... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The analyses... processes such as erosion, mass wasting, slope failure, settlement of wastes and backfill, infiltration through covers over disposal areas and adjacent soils, and surface drainage of the disposal site. The...

  2. Analysing Simple Electric Motors in the Classroom

    Science.gov (United States)

    Yap, Jeff; MacIsaac, Dan

    2006-01-01

    Electromagnetic phenomena and devices such as motors are typically unfamiliar to both teachers and students. To better visualize and illustrate the abstract concepts (such as magnetic fields) underlying electricity and magnetism, we suggest that students construct and analyse the operation of a simply constructed Johnson electric motor. In this…

  3. A quantitative method for analysing radio

    Directory of Open Access Journals (Sweden)

    Christine Lejre

    2014-06-01

    Full Text Available In the Danish as well as the international radio literature, proposals for methods of analysing the radio medium are sparse. This is presumably because radio is difficult to analyse: it is a medium that is not visualized in the form of images or supported by printed text. The purpose of this article is to describe a new quantitative method for analysing radio that takes particular account of the radio medium's modality: sound structured as a linear sequence in time. The method thus supports the radio medium both as a medium in time and as a blind medium. The method was developed in connection with a comparative analysis of cultural programmes on P1 and Radio24syv carried out for Danmarks Radio. The article argues that the method is well suited to analysing not only radio but also other media platforms and various journalistic subject areas.

  4. Analysing User Lifetime in Voluntary Online Collaboration

    DEFF Research Database (Denmark)

    McHugh, Ronan; Larsen, Birger

    2010-01-01

    This paper analyses persuasion in online collaboration projects. It introduces a set of heuristics that can be applied to such projects and combines these with a quantitative analysis of user activity over time. Two example sites are studied, Open Street Map and The Pirate Bay. Results show that ...

  5. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) was used to examine the flow through the cap openings and further down into the intake pipes. This was combined with dimension analyses...

  6. Quantitative analyses of shrinkage characteristics of neem ...

    African Journals Online (AJOL)

    Quantitative analyses of shrinkage characteristics of neem (Azadirachta indica A. Juss.) wood were carried out. Forty five wood specimens were prepared from the three ecological zones of north eastern Nigeria, viz: sahel savanna, sudan savanna and guinea savanna for the research. The results indicated that the wood ...

  7. UMTS signal measurements with digital spectrum analysers

    International Nuclear Information System (INIS)

    Licitra, G.; Palazzuoli, D.; Ricci, A. S.; Silvi, A. M.

    2004-01-01

    The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has imposed the requirement of updating measurement instrumentation and methodologies. In order to define the most reliable measurement procedure for assessing exposure to electromagnetic fields, the features of modern spectrum analysers for correct signal characterisation have been reviewed. (authors)
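
    A routine task in such measurements is integrating a measured spectrum over the UMTS channel bandwidth to obtain channel power. The Python sketch below shows only the arithmetic; the trace, the span, and the assumption that each bin holds the power in its resolution bandwidth are fabricated for illustration:

        import numpy as np

        # Hypothetical spectrum-analyser trace: per-bin power in dBm
        freqs_mhz = np.linspace(2110.0, 2120.0, 1001)   # assumed sweep span
        trace_dbm = -95.0 + 25.0 * np.exp(-((freqs_mhz - 2115.0) / 2.0) ** 2)

        # Sum linear power over a 5 MHz channel centred at 2115 MHz
        in_band = np.abs(freqs_mhz - 2115.0) <= 2.5
        power_mw = np.sum(10.0 ** (trace_dbm[in_band] / 10.0))
        print(f"channel power ~ {10.0 * np.log10(power_mw):.1f} dBm")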

  8. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can...

  9. Micromechanical photothermal analyser of microfluidic samples

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a micromechanical photothermal analyser of microfluidic samples comprising an oblong micro-channel extending longitudinally from a support element, the micro-channel is made from at least two materials with different thermal expansion coefficients, wherein...

  10. Systematic review and meta-analyses

    DEFF Research Database (Denmark)

    Dreier, Julie Werenberg; Andersen, Anne-Marie Nybo; Berg-Beckhoff, Gabriele

    2014-01-01

    1990 were excluded. RESULTS: The available literature supported an increased risk of adverse offspring health in association with fever during pregnancy. The strongest evidence was available for neural tube defects, congenital heart defects, and oral clefts, in which meta-analyses suggested between a 1...

  11. Secondary analyses of organisational policy on psychosocial workload (PSA)

    NARCIS (Netherlands)

    Kraan, K.O.; Houtman, I.L.D.

    2016-01-01

    What organisational policy on psychosocial workload (PSA) looks like in 2014, and how it relates to other policies and to outcome measures, are the central questions of this study. The results of these in-depth analyses can benefit the ongoing campaign ‘Check je

  12. Exergoeconomic and environmental analyses of CO2/NH3 cascade refrigeration systems

    NARCIS (Netherlands)

    Mosaffa, A. H.; Garousi Farshi, L; Infante Ferreira, C.A.; Rosen, M. A.

    2016-01-01

    Exergoeconomic and environmental analyses are presented for two CO2/NH3 cascade refrigeration systems equipped with (1) two flash tanks and (2) a flash tank along with a flash intercooler with indirect subcooler. A comparative study is performed for the proposed systems, and

  13. Meta-analyses on viral hepatitis

    DEFF Research Database (Denmark)

    Gluud, Lise L; Gluud, Christian

    2009-01-01

    This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

  14. Multivariate differential analyses of adolescents' experiences of ...

    African Journals Online (AJOL)

    Aggression is reasoned to be dependent on aspects such as self-concept, moral reasoning, communication, frustration tolerance and family relationships. To analyse the data from questionnaires of 101 families (95 adolescents, 95 mothers and 91 fathers) Cronbach Alpha, various consecutive first and second order factor ...

  15. Chromosomal evolution and phylogenetic analyses in Tayassu ...

    Indian Academy of Sciences (India)

    Chromosome preparation and karyotype description. The material analysed consists of chromosome preparations of the tayassuid species T. pecari (three individuals) and P. tajacu (four individuals), made from short-term lymphocyte cultures of whole blood samples using standard protocols (Chaves et al. 2002).

  16. Database communication protocol analyses and security detection

    International Nuclear Information System (INIS)

    Luo Qun; Liu Qiushi

    2003-01-01

    In this paper we introduce an analysis of the TDS protocol used in the client-server communication of SYBASE and MICROSOFT SQL SERVER, and we test some bugs existing in the protocol. (authors)
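
    Analyses of this kind usually begin by decoding the fixed 8-byte TDS packet header. The Python sketch below reflects the commonly documented header layout (type, status, length, SPID, packet id, window); the example packet is fabricated:

        import struct

        def parse_tds_header(packet: bytes) -> dict:
            """Decode the 8-byte TDS packet header (big-endian length and SPID)."""
            ptype, status, length, spid, pkt_id, window = struct.unpack(
                ">BBHHBB", packet[:8])
            return {"type": ptype, "status": status, "length": length,
                    "spid": spid, "packet_id": pkt_id, "window": window}

        # Fabricated example: a SQL batch packet (type 0x01), end of message, 13 bytes
        example = bytes([0x01, 0x01, 0x00, 0x0D, 0x00, 0x00, 0x01, 0x00]) + b"hello"
        print(parse_tds_header(example))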

  17. Dynamic analyses, FPGA implementation and engineering ...

    Indian Academy of Sciences (India)

    QIANG LAI

    2017-12-14

    Dec 14, 2017 ... the model of the generalised Sprott C system and analyses its equilibria. ... the Matlab simulations. ... RNG design processes are given in Algorithm 1 as the ... RNG applications (simulation, modelling, arts, data hiding) ...

  18. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
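
    The inflation effect described above is easy to reproduce with a fixed-effect, inverse-variance pooled estimate. In the Python sketch below, all effect sizes and variances are fabricated for illustration; pooling published studies alone yields a larger estimate than pooling published and grey studies together:

        def pooled_effect(effects, variances):
            """Fixed-effect, inverse-variance weighted mean effect size."""
            weights = [1.0 / v for v in variances]
            return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

        # Fabricated standardized mean differences and their variances
        published = ([0.45, 0.50, 0.40], [0.020, 0.030, 0.025])
        grey = ([0.30, 0.25], [0.050, 0.040])

        pub_only = pooled_effect(*published)
        combined = pooled_effect(published[0] + grey[0], published[1] + grey[1])
        print(f"published only: {pub_only:.2f}; with grey literature: {combined:.2f}")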

  19. Diagnostic Comparison of Meteorological Analyses during the 2002 Antarctic Winter

    Science.gov (United States)

    Manney, Gloria L.; Allen, Douglas R.; Kruger, Kirstin; Naujokat, Barbara; Santee, Michelle L.; Sabutis, Joseph L.; Pawson, Steven; Swinbank, Richard; Randall, Cora E.; Simmons, Adrian J.

    2005-01-01

    studies (including polar process modeling) and studies involving synoptic evolution in the upper stratosphere. The operational assimilated datasets are better suited for most applications than the NCEP/CPC objective analyses and the reanalysis datasets.

  20. Thermal analyses. Information on the expected baking process; Thermische analyses. Informatie over een te verwachten bakgedrag

    Energy Technology Data Exchange (ETDEWEB)

    Van Wijck, H. [Stichting Technisch Centrum voor de Keramische Industrie TCKI, Velp (Netherlands)

    2009-09-01

    The design process and the drying process for architectural ceramics and pottery partly determine the characteristics of the final product, but the largest changes occur during the baking process. An overview is provided of the different thermal analyses and of how the information from these analyses can predict the behaviour to be expected in practice. (mk)

  1. Preliminary analyses of AP600 using RELAP5

    International Nuclear Information System (INIS)

    Modro, S.M.; Beelman, R.J.; Fisher, J.E.

    1991-01-01

    This paper presents results of preliminary analyses of the proposed Westinghouse Electric Corporation AP600 design. AP600 is a two-loop, 600 MW(e) pressurized water reactor (PWR) arranged in a two hot leg, four cold leg nuclear steam supply system (NSSS) configuration. In contrast to the present generation of PWRs, it is equipped with passive emergency core coolant (ECC) systems. Also, the containment and the safety systems of the AP600 interact with the reactor coolant system and with each other in a more integral fashion than in present-day PWRs. The containment in this design is the ultimate heat sink for removal of decay heat to the environment. The Idaho National Engineering Laboratory (INEL) has studied the applicability of the RELAP5 code to AP600 safety analysis and has developed a model of the AP600 for the Nuclear Regulatory Commission. The model incorporates integral modeling of the containment, NSSS, and passive safety systems. The best available preliminary design data were used. Nodalization sensitivity studies were conducted to gain experience in modeling systems and conditions that are beyond the applicability of previously established RELAP5 modeling guidelines or experience. Exploratory analyses were then undertaken to investigate the AP600 system response under postulated accident conditions. Four small-break LOCA calculations and two large-break LOCA calculations were conducted.

  2. Analyses and characterization of double shell tank

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are not only within safe operating limits but also relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  3. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    Hong, Sung Wan; Kim, Hee Dong

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by the NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and, if the screening analysis cannot provide a complete resolution, a scoping study. The following are the results concerning DCH loads, in descending order: 1. availability of ignition sources prior to vessel breach; 2. availability and effectiveness of ice in the ice condenser; 3. loads modeling uncertainties related to co-ejected RPV water; 4. other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  4. Standardized analyses of nuclear shipping containers

    International Nuclear Information System (INIS)

    Parks, C.V.; Hermann, O.W.; Petrie, L.M.; Hoffman, T.J.; Tang, J.S.; Landers, N.F.; Turner, W.D.

    1983-01-01

    This paper describes improved capabilities for analyses of nuclear fuel shipping containers within SCALE -- a modular code system for Standardized Computer Analyses for Licensing Evaluation. Criticality analysis improvements include the new KENO V.a code, which contains an enhanced geometry package, and a new control module that uses KENO V.a and allows a criticality search on optimum pitch (maximum k-effective) to be performed. The SAS2 sequence is a new shielding analysis module which couples fuel burnup, source term generation, and radial cask shielding. The SAS5 shielding sequence allows a multidimensional Monte Carlo analysis of a shipping cask with code-generated biasing of the particle histories. The thermal analysis sequence (HTAS1) provides an easy-to-use tool for evaluating a shipping cask response to accident conditions. Together these capabilities demonstrate the ability of the SCALE system to provide the cask designer or evaluator with a computational system offering automated procedures and easy-to-understand input that leads to standardization.
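
    The optimum-pitch search mentioned above is, at heart, a one-dimensional maximisation of k-effective over lattice pitch. The Python sketch below shows the structure of such a search with a golden-section routine; the quadratic keff stand-in is entirely fabricated, where a real search would invoke a transport calculation such as KENO:

        import math

        def keff(pitch_cm: float) -> float:
            """Hypothetical stand-in for a transport-code k-effective calculation."""
            return 0.95 - 0.08 * (pitch_cm - 1.6) ** 2  # fabricated peak near 1.6 cm

        def golden_max(f, lo, hi, tol=1e-4):
            """Golden-section search for the maximum of a unimodal function."""
            inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
            while hi - lo > tol:
                c = hi - inv_phi * (hi - lo)
                d = lo + inv_phi * (hi - lo)
                if f(c) > f(d):
                    hi = d
                else:
                    lo = c
            return (lo + hi) / 2.0

        best = golden_max(keff, 1.0, 2.5)
        print(f"optimum pitch ~ {best:.3f} cm, k-eff ~ {keff(best):.4f}")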

  5. Quantitative analysis and visualization of cardiac function

    Science.gov (United States)

    Sauer, Anne; Schwarz, Tobias; Engel, Nicole; Seitel, Mathias; Kenngott, Hannes; Mohrhardt, Carsten; Loßnitzer, Dirk; Giannitsis, Evangelos; Katus, Hugo A.; Meinzer, Hans-Peter

    Computer-assisted, image-based analysis of cardiac function is now standard in cardiology. The available products usually require a high degree of user interaction and thus extra time. This work presents an approach that enables the cardiologist to perform a largely automatic analysis of cardiac function from MRI image data, thereby saving time. All relevant cardio-physiological parameters are computed and visualized by means of diagrams and graphs. These computations are evaluated by comparing the derived values with manually measured ones. The resulting mean error of 2.85 mm for wall thickness and 1.61 mm for wall thickening still lies within the range of one pixel of the images used.

  6. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

    Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component and for the whole system were carefully considered. The exergoeconomic model, which represents the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely thermodynamic performance and sensitivity to changes in process and/or component design variables.
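
    A hedged sketch of the component-level cost balance that underlies such thermoeconomic models: for each component, the cost rate of the product stream equals the cost rate of the fuel stream plus the levelized capital and O&M charge, C_P = C_F + Z. All numbers below are fabricated for a single component:

        # Cost balance for one component: C_product = C_fuel + Z
        fuel_exergy_kw = 5000.0      # exergy rate of the fuel stream (assumed)
        product_exergy_kw = 3500.0   # exergy rate of the product stream (assumed)
        unit_fuel_cost = 0.02        # $/kWh of fuel exergy (assumed)
        z_rate = 40.0                # capital + O&M cost rate, $/h (assumed)

        c_fuel = unit_fuel_cost * fuel_exergy_kw           # $/h
        c_product = c_fuel + z_rate                        # $/h
        unit_product_cost = c_product / product_exergy_kw  # $/kWh of product exergy
        print(f"unit exergetic cost of product: {unit_product_cost:.4f} $/kWh")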

  7. Kinetic stability analyses in a bumpy cylinder

    International Nuclear Information System (INIS)

    Dominguez, R.R.; Berk, H.L.

    1981-01-01

    Recent interest in the ELMO Bumpy Torus (EBT) has prompted a number of stability analyses of both the hot electron rings and the toroidal plasma. Typically these works employ the local approximation, neglecting radial eigenmode structure and ballooning effects to perform the stability analysis. In the present work we develop a fully kinetic formalism for performing nonlocal stability analyses in a bumpy cylinder. We show that the Vlasov-Maxwell integral equations (with one ignorable coordinate) are self-adjoint and hence amenable to analysis using numerical techniques developed for self-adjoint systems of equations. The representation we obtain for the kernel of the Vlasov-Maxwell equations is a differential operator of arbitrarily high order. This form leads to a manifestly self-adjoint system of differential equations for long wavelength modes
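
    For readers unfamiliar with the term, the self-adjointness claimed here is the usual inner-product symmetry of the linearised operator; stated generically (this is the textbook property, not the paper's specific kernel):

        \[
        \langle \xi, \mathcal{L}\eta \rangle \;=\; \langle \mathcal{L}\xi, \eta \rangle
        \quad \text{for all admissible perturbations } \xi, \eta ,
        \]

    which is what makes the eigenvalue problem amenable to the variational and matrix techniques developed for self-adjoint systems.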

  8. Sectorial Group for Incident Analyses (GSAI)

    International Nuclear Information System (INIS)

    Galles, Q.; Gamo, J. M.; Jorda, M.; Sanchez-Garrido, P.; Lopez, F.; Asensio, L.; Reig, J.

    2013-01-01

    In 2008, the UNESA Nuclear Energy Committee (CEN) proposed the creation of a working group formed by experts from all Spanish NPPs with the purpose of jointly analysing relevant incidents that occurred in each of the plants. This initiative was a response to a historical situation in which the exchange of information on incidents between the Spanish NPPs was below the desired level. In June 2009, UNESA's Guide CEN-29 established the performance criteria for the so-called Sectorial Group for Incident Analyses (GSAI), whose activity would be coordinated by UNESA's Group of Operating Experience, under the Operations Commission (COP). (Author)

  9. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack......, and also tests for a thin ductile metal layer bonding two ceramic blocks have indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size-effects, and recent analyses......, while the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal....

  10. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region
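
    The gate-voltage-dependent activation energy discussed above is typically extracted from an Arrhenius plot of the drain current. A minimal Python sketch of that extraction, using fabricated currents and temperatures rather than data from the paper:

        import numpy as np

        # Fabricated drain currents at one gate voltage over a temperature series
        temps_k = np.array([200.0, 230.0, 260.0, 290.0, 320.0])
        k_b = 8.617e-5                                      # Boltzmann constant, eV/K
        currents = 1e-6 * np.exp(-0.15 / (k_b * temps_k))   # 0.15 eV assumed

        # ln(I) = ln(I0) - Ea/(kB*T), so the slope of ln(I) vs 1/T is -Ea/kB
        slope, _ = np.polyfit(1.0 / temps_k, np.log(currents), 1)
        print(f"extracted activation energy: {-slope * k_b:.3f} eV")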

  11. New environmental metabarcodes for analysing soil DNA

    DEFF Research Database (Denmark)

    Epp, Laura S.; Boessenkool, Sanne; Bellemain, Eva P.

    2012-01-01

    Metabarcoding approaches use total and typically degraded DNA from environmental samples to analyse biotic assemblages and can potentially be carried out for any kinds of organisms in an ecosystem. These analyses rely on specific markers, here called metabarcodes, which should be optimized for taxonomic resolution, minimal bias in amplification of the target organism group and short sequence length. Using bioinformatic tools, we developed metabarcodes for several groups of organisms: fungi, bryophytes, enchytraeids, beetles and birds. The ability of these metabarcodes to amplify the target groups was systematically evaluated by (i) in silico PCRs using all standard sequences in the EMBL public database as templates, (ii) in vitro PCRs of DNA extracts from surface soil samples from a site in Varanger, northern Norway and (iii) in vitro PCRs of DNA extracts from permanently frozen sediment samples of late...

  12. Visual analysis of e-mail traffic

    OpenAIRE

    Mansmann, Florian

    2003-01-01

    This work describes methods for the visual geographic analysis of e-mail traffic. Host addresses and IP addresses can be filtered out of an e-mail header. Using a database, geographic coordinates are assigned to these host and IP addresses. A visualization displays several thousand e-mail routes in a clear and concise way. In addition, interactive manipulation facilities are presented which allow a visual exploration of the data...

  13. BWR core melt progression phenomena: Experimental analyses

    International Nuclear Information System (INIS)

    Ott, L.J.

    1992-01-01

    In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA ex-reactor fuel-damage test facility. Additional integral data will be obtained from new CORA BWR tests, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of ex-reactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. The Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than have previously been available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of posttest analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models with materials interaction, relocation and blockage models are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  14. A low-cost GPS data analysis platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Christiansen, Nick; Larsen, Niels T.

    2011-01-01

    This article presents a complete software platform for the analysis of GPS data. The platform is built entirely from open-source components. The individual components of the platform are described in detail. Advantages and disadvantages of using open source are discussed, including which IT-policy initiatives ... organisations with a digital road map and GPS data can begin to carry out traffic analyses on these data. It is a requirement that adequate IT competencies are present in the organisation....

  15. Neuronal network analyses: premises, promises and uncertainties

    OpenAIRE

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the diffic...

  16. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.
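
    As a loose illustration of orientation analysis, the Python sketch below estimates the dominant orientation of a synthetic striped image with a gradient structure tensor; this is a deliberately simpler stand-in for the curvelet-based method the paper actually uses:

        import numpy as np

        # Synthetic "fibrous" image: stripes whose normal points at 30 degrees
        y, x = np.mgrid[0:128, 0:128]
        theta = np.deg2rad(30.0)
        img = np.sin(0.4 * (x * np.cos(theta) + y * np.sin(theta)))

        # Structure tensor from image gradients, averaged over the whole image
        gy, gx = np.gradient(img)
        jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()

        # Principal gradient direction (the stripe normal)
        angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
        print(f"estimated stripe normal: {np.rad2deg(angle):.1f} degrees")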

  17. An analyser for power plant operations

    International Nuclear Information System (INIS)

    Rogers, A.E.; Wulff, W.

    1990-01-01

    Safe and reliable operation of power plants is essential. Power plant operators need a forecast of what the plant will do when its current state is disturbed. The in-line plant analyser provides precisely this information at relatively low cost. The plant analyser scheme uses a mathematical model of the dynamic behaviour of the plant to establish a numerical simulation. Over a period of time, the simulation is calibrated with measurements from the particular plant in which it is used. The analyser then provides a reference against which to evaluate the plant's current behaviour. It can be used to alert the operator to any atypical excursions or combinations of readings that indicate malfunction or off-normal conditions that, as the Three Mile Island event suggests, are not easily recognised by operators. In a look-ahead mode, it can forecast the behaviour resulting from an intended change in settings or operating conditions. Then, when such changes are made, the plant's behaviour can be tracked against the forecast in order to assure that the plant is behaving as expected. It can be used to investigate malfunctions that have occurred and test possible adjustments in operating procedures. Finally, it can be used to consider how far from the limits of performance the elements of the plant are operating. Then by adjusting settings, the required power can be generated with as little stress as possible on the equipment. (6 figures) (Author)
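
    The tracking idea in the abstract reduces to comparing live readings against the simulated reference and alarming on the residual. A minimal Python sketch with fabricated readings and an assumed tolerance band:

        simulated = [540.0, 541.2, 542.5, 543.1, 543.8]   # model forecast, deg C (fabricated)
        measured  = [540.1, 541.0, 542.7, 547.9, 544.0]   # plant readings (fabricated)
        tolerance = 2.0                                    # assumed alert band, deg C

        for step, (sim, meas) in enumerate(zip(simulated, measured), start=1):
            residual = meas - sim
            if abs(residual) > tolerance:
                print(f"step {step}: residual {residual:+.1f} C exceeds band -> alert operator")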

  18. Finite element analyses for seismic shear wall international standard problem

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.

    1998-04-01

    Two identical reinforced concrete (RC) shear walls, which consist of web, flanges, and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development) Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state of the art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant in the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static, and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs