WorldWideScience

Sample records for scale analyse technique

  1. Techniques for Scaling Up Analyses Based on Pre-interpretations

    DEFF Research Database (Denmark)

    Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

    2005-01-01

    a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method...

  2. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of inter-related preparatory steps, from finding gene families, to generating alignments and phylogenetic trees, and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
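
    The core of the Boolean viewshed computation mentioned above can be illustrated with a short, self-contained sketch: for every target cell, the sight line from the observer is sampled, and the cell is marked invisible if any intermediate surface point rises above that line. The grid, observer position and eye height below are hypothetical, and the nearest-neighbour ray sampling is a simplification of what a full GIS implementation would do.

    import numpy as np

    def boolean_viewshed(dsm, observer_rc, observer_height=1.6, n_samples=64):
        """Simple line-of-sight Boolean viewshed on a raster surface model (DSM)."""
        rows, cols = dsm.shape
        orow, ocol = observer_rc
        oz = dsm[orow, ocol] + observer_height          # eye elevation
        visible = np.zeros(dsm.shape, dtype=bool)
        visible[orow, ocol] = True

        t = np.linspace(0.0, 1.0, n_samples)
        for r in range(rows):
            for c in range(cols):
                if (r, c) == (orow, ocol):
                    continue
                # Sample the sight line between observer and target cell centres.
                ri = np.round(orow + t * (r - orow)).astype(int)
                ci = np.round(ocol + t * (c - ocol)).astype(int)
                # Ignore samples that fall on the observer or target cell itself.
                keep = ~(((ri == orow) & (ci == ocol)) | ((ri == r) & (ci == c)))
                surface = dsm[ri[keep], ci[keep]]
                sightline = oz + t[keep] * (dsm[r, c] - oz)
                visible[r, c] = bool(np.all(surface <= sightline))
        return visible

    # Tiny synthetic example: a 10 m wall in column 4 hides the cells behind it.
    dsm = np.zeros((5, 9))
    dsm[:, 4] = 10.0
    print(boolean_viewshed(dsm, observer_rc=(2, 0)).astype(int))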

  4. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger

    2013-05-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
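
    A minimal sketch of the hub-aware edge partitioning idea described above is shown below; it is not taken from the authors' distributed-memory implementation. Edges between low-degree vertices are placed by hashing an endpoint, while edges incident to high-degree hubs are scattered across partitions, and each receiving partition keeps a local ghost copy of the hub so that updates can be aggregated before any communication.

    from collections import defaultdict

    def partition_edges(edges, num_parts, hub_threshold=1000):
        """Assign each undirected edge to a partition.

        Edges between low-degree vertices are placed by hashing one endpoint.
        Edges incident to hubs (degree > hub_threshold) are scattered
        round-robin across partitions; each partition that receives such an
        edge records a ghost copy of the hub vertex.
        """
        degree = defaultdict(int)
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1

        parts = [[] for _ in range(num_parts)]       # edge lists per partition
        ghosts = [set() for _ in range(num_parts)]   # hub vertices mirrored locally
        rr = 0
        for u, v in edges:
            u_hub = degree[u] > hub_threshold
            v_hub = degree[v] > hub_threshold
            if u_hub or v_hub:
                p = rr % num_parts                   # scatter hub edges
                rr += 1
                for w, is_hub in ((u, u_hub), (v, v_hub)):
                    if is_hub:
                        ghosts[p].add(w)
            else:
                p = hash(u) % num_parts              # keep low-degree work local
            parts[p].append((u, v))
        return parts, ghosts

    In a BFS or triangle-counting pass, each partition would update its ghost copies locally and exchange only one aggregated message per hub, which is what removes the communication hotspots that hubs otherwise create.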

  5. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2013-01-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash

  6. Lightweight and Statistical Techniques for Petascale Debugging

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger, or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leaving a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis
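
    The central idea behind stack-trace-based tools such as STAT, collapsing many thousands of MPI tasks into a few behavioural equivalence classes so that a traditional debugger only needs to be attached to a representative of each class, can be sketched in a few lines. The trace format and ranks below are made up for illustration and are not the tool's actual data model.

    from collections import defaultdict

    def group_by_stack_trace(traces):
        """Group MPI ranks by their (identical) call stacks.

        traces: dict mapping rank -> tuple of frames, innermost last,
                e.g. {0: ("main", "solve", "MPI_Allreduce"), ...}
        Returns a list of (stack, sorted ranks), largest group first.
        """
        groups = defaultdict(list)
        for rank, stack in traces.items():
            groups[tuple(stack)].append(rank)
        return sorted(((s, sorted(r)) for s, r in groups.items()),
                      key=lambda item: len(item[1]), reverse=True)

    # Toy example: 6 ranks, one of which is stuck somewhere unusual.
    traces = {
        0: ("main", "solve", "MPI_Allreduce"),
        1: ("main", "solve", "MPI_Allreduce"),
        2: ("main", "solve", "MPI_Allreduce"),
        3: ("main", "io", "write_checkpoint"),     # the outlier worth debugging
        4: ("main", "solve", "MPI_Allreduce"),
        5: ("main", "solve", "MPI_Allreduce"),
    }
    for stack, ranks in group_by_stack_trace(traces):
        print(len(ranks), "ranks:", " > ".join(stack), ranks)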

  7. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  8. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  9. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and a two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The development of the timber post placement scheme was based upon yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab-with-beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that the required hardness may be achieved through simple upgrading techniques

  10. Iterative categorization (IC): a systematic technique for analysing qualitative data

    Science.gov (United States)

    2016-01-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  11. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  12. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  13. Iterative categorization (IC): a systematic technique for analysing qualitative data.

    Science.gov (United States)

    Neale, Joanne

    2016-06-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  14. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
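
    As a hedged illustration of the kind of drug-versus-adverse-event screening that sits behind a volcano plot, the sketch below computes, for a single hypothetical drug-event pair, an effect-size coordinate (a reporting odds ratio from a 2x2 table of spontaneous reports) and a significance coordinate (Fisher's exact test). The counts are synthetic, and the exact statistics used in the paper may differ from this formulation.

    import numpy as np
    from scipy.stats import fisher_exact

    def volcano_point(a, b, c, d):
        """Coordinates of one drug-event pair on a volcano plot.

        2x2 contingency table of spontaneous reports:
            a: reports with the drug and the event
            b: reports with the drug, without the event
            c: reports without the drug, with the event
            d: reports with neither
        Returns (log2 reporting odds ratio, -log10 p-value).
        """
        # Haldane-Anscombe correction avoids division by zero for sparse cells.
        ror = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
        _, p = fisher_exact([[a, b], [c, d]])
        return np.log2(ror), -np.log10(p)

    # Synthetic example: the event is reported noticeably more often with the drug.
    x, y = volcano_point(a=12, b=988, c=100, d=98900)
    print(f"log2(ROR) = {x:.2f}, -log10(p) = {y:.1f}")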

  15. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    With the coming deluge of genome data, the need to store and process large-scale genome data, to provide easy access to biomedical analysis tools, and to support efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method and a Dynamic Kriging (DKG) method, is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when the metamodels are used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, the Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
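
    A minimal version of the convergence study described above, fitting a surrogate to a known function with increasing numbers of training points and tracking the reconstruction error, might look as follows. A scikit-learn Gaussian-process regressor stands in for the Kriging metamodel, and the test function is arbitrary; this is a sketch, not the authors' code.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def true_response(x):
        # Known "micro-scale" function used to test surrogate convergence.
        return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

    rng = np.random.default_rng(0)
    x_test = rng.uniform(0, 1, size=(500, 2))
    y_test = true_response(x_test)

    for n_train in (10, 25, 50, 100, 200):
        x_train = rng.uniform(0, 1, size=(n_train, 2))
        y_train = true_response(x_train)
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                      normalize_y=True)
        gp.fit(x_train, y_train)
        err = np.sqrt(np.mean((gp.predict(x_test) - y_test) ** 2))
        print(f"{n_train:4d} training points -> RMSE {err:.4f}")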

  17. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Research on auditory-visual correspondences has a long tradition, but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP) regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
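
    The covariance structure reported as adequate above, a linear plus squared-exponential kernel, maps directly onto off-the-shelf GP tooling; in scikit-learn the DotProduct kernel plays the role of the linear term and RBF the squared-exponential. The drawing-like data below are synthetic, and this sketch is only an illustration of the kernel choice, not the study's analysis code.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import DotProduct, RBF, WhiteKernel

    # Synthetic stand-in for one participant's drawing: pen height y over time t.
    t = np.linspace(0, 10, 120).reshape(-1, 1)
    y = 0.4 * t.ravel() + np.sin(t.ravel()) + 0.1 * np.random.default_rng(1).normal(size=120)

    # Linear (DotProduct) + squared-exponential (RBF) covariance, plus a noise term.
    kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(t, y)

    print("optimised kernel:", gp.kernel_)
    mean, std = gp.predict(t, return_std=True)
    print("fit RMSE:", float(np.sqrt(np.mean((mean - y) ** 2))))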

  18. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results, and how to make use of various kinds of stored search results to address aspects of comparative genomic analysis.
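
    The general pattern the unit describes, keeping sequences and similarity-search hits in relational tables and then carving out library subsets and querying stored results with SQL, can be sketched with Python's built-in sqlite3 module. The schema, table names and values below are illustrative assumptions, not the actual seqdb_demo schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE sequences (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
    CREATE TABLE hits (query TEXT, subject TEXT, evalue REAL, identity REAL,
                       FOREIGN KEY (subject) REFERENCES sequences(acc));
    """)

    conn.executemany("INSERT INTO sequences VALUES (?, ?, ?)", [
        ("P001", "E. coli", "MKT..."),
        ("P002", "E. coli", "MAD..."),
        ("P003", "H. sapiens", "MSE..."),
    ])
    conn.executemany("INSERT INTO hits VALUES (?, ?, ?, ?)", [
        ("Q1", "P001", 1e-40, 62.0),
        ("Q1", "P003", 0.2, 28.0),
        ("Q2", "P002", 1e-12, 45.0),
    ])

    # A library subset restricted to one taxon, as used to sharpen search statistics.
    subset = conn.execute(
        "SELECT acc FROM sequences WHERE taxon = ?", ("E. coli",)).fetchall()
    print("E. coli subset:", [a for (a,) in subset])

    # Stored search results queried for significant homologs only.
    strong = conn.execute(
        "SELECT query, subject, evalue FROM hits WHERE evalue < 1e-5").fetchall()
    print("significant hits:", strong)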

  19. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One of the promising applications of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience on RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup which is comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
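
    The mean residence time in such a study is typically obtained from the measured tracer response curve as the first moment of the normalized residence time distribution, MRT = ∫ t E(t) dt with E(t) = C(t) / ∫ C(t) dt. A short numerical sketch with a synthetic detector curve is given below; the pulse shape and time base are assumptions, not the experimental data.

    import numpy as np

    # Synthetic detector response C(t) at the tank outlet (counts versus time in s).
    t = np.linspace(0.0, 3600.0, 721)
    dt = t[1] - t[0]
    c = np.exp(-(t - 900.0) ** 2 / (2.0 * 250.0 ** 2))   # assumed tracer pulse shape

    # Normalise to the residence time distribution E(t), then take its first moment.
    e = c / (np.sum(c) * dt)
    mrt = np.sum(t * e) * dt
    print(f"mean residence time = {mrt:.0f} s")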

  20. Novel hybrid Monte Carlo/deterministic technique for shutdown dose rate analyses of fusion energy systems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2014-01-01

    Highlights: • Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. • Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. • Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two-step calculations of the shutdown dose rate in fusion reactors. Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000

  1. SCALE Graphical Developments for Improved Criticality Safety Analyses

    International Nuclear Information System (INIS)

    Barnett, D.L.; Bowman, S.M.; Horwedel, J.E.; Petrie, L.M.

    1999-01-01

    New computer graphics developments at Oak Ridge National Laboratory (ORNL) are being used to provide visualization of criticality safety models and calculational results, as well as tools for criticality safety analysis input preparation. The purpose of this paper is to present the status of current development efforts to continue to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system. Applications for criticality safety analysis in the areas of three-dimensional (3-D) model visualization, input preparation and execution via a graphical user interface (GUI), and two-dimensional (2-D) plotting of results are discussed

  2. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over past decades, analyses of transient processes and accidents in a nuclear power plant have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (e.g. in the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used, broadly, to perform analysis of multi-dimensional flow, dominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but can only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework led to the development of the heterogeneous multi-scale method (HMM)

  3. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
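
    The effect of a sketching matrix, compressing a tall set of observations before the solve while keeping the information needed for the inversion, can be demonstrated on an ordinary least-squares problem. A plain Gaussian sketch is used below for illustration; RGA itself embeds the idea inside the PCGA machinery and operates at far larger scale.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tall synthetic linear "forward model": many observations, few parameters.
    n_obs, n_par = 20_000, 50
    H = rng.normal(size=(n_obs, n_par))
    x_true = rng.normal(size=n_par)
    d = H @ x_true + 0.01 * rng.normal(size=n_obs)

    # Gaussian sketching matrix S with far fewer rows than there are observations.
    k = 200
    S = rng.normal(size=(k, n_obs)) / np.sqrt(k)

    # Solve the sketched problem min ||S (H x - d)|| instead of the full one.
    x_sketch, *_ = np.linalg.lstsq(S @ H, S @ d, rcond=None)
    x_full, *_ = np.linalg.lstsq(H, d, rcond=None)

    err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print("relative error, full problem    :", err(x_full))
    print("relative error, sketched problem:", err(x_sketch))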

  4. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    Science.gov (United States)

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree

  5. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    International Nuclear Information System (INIS)

    Williams, Paul T.; Yin, Shengjun; Klasky, Hilda B.; Bass, Bennett Richard

    2011-01-01

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and performed to evaluate several experiment designs taking into account the capabilities of the test facility while satisfying the test objectives. Then these advanced fracture mechanics models will be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite

  6. Mathematical analysis of the dimensional scaling technique for the Schroedinger equation with power-law potentials

    International Nuclear Information System (INIS)

    Ding Zhonghai; Chen, Goong; Lin, Chang-Shou

    2010-01-01

    The dimensional scaling (D-scaling) technique is an innovative asymptotic expansion approach to studying multiparticle systems in molecular quantum mechanics. It enables the calculation of ground and excited state energies of quantum systems without having to solve the Schroedinger equation. In this paper, we present a mathematical analysis of the D-scaling technique for the Schroedinger equation with power-law potentials. By casting the technique in an appropriate variational setting and studying the corresponding minimization problem, the D-scaling technique is justified rigorously. A new asymptotic dimensional expansion scheme is introduced to compute asymptotic expansions for ground state energies.
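
    As a textbook-style illustration of the large-D limit that the paper analyses (stated here in scaled units, and not taken from the paper itself), the ground-state energy for a power-law potential reduces to the minimization of an effective potential rather than the solution of a differential equation:

    \[
      E_{\infty} \;=\; \min_{r>0} W(r),
      \qquad
      W(r) \;=\; \frac{1}{2r^{2}} + \lambda r^{\nu},
    \]

    with the minimizer determined by $W'(r_m) = -r_m^{-3} + \lambda\nu\, r_m^{\nu-1} = 0$, i.e. $r_m = (\lambda\nu)^{-1/(\nu+2)}$; finite-$D$ corrections are then developed as an expansion about this minimum.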

  7. Development of a Scaling Technique for Sociometric Data.

    Science.gov (United States)

    Peper, John B.; Chansky, Norman M.

    This study explored the stability and interjudge agreements of a sociometric scaling device to which children could easily respond, which teachers could easily administer and score, and which provided scores that researchers could use in parametric statistical analyses. Each student was paired with every other member of his class. He voted on each…

  8. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    Science.gov (United States)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses of sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher quality ensemble members), and the least squares algorithm being filtered a posteriori.
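
    The regression step of such a super-ensemble, learning weights for the ensemble members over a short training window, optionally after filtering the member space with leading EOFs/principal components, can be sketched as follows. The members, biases and observations are synthetic, and this is not the operational MMSE code.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_days, n_points, n_members = 15, 400, 6        # 15-day training window
    biases = rng.normal(0.0, 0.5, size=n_members)   # each member has its own bias

    def make_day(truth_field):
        """Synthetic ensemble members for one day: truth + member bias + noise."""
        noise = 0.3 * rng.normal(size=(truth_field.size, n_members))
        return truth_field[:, None] + biases[None, :] + noise

    # Training window: "observed" SST fields and the corresponding member forecasts.
    truth = rng.normal(20.0, 1.0, size=(n_days, n_points))
    members = np.stack([make_day(truth[d]) for d in range(n_days)])  # (day, point, member)

    X = members.reshape(-1, n_members)     # one column per ensemble member
    y = truth.reshape(-1)

    # EOF-style filtering of member space followed by multi-linear regression.
    pca = PCA(n_components=4).fit(X)
    reg = LinearRegression().fit(pca.transform(X), y)

    # Apply the trained super-ensemble weights to a new analysis day.
    new_truth = rng.normal(20.0, 1.0, size=n_points)
    new_members = make_day(new_truth)
    sse = reg.predict(pca.transform(new_members))
    print("RMSE ensemble mean :", np.sqrt(np.mean((new_members.mean(axis=1) - new_truth) ** 2)))
    print("RMSE super-ensemble:", np.sqrt(np.mean((sse - new_truth) ** 2)))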

  9. Use of the modal superposition technique for piping system blowdown analyses

    International Nuclear Information System (INIS)

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect of higher modes on the calculated response is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well-defined cutoff frequency for determining the structural modes to be included has not been established. This paper outlines a method for higher mode corrections and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results
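
    A compact sketch of the modal superposition recipe, solving the generalized eigenproblem, keeping only modes below a cutoff frequency and summing their contributions, is given below for a small lumped spring-mass model. The matrices, the load and the 33 Hz cutoff are illustrative assumptions only.

    import numpy as np
    from scipy.linalg import eigh

    # Small lumped spring-mass model: mass and stiffness matrices (SI units).
    M = np.diag([2.0, 2.0, 1.0, 1.0])
    K = 3.0e4 * np.array([[ 4.0, -2.0,  0.0,  0.0],
                          [-2.0,  4.0, -2.0,  0.0],
                          [ 0.0, -2.0,  3.0, -1.0],
                          [ 0.0,  0.0, -1.0,  1.0]])

    # Generalised eigenproblem K*phi = w^2 * M*phi gives mode shapes and frequencies;
    # eigh returns mass-normalised mode shapes (phi^T M phi = I).
    w2, phi = eigh(K, M)
    freq_hz = np.sqrt(w2) / (2.0 * np.pi)

    cutoff_hz = 33.0                      # example cutoff, as in seismic practice
    keep = freq_hz <= cutoff_hz
    print("modal frequencies (Hz):", np.round(freq_hz, 1), "kept:", keep)

    # Response to a static force using only the retained modes:
    # x ~ sum_i phi_i (phi_i^T f) / w_i^2, exact only if all modes are kept.
    f = np.array([0.0, 0.0, 0.0, 1.0e3])
    x_trunc = phi[:, keep] @ ((phi[:, keep].T @ f) / w2[keep])
    x_exact = np.linalg.solve(K, f)
    print("truncated-modal displacement:", x_trunc)
    print("exact displacement          :", x_exact)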

  10. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code to predict the plastic-strain-induced texture evolution, yield loci and formability of sheet metal. The double-scale structure consists of a crystal aggregate (the micro-structure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured using an SEM-EBSD apparatus, and a unit cell of the micro-structure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. The results reveal that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture and anisotropic hardening evolutions and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations

  11. A reduced scale two loop PWR core designed with particle swarm optimization technique

    International Nuclear Information System (INIS)

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or equipment with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and at operating conditions is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations, using the Particle Swarm Optimization (PSO) technique, to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. The obtained results show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)
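
    A bare-bones particle swarm optimizer of the kind used in such design studies fits in a few lines. The objective below is a placeholder (a simple distance-to-target function standing in for the reduced-scale core similarity criteria), and the parameter bounds are made up for illustration.

    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (global-best topology)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
        v = np.zeros_like(x)                                    # velocities
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Placeholder objective: sum of squared deviations from a design target vector.
    target = np.array([3.2, 0.8, 120.0])
    best, best_val = pso(lambda p: float(np.sum((p - target) ** 2)),
                         bounds=[(0, 10), (0, 2), (50, 200)])
    print("best design vector:", np.round(best, 3), " objective:", best_val)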

  12. Advanced techniques for energy-efficient industrial-scale continuous chromatography

    Energy Technology Data Exchange (ETDEWEB)

    DeCarli, J.P. II (Dow Chemical Co., Midland, MI (USA)); Carta, G. (Virginia Univ., Charlottesville, VA (USA). Dept. of Chemical Engineering); Byers, C.H. (Oak Ridge National Lab., TN (USA))

    1989-11-01

    Continuous annular chromatography (CAC) is a developing technology that allows truly continuous chromatographic separations. Previous work has demonstrated the utility of this technology for the separation of various materials by isocratic elution on a bench scale. Novel applications and improved operation of the process were studied in this work, demonstrating that CAC is a versatile apparatus which is capable of separations at high throughput. Three specific separation systems were investigated. Pilot-scale separations at high loadings were performed using an industrial sugar mixture as an example of scale-up for isocratic separations. Bench-scale experiments on a low-concentration metal ion mixture were performed to demonstrate stepwise elution, a chromatographic technique which decreases dilution and increases sorbent capacity. Finally, the separation of mixtures of amino acids by ion exchange was investigated to demonstrate the use of displacement development on the CAC. This technique, which perhaps has the most potential, allowed simultaneous separation and concentration of multicomponent mixtures on a continuous basis when applied to the CAC. Mathematical models were developed to describe the CAC performance and optimize the operating conditions. For all the systems investigated, the continuous separation performance of the CAC was found to be very nearly the same as the batchwise performance of conventional chromatography. The technology thus appears to be very promising for industrial applications. 43 figs., 9 tabs.

  13. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    Science.gov (United States)

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the potential of VASs to overcome the limitations of response styles in Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability and parameter recovery and reduced response-style bias. Finally, we have also designed a VAS-RRP Generator to help researchers construct and administer their own VAS-RRPs.

  14. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward, and little work has been done on this subject. In this paper, we propose a new method which allows us to get around both difficulties. As the two major problems are closely related, a global approach preserves the accuracy. In the proposed method, the permeability up-scaling technique is integrated into the discretized numerical scheme for flow simulation. The permeability is scaled up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is studied in particular, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization in which the full permeability tensor is scaled up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
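
    The notion of scaling permeability through the transmissibility term of a finite-volume scheme can be illustrated in one dimension, where the upscaled inter-block transmissibility of fine cells in series reduces to a harmonic-type combination of their individual transmissibilities. This deliberately simplified, scalar 1-D sketch ignores the full-tensor aspects that the paper addresses.

    import numpy as np

    def upscaled_transmissibility(k_fine, dx, area=1.0):
        """Series (harmonic-type) combination of fine-cell transmissibilities
        along a 1-D row of cells, as used between two coarse grid blocks.

        k_fine : fine-scale permeabilities of the cells between block centres
        dx     : corresponding cell lengths
        """
        # Each fine cell contributes a transmissibility T_i = k_i * A / dx_i;
        # cells in series combine like electrical resistances: 1/T = sum(1/T_i).
        t_fine = k_fine * area / dx
        return 1.0 / np.sum(1.0 / t_fine)

    k_fine = np.array([100.0, 5.0, 200.0, 50.0])   # mD, with a low-permeability streak
    dx = np.full(4, 0.25)                          # m
    print("upscaled transmissibility :", upscaled_transmissibility(k_fine, dx))
    print("arithmetic-average estimate:", np.mean(k_fine) * 1.0 / dx.sum())

    The low-permeability streak dominates the series combination, which is why a simple arithmetic average of permeabilities would badly overestimate the inter-block flow.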

  15. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  16. Regional scales of fire danger rating in the forest: improved technique

    Directory of Open Access Journals (Sweden)

    A. V. Volokitina

    2017-04-01

    Wildland fires are distributed unevenly in time and space under the influence of weather and other factors. It is not feasible to air-patrol the whole forest area daily during a fire season, or to keep all fire suppression forces constantly on alert. The daily work and preparedness of forest fire protection services are regulated by the level of fire danger according to weather conditions (Nesterov's index, PV-1 index), fire hazard class (Melekhov's scale), and regional scales (earlier called local scales). Unfortunately, there is still no unified, comparable technique for making regional scales. As a result, it is difficult to maneuver forest fire protection resources, since the techniques currently used are not approved and not tested for their performance. They give fire danger ratings that are incomparable even for neighboring regions. The paper analyzes the state of the art in Russia and abroad. Ironically, while the factors of fire danger are measured quantitatively, the fire danger itself, as a function of these factors, has no quantitative expression. Thus, selection of an absolute criterion is of high importance for the improvement of daily fire danger rating. Using the example of the Chunsky forest ranger station (Krasnoyarsk Krai), an improved technique is suggested for making comparable local scales of forest fire danger rating based on an absolute criterion – the probable density of active fires per million ha. A method and an algorithm for automated local scales of fire danger are described that should facilitate the effective creation of similar scales for any forest ranger station or aviation regional office using a database on forest fires and weather conditions. The remote monitoring information system of the Federal Forestry Agency of Russia is analyzed for its applicability to making local scales. To supplement the existing weather station network, it is suggested that automatic compact weather stations or, if the latter is not possible, simple

  17. Are Moral Disengagement, Neutralization Techniques, and Self-Serving Cognitive Distortions the Same? Developing a Unified Scale of Moral Neutralization of Aggression

    Directory of Open Access Journals (Sweden)

    Denis Ribeaud

    2010-12-01

    Can the three concepts of Neutralization Techniques, Moral Disengagement, and Secondary Self-Serving Cognitive Distortions be conceived theoretically and empirically as capturing the same cognitive processes and thus be measured with one single scale of Moral Neutralization? First, we show how the different approaches overlap conceptually. Second, in Study 1, we verify that four scales derived from the three conceptions of Moral Neutralization are correlated in such a way that they can be conceived as measuring the same phenomenon. Third, building on the results of Study 1, we derive a unified scale of Moral Neutralization which specifically focuses on the neutralization of aggression and test it in a large general population sample of preadolescents (Study 2). Confirmatory factor analyses suggest a good internal consistency and acceptable cross-gender factorial invariance. Correlation analyses with related behavioral and cognitive constructs corroborate the scale’s criterion and convergent validity. In the final section we present a possible integration of Moral Neutralization in a broader framework of crime causation.

  18. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now being applied to analyses of processing plasmas. (author)

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  1. Medium scale test study of chemical cleaning technique for secondary side of SG in PWR

    International Nuclear Information System (INIS)

    Zhang Mengqin; Zhang Shufeng; Yu Jinghua; Hou Shufeng

    1997-08-01

    The medium-scale test study of a chemical cleaning technique for removing corrosion product (Fe3O4) from the secondary side of the SG in a PWR has been completed. The test was carried out in a medium-scale test loop. It evaluated the effect of the chemical cleaning conditions (temperature, flow rate, cleaning time, cleaning process) and of the state of corrosion product deposition on the magnetite (Fe3O4) solubility, as well as the safety of the SG materials during the cleaning process. The inhibitor component of the chemical cleaning agent was improved by the electrochemical linear polarization method, the effect of the inhibitor on the corrosion resistance of the materials was examined in the medium-scale test loop, and the main components of the chemical cleaning agent were determined, with EDTA as the principal component. An electrochemical method for monitoring the corrosion of materials during the cleaning process was completed in the laboratory. The medium-scale test study yielded an optimized chemical cleaning procedure for removing corrosion product from the SG secondary side of a PWR. (9 refs., 4 figs., 11 tabs.)

  2. Recent Regional Climate State and Change - Derived through Downscaling Homogeneous Large-scale Components of Re-analyses

    Science.gov (United States)

    Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.

    2015-12-01

    Global re-analyses suffer from inhomogeneities, as they process data from networks under development. However, the large-scale component of such re-analyses is mostly homogeneous; additional observational data mostly improve the description of regional details and less so that of large-scale states. Therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the re-analyses with regional detail - wherever the condition of homogeneity of the large scales is fulfilled. Technically this can be done by using a regional climate model, or a global climate model, which is constrained on the large scale by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional risks - in particular marine risks - was identified. While the data density in Europe is considerably better than in most other regions of the world, even here insufficient spatial and temporal coverage is limiting risk assessments. Therefore, downscaled data sets are frequently used by off-shore industries. We have run this system also in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia and the adjacent Atlantic Ocean. A global (large-scale constrained) simulation has also been performed. It turns out that spatially detailed reconstruction of the state and change of climate over the past three to six decades is feasible for any region of the world. The different data sets are archived and may freely be used for scientific purposes. Of course, before application, a careful analysis of the quality for the intended application is needed, as sometimes unexpected changes in the quality of the description of large-scale driving states prevail.
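
    A schematic form of the spectral nudging term mentioned above is sketched below in LaTeX; the symbols (spectral coefficient, relaxation coefficient, cutoff wavenumber) are generic illustrations and are assumptions, not the specific formulation used in the cited work.

```latex
% Spectral nudging (schematic): the model spectral coefficient alpha_k
% is relaxed toward the (re)analysis value, but only on large scales
% (wavenumbers below a cutoff k_c); eta_k is the relaxation coefficient.
\frac{\partial \alpha_k^{\,m}}{\partial t}
  = \left(\frac{\partial \alpha_k^{\,m}}{\partial t}\right)_{\text{model}}
  - \eta_k \left(\alpha_k^{\,m} - \alpha_k^{\,a}\right),
\qquad \eta_k = 0 \ \text{for} \ |k| \ge k_c .
```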

  3. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    Science.gov (United States)

    2001-01-01

    This document presents the full-scale analyses of the CFD RSRM. The RSRM model was developed with a 20 second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) Completion of 3-D Analysis of the hot air nozzle manifold; (7) Bates Motor distributed combustion test case; and (8) Three Dimensional Polysulfide Bump Analysis.

  4. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    International Nuclear Information System (INIS)

    Hola, Marketa; Kalvoda, Jiri; Novakova, Hana; Skoda, Radek; Kanicky, Viktor

    2011-01-01

    LA-ICP-MS and solution based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  5. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    Energy Technology Data Exchange (ETDEWEB)

    Hola, Marketa [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Kalvoda, Jiri, E-mail: jkalvoda@centrum.cz [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Novakova, Hana [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Skoda, Radek [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Kanicky, Viktor [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic)

    2011-01-01

    LA-ICP-MS and solution based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  6. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmosphere Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  7. How scaling fluctuation analyses can transform our view of the climate

    Science.gov (United States)

    Lovejoy, Shaun; Schertzer, Daniel

    2013-04-01

    There exists a bewildering diversity of proxy climate data including tree rings, ice cores, lake varves, boreholes, pollen, foraminifera, corals and speleothems. Their quantitative use raises numerous questions of interpretation and calibration. Even in classical cases - such as the isotope signal in ice cores - the usual assumption of linear dependence on ambient temperature is only a first approximation. In other cases - such as speleothems - the isotope signals arise from multiple causes (which are not always understood) and this hinders their widespread use. We argue that traditional interpretations and calibrations - based on essentially deterministic comparisons between instrumental data, model outputs and proxies (albeit with the help of uncertainty analyses) - have been overly ambitious while simultaneously underexploiting the data. The former since comparisons typically involve series at different temporal resolutions and from different geographical locations - one does not expect agreement in a deterministic sense, while with respect to climate models, one only expects statistical correspondences. The proxies are underexploited since comparisons are done at unique temporal and/or spatial resolutions whereas the fluctuations they describe provide information over wide ranges of scale. A convenient method of overcoming these difficulties is the use of fluctuation analysis systematically applied over the full range of available scales to determine the scaling properties. The new transformative element presented here is to define fluctuations ΔT in a series T(t) at scale Δt not by differences (ΔT(Δt) = T(t+Δt) - T(t)) but rather by the difference in the means over the first and second halves of the lag Δt. This seemingly minor change - technically from "poor man's" to "Haar" wavelets - turns out to make a huge difference since, for example, it is adequate for analysing temperatures from seconds to hundreds of millions of years yet
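
    A minimal Python sketch of the Haar fluctuation described above (the difference between the means over the second and first halves of each lag) is given below; the function name, window stepping and synthetic series are illustrative assumptions, not the authors' code.

```python
import numpy as np

def haar_fluctuations(series, lags):
    """Mean absolute Haar fluctuation of a 1-D series at each lag.

    The fluctuation at lag dt is the mean over the second half of a
    window of length dt minus the mean over its first half
    (illustrative implementation; no detrending or gap handling).
    """
    series = np.asarray(series, dtype=float)
    results = []
    for dt in lags:
        half = dt // 2
        if half < 1 or dt > len(series):
            results.append(np.nan)
            continue
        flucts = []
        for start in range(0, len(series) - dt + 1, half):
            first = series[start:start + half].mean()
            second = series[start + half:start + dt].mean()
            flucts.append(second - first)
        results.append(np.mean(np.abs(flucts)))
    return np.array(results)

# Example: fluctuations of a synthetic temperature-like series
rng = np.random.default_rng(0)
t = np.cumsum(rng.normal(size=4096))          # random-walk-like series
print(haar_fluctuations(t, [4, 8, 16, 32, 64, 128]))
```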

  8. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can mainly be described quantitatively from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method is constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory is used to extract statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory is used to correct the multiple surface reflectance datasets on the basis of the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
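
    The Python sketch below illustrates, under strong simplifying assumptions, the idea of correcting a coarser-scale reflectance dataset against a finer-scale baseline by matching means and standard deviations; it is a schematic stand-in, not the authors' Gaussian-distribution-based procedure, and the synthetic fields are placeholders.

```python
import numpy as np

def moment_match(reflectance, baseline):
    """Rescale a reflectance array so its mean and standard deviation
    match those of a baseline (finer-scale) dataset.

    Schematic only: the actual method also accounts for spatial
    variation of the statistics across scales.
    """
    r = np.asarray(reflectance, dtype=float)
    b = np.asarray(baseline, dtype=float)
    z = (r - r.mean()) / r.std()
    return z * b.std() + b.mean()

# Example with synthetic reflectance fields
rng = np.random.default_rng(1)
fine = rng.normal(0.25, 0.03, size=(100, 100))     # baseline, small scale
coarse = rng.normal(0.30, 0.05, size=(50, 50))     # biased, larger scale
corrected = moment_match(coarse, fine)
print(corrected.mean(), corrected.std())
```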

  9. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    The article proposes a method for identification of the scale of changes in personnel motivation techniques at mechanical-engineering enterprises based on a structural and logical sequence of relevant stages (identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise business environment; SWOT-analysis of actual motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative for changing motivation techniques; implementation of changes in motivation techniques; control over changes in motivation techniques). It has been substantiated that the improved method provides a systematic and analytical justification for management decision-making in this field and for choosing the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes into account the past, the present and the prospective. Firstly, the approach is based on considering the past state of the motivational sphere of the mechanical-engineering enterprise; secondly, the method involves identifying the current state of personnel motivation techniques; thirdly, within the method framework the prospective is taken into account, as manifested in the strategic vision of the enterprise development as well as in forecasting the development of its business environment. The advantage of the proposed method is that the level of its specification may vary depending on the set goals, resource constraints and necessity. Among other things, this method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and the management of relevant processes. This creates preconditions for a

  10. XRF analyses for the study of painting technique and degradation on frescoes by Beato Angelico: first results

    International Nuclear Information System (INIS)

    Mazzinghi, A.

    2014-01-01

    Beato Angelico is one of the most important Italian painters of the Renaissance period; in particular, he was a master of the so-called 'Buon fresco' technique for mural paintings. A wide diagnostic campaign with X-Ray Fluorescence (XRF) analyses has been carried out on three masterworks painted by Beato Angelico in the San Marco monastery in Florence: the 'Crocifissione con Santi', the 'Annunciazione' and the 'Madonna delle Ombre'. The latter is painted by mixing fresco and secco techniques, which makes it of particular interest for the study of two different painting techniques of the same artist. The aim of the study was therefore the characterization of the painting palette, and thereby the painting techniques, used by Beato Angelico. Moreover, the conservators were interested in the study of degradation processes and old restoration treatments. Our analyses have been carried out by means of the XRF spectrometer developed at the LABEC laboratory of the Istituto Nazionale di Fisica Nucleare in Florence (Italy). XRF is indeed especially suited for this kind of study, allowing for multi-elemental, non-destructive, non-invasive analyses in a short time with portable instruments. In this paper the first results of the XRF analysis are presented.

  11. Techniques for Analysing Problems in Engineering Projects

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe

    1998-01-01

    Description of how CPM networks can be used for analysing complex problems in engineering projects.

  12. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    Science.gov (United States)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
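
    A brief sketch of the usage pattern described above, using the Earth Engine Python API, is given below; the dataset ID, dates and point location are illustrative assumptions, and authentication is assumed to have been set up beforehand.

```python
# Minimal Earth Engine usage sketch; values are placeholders.
import ee

ee.Initialize()  # assumes prior ee.Authenticate() / credentials setup

# Landsat 8 surface reflectance collection filtered to one year and one site
point = ee.Geometry.Point([-122.08, 37.42])
collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterDate('2020-01-01', '2021-01-01')
              .filterBounds(point))

# The median composite is only described lazily; pixels are computed on
# Google's servers when the result is actually requested via getInfo().
composite = collection.median()
print(composite.bandNames().getInfo())
```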

  13. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked
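
    A hedged Python sketch of the five scatterplot procedures listed above, using SciPy, is given below; the binning choices and synthetic data are illustrative assumptions, not the analyses performed in the paper.

```python
import numpy as np
from scipy import stats

def scatterplot_tests(x, y, n_bins=5):
    """Apply the five pattern-detection procedures to one (x, y) scatterplot."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    results = {}
    # (i) linear relationship
    results['pearson'] = stats.pearsonr(x, y)
    # (ii) monotonic relationship
    results['spearman'] = stats.spearmanr(x, y)
    # (iii) trend in central tendency across quantile bins of x
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
    results['kruskal'] = stats.kruskal(*groups)
    # (iv) trend in variability across the same bins
    results['variances'] = [g.var(ddof=1) for g in groups]
    # (v) deviation from randomness on a coarse 3x3 x-y grid
    table, _, _ = np.histogram2d(x, y, bins=3)
    results['chi2'] = stats.chi2_contingency(table)
    return results

rng = np.random.default_rng(2)
x = rng.uniform(size=500)
y = x**2 + rng.normal(scale=0.1, size=500)      # monotonic, nonlinear relation
print(scatterplot_tests(x, y)['spearman'])
```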

  14. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
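
    A small Python sketch of checking whether variable-importance rankings stay stable across independent Latin hypercube samples follows; the toy model, sample size and seeds are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

def toy_model(x):
    # Illustrative response: only columns 0 and 2 matter appreciably
    return 3.0 * x[:, 0] + 0.5 * x[:, 2] ** 2 + 0.01 * x[:, 1]

for seed in (11, 22, 33):                        # independent LHS replicates
    lhs = qmc.LatinHypercube(d=4, seed=seed)
    sample = lhs.random(n=200)
    response = toy_model(sample)
    rhos = []
    for j in range(4):
        rho, _ = spearmanr(sample[:, j], response)
        rhos.append(abs(rho))
    # Ranking of input variables by absolute rank correlation
    print(seed, np.argsort(rhos)[::-1])
```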

  15. Spiritual Well-Being Scale Ethnic Differences between Caucasians and African-Americans: Follow Up Analyses.

    Science.gov (United States)

    Miller, Geri; Gridley, Betty; Fleming, Willie

    This follow-up study is in response to Miller, Fleming, and Brown-Anderson's (1998) study of ethnic differences between Caucasians and African-Americans, where the authors suggested that the Spiritual Well-Being (SWB) Scale may need to be interpreted differently depending on ethnicity. In this study, confirmatory factor analyses were conducted for…

  16. The application of fluid structure interaction techniques within finite element analyses of water-filled transport flasks

    International Nuclear Information System (INIS)

    Smith, C.; Stojko, S.

    2004-01-01

    Historically, Finite Element (FE) analyses of water-filled transport flasks and their payloads have been carried out assuming a dry environment, mainly due to a lack of robust Fluid Structure Interaction (FSI) modelling techniques. Also it has been accepted within the RAM transport industry that the presence of water would improve the impact withstand capability of dropped payloads within containers. In recent years the FE community has seen significant progress and improvement in FSI techniques. These methods have been utilised to investigate the effects of a wet environment on payload behaviour for the regulatory drop test within a recent transport licence renewal application. Fluid flow and pressure vary significantly during a wet impact and the effects on the contents become complex when water is incorporated into the flask analyses. Modelling a fluid environment within the entire flask is considered impractical; hence a good understanding of the FSI techniques and assumptions regarding fluid boundaries is required in order to create a representative FSI model. Therefore, a Verification and Validation (V and V) exercise was undertaken to underpin the FSI techniques eventually utilised. A number of problems of varying complexity have been identified to test the FSI capabilities of the explicit code LS-DYNA, which is used in the extant dry container impact analyses. RADIOSS explicit code has been used for comparison, to provide further confidence in LS-DYNA predictions. Various methods of modelling fluid are tested, and the relative advantages and limitations of each method and FSI coupling approaches are discussed. Results from the V and V problems examined provided sufficient confidence that FSI effects within containers can be accurately modelled

  17. FALSIRE: CSNI project for fracture analyses of large-scale international reference experiments (Phase 1). Comparison report

    International Nuclear Information System (INIS)

    1994-01-01

    A summary of the recently completed Phase I of the Project for Fracture Analysis of Large-Scale International Reference Experiments (Project FALSIRE) is presented. Project FALSIRE was created by the Fracture Assessment Group (FAG) of Principal Working Group No. 3 (PWG/3) of the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI), formed to evaluate fracture prediction capabilities currently used in safety assessments of nuclear vessel components. The aim of Project FALSIRE was to assess various fracture methodologies through interpretive analyses of selected large-scale fracture experiments. The six experiments used in Project FALSIRE (performed in the Federal Republic of Germany, Japan, the United Kingdom, and the U.S.A.) were designed to examine various aspects of crack growth in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading conditions. The analysis techniques employed by the participants included engineering and finite-element methods, which were combined with J-R fracture methodology and the French local approach. For each experiment, analysis results provided estimates of variables such as crack growth, crack-mouth-opening displacement, temperature, stress, strain, and applied J and K values. A comparative assessment and discussion of the analysis results are presented; also, the current status of the entire results database is summarized. Some conclusions concerning predictive capabilities of selected ductile fracture methodologies, as applied to RPVs subjected to PTS loading, are given, and recommendations for future development of fracture methodologies are made.

  18. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  19. CO{sub 2} Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Laes, Denise; Eisinger, Chris; Morgan, Craig; Rauzi, Steve; Scholle, Dana; Scott, Phyllis; Lee, Si-Yong; Zaluski, Wade; Esser, Richard; Matthews, Vince; McPherson, Brian

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the “Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region” (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km²) site analyses are usually possible, and such can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  20. Microneedle-assisted transdermal delivery of Zolmitriptan: effect of microneedle geometry, in vitro permeation experiments, scaling analyses and numerical simulations.

    Science.gov (United States)

    Uppuluri, Chandra Teja; Devineni, Jyothirmayee; Han, Tao; Nayak, Atul; Nair, Kartik J; Whiteside, Benjamin R; Das, Diganta B; Nalluri, Buchi N

    2017-08-01

    The present study was aimed at investigating the effect of salient microneedle (MN) geometry parameters like length, density, shape and type on transdermal permeation enhancement of Zolmitriptan (ZMT). Two types of MN devices, viz. AdminPatch® arrays (ADM) (0.6, 0.9, 1.2 and 1.5 mm lengths) and laboratory fabricated polymeric MNs (PM) of 0.6 mm length, were employed. In the case of PMs, arrays were applied thrice at different places within a 1.77 cm² skin area (PM-3) to maintain the MN density closer to the 0.6 mm ADM. Scaling analysis was done using dimensionless parameters like concentration of ZMT (Ct/Cs), thickness (h/L) and surface area of the skin (Sa/L²). A micro-injection molding technique was employed to fabricate the PM. Histological studies revealed that the PM, owing to their geometry/design, formed wider and deeper microconduits when compared to ADM of similar length. Approximately 3.17- and 3.65-fold increases in ZMT flux values were observed with 1.5 mm ADM and PM-3 applications when compared to the passive studies. Good correlations were observed between different dimensionless parameters with scaling analyses. Numerical simulations, using MATLAB and COMSOL software, based on experimental data and histological images provided information regarding the ZMT skin distribution after MN application. Both from experimental studies and simulations, it was inferred that PM were more effective in enhancing the transdermal delivery of ZMT when compared to ADM. The study suggests that MN application enhances ZMT transdermal permeation and that the geometrical parameters of MNs play an important role in the degree of such enhancement.

  1. Analysis of Grassland Ecosystem Physiology at Multiple Scales Using Eddy Covariance, Stable Isotope and Remote Sensing Techniques

    Science.gov (United States)

    Flanagan, L. B.; Geske, N.; Emrick, C.; Johnson, B. G.

    2006-12-01

    Grassland ecosystems typically exhibit very large annual fluctuations in above-ground biomass production and net ecosystem productivity (NEP). Eddy covariance flux measurements, plant stable isotope analyses, and canopy spectral reflectance techniques have been applied to study environmental constraints on grassland ecosystem productivity and the acclimation responses of the ecosystem at a site near Lethbridge, Alberta, Canada. We have observed substantial interannual variation in grassland productivity during 1999-2005. In addition, there was a strong correlation between peak above-ground biomass production and NEP calculated from eddy covariance measurements. Interannual variation in NEP was strongly controlled by the total amount of precipitation received during the growing season (April-August). We also observed significant positive correlations between a multivariate ENSO index and total growing season precipitation, and between the ENSO index and annual NEP values. This suggested that a significant fraction of the annual variability in grassland productivity was associated with ENSO during 1999-2005. Grassland productivity varies asymmetrically in response to changes in precipitation, with increases in productivity during wet years being much more pronounced than reductions during dry years. Strong increases in plant water-use efficiency, based on carbon and oxygen stable isotope analyses, contribute to the resilience of productivity during times of drought. Within a growing season, increased stomatal limitation of photosynthesis, associated with improved water-use efficiency, resulted in apparent shifts in leaf xanthophyll cycle pigments and changes to the Photochemical Reflectance Index (PRI) calculated from hyper-spectral reflectance measurements conducted at the canopy scale. These shifts in PRI were apparent before seasonal drought caused significant reductions in leaf area index (LAI) and changes to canopy-scale "greenness" based on NDVI values. With
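
    Two of the reflectance indices mentioned above can be written down directly; the Python sketch below uses common band definitions (531 nm and 570 nm reflectance for PRI, red and near-infrared reflectance for NDVI), which are assumptions rather than values taken from the study.

```python
def pri(r531, r570):
    """Photochemical Reflectance Index from narrow-band reflectances."""
    return (r531 - r570) / (r531 + r570)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from broadband reflectances."""
    return (nir - red) / (nir + red)

# Illustrative canopy reflectance values
print(pri(0.048, 0.052))   # small negative PRI
print(ndvi(0.05, 0.40))    # dense green canopy, NDVI near 0.78
```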

  2. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10 MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  3. Industrial scale production of stable isotopes employing the technique of plasma separation

    International Nuclear Information System (INIS)

    Stevenson, N.R.; Bigelow, T.S.; Tarallo, F.J.

    2003-01-01

    Calutrons, centrifuges, diffusion and distillation processes are some of the devices and techniques that have been employed to produce substantial quantities of enriched stable isotopes. Nevertheless, the availability of enriched isotopes in sufficient quantities for industrial applications remains very restricted. Industries such as those involved with medicine, semiconductors, nuclear fuel, propulsion, and national defense have identified the potential need for various enriched isotopes in large quantities. Economically producing most enriched (non-gaseous) isotopes in sufficient quantities has so far eluded commercial producers. The plasma separation process is a commercial technique now available for producing large quantities of a wide range of enriched isotopes. Until recently, this technique has mainly been explored with small-scale ('proof-of-principle') devices that have been built and operated at research institutes. The new Theragenics™ facility at Oak Ridge, TN houses the only existing commercial scale PSP system. This device, which operated successfully in the 1980s, has recently been re-commissioned and is planned to be used to produce a variety of isotopes. Progress, the capabilities of this device, and its potential for impacting the world's supply of stable isotopes in the future are summarized. This technique now holds promise of being able to open the door to allowing new and exciting applications of these isotopes in the future. (author)

  4. Cross-cultural and sex differences in the Emotional Skills and Competence Questionnaire scales: Challenges of differential item functioning analyses

    Directory of Open Access Journals (Sweden)

    Bo Molander

    2009-11-01

    University students in Croatia, Slovenia, and Sweden (N = 1129) were examined by means of the Emotional Skills and Competence Questionnaire (Takšić, 1998). Results showed a significant effect for the sex factor only on the total-score scale, women scoring higher than men, but significant effects were obtained for country, as well as for sex, on the Express and Label (EL) and Perceive and Understand (PU) subscales. Sweden showed higher scores than Croatia and Slovenia on the EL scale, and Slovenia showed higher scores than Croatia and Sweden on the PU scale. In subsequent analyses of differential item functioning (DIF), comparisons were carried out for pairs of countries. The analyses revealed that a large proportion of the items in the total-score scale were potentially biased, most so for the Croatian-Swedish comparison, less for the Slovenian-Swedish comparison, and least for the Croatian-Slovenian comparison. These findings cast doubt on the validity of mean score differences in comparisons of countries. However, DIF analyses of sex differences within each country show very few DIF items, indicating that the ESCQ instrument works well within each cultural/linguistic setting. Possible explanations of the findings are discussed, and improvements for future studies are suggested.

  5. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. Faced with similar problems, engineers in industry employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS, which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.

  6. Channeling contrast microscopy: a new technique for microanalysis of semiconductors

    International Nuclear Information System (INIS)

    McCallum, J.C.

    1985-01-01

    The technique of channeling contrast microscopy has been developed over the past few years for use with the Melbourne microprobe. It has been used for several profitable analyses of small-scale structures in semiconductor materials. This paper outlines the basic features of the technique and examples of its applications are given

  7. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was determined for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were used in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs.

  8. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    Science.gov (United States)

    Cooperman, Joshua H.

    2018-05-01

    The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. My analyses should facilitate attempts to employ the spectral
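
    As a rough illustration of the quantity being measured, the Python sketch below estimates the spectral dimension from the return probability of random walkers on a 3-dimensional periodic lattice via d_s(σ) = -2 d ln P(σ)/d ln σ; the lattice size, walker count and step range are arbitrary assumptions, and the setup is a plain lattice, not the causal dynamical triangulations ensemble itself.

```python
import numpy as np

rng = np.random.default_rng(3)
L, walkers, steps = 32, 20000, 200
moves = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])  # 6 lattice directions

pos = np.zeros((walkers, 3), dtype=int)          # all walkers start at the origin
return_prob = np.zeros(steps)
for s in range(steps):
    pos = (pos + moves[rng.integers(0, 6, size=walkers)]) % L
    return_prob[s] = np.mean(np.all(pos == 0, axis=1))

sigma = np.arange(1, steps + 1)
mask = return_prob > 0                           # returns occur only at even steps
d_s = -2.0 * np.gradient(np.log(return_prob[mask]), np.log(sigma[mask]))
print(d_s[10:20])   # noisy estimates, roughly near the topological dimension 3
```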

  9. Power plant economy of scale and cost trends: further analyses and review of empirical studies

    International Nuclear Information System (INIS)

    Fisher, C.F. Jr.; Paik, S.; Schriver, W.R.

    1986-07-01

    Multiple regression analyses were performed on capital cost data for nuclear and coal-fired power plants in an extension of an earlier study which indicated that nuclear units completed prior to the accident at Three-Mile Island (TMI) have no economy of scale, and that units completed after that event have a weak economy of scale (scaling exponent of about 0.81). The earlier study also indicated that the scaling exponent for coal-fired units is about 0.92, compared with conceptual models which project scaling exponents in a range from about 0.5 to 0.9. Other empirical studies have indicated poor economy of scale, but a large range of cost-size scaling exponents has been reported. In the present study, the results for nuclear units indicate a scaling exponent of about 0.94 but with no economy of scale for large units, that a first unit costs 17% more than a second unit, that a unit in the South costs 20% less than others, that a unit completed after TMI costs 33% more than one completed before TMI, and that costs are increasing at 9.3% per year. In the present study, the results for coal-fired units indicate a scaling exponent of 0.93 but with better scaling economy in the larger units, that a first unit costs 38.5% more, a unit in the South costs 10% less, flue-gas desulfurization units cost 23% more, and that costs are increasing at 4% per year
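
    A minimal Python sketch of the kind of regression behind a cost-size scaling exponent (cost = a·size^b, fitted on log-transformed data) follows; the synthetic numbers are placeholders, not the study's plant cost data.

```python
import numpy as np

rng = np.random.default_rng(4)
size_mw = rng.uniform(400, 1300, size=40)                  # unit size, MW(e)
true_a, true_b = 5.0, 0.93
# Synthetic costs with multiplicative noise around the power law
cost = true_a * size_mw**true_b * rng.lognormal(0.0, 0.15, size=40)

# Ordinary least squares on log-log data: slope is the scaling exponent
b_hat, log_a_hat = np.polyfit(np.log(size_mw), np.log(cost), 1)
print(f"estimated scaling exponent: {b_hat:.2f}")
```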

  10. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  11. Extension and application of a scaling technique for duplication of in-flight aerodynamic heat flux in ground test facilities

    NARCIS (Netherlands)

    Veraar, R.G.

    2009-01-01

    To enable direct experimental duplication of the in-flight heat flux distribution on supersonic and hypersonic vehicles, an aerodynamic heating scaling technique has been developed. The scaling technique is based on the analytical equations for convective heat transfer for laminar and turbulent

  12. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of a reactor building model on a small scale. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using a uniaxial shaking table (60 cm x 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model and a combined structure-soil interaction model. These models are made of silicone rubber, and they have a scale of 1:600. Harmonic step-by-step excitation of 40 gals was performed to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters and analyzed by a real-time spectrum analyzer. The resonance and phase lag curves of the specimens relative to the shaking table were obtained. As for the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer function curves from the computer code. From the vibration tests and the simulation analyses, the silicone rubber model test is useful for the fundamental study of structural problems, and the computer code based on the Thin Layered Element Theory can simulate the test results well. (Kobozono, M.)

  13. Application of the Particle Swarm Optimization (PSO) technique to the thermal-hydraulics project of a PWR reactor core in reduced scale

    International Nuclear Information System (INIS)

    Lima Junior, Carlos Alberto de Souza

    2008-09-01

    Reduced scale model design has been employed by engineers in several different industries, such as the offshore, space, oil extraction and nuclear industries. Reduced scale models are used in experiments because they are economically more attractive than their real scale prototypes: in many cases they are cheaper and easier to build, and they guide the real scale design by allowing indirect investigation and analysis of the real scale system (prototype). A reduced scale model (or experiment) must be able to represent all physical phenomena that occur, and will occur, in the real scale system under operational conditions; in this case the reduced scale model is called similar. There are different methods to design a reduced scale model, of which two are basic: the empirical method, based on the expert's skill in determining which physical quantities are relevant to the desired model; and the differential equation method, based on a mathematical description of the prototype (real scale system) to be modelled. Applying a mathematical technique to the differential equations that describe the prototype highlights the relevant physical quantities, so the reduced scale model design problem may be treated as an optimization problem. Many optimization techniques, such as the Genetic Algorithm (GA), have been developed to solve this class of problems and have also been applied to the reduced scale model design problem. In this work, the Particle Swarm Optimization (PSO) technique is investigated as an alternative optimization tool for such problems. In this investigation a computational approach, based on the particle swarm optimization (PSO) technique, is used to design a reduced scale two-loop Pressurized Water Reactor (PWR) core, considering 100% nominal power operation under forced cooling flow circulation and non-accidental operating conditions. A performance comparison
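
    A minimal particle swarm optimization sketch in Python is given below to illustrate the technique named above; the objective function is a hypothetical stand-in for the reduced scale similarity criteria, and all parameter values are assumptions, not the thesis implementation.

```python
import numpy as np

def objective(x):
    # Hypothetical "distortion" between model and prototype similarity numbers
    return np.sum((x - np.array([0.3, 0.7, 0.5]))**2, axis=1)

rng = np.random.default_rng(5)
n_particles, dim, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights

x = rng.uniform(0, 1, (n_particles, dim))      # positions
v = np.zeros_like(x)                           # velocities
pbest, pbest_val = x.copy(), objective(x)      # personal bests
gbest = pbest[np.argmin(pbest_val)]            # global best

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    val = objective(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest)   # should approach [0.3, 0.7, 0.5]
```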

  14. Applicability of two mobile analysers for mercury in urine in small-scale gold mining areas.

    Science.gov (United States)

    Baeuml, Jennifer; Bose-O'Reilly, Stephan; Lettmeier, Beate; Maydl, Alexandra; Messerer, Katalin; Roider, Gabriele; Drasch, Gustav; Siebert, Uwe

    2011-12-01

    Mercury is still used in developing countries to extract gold from the ore in small-scale gold mining areas. This is a major health hazard for people living in mining areas. The concentration of mercury in urine was analysed in different mining areas in Zimbabwe, Indonesia and Tanzania. First, the urine samples were analysed by CV-AAS (cold vapour atomic absorption spectrometry) during the field projects with a mobile mercury analyser (Lumex® or Seefelder®) and, secondly, in a laboratory with a stationary CV-AAS mercury analyser (PerkinElmer®). Because of the different reduction agents (SnCl2 for the Lumex® and Seefelder® systems, NaBH4 for the PerkinElmer® system), only inorganic mercury was obtained with the mobile analysers, whereas the total mercury concentration was measured with the stationary system. The aims of the study were to determine whether the results obtained in the field with the mobile equipment are comparable with those of the stationary reference method in the laboratory, and whether these mobile analysers can be applied in screening studies of concerned populations to select those who are exposed to critical mercury levels. Overall, the concentrations obtained with the two mobile systems were approximately 25% lower than those determined with the stationary system. Nevertheless, both mobile systems seem to be very useful for screening of volunteers in the field. Moreover, regional staff may be trained on such analysers to perform screening tests by themselves. Copyright © 2011 Elsevier GmbH. All rights reserved.

  15. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    International Nuclear Information System (INIS)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-01-01

    Highlights: • A 1/8th geometric-scale test facility that models the VHTR hot plenum is proposed. • Geometric scaling analysis is introduced for VHTR to analyze air-ingress accident. • Design calculations are performed to show that accident phenomenology is preserved. • Some analyses include time scale, hydraulic similarity and power scaling analysis. • Test facility has been constructed and shake-down tests are currently being carried out. - Abstract: A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time

  16. Analysing conflicts around small-scale gold mining in the Amazon : The contribution of a multi-temporal model

    NARCIS (Netherlands)

    Salman, Ton; de Theije, Marjo

    Conflict is small-scale gold mining's middle name. In only a very few situations do mining operations take place without some sort of conflict accompanying the activity, and often various conflicting stakeholders struggle for their interests simultaneously. Analyses of such conflicts are typically

  17. Complementary techniques for solid oxide cell characterisation on micro- and nano-scale

    International Nuclear Information System (INIS)

    Wiedenmann, D.; Hauch, A.; Grobety, B.; Mogensen, M.; Vogt, U.

    2009-01-01

    High temperature steam electrolysis by solid oxide electrolysis cells (SOEC) is a route with great potential to transform clean and renewable energy from non-fossil sources into synthetic fuels such as hydrogen, methane or dimethyl ether, which have been identified as promising alternative energy carriers. Also, as SOEC can operate in the reverse mode as solid oxide fuel cells (SOFC), hydrogen can, for example, be used during peak hours in a very efficient way to reconvert chemically stored energy into electrical energy. As solid oxide cells (SOC) work at high temperatures (700-900 °C), material degradation and evaporation can occur, e.g. from the cell sealing material, leading to poisoning effects and ageing mechanisms which decrease the cell efficiency and long-term durability. In order to investigate such cell degradation processes, thorough examination of SOC often requires chemical and structural characterisation on the microscopic and the nanoscopic level. The combination of different microscopy techniques like conventional scanning electron microscopy (SEM), electron-probe microanalysis (EPMA) and the focused ion-beam (FIB) preparation technique for transmission electron microscopy (TEM) allows post mortem analysis of cells after testing on a multi-scale level. These complementary techniques can be used to characterize structural and chemical changes over a large and representative sample area (micro-scale) on the one hand, and also on the nano-scale level for selected sample details on the other hand. This article presents a methodical approach for the structural and chemical characterisation of changes in aged cathode-supported electrolysis cells produced at Risø DTU, Denmark. Also, results from the characterisation of impurities at the electrolyte/hydrogen interface caused by evaporation from sealing material are discussed. (author)

  18. Systematic comparative and sensitivity analyses of additive and outranking techniques for supporting impact significance assessments

    International Nuclear Information System (INIS)

    Cloquell-Ballester, Vicente-Agustin; Monterde-Diaz, Rafael; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria-Cristina

    2007-01-01

    Assessing the significance of environmental impacts is one of the most important and all together difficult processes of Environmental Impact Assessment. This is largely due to the multicriteria nature of the problem. To date, decision techniques used in the process suffer from two drawbacks, namely the problem of compensation and the problem of identification of the 'exact boundary' between sub-ranges. This article discusses these issues and proposes a methodology for determining the significance of environmental impacts based on comparative and sensitivity analyses using the Electre TRI technique. An application of the methodology for the environmental assessment of a Power Plant project within the Valencian Region (Spain) is presented, and its performance evaluated. It is concluded that contrary to other techniques, Electre TRI automatically identifies those cases where allocation of significance categories is most difficult and, when combined with sensitivity analysis, offers greatest robustness in the face of variation in weights of the significance attributes. Likewise, this research demonstrates the efficacy of systematic comparison between Electre TRI and sum-based techniques, in the solution of assignment problems. The proposed methodology can therefore be regarded as a successful aid to the decision-maker, who will ultimately take the final decision
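
    The record contrasts additive (sum-based) aggregation, which lets strong criteria compensate for weak ones, with the outranking Electre TRI method, and stresses sensitivity analysis on attribute weights. The sketch below is not Electre TRI; it is a minimal illustration of a weighted additive significance score plus a brute-force weight-sensitivity check. The attribute names, scores, weights and category thresholds are all invented for illustration.

```python
import itertools
import numpy as np

# Hypothetical impact scored on three significance attributes (0-10 scale);
# names and values are illustrative, not from the paper.
scores = np.array([8.0, 2.0, 6.0])       # e.g. magnitude, duration, reversibility
base_weights = np.array([0.5, 0.2, 0.3])
thresholds = [3.0, 6.0]                   # boundaries: low / moderate / high

def category(score, thresholds):
    """Assign a significance category from an aggregate score (0=low, 1=moderate, 2=high)."""
    return sum(score >= t for t in thresholds)

base_score = scores @ base_weights
print("additive score:", base_score, "-> category", category(base_score, thresholds))

# Sensitivity analysis: perturb each weight by +/-20% (renormalised) and check
# whether the assigned category changes; impacts whose category flips are the
# 'difficult' cases that deserve closer scrutiny.
categories = set()
for deltas in itertools.product([-0.2, 0.0, 0.2], repeat=3):
    w = base_weights * (1.0 + np.array(deltas))
    w = w / w.sum()
    categories.add(category(scores @ w, thresholds))
print("categories reached under weight perturbation:", categories)
```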

  19. Consensuses and discrepancies of basin-scale ocean heat content changes in different ocean analyses

    Science.gov (United States)

    Wang, Gongjie; Cheng, Lijing; Abraham, John; Li, Chongyin

    2018-04-01

    Inconsistent global/basin ocean heat content (OHC) changes were found in different ocean subsurface temperature analyses, especially in recent studies related to the slowdown in global surface temperature rise. This finding challenges the reliability of the ocean subsurface temperature analyses and motivates a more comprehensive inter-comparison between the analyses. Here we compare the OHC changes in three ocean analyses (Ishii, EN4 and IAP) to investigate the uncertainty in OHC in four major ocean basins from decadal to multi-decadal scales. First, all products show an increase of OHC since 1970 in each ocean basin, revealing robust warming, although the warming rates are not identical. The geographical patterns, the key modes and the vertical structure of OHC changes are consistent among the three datasets, implying that the main OHC variabilities can be robustly represented. However, large discrepancies are found in the percentage of basin-scale ocean heating relative to the global ocean, with the largest differences in the Pacific and Southern Ocean. Meanwhile, we find a large discrepancy in ocean heat storage in different layers, especially within 300-700 m in the Pacific and Southern Oceans. Furthermore, the near-surface analyses of Ishii and IAP are consistent with sea surface temperature (SST) products, but EN4 is found to underestimate the long-term trend. Compared with ocean heat storage derived from the atmospheric budget equation, all products show consistent seasonal cycles of OHC in the upper 1500 m, especially during 2008 to 2012. Overall, our analyses further the understanding of the observed OHC variations, and we recommend a careful quantification of errors in the ocean analyses.
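
    For readers unfamiliar with how the compared quantity is formed, the sketch below shows the usual calculation of an ocean-heat-content anomaly over a depth layer, OHC = rho * c_p * integral(dT dz). The density, specific heat, depth grid and temperature anomalies are assumed illustrative values, not data from the Ishii, EN4 or IAP products.

```python
import numpy as np

# Illustrative 0-700 m ocean heat content anomaly from a temperature-anomaly
# profile.  Depth grid and anomalies are made-up numbers for demonstration only.
rho = 1025.0        # seawater density (kg m^-3), assumed constant
cp = 3990.0         # specific heat of seawater (J kg^-1 K^-1), approximate

z = np.array([0, 10, 50, 100, 300, 700])          # layer interfaces (m)
dT = np.array([0.35, 0.30, 0.22, 0.12, 0.05])     # temperature anomaly per layer (K)

dz = np.diff(z)                                   # layer thicknesses (m)
ohc_per_area = rho * cp * np.sum(dT * dz)         # J m^-2

area = 3.6e14                                     # global ocean area (m^2), approximate
print(f"OHC anomaly: {ohc_per_area:.3e} J m^-2")
print(f"Global-ocean equivalent: {ohc_per_area * area:.3e} J")
```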

  20. Chromatographic techniques used in the laboratory scale fractionation and purification of plasma

    International Nuclear Information System (INIS)

    Siti Najila Mohd Janib; Wan Hamirul Bahrin Wan Kamal; Shaharuddin Mohd

    2004-01-01

    Chromatography is a powerful technique used in the separation as well as purification of proteins for use as biopharmaceuticals or medicines. Scientists use many different chromatographic techniques in biotechnology as they bring a molecule from its initial identification stage to the stage at which it becomes a marketed product. The most commonly used of these techniques is liquid chromatography (LC). This technique can be used to separate the target molecule from undesired contaminants, as well as to analyse the final product for the requisite purity as established by governmental regulatory groups such as the FDA. Some examples of LC techniques include: ion exchange (IEC), hydrophobic interaction (HIC), gel filtration (GF), affinity (AC) and reverse phase (RPC) chromatography. These techniques are very versatile and can be used at any stage of the purification process, i.e. capture, intermediate purification and polishing. The choice of a particular technique depends upon the nature of the target protein as well as its intended final use. This paper describes the preliminary work done on the chromatographic purification of factor VIII (FVIII), factor IX (FIX), albumin and IgG from plasma. Results, in particular for the isolation of albumin and IgG using IEC, have been promising. Preparation and production of cryoprecipitate to yield FVIII and FIX have also been successful. (Author)

  1. Preionization Techniques in a kJ-Scale Dense Plasma Focus

    Science.gov (United States)

    Povilus, Alexander; Shaw, Brian; Chapman, Steve; Podpaly, Yuri; Cooper, Christopher; Falabella, Steve; Prasad, Rahul; Schmidt, Andrea

    2016-10-01

    A dense plasma focus (DPF) is a type of z-pinch device that uses a high current, coaxial plasma gun with an implosion phase to generate dense plasmas. These devices can accelerate a beam of ions to MeV-scale energies through strong electric fields generated by instabilities during the implosion of the plasma sheath. The formation of these instabilities, however, relies strongly on the history of the plasma sheath in the device, including the evolution of the gas breakdown in the device. In an effort to reduce variability in the performance of the device, we attempt to control the initial gas breakdown in the device by seeding the system with free charges before the main power pulse arrives. We report on the effectiveness of two techniques developed for a kJ-scale DPF at LLNL, a miniature primer spark gap and pulsed, 255nm LED illumination. Prepared by LLNL under Contract DE-AC52-07NA27344.

  2. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    International Nuclear Information System (INIS)

    Guambe, J.F.; Mars, J.A.; Day, J.

    2013-01-01

    Full text: Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals who reside nearby. Fish scales have been proven applicable for monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the degree to which contaminants are incorporated into the scale structure will also differ. There is thus a need for an indicator of contaminants that can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  3. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    Energy Technology Data Exchange (ETDEWEB)

    Guambe, J.F. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa); Physics Department, Eduardo Mondlane Universily, PO Box 257, Maputo (Mozambique); Materials Research Department, iThemba LABS, PO Box 722, Somerset West, 7129 (South Africa); Mars, J.A. [Faculty of Health and Wellness Sciences, Cape Peninsula University of Technology, PO Box 1906, Bellville, 7535 (South Africa); Day, J. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa)

    2013-07-01

    Full text: Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals who reside nearby. Fish scales have been proven applicable for monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the degree to which contaminants are incorporated into the scale structure will also differ. There is thus a need for an indicator of contaminants that can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  4. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    Full Text Available The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable under the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
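
    Nudging (Newtonian relaxation) adds a term (c_ref - c)/tau that pulls the coarse model toward the high-resolution reference inside the forcing area. The sketch below shows the idea for a single tracer field; the function name, relaxation time and toy fields are assumptions for illustration and do not represent the BOLCHEM implementation.

```python
import numpy as np

def step_with_nudging(c, tendency, c_ref, mask, tau, dt):
    """Advance a coarse-model tracer field one step with Newtonian relaxation.

    c         : coarse-model concentration field
    tendency  : physical/chemical tendency computed by the coarse model
    c_ref     : high-resolution reference interpolated to the coarse grid
    mask      : 1 inside the nudging (hot-spot) area, 0 outside
    tau       : relaxation time scale (s); smaller = stronger forcing
    dt        : model time step (s)
    """
    nudge = mask * (c_ref - c) / tau
    return c + dt * (tendency + nudge)

# Toy example: a uniform field relaxed toward a higher reference value in the
# centre of the domain (all numbers illustrative).
c = np.full((10, 10), 1.0)
c_ref = np.full((10, 10), 3.0)
mask = np.zeros((10, 10)); mask[3:7, 3:7] = 1.0
for _ in range(100):
    c = step_with_nudging(c, tendency=0.0, c_ref=c_ref, mask=mask, tau=3600.0, dt=600.0)
print(c[5, 5], c[0, 0])   # nudged cell approaches 3.0, outside cell stays at 1.0
```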

  5. A simple technique investigating baseline heterogeneity helped to eliminate potential bias in meta-analyses.

    Science.gov (United States)

    Hicks, Amy; Fairhurst, Caroline; Torgerson, David J

    2018-03-01

    To perform a worked example of an approach that can be used to identify and remove potentially biased trials from meta-analyses via the analysis of baseline variables. True randomisation produces treatment groups that differ only by chance; therefore, a meta-analysis of a baseline measurement should produce no overall difference and zero heterogeneity. A meta-analysis from the British Medical Journal, known to contain significant heterogeneity and imbalance in baseline age, was chosen. Meta-analyses of baseline variables were performed and trials systematically removed, starting with those with the largest t-statistic, until the I2 measure of heterogeneity became 0%, then the outcome meta-analysis repeated with only the remaining trials as a sensitivity check. We argue that heterogeneity in a meta-analysis of baseline variables should not exist, and therefore removing trials which contribute to heterogeneity from a meta-analysis will produce a more valid result. In our example none of the overall outcomes changed when studies contributing to heterogeneity were removed. We recommend routine use of this technique, using age and a second baseline variable predictive of outcome for the particular study chosen, to help eliminate potential bias in meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
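
    A minimal sketch of the described procedure, under the assumption of fixed-effect inverse-variance pooling: meta-analyse a baseline variable, compute I2, and repeatedly drop the trial with the largest baseline t-statistic until I2 reaches 0%, keeping the surviving trials for the outcome meta-analysis. The trial names and numbers are invented for illustration.

```python
import numpy as np

def i_squared(effects, ses):
    """Higgins I^2 (%) from a fixed-effect inverse-variance meta-analysis."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

def drop_until_homogeneous(trials):
    """Remove trials (largest baseline t-statistic first) until I^2 == 0%.

    trials: list of dicts with keys 'name', 'base_diff', 'base_se'
            (baseline mean difference, e.g. in age, and its standard error).
    Returns the names of trials to retain for the outcome meta-analysis.
    """
    kept = list(trials)
    while len(kept) > 1:
        diffs = [t["base_diff"] for t in kept]
        ses = [t["base_se"] for t in kept]
        if i_squared(diffs, ses) == 0.0:
            break
        # the t-statistic of the baseline difference identifies the worst offender
        worst = max(kept, key=lambda t: abs(t["base_diff"] / t["base_se"]))
        kept.remove(worst)
    return [t["name"] for t in kept]

# Illustrative data (not from the BMJ meta-analysis used in the paper)
trials = [
    {"name": "A", "base_diff": 0.2, "base_se": 0.5},
    {"name": "B", "base_diff": -0.1, "base_se": 0.4},
    {"name": "C", "base_diff": 3.5, "base_se": 0.6},   # suspicious age imbalance
]
print(drop_until_homogeneous(trials))   # C is removed, A and B are kept
```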

  6. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice, without placing a significant burden on limited departmental resources. One of the most fundamental aspects to any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned as it is considered a 'high dose' technique, and hence close monitoring of patient dose is particularly important. (authors)
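
    The audit technique rests on dose data already held in the radiology information system. A minimal sketch of the core step, with assumed column names and an assumed reference level, might look like this: compute a typical (median) dose per scanner and examination and flag combinations that exceed the diagnostic reference level.

```python
import pandas as pd

# Sketch of the audit idea: take dose records already held in the radiology
# information system and derive a typical (median) dose per scanner and
# examination, to be compared against a diagnostic reference level (DRL).
# Column names, values and the DRL are assumed for illustration only.
records = pd.DataFrame({
    "scanner": ["CT1", "CT1", "CT2", "CT1", "CT2", "CT2"],
    "exam": ["abdomen"] * 6,
    "dlp_mGy_cm": [720, 810, 655, 790, 700, 680],
})

typical = records.groupby(["scanner", "exam"])["dlp_mGy_cm"].median()
drl = 745   # assumed reference level (mGy cm)
print(typical)
print(typical[typical > drl])   # scanner/exam combinations needing optimisation
```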

  7. Ionizing radiation effects in Acai oil analysed by gas chromatography coupled to mass spectrometry technique

    International Nuclear Information System (INIS)

    Valli, Felipe; Fernandes, Carlos Eduardo; Moura, Sergio; Machado, Ana Carolina; Furasawa, Helio Akira; Pires, Maria Aparecida Faustino; Bustillos, Oscar Vega

    2007-01-01

    The acai fruit is a well-known Brazilian seed plant used on a large scale as a feedstock source, especially in the Brazilian North-east region. Acai oil is used for many purposes, from fuel sources to medicine. The scope of this paper is to analyse the chemical structure modifications of acai oil after ionizing radiation. The radiation doses were set in the range of 10 to 25 kGy for the extracted acai oil. The analyses were made by gas chromatography coupled to mass spectrometry. A Shimadzu QP-5000 GC/MS equipped with a 30 m DB-5 capillary column (0.25 mm internal diameter, 0.25 μm film thickness) was used. Helium was used as carrier gas, giving a column head pressure of 12 p.s.i. (1 p.s.i. = 6894.76 Pa) and an average flow of 1 ml/min. The temperature program for the GC column consisted of a 4-minute hold at 75 deg C, a 15 deg C/min ramp to 200 deg C, 8 minutes isothermal, then a 20 deg C/min ramp to 250 deg C and 2 minutes isothermal. The extraction of the fatty acids was based on a liquid-liquid method using chloroform as solvent. The resulting chromatograms show the presence of oleic acid and other fatty acids, identified using the mass spectral library (NIST-92). The ionizing radiation depletes the fatty acids present in the acai oil. Details of the qualitative chemical analysis are also presented in this work. (author)
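
    As a small worked check of the quoted temperature program, the following snippet adds up the hold and ramp segments to give the implied GC run time; the segment values are taken directly from the abstract.

```python
# Total GC run time implied by the temperature program quoted in the record
# (hold 75 C for 4 min, ramp at 15 C/min to 200 C, hold 8 min,
#  ramp at 20 C/min to 250 C, hold 2 min).
segments = [
    ("hold 75 C", 4.0),
    ("ramp 75->200 C at 15 C/min", (200 - 75) / 15),
    ("hold 200 C", 8.0),
    ("ramp 200->250 C at 20 C/min", (250 - 200) / 20),
    ("hold 250 C", 2.0),
]
for name, minutes in segments:
    print(f"{name:<30s} {minutes:5.2f} min")
print(f"{'total':<30s} {sum(m for _, m in segments):5.2f} min")
```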

  8. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method and the millimetre resolution patient anatomy it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97, 96% and absolute averaged difference 6, 14% respectively). (author)
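
    A minimal sketch of the two simpler down-scaling strategies compared in the record, 'winner-takes-all' and 'volumetric averaging', applied to a labelled tissue grid; the anisotropic variant is not shown. The function name, permittivity values, grid size and down-scaling factor are assumptions for illustration, not the study's data.

```python
import numpy as np

def downscale(labels, eps_of_label, factor, method="volumetric"):
    """Down-scale a 3-D tissue-label grid to a coarser permittivity grid.

    labels       : array of non-negative integer tissue labels at fine resolution
    eps_of_label : dict mapping label -> relative permittivity (assumed values)
    factor       : integer down-scaling factor per axis (e.g. 5 for 2 mm -> 1 cm)
    method       : 'winner' keeps the most frequent tissue per coarse voxel,
                   'volumetric' averages the permittivities in the voxel.
    """
    nx, ny, nz = (s // factor for s in labels.shape)
    blocks = labels[:nx * factor, :ny * factor, :nz * factor].reshape(
        nx, factor, ny, factor, nz, factor).transpose(0, 2, 4, 1, 3, 5)
    blocks = blocks.reshape(nx, ny, nz, factor ** 3)
    if method == "winner":
        winner = np.apply_along_axis(lambda v: np.bincount(v).argmax(), -1, blocks)
        return np.vectorize(eps_of_label.get)(winner)
    # volumetric averaging of the permittivities inside each coarse voxel
    eps_fine = np.vectorize(eps_of_label.get)(blocks)
    return eps_fine.mean(axis=-1)

# Illustrative example: 10^3 fine voxels, labels 0=fat, 1=muscle (assumed eps)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(10, 10, 10))
eps_map = {0: 11.0, 1: 55.0}
print(downscale(labels, eps_map, factor=5, method="winner").shape)     # (2, 2, 2)
print(downscale(labels, eps_map, factor=5, method="volumetric")[0, 0, 0])
```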

  9. Problems of allometric scaling analysis: examples from mammalian reproductive biology.

    Science.gov (United States)

    Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K

    2005-05-01

    Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveals that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best
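
    The bivariate allometric model y = a * x^b becomes a straight line on log-log axes, so the scaling exponent is the fitted slope. The sketch below fits synthetic gestation-versus-mass data with ordinary least squares and, for contrast, with the Theil-Sen estimator as one standard outlier-resistant, non-parametric line; this is only an illustration of the outlier issue, not necessarily the authors' new line-fitting technique, and all data are invented.

```python
import numpy as np
from scipy.stats import theilslopes

# Illustrative allometric fit: gestation period vs body mass, y = a * x^b.
# Synthetic data (true exponent 0.25 plus noise and one outlier); not real values.
rng = np.random.default_rng(1)
mass = np.logspace(1, 6, 40)                        # body mass (g)
gestation = 10 * mass**0.25 * rng.lognormal(0, 0.1, 40)
gestation[-1] *= 5                                  # an outlier

logx, logy = np.log10(mass), np.log10(gestation)

b_ols, log_a_ols = np.polyfit(logx, logy, 1)        # ordinary least squares
b_ts, log_a_ts, _, _ = theilslopes(logy, logx)      # robust Theil-Sen line

print(f"OLS exponent       b = {b_ols:.3f}")
print(f"Theil-Sen exponent b = {b_ts:.3f}  (less affected by the outlier)")
```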

  10. Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots

    Science.gov (United States)

    2010-04-01

    Questionnaire ratings for driving the robot while performing the low crawl, negotiating a hill, climbing stairs, and walking, together with participant comments on the HMD. By Elizabeth S. Redden and Rodger A. Pettitt.

  11. Systematic study of the effects of scaling techniques in numerical simulations with application to enhanced geothermal systems

    Science.gov (United States)

    Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Numerical modeling is a well established tool in rock mechanics studies investigating a wide range of problems. Especially for estimating the seismic risk of geothermal energy plants, a realistic rock-mechanical model is needed. To simulate a time-evolving system, two different approaches must be distinguished: implicit methods for solving linear equations are unconditionally stable, while explicit methods are limited by the time step. However, explicit methods are often preferred because of their limited memory demand, their scalability in parallel computing, and the simple implementation of complex boundary conditions. In numerical modeling of explicit elastoplastic dynamics the time step is limited by the rock density. Mass scaling techniques, which increase the rock density artificially by several orders of magnitude, can be used to overcome this limit and significantly reduce computation time. In the context of geothermal energy this is of great interest because in a coupled hydro-mechanical model the time step of the mechanical part is significantly smaller than that of the fluid flow. Mass scaling can also be combined with time scaling, which increases the rate of physical processes, assuming that the processes are rate independent. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically performed by trial and error. Scaling techniques are also often used in commercial software packages, hidden from the untrained user. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this work, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models
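
    In explicit elastodynamics the stable time step follows a CFL-type bound dt <= L_e / c with wave speed c = sqrt(E / rho), so multiplying the density by a factor f multiplies the admissible step by sqrt(f). The short check below uses assumed rock-like properties and element size, not the values of the cited study.

```python
import math

# Stable explicit time step from the CFL-type condition dt <= L_e / c,
# with wave speed c = sqrt(E / rho).  Values are illustrative rock-like
# properties, not those used in the cited study.
E = 50e9          # Young's modulus (Pa)
rho = 2700.0      # physical rock density (kg/m^3)
L_e = 0.01        # smallest element size (m)

def stable_dt(E, rho, L_e):
    c = math.sqrt(E / rho)    # elastic wave speed
    return L_e / c

dt_physical = stable_dt(E, rho, L_e)
for f in (1, 1e2, 1e4, 1e6):          # mass-scaling factors
    dt = stable_dt(E, rho * f, L_e)
    print(f"mass scaling x{f:>9.0e}: dt = {dt:.3e} s  (x{dt / dt_physical:.0f})")
```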

  12. Elemental analyses of groundwater: demonstrated advantage of low-flow sampling and trace-metal clean techniques over standard techniques

    Science.gov (United States)

    Creasey, C. L.; Flegal, A. R.

    The combined use of low-flow purging and sampling with trace-metal-clean techniques provides measurements of trace-element concentrations in groundwater that are more representative than those obtained with standard techniques. Low-flow purging and sampling yields practically undisturbed water samples that are representative of in situ conditions, while trace-metal-clean techniques limit the accidental introduction of contaminants during sampling, storage, and analysis. When these techniques are applied, the resulting trace-element concentrations are markedly lower than those obtained with standard sampling techniques. In a comparison of data from contaminated and control wells at a site in California (USA), the trace-element concentrations in this study were 2 to 1000 times lower than those determined with the conventional techniques used to sample the same wells five months before and one month after these samplings. In particular, the cadmium and chromium concentrations obtained with conventional sampling techniques exceed the California maximum contaminant levels, whereas the concentrations of these two elements obtained in this study are well below those limits. Consequently, the use of low-flow, trace-metal-clean techniques may show that previously reported trace-element contamination of groundwater was erroneous.

  13. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    Science.gov (United States)

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony, and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open source project written in C++. DupTree for Mac OS X, Windows, and Linux along with a sample dataset and an on-line manual are available at http://genome.cs.iastate.edu/CBL/DupTree

  14. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses.

    Directory of Open Access Journals (Sweden)

    Sarah S Richtering

    Full Text Available Electronic health (eHealth) strategies are evolving, making it important to have valid scales to assess eHealth and health literacy. Item response theory methods, such as the Rasch measurement model, are increasingly used for the psychometric evaluation of scales. This paper aims to examine the internal construct validity of an eHealth and health literacy scale using Rasch analysis in a population with moderate to high cardiovascular disease risk. The first 397 participants of the CONNECT study completed the electronic health Literacy Scale (eHEALS) and the Health Literacy Questionnaire (HLQ). Overall Rasch model fit as well as five key psychometric properties were analysed: unidimensionality, response thresholds, targeting, differential item functioning and internal consistency. The eHEALS had good overall model fit (χ2 = 54.8, p = 0.06), ordered response thresholds, reasonable targeting and good internal consistency (person separation index (PSI) 0.90). It did, however, appear to measure two constructs of eHealth literacy. The HLQ subscales (except subscale 5) did not fit the Rasch model (χ2: 18.18-60.60, p: 0.00-0.58) and had suboptimal targeting for most subscales. Subscales 6 to 9 displayed disordered thresholds, indicating participants had difficulty distinguishing between response options. All subscales did, nonetheless, demonstrate moderate to good internal consistency (PSI: 0.62-0.82). Rasch analyses demonstrated that the eHEALS has good measures of internal construct validity, although it appears to capture different aspects of eHealth literacy (e.g. using eHealth and understanding eHealth). Whilst further studies are required to confirm this finding, it may be necessary for these constructs of the eHEALS to be scored separately. The nine HLQ subscales were shown to measure a single construct of health literacy. However, participants' scores may not represent their actual level of ability, as distinction between response categories was unclear for

  15. Experimental technique of stress analyses by neutron diffraction

    International Nuclear Information System (INIS)

    Sun, Guangai; Chen, Bo; Huang, Chaoqiang

    2009-09-01

    The structure and main components of the neutron diffraction stress analysis spectrometer SALSA, as well as the functions and parameters of each component, are presented. The technical characteristics and structural parameters of SALSA are described. On this basis, the choice of gauge volume, the method of positioning the sample, the determination of the diffraction plane and the measurement of the zero-stress reference d0 are discussed. In combination with practical experiments, the basic experimental measurements and related settings are introduced, including the adjustment of components, scattering patterns, data recording and checking, etc. The above can serve as a guide for stress analysis experiments by neutron diffraction and for the construction of neutron stress spectrometers. (authors)

  16. Dynamical scaling in polymer solutions investigated by the neutron spin echo technique

    International Nuclear Information System (INIS)

    Richter, D.; Ewen, B.

    1979-01-01

    Chain dynamics in polymer solutions was investigated by means of the recently developed neutron spin echo spectroscopy. By this technique, it was possible for the first time to verify unambiguously the scaling predictions of the Zimm model in the case of single-chain behaviour and to observe the crossover to many-chain behaviour. The segmental diffusion of single chains exhibits deviations from a simple exponential law, indicating the importance of memory effects. (orig.)

  17. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  18. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (FALSIRE II)

    Energy Technology Data Exchange (ETDEWEB)

    Bass, B.R.; Pugh, C.E.; Keeney, J. [Oak Ridge National Lab., TN (United States); Schulz, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)

    1996-11-01

    A summary of Phase II of the Project for FALSIRE is presented. FALSIRE was created by the Fracture Assessment Group (FAG) of the OECD/NEA's Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 3. FALSIRE I in 1988 assessed fracture methods through interpretive analyses of 6 large-scale fracture experiments in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading. In FALSIRE II, experiments examined cleavage fracture in RPV steels for a wide range of materials, crack geometries, and constraint and loading conditions. The cracks were relatively shallow, in the transition temperature region. Included were cracks showing either unstable extension or two stages of extension under transient thermal and mechanical loads. Crack initiation was also investigated in connection with clad surfaces and with biaxial load. Within FALSIRE II, comparative assessments were performed for 7 reference fracture experiments based on 45 analyses received from 22 organizations representing 12 countries. Temperature distributions in thermal shock loaded samples were approximated with high accuracy and small scatter bands. Structural response was predicted reasonably well; discrepancies could usually be traced to the assumed material models and approximated material properties. Almost all participants elected to use the finite element method.

  19. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (FALSIRE II)

    International Nuclear Information System (INIS)

    Bass, B.R.; Pugh, C.E.; Keeney, J.; Schulz, H.; Sievers, J.

    1996-11-01

    A summary of Phase II of the Project for FALSIRE is presented. FALSIRE was created by the Fracture Assessment Group (FAG) of the OECD/NEA's Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 3. FALSIRE I in 1988 assessed fracture methods through interpretive analyses of 6 large-scale fracture experiments in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading. In FALSIRE II, experiments examined cleavage fracture in RPV steels for a wide range of materials, crack geometries, and constraint and loading conditions. The cracks were relatively shallow, in the transition temperature region. Included were cracks showing either unstable extension or two stages of extension under transient thermal and mechanical loads. Crack initiation was also investigated in connection with clad surfaces and with biaxial load. Within FALSIRE II, comparative assessments were performed for 7 reference fracture experiments based on 45 analyses received from 22 organizations representing 12 countries. Temperature distributions in thermal shock loaded samples were approximated with high accuracy and small scatter bands. Structural response was predicted reasonably well; discrepancies could usually be traced to the assumed material models and approximated material properties. Almost all participants elected to use the finite element method.

  20. TA3 - Dosimetry and instrumentation supply of the M-Fish technique to the Fish-3 painting technique for analysing translocations: A radiotherapy-treated patient study

    International Nuclear Information System (INIS)

    Pouzoulet, F.; Roch-Lefevre, S.; Giraudet, A.L.; Vaurijoux, A.; Voisin, P.A.; Buard, V.; Delbos, M.; Voisin, Ph.; Roy, L.; Bourhis, J.

    2006-01-01

    Purpose: Currently, the study of chromosome translocations is the best method to estimate the dose from an old radiation exposure. The Fluorescence In Situ Hybridization (F.I.S.H.) technique allows easy detection of this kind of aberration. However, as only a small number of chromosomes is usually painted, some bias could skew the result. To evaluate the advantage of using full-genome staining (the M-F.I.S.H. technique) compared with three-chromosome labelling (F.I.S.H.-3 painting), we compared translocation yields in radiotherapy-treated patients. Methods: Chromosome aberration analyses were performed on peripheral blood lymphocyte cultures from two patients treated for throat cancer by radiotherapy. Blood samples were obtained before and during the treatment and six or four months later. For each sample, a dicentrics analysis was performed together with a translocation analysis using either F.I.S.H.-3 painting or M-F.I.S.H. Results: Comparing the results of the F.I.S.H.-3 painting technique and the M-F.I.S.H. technique revealed significant differences. The translocation yield appeared stable with the F.I.S.H.-3 painting technique, whereas this was not the case with the M-F.I.S.H. technique. This difference in results was explained by the bias introduced by the F.I.S.H.-3 painting technique in the visualisation of complex aberrations. Furthermore, we found a clone bearing a translocation involving a painted chromosome. Conclusions: Given the potential bias of F.I.S.H.-3 painting in translocation studies, the M-F.I.S.H. technique should provide more precise and reproducible results. Because it is more difficult to implement, however, it seems hardly applicable to retrospective dosimetry in place of the F.I.S.H.-3 painting technique. (authors)

  1. TA3 - Dosimetry and instrumentation supply of the M-Fish technique to the Fish-3 painting technique for analysing translocations: A radiotherapy-treated patient study

    Energy Technology Data Exchange (ETDEWEB)

    Pouzoulet, F.; Roch-Lefevre, S.; Giraudet, A.L.; Vaurijoux, A.; Voisin, P.A.; Buard, V.; Delbos, M.; Voisin, Ph.; Roy, L. [Institut de Radioprotection et de Surete Nucleaire, Lab. de Dosimetrie Biologique, 92 - Fontenay aux Roses (France); Bourhis, J. [Laboratoire UPRES EA 27-10, Radiosensibilite des Tumeurs et Tissus sains, PR1, 94 - Villejuif (France)

    2006-07-01

    Purpose: Currently, the study of chromosome translocations is the best method to estimate the dose from an old radiation exposure. The Fluorescence In Situ Hybridization (F.I.S.H.) technique allows easy detection of this kind of aberration. However, as only a small number of chromosomes is usually painted, some bias could skew the result. To evaluate the advantage of using full-genome staining (the M-F.I.S.H. technique) compared with three-chromosome labelling (F.I.S.H.-3 painting), we compared translocation yields in radiotherapy-treated patients. Methods: Chromosome aberration analyses were performed on peripheral blood lymphocyte cultures from two patients treated for throat cancer by radiotherapy. Blood samples were obtained before and during the treatment and six or four months later. For each sample, a dicentrics analysis was performed together with a translocation analysis using either F.I.S.H.-3 painting or M-F.I.S.H. Results: Comparing the results of the F.I.S.H.-3 painting technique and the M-F.I.S.H. technique revealed significant differences. The translocation yield appeared stable with the F.I.S.H.-3 painting technique, whereas this was not the case with the M-F.I.S.H. technique. This difference in results was explained by the bias introduced by the F.I.S.H.-3 painting technique in the visualisation of complex aberrations. Furthermore, we found a clone bearing a translocation involving a painted chromosome. Conclusions: Given the potential bias of F.I.S.H.-3 painting in translocation studies, the M-F.I.S.H. technique should provide more precise and reproducible results. Because it is more difficult to implement, however, it seems hardly applicable to retrospective dosimetry in place of the F.I.S.H.-3 painting technique. (authors)

  2. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    Full Text Available This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
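
    A minimal sketch of the core similarity-analysis step discussed in the review: build a representational dissimilarity matrix (1 - Pearson correlation between condition patterns) and embed it in two dimensions with metric multidimensional scaling. The data are random placeholders and scikit-learn's MDS is used purely for convenience; it is not the specific tooling of the reviewed studies.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical activation patterns: 6 stimulus conditions x 200 voxels.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(6, 200))

# Representational dissimilarity matrix: 1 - Pearson correlation between
# the multivariate patterns of every pair of conditions.
rdm = 1.0 - np.corrcoef(patterns)

# Embed the similarity structure in 2-D for visual comparison across
# individuals, brain regions, or data acquisition methods.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rdm)
print(coords.shape)   # (6, 2): one 2-D point per condition
```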

  3. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminry Report

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two

  4. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within a low-dimensional subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that ELG/C is a very efficient sparse-matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  5. Gallium Nitride: A Nano scale Study using Electron Microscopy and Associated Techniques

    International Nuclear Information System (INIS)

    Mohammed Benaissa; Vennegues, Philippe

    2008-01-01

    A complete nanoscale study of GaN thin films doped with Mg is presented. This study was carried out using TEM and associated techniques such as HREM, CBED, EDX and EELS. It was found that the presence of triangular defects (a few nanometres in size) within the GaN:Mg films was at the origin of unexpected electrical and optical behaviours, such as a decrease in the free hole density at high Mg doping. It is shown that these defects are inversion domains bounded by inversion domain boundaries. (author)

  6. Development of a Body Image Concern Scale using both exploratory and confirmatory factor analyses in Chinese university students

    Directory of Open Access Journals (Sweden)

    He W

    2017-05-01

    Full Text Available Wenxin He, Qiming Zheng, Yutian Ji, Chanchan Shen, Qisha Zhu, Wei Wang Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, People's Republic of China Background: Body dysmorphic disorder is prevalent in the general population and in psychiatric, dermatological, and plastic-surgery patients, but a structure-validated, comprehensive self-report measure of body image concerns, established through both exploratory and confirmatory factor analyses, is lacking. Methods: We composed a 34-item matrix targeting body image concerns and trialled it in 328 male and 365 female Chinese university students. Answers to the matrix were subjected to exploratory factor analyses, retention of qualified items, and confirmatory factor analyses of the latent structures. Results: Six latent factors, namely Social Avoidance, Appearance Dissatisfaction, Preoccupation with Reassurance, Perceived Distress/Discrimination, Defect Hiding, and Embarrassment in Public, were identified. The factors and their respective items compose a 24-item questionnaire named the Body Image Concern Scale. Each factor showed satisfactory internal reliability, and the intercorrelations between the factors were at a moderate level. Women scored significantly higher than men on Appearance Dissatisfaction, Preoccupation with Reassurance, and Defect Hiding. Conclusion: The Body Image Concern Scale demonstrated structural validity and gender differences in Chinese university students. Keywords: body dysmorphic disorder, body image, factor analysis, questionnaire development

  7. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  10. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  11. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  12. Imaging Catalysts at Work: A Hierarchical Approach from the Macro- to the Meso- and Nano-scale

    DEFF Research Database (Denmark)

    Grunwaldt, Jan-Dierk; Wagner, Jakob Birkedal; Dunin-Borkowski, Rafal E.

    2013-01-01

    This review highlights the importance of developing multi-scale characterisation techniques for analysing operating catalysts in their working environment. We emphasise that a hierarchy of in situ techniques providing macro-, meso- and nano-scale information is required to elucidate and optimise ... /heat/mass transport gradients in shaped catalysts and catalyst grains, and c) meso- and nano-scale information about particles and clusters, whose physical and electronic properties are linked directly to the micro-kinetic behaviour of the catalysts. Techniques such as X-ray diffraction (XRD), infrared (IR), Raman, X-ray photoelectron spectroscopy (XPS), UV/Vis, and X-ray absorption spectroscopy (XAS), which have mainly provided global atomic-scale information, are being developed to provide the same information on a more local scale, often with sub-second time resolution. X-ray microscopy, both in the soft and more recently...

  13. Pilot-scale reactor activation facility at SRL

    International Nuclear Information System (INIS)

    Bowman, W.W.

    1976-01-01

    The Hydrogeochemical and Stream Sediment Reconnaissance portion of the National Uranium Resource Evaluation program requires an analytical technique for uranium and other elements. Based on an automated absolute activation analysis technique using 252Cf, a pilot-scale facility installed in a production reactor has provided analyses for 2800 samples. Key features include: an automated sample transport system, a delayed neutron detector, two GeLi detectors, a loader, and an unloader, with all components controlled by a microprocessor; a dedicated PDP-9 computer and pulse height analyzer; and correlation and reduction of acquired data by a series of programs using an IBM 360/195 computer. The facility was calibrated with elemental and isotopic standards. Results of analyses of standard reference materials and operational detection limits for typical sediment samples are presented. Plans to increase sample throughput are discussed briefly.

  14. An automatic system to search, acquire, and analyse chromosomal aberrations obtained using FISH technique

    International Nuclear Information System (INIS)

    Esposito, R.D.

    2003-01-01

    Full text: Chromosomal aberrations (CA) analysis in peripheral blood lymphocytes is useful both in prenatal diagnoses and cancer cytogenetics, as well as in toxicology to determine the biologically significant dose of specific, both physical and chemical, genotoxic agents to which an individual is exposed. A useful cytogenetic technique for CA analysis is Fluorescence-in-situ-Hybridization (FISH), which simplifies the automatic identification and characterisation of aberrations, allowing the visualisation of chromosomes as bright signals on a dark background, and a fast analysis of stable aberrations, which are particularly interesting for late effects. The main limitation of CA analysis is the rarity with which these events occur, and therefore the time necessary to single out a statistically significant number of aberrant cells. In order to address this problem, a prototype system, capable of automatically searching, acquiring, and recognising chromosomal images of samples prepared using FISH, has been developed. The system is able to score a large number of samples in a reasonable time using predefined search criteria. The system is based on the appropriately implemented and characterised automatic metaphase finder Metafer4 (MetaSystems), coupled with a specific module for the acquisition of high magnification metaphase images with any combination of fluorescence filters. These images are then analysed and classified using our software. The prototype is currently capable of separating normal metaphase images from presumed aberrant ones. This system is currently in use in our laboratories both by ourselves and by other researchers not involved in its development, in order to carry out analyses of CAs induced by ionising radiation. The prototype allows simple acquisition and management of large quantities of images and makes it possible to carry out methodological studies, such as the comparison of results obtained by different operators, as well as increasing the

  15. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

    To understand the degradation of nuclear materials under irradiation, it is essential to know as much as possible about each phenomenon from multi-scale points of view: the micro-scale at the atomic level, the macro-scale at the structural level, and the intermediate level between them. In this study, aimed at meso-scale materials (100A ∼ 2μm), computer technology approaching the problem from both the micro- and macro-scales was developed, including modeling and computer applications using computational science and technology methods. A grid-technology environment for the multi-scale calculation was also prepared. The software and the MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  16. Tools and Techniques for Basin-Scale Climate Change Assessment

    Science.gov (United States)

    Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.

    2012-12-01

    The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies to explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov Chain techniques. Resampling can also be conditioned on climate change projections from, e.g., downscaled GCM projections to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. The resulting data are imported directly into the decision model. Different model files can represent infrastructure alternatives and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other
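
    As an illustration of the k-nearest-neighbour resampling step mentioned in this record, the following minimal Python sketch generates synthetic annual-flow traces from a historical series. It is not the Hydrology Simulator itself; the lag-one conditioning, the 1/rank weights and the toy data are illustrative assumptions.

```python
import numpy as np

def knn_resample(history, length, k=5, seed=0):
    """Generate one synthetic trace by k-nearest-neighbour bootstrap.

    At each step the k historical values closest to the current value are
    found, one of them is chosen with probability proportional to 1/rank,
    and its historical successor becomes the next simulated value.
    """
    rng = np.random.default_rng(seed)
    history = np.asarray(history, dtype=float)
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()

    trace = [history[rng.integers(len(history) - 1)]]
    for _ in range(length - 1):
        # distances to all historical values that have a successor
        d = np.abs(history[:-1] - trace[-1])
        neighbours = np.argsort(d)[:k]       # indices of k closest years
        pick = rng.choice(neighbours, p=weights)
        trace.append(history[pick + 1])      # successor of the chosen year
    return np.array(trace)

# toy example: 60 years of synthetic annual flows
hist = 1000 + 20 * np.random.default_rng(1).standard_normal(60).cumsum()
ensemble = np.stack([knn_resample(hist, 30, seed=s) for s in range(100)])
print(ensemble.shape)  # (100, 30): 100 traces of 30 years each
```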

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  18. Application of digital-image-correlation techniques in analysing ...

    Indian Academy of Sciences (India)

    Basis theory of strain analysis using the digital image correlation method .... Type 304N Stainless Steel (Modulus of Elasticity = 193 GPa, Tensile Yield .... also proves the accuracy of the qualitative analyses by using the DIC ... We thank the National Science Council of Taiwan for supporting this research through grant No.

  19. Contact mechanics at nanometric scale using nanoindentation technique for brittle and ductile materials.

    Science.gov (United States)

    Roa, J J; Rayon, E; Morales, M; Segarra, M

    2012-06-01

    In recent years, nanoindentation (the instrumented indentation technique) has become a powerful tool to study mechanical properties at the micro/nanometric scale (commonly hardness, elastic modulus and the stress-strain curve). In this review, the different contact mechanisms (elastic and elasto-plastic) are discussed, the recent patents for each mechanism (elastic and elasto-plastic) are summarized in detail, and the basic equations employed to determine the mechanical behaviour of brittle and ductile materials are described.

  20. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Science.gov (United States)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, through temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to further understand the nature of complex dynamics in hydrology.
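
    The idea of a time-varying scaling exponent can be illustrated with a short sketch. The following Python code estimates a Hurst exponent by simple rescaled-range analysis on sliding windows of a synthetic series; it is not the authors' TS-LHE estimator, and the window length and data are assumptions made for illustration.

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    ns = np.unique(np.logspace(1, np.log10(len(x) // 2), 10).astype(int))
    rs = []
    for n in ns:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)
        s = chunks.std(axis=1)
        rs.append(np.mean(r[s > 0] / s[s > 0]))
    # slope of log(R/S) versus log(n) is the Hurst exponent
    return np.polyfit(np.log(ns), np.log(rs), 1)[0]

def local_hurst(series, window=365):
    """Hurst exponent on sliding windows, giving a time-varying estimate."""
    return np.array([hurst_rs(series[i:i + window])
                     for i in range(0, len(series) - window, window // 4)])

# toy daily groundwater-level series: correlated noise plus an offset
rng = np.random.default_rng(0)
levels = np.cumsum(rng.standard_normal(3000)) * 0.01 + 10.0
print(local_hurst(levels)[:5])
```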

  1. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    Science.gov (United States)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

    Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  2. Application of multivariate statistical techniques in microbial ecology.

    Science.gov (United States)

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
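
    As a minimal example of one exploratory technique covered by such reviews, the sketch below runs a principal component analysis on a small, synthetic taxon-abundance table; the Hellinger preprocessing and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# toy abundance table: 8 samples x 5 taxa (counts)
rng = np.random.default_rng(42)
counts = rng.poisson(lam=[20, 5, 50, 2, 10], size=(8, 5)).astype(float)

# Hellinger transformation: square root of relative abundances,
# a common preprocessing step before ordination of community data
rel = counts / counts.sum(axis=1, keepdims=True)
hellinger = np.sqrt(rel)

pca = PCA(n_components=2)
scores = pca.fit_transform(hellinger)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("sample ordination scores:\n", scores)
```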

  3. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. However, the number of punches that can normally be obtained from a single specimen card is often insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  4. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  5. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States); Edwards, A.L. [Oak Ridge National Lab., TN (United States)]|[Lawrence Livermore National Lab., CA (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    International Nuclear Information System (INIS)

    Petrie, L.M.; Jordon, W.C.; Edwards, A.L.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  7. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the 'Sizewell-B' prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with 'bonded' and 'unbonded' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. (author)

  8. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the Sizewell-B prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with ''bonded'' and ''unbonded'' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. 5 refs., 7 figs

  9. On Rigorous Drought Assessment Using Daily Time Scale: Non-Stationary Frequency Analyses, Revisited Concepts, and a New Method to Yield Non-Parametric Indices

    Directory of Open Access Journals (Sweden)

    Charles Onyutha

    2017-10-01

    Full Text Available Some of the problems in drought assessments are that: analyses tend to focus on coarse temporal scales, many of the methods yield skewed indices, a few terminologies are ambiguously used, and analyses comprise an implicit assumption that the observations come from a stationary process. To solve these problems, this paper introduces non-stationary frequency analyses of quantiles. How to use non-parametric rescaling to obtain robust indices that are not (or are only minimally) skewed is also introduced. To avoid ambiguity, some concepts on, e.g., incidence, extremity, etc., were revisited through a shift from the monthly to the daily time scale. Demonstrations of the introduced methods were made using daily flow and precipitation insufficiency (precipitation minus potential evapotranspiration) from the Blue Nile basin in Africa. Results show that, when a significant trend exists in extreme events, stationarity-based quantiles can be far different from those when non-stationarity is considered. The introduced non-parametric indices were found to closely agree with the well-known standardized precipitation evapotranspiration indices in many aspects but skewness. Apart from revisiting some concepts, the advantages of the use of fine instead of coarse time scales in drought assessment were given. The links for obtaining freely downloadable tools on how to implement the introduced methods were provided.
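
    A minimal sketch of one non-parametric rescaling of this kind (not necessarily the paper's exact formulation) maps empirical Weibull plotting-position probabilities through the inverse standard-normal CDF, which yields an index that is distribution-free and essentially unskewed.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nonparametric_index(deficit):
    """Empirical, distribution-free standardized index.

    Ranks the series, converts ranks to Weibull plotting-position
    probabilities p = rank / (n + 1), and maps them through the inverse
    standard-normal CDF, avoiding any parametric distribution fit.
    """
    deficit = np.asarray(deficit, dtype=float)
    p = rankdata(deficit) / (len(deficit) + 1.0)
    return norm.ppf(p)

# toy daily precipitation-minus-PET series (mm/day)
rng = np.random.default_rng(3)
pi = rng.gamma(shape=2.0, scale=2.0, size=3650) - 4.0
index = nonparametric_index(pi)
print(index.min(), index.max())   # roughly symmetric, minimally skewed
```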

  10. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In recent years, structural monitoring of large infrastructures (buildings, dams, bridges or, more generally, man-made structures) has attracted increased attention due to the growing interest in safety and security issues and risk assessment through early detection. In this framework, the aim of the paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic fiber sensor based on the Brillouin scattering phenomenon, which allows spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (metres). The second technique is based on the use of Ground Penetrating Radar (GPR), which can provide detailed images of the inner status of the structure (with a spatial resolution of less than tens of centimetres), but does not allow temporally continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  11. Fracture analyses of WWER reactor pressure vessels

    International Nuclear Information System (INIS)

    Sievers, J.; Liu, X.

    1997-01-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab

  12. Fracture analyses of WWER reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, J; Liu, X [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    1997-09-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab.

  13. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, the major hurdle for the further development of nanotechnology depends on finding economic ways to fabricate such nanostructures in large-scale. Here, we demonstrate how to achieve low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nano-fabrication technique that has the ability to fabricate nano-triangle arrays that cover a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus the incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing. Metal nanodisk arrays can be fabricated following metal evaporation and lift-off processes. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate that more complicated nanostructures, such as nanodisk oligomers, can be fabricated by combining several other key techniques, such as angled exposure and deposition, to modify these methods and obtain various metallic nanostructures. The metallic structures are of high fidelity and large in scale. The metallic nanostructures can be transformed into semiconductor nanostructures and be used in several green technology applications.

  14. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  15. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyze wind speed and energy (renewable and environmentally friendly energy). Solar and wind are main sources of energy that allow farmers to transfer the kinetic energy captured by the windmill for pumping water, drying crops, heating systems of greenhouses, rural electrification, or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study tried to initiate the data-gathering process for wavelet analyses, different scale effects and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37° 50" N, longitude 30° 33" E, and height 1200 m above mean sea level at a hill near the Süleyman Demirel University campus. 10-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) have been recorded by a data logger between July 2001 and February 2002. Wind speed values changed within the range of 0 m/s to 54 m/s. The annual mean speed value is 4.5 m/s at the 10 m ground level. Prevalent wind
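
    A wavelet analysis of such a wind-speed record could look like the following sketch, assuming the PyWavelets package is available; the synthetic 10-minute series, the Morlet wavelet and the scale range are illustrative choices, not the study's actual processing chain.

```python
import numpy as np
import pywt  # PyWavelets, assumed to be installed

# synthetic 10-minute-average wind speed over ~30 days (m/s)
n = 30 * 24 * 6                       # 6 samples per hour
t = np.arange(n) / 6.0                # time in hours
rng = np.random.default_rng(7)
speed = (4.5 + 1.5 * np.sin(2 * np.pi * t / 24)      # diurnal cycle
         + 0.8 * rng.standard_normal(n)).clip(min=0)

# continuous wavelet transform with a Morlet wavelet
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(speed, scales, 'morl', sampling_period=1 / 6.0)

# wavelet power averaged over time, per scale: a crude global spectrum
power = (np.abs(coeffs) ** 2).mean(axis=1)
dominant = 1.0 / freqs[np.argmax(power)]
print(f"dominant period ~ {dominant:.1f} hours")  # expected near the 24 h cycle
```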

  16. Preferential flow from pore to landscape scales

    Science.gov (United States)

    Koestel, J. K.; Jarvis, N.; Larsbo, M.

    2017-12-01

    In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and those of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks control key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  17. Analysing human genomes at different scales

    DEFF Research Database (Denmark)

    Liu, Siyang

    The thriving of the Next-Generation sequencing (NGS) technologies in the past decade has dramatically revolutionized the field of human genetics. We are experiencing a wave of several large-scale whole genome sequencing studies of humans in the world. Those studies vary greatly regarding cohort...... will be reflected by the analysis of real data. This thesis covers studies in two human genome sequencing projects that distinctly differ in terms of studied population, sample size and sequencing depth. In the first project, we sequenced 150 Danish individuals from 50 trio families to 78x coverage....... The sophisticated experimental design enables high-quality de novo assembly of the genomes and provides a good opportunity for mapping the structural variations in the human population. We developed the AsmVar approach to discover, genotype and characterize the structural variations from the assemblies. Our...

  18. Novel Space Exploration Technique for Analysing Planetary Atmospheres

    OpenAIRE

    Dekoulis, George

    2010-01-01

    The chapter presents a new reconfigurable wide-beam radio interferometer system for analysing planetary atmospheres. The system operates at frequencies where the ionisation of the planetary plasma regions induces strong attenuation. For Earth, the attenuation is indistinguishable from the CMB at frequencies over 50 MHz. The system introduces a set of advanced specifications to this field of science, previously unseen in similar suborbital experiments. The reprogrammable dynamic range of the ...

  19. The socio-spatial dysfunction of large housing estates in Algeria: wayfinding analysis using the "movement traces" method and morphological analysis (space syntax) with the "DepthMap" software

    Directory of Open Access Journals (Sweden)

    Amara Hima

    2018-03-01

    Full Text Available Abstract: The syntactic analysis of visibility (Visibility Graph Analysis – VGA) and of accessibility (All Line Analysis – ALA) with the "DepthMap© (UCL, London)" software, together with the analysis of wayfinding dysfunction using the "movement traces" method, are used in this paper to develop a model for analysing and investigating the impact of spatial changes on the socio-spatial dysfunction of wayfinding, and thus on urban reproduction, in particular the transformation of facades and the appropriation of outdoor spaces in large housing estates in Algeria. The case studies presented are the 1000-dwelling estate (cité 1000 logements) in Biskra and the 500-dwelling estate (cité 500 logements) in M'sila. To test this hypothesis, a hybrid analysis model was developed by crossing the results of the two techniques. The resulting interference diagram shows that most pedestrians prefer to follow the short, straight axes, characterised by strong syntactic properties of visibility and accessibility (integration, connectivity and intelligibility), towards the adjacent facilities and the centres of the two estates. These routes have an impact on facade transformations and the appropriation of outdoor spaces. The developed model opens the way to future research on the quantification, modelling and simulation of the urban reproduction process, notably using cellular automata.

  20. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  1. Scaling Transformation in the Rembrandt Technique

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Leleur, Steen

    2013-01-01

    This paper examines a decision support system (DSS) for the appraisal of complex decision problems using multi-criteria decision analysis (MCDA). The DSS makes use of a structured hierarchical approach featuring the multiplicative AHP also known as the REMBRANDT technique. The paper addresses...... of a conventional AHP calculation in order to examine what impact the choice of progression factors as well as the choice of technique have on the decision making. Based on this a modified progression factor for the calculation of scores for the alternatives in REMBRANDT is suggested while the progression factor...

  2. Analysed potential of big data and supervised machine learning techniques in effectively forecasting travel times from fused data

    Directory of Open Access Journals (Sweden)

    Ivana Šemanjski

    2015-12-01

    Full Text Available Travel time forecasting is an interesting topic for many ITS services. The increased availability of data collection sensors increases the availability of predictor variables but also highlights the heavy processing issues related to this big data availability. In this paper we aimed to analyse the potential of big data and supervised machine learning techniques in effectively forecasting travel times. For this purpose we used fused data from three data sources (Global Positioning System vehicle tracks, road network infrastructure data and meteorological data) and four machine learning techniques (k-nearest neighbours, support vector machines, boosting trees and random forest). To evaluate the forecasting results we compared them between different road classes in the context of absolute values, measured in minutes, and the mean squared percentage error. For the road classes with high average speeds and long road segments, machine learning techniques forecasted travel times with small relative error, while for the road classes with small average speeds and segment lengths this was a more demanding task. All three data sources proved to have a high impact on the travel time forecast accuracy, and the best results (taking into account all road classes) were achieved for the k-nearest neighbours and random forest techniques.
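
    Two of the four techniques named above (k-nearest neighbours and random forest), together with the mean squared percentage error metric, can be sketched as follows; the fused features, the synthetic travel times and the hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# illustrative fused features: segment length (km), speed limit (km/h),
# hour of day, rain intensity (mm/h)
X = np.column_stack([
    rng.uniform(0.5, 10.0, n),
    rng.choice([50, 70, 90, 130], n),
    rng.integers(0, 24, n),
    rng.exponential(0.5, n),
])
# synthetic "true" travel time (minutes) with peak-hour and weather effects
peak = np.isin(X[:, 2], [7, 8, 16, 17]).astype(float)
y = (60 * X[:, 0] / X[:, 1] * (1 + 0.4 * peak + 0.1 * X[:, 3])
     * np.exp(rng.normal(0, 0.05, n)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

def mspe(y_true, y_pred):
    """Mean squared percentage error, the relative metric used for comparison."""
    return np.mean(((y_true - y_pred) / y_true) ** 2) * 100

for name, model in [("k-NN", KNeighborsRegressor(n_neighbors=10)),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=1))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: MSPE = {mspe(y_te, model.predict(X_te)):.2f} %")
```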

  3. The scale analysis sequence for LWR fuel depletion

    International Nuclear Information System (INIS)

    Hermann, O.W.; Parks, C.V.

    1991-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs
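
    The alternating transport/depletion control flow of a SAS2H-style sequence can be pictured with the sketch below. The functions are hypothetical placeholders standing in for XSDRNPM-S and ORIGEN-S calls; they are not SCALE interfaces, and the numbers are arbitrary.

```python
# Minimal control-flow sketch of an alternating transport/depletion sequence
# of the SAS2H kind. The functions below are hypothetical placeholders that
# stand in for XSDRNPM-S and ORIGEN-S; they are not SCALE interfaces.

def transport_spectrum_calc(composition):
    """Stand-in for a 1-D lattice transport calculation: returns
    burnup-dependent one-group cross sections for the current composition."""
    # trivially scale a nominal absorption cross section with fissile content
    return {"sigma_a": 1.0 + 0.5 * composition["U235"]}

def point_depletion_step(composition, xs, days, power):
    """Stand-in for a point-depletion step: burns fissile material in
    proportion to power, time and the supplied cross sections."""
    burn = 1e-4 * power * days * xs["sigma_a"]
    new = dict(composition)
    new["U235"] = max(composition["U235"] - burn, 0.0)
    new["FP"] = composition["FP"] + burn        # lump fission products
    return new

composition = {"U235": 0.04, "FP": 0.0}         # initial fuel composition
for step, (days, power) in enumerate([(100, 40.0)] * 6, start=1):
    xs = transport_spectrum_calc(composition)    # spectrum at this burnup
    composition = point_depletion_step(composition, xs, days, power)
    print(f"step {step}: U235 = {composition['U235']:.4f}")
# a final depletion/decay pass would reuse the stored burnup-dependent xs
```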

  4. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic type state estimators for energy management in electric power systems. Various dynamic type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper precisely focuses on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of the high voltage electric transmission systems
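
    The core predict/update cycle of an extended Kalman filter, around which such dynamic estimators are built, is sketched below in generic form; the two-state model, the measurement functions and the noise covariances are illustrative assumptions, and the decomposition-aggregation hierarchy discussed in the paper is not shown.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement vector
    f, h : nonlinear state-transition and measurement functions
    F, H : their Jacobians evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # prediction step (this is where a load forecast would drive the model)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update step with the new measurement
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 2-state example (say, a voltage magnitude and angle at one bus)
f = lambda x: x                                        # random-walk model
h = lambda x: np.array([x[0], x[0] * np.cos(x[1])])    # nonlinear measurements
x, P = np.array([1.0, 0.1]), np.eye(2) * 0.1
F = np.eye(2)
H = np.array([[1.0, 0.0], [np.cos(x[1]), -x[0] * np.sin(x[1])]])
Q, R = np.eye(2) * 1e-4, np.eye(2) * 1e-3
z = np.array([1.02, 1.00])
x, P = ekf_step(x, P, z, f, h, F, H, Q, R)
print("updated state estimate:", x)
```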

  5. Bench top and portable mineral analysers, borehole core analysers and in situ borehole logging

    International Nuclear Information System (INIS)

    Howarth, W.J.; Watt, J.S.

    1982-01-01

    Bench top and portable mineral analysers are usually based on balanced filter techniques using scintillation detectors or on low resolution proportional detectors. The application of radioisotope x-ray techniques to in situ borehole logging is increasing, and is particularly suited for logging for tin and higher atomic number elements

  6. Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell

    Science.gov (United States)

    Lengden, Michael; Cunningham, Robert; Johnstone, Walter

    2011-10-01

    A new in-line, real time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to prove the fast response and accuracy of the technology as compared to a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon-dioxide and carbon-monoxide concentrations.

  7. Volume changes at macro- and nano-scale in epoxy resins studied by PALS and PVT experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Somoza, A. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina) and CICPBA, Pinto 399, B7000GHG Tandil (Argentina)]. E-mail: asomoza@exa.unicen.edu.ar; Salgueiro, W. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina); Goyanes, S. [LPMPyMC, Depto. de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Ramos, J. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain); Mondragon, I. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain)

    2007-02-15

    A systematic study on changes in the volumes at macro- and nano-scale in epoxy systems cured with selected aminic hardeners at different pre-cure temperatures is presented. Free- and macroscopic specific-volumes were measured by PALS and pressure-volume-temperature techniques, respectively. An analysis of the relation existing between macro- and nano-scales of the thermosetting networks developed by the different chemical structures is shown. The result obtained indicates that the structure of the hardeners governs the packing of the molecular chains of the epoxy network.

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries.

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.; Knight, J.R.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries

  10. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...

  11. FALSIRE Phase II. CSNI project for Fracture Analyses of Large-Scale International Reference Experiments (Phase II). Comparison report

    International Nuclear Information System (INIS)

    Sievers, J.; Schulz, H.; Bass, R.; Pugh, C.; Keeney, J.

    1996-11-01

    A summary of Phase II of the Project for Fracture Analysis of Large-Scale International Reference Experiments (FALSIRE) is presented. A FALSIRE II Workshop focused on analyses of reference fracture experiments. More than 30 participants representing 22 organizations from 12 countries took part in the workshop. Final results for 45 analyses of the reference experiments were received from the participating analysts. For each experiment, analysis results provided estimates of variables that include temperature, crack-mouth-opening displacement, stress, strain, and applied K and J values. The data were sent electronically to the Organizing Committee, who assembled the results into a comparative data base using a special-purpose computer program. A comparative assessment and discussion of the analysis results are presented in the report. Generally, structural responses of the test specimens were predicted with tolerable scatter bands. (orig./DG)

  12. Framing scales and scaling frames

    NARCIS (Netherlands)

    van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  13. Comparative CO2 flux measurements by eddy covariance technique using open- and closed-path gas analysers over the equatorial Pacific Ocean

    Directory of Open Access Journals (Sweden)

    Fumiyoshi Kondo

    2012-04-01

    Full Text Available Direct comparison of air–sea CO2 fluxes by open-path eddy covariance (OPEC) and closed-path eddy covariance (CPEC) techniques was carried out over the equatorial Pacific Ocean. Previous studies over oceans have shown that the CO2 flux by OPEC was larger than the bulk CO2 flux using the gas transfer velocity estimated by the mass balance technique, while the CO2 flux by CPEC agreed with the bulk CO2 flux. We investigated this long-standing conflict between the CO2 flux by the eddy covariance technique and the bulk CO2 flux, and whether the CO2 fluctuation attenuated by the closed-path analyser can be measured with sufficient time response to resolve the small CO2 flux over oceans. Our results showed that a closed-path analyser using a short sampling tube and a high-volume air pump can be used to measure the small CO2 fluctuation over the ocean. Further, the underestimation of the CO2 flux by CPEC due to the attenuated fluctuation can be corrected by the bandpass covariance method; its contribution was almost identical to that of the H2O flux. The CO2 flux by CPEC agreed with the total CO2 flux by OPEC with density correction; however, both of them are one order of magnitude larger than the bulk CO2 flux.
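
    The eddy covariance flux itself is simply the covariance of vertical wind and CO2 density fluctuations over an averaging period, as in the sketch below; the synthetic 10 Hz record and the constant correction factor standing in for the bandpass covariance adjustment are illustrative assumptions only.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """CO2 flux as the covariance of vertical wind speed w (m/s) and CO2
    density c (mg/m^3) over the averaging period, after removing the means."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))   # mg m-2 s-1

# toy 30-minute record at 10 Hz with a weak correlated (downward) component
n = 30 * 60 * 10
rng = np.random.default_rng(0)
w = 0.3 * rng.standard_normal(n)                      # vertical wind (m/s)
c = 700.0 - 0.02 * w + 0.5 * rng.standard_normal(n)   # CO2 density (mg/m^3)
flux = eddy_covariance_flux(w, c)

# a closed-path system attenuates the high-frequency part of c; a bandpass
# covariance correction rescales the attenuated flux by the ratio of the
# covariances in a band both sensors resolve (represented here only by a
# constant illustrative factor, not a value from the paper)
correction_factor = 1.15
print(flux, flux * correction_factor)
```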

  14. Anti-control of chaos of single time-scale brushless DC motor.

    Science.gov (United States)

    Ge, Zheng-Ming; Chang, Ching-Ming; Chen, Yen-Sheng

    2006-09-15

    Anti-control of chaos of single time-scale brushless DC motors is studied in this paper. In order to analyse a variety of periodic and chaotic phenomena, we employ several numerical techniques such as phase portraits, bifurcation diagrams and Lyapunov exponents. Anti-control of chaos can be achieved by adding an external constant term or an external periodic term.

  15. The use of data mining techniques for analysing factors affecting cow reactivity during milking

    Directory of Open Access Journals (Sweden)

    Wojciech NEJA

    2017-06-01

    Full Text Available Motor activity of 158 Polish Holstein-Friesian cows was evaluated 5 times (before and during milking in a DeLaval 2*10 milking parlour for both the morning and evening milking, on a 5-point scale, according to the method of Budzyńska et al. (2007. The statistical analysis used multiple logistic regression and classification trees (Enterprise Miner 7.1 software which comes in with SAS package. In the evaluation of motor activity, cows that were among the first ten to enter the milking parlour were more often given a score of 3 points before (11.5% and during milking (23.5% compared to the other cows. Cows’ activity tended to decrease (both before and during milking with advancing lactation. The cows’ reduced activity was accompanied by shorter teat cup attachment times and lower milk yields. The criteria calculated for the quality of models based on classification tree technique as well as logistic regression showed that similar variables were responsible for the reactivity of cows before milking (teat cup attachment time, day of lactation, number of lactation, side of the milking parlour and during milking (day of lactation, side of the milking parlour, morning or evening milking, milk yield, number of lactation. At the same time, the applied methods showed that the determinants of the cow reactivity trait are highly complex. This complexity may be well explained using the classification tree technique.

  16. Distribution-analytical techniques in the study of AD/HD: Delta plot analyses reveal deficits in response inhibition that are eliminated by methylphenidate treatment

    NARCIS (Netherlands)

    Ridderinkhof, K.R.; Scheres, A.; Oosterlaan, J.; Sergeant, J.A.

    2005-01-01

    The authors highlight the utility of distribution-analytical techniques in the study of individual differences and clinical disorders. Cognitive deficits associated with attention-deficit/hyperactivity disorder (AD/HD) were examined by using delta-plot analyses of performance data (reaction time and

  17. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved the use of ML techniques, within a CMS framework called DCAFPilot, to process this new data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the necessary information to proactively identify and possibly fix latency-affected transfers over the WLCG.

  18. Using GIS Mapping to Target Public Health Interventions: Examining Birth Outcomes Across GIS Techniques.

    Science.gov (United States)

    MacQuillan, E L; Curtis, A B; Baker, K M; Paul, R; Back, Y O

    2017-08-01

    With advances in spatial analysis techniques, there has been a trend in recent public health research to assess the contribution of area-level factors to health disparity for a number of outcomes, including births. Although it is widely accepted that health disparity is best addressed by targeted, evidence-based and data-driven community efforts, and despite national and local focus in the U.S. to reduce infant mortality and improve maternal-child health, there is little work exploring how choice of scale and specific GIS visualization technique may alter the perception of analyses focused on health disparity in birth outcomes. Retrospective cohort study. Spatial analysis of individual-level vital records data for low birthweight and preterm births born to black women from 2007 to 2012 in one mid-sized Midwest city using different geographic information systems (GIS) visualization techniques [geocoded address records were aggregated at two levels of scale and additionally mapped using kernel density estimation (KDE)]. GIS analyses in this study support our hypothesis that choice of geographic scale (neighborhood or census tract) for aggregated birth data can alter programmatic decision-making. Results indicate that the relative merits of aggregated visualization or the use of KDE technique depend on the scale of intervention. The KDE map proved useful in targeting specific areas for interventions in cities with smaller populations and larger census tracts, where they allow for greater specificity in identifying intervention areas. When public health programmers seek to inform intervention placement in highly populated areas, however, aggregated data at the census tract level may be preferred, since it requires lower investments in terms of time and cartographic skill and, unlike neighborhood, census tracts are standardized in that they become smaller as the population density of an area increases.
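
    The kernel density estimation step used for hotspot mapping can be sketched with scipy's gaussian_kde over geocoded point coordinates, as below; the synthetic coordinates, bandwidth and grid are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# synthetic geocoded event locations (projected x, y in metres),
# standing in for addresses of adverse birth outcomes
rng = np.random.default_rng(5)
cluster_a = rng.normal([2000, 3000], 400, size=(150, 2))
cluster_b = rng.normal([6000, 5500], 700, size=(80, 2))
points = np.vstack([cluster_a, cluster_b]).T          # shape (2, n)

kde = gaussian_kde(points, bw_method=0.3)             # kernel density surface

# evaluate on a regular grid to produce a raster for mapping
xg, yg = np.meshgrid(np.linspace(0, 8000, 200), np.linspace(0, 8000, 200))
density = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)

# the grid cell with the highest density suggests where to target resources
i, j = np.unravel_index(density.argmax(), density.shape)
print("hotspot near x=%.0f m, y=%.0f m" % (xg[i, j], yg[i, j]))
```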

  19. Very large scale characterization of graphene mechanical devices using a colorimetry technique.

    Science.gov (United States)

    Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer

    2017-06-08

    We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automated image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D^4/g^3. Moreover, we extract a median adhesion energy of Γ = 0.9 J m^-2 between the membrane and the native SiO2 at the bottom of the cavities.
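
    A minimal sketch of how survival statistics could be related to the D^4/g^3 scaling parameter is shown below; the device table is synthetic and the grouping choice is an assumption, not the authors' actual analysis.

```python
# Minimal sketch: survival probability per cavity geometry, ordered by D**4 / g**3.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "D_um": rng.choice([2.0, 4.0, 6.0, 8.0], size=1000),    # cavity diameter, micrometres
    "g_nm": rng.choice([200.0, 400.0, 800.0], size=1000),   # cavity depth, nanometres
    "intact": rng.integers(0, 2, size=1000),                 # 1 = suspended, 0 = broken
})
df["scaling"] = (df["D_um"] * 1e-6) ** 4 / (df["g_nm"] * 1e-9) ** 3

# Survival probability for each geometry, ordered by the scaling parameter
summary = (df.groupby(["D_um", "g_nm"])
             .agg(scaling=("scaling", "first"), survival=("intact", "mean"))
             .sort_values("scaling"))
print(summary)
```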

  20. Photogrammetric techniques for across-scale soil erosion assessment

    OpenAIRE

    Eltner, Anette

    2016-01-01

    Soil erosion is a complex geomorphological process with varying influences of different impacts at different spatio-temporal scales. To date, measurement of soil erosion is predominantly realisable at specific scales, thereby detecting separate processes, e.g. interrill erosion contrary to rill erosion. It is difficult to survey soil surface changes at larger areal coverage such as field scale with high spatial resolution. Either net changes at the system outlet or remaining traces after the ...

  1. Risk and reliability analyses (LURI) and expert judgement techniques

    International Nuclear Information System (INIS)

    Pyy, P.; Pulkkinen, U.

    1998-01-01

    Probabilistic safety analysis (PSA) is currently used as a regulatory licensing tool in risk-informed and plant-performance-based regulation. Increasingly, utility safety improvements are also based on PSA calculations as one criterion. PSA attempts to comprehensively identify all important risk contributors, compare them with each other, assess the safety level and suggest improvements based on its findings. The strength of PSA is that it is capable of providing decision makers with numerical estimates of risks. This makes decision making easier than the comparison of purely qualitative results. PSA is the only comprehensive tool that compactly attempts to include all the important risk contributors in its scope. Despite the demonstrated strengths of PSA, there are some features that have limited its use. For example, the PSA scope has been limited to power operation and internal process events (transients and LOCAs). Only lately have areas such as shutdown, external events and severe accidents been included in PSA models in many countries. Problems related to modelling include, e.g., the use of rather static fault and event tree models in PSA to describe dynamic event sequences. Even if a valid model can be generated, there may be no data sources to use other than expert judgement. Furthermore, there are a variety of different techniques for human reliability assessment (HRA), giving varying results. In the project Reliability and Risk Analyses (LURI) these limitations and shortcomings have been studied. In the decision making area, case studies on the application of decision analysis and a doctoral thesis have been published. Further, practical aid has been given to utility and regulatory decision making. The effect of model uncertainty on PSA results has been demonstrated by two case studies. Human reliability has been studied both in the integrated safety analysis study and in the study of maintenance originated NPP component faults based on the

  2. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Summary Large, chronically-implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (order of 100) neurons. While the desire for cortically-based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation, and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior. PMID:18093826
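
    As a simple illustration of the dimensionality-reduction step discussed above, the sketch below projects synthetic trial-by-time-by-neuron data onto a few principal components; PCA is used here only as a generic stand-in for the more specialized latent-variable methods reviewed in the paper.

```python
# Minimal sketch: reducing simultaneously recorded activity to a few latent
# dimensions so single-trial trajectories can be visualized. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

n_trials, n_timebins, n_neurons = 50, 40, 100
rng = np.random.default_rng(0)
rates = rng.poisson(lam=5.0, size=(n_trials, n_timebins, n_neurons)).astype(float)

pca = PCA(n_components=3)
pca.fit(rates.reshape(-1, n_neurons))                 # fit on all time bins pooled
trajectories = pca.transform(rates.reshape(-1, n_neurons)).reshape(n_trials, n_timebins, 3)
print(trajectories.shape)   # each trial is now a low-dimensional trajectory
```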

  3. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
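
    The sketch below conveys only the general idea of a permutation test for device effects on location (systematic bias) and scale (random error); it does not implement the boosted GAMLSS procedure proposed in the paper, and all data are synthetic.

```python
# Minimal sketch: permutation tests for a device effect on mean and on spread.
import numpy as np

def perm_test(x, y, stat, n_perm=5000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = stat(x, y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(stat(perm[:len(x)], perm[len(x):])) >= abs(observed):
            count += 1
    return observed, count / n_perm

loc_stat = lambda a, b: a.mean() - b.mean()                       # systematic bias
scale_stat = lambda a, b: np.log(a.std(ddof=1) / b.std(ddof=1))   # random-error ratio

rng = np.random.default_rng(1)
device_a = rng.normal(50, 5, 200)     # synthetic measurements, device A
device_b = rng.normal(52, 7, 200)     # synthetic measurements, device B
print(perm_test(device_a, device_b, loc_stat))    # (observed statistic, p-value)
print(perm_test(device_a, device_b, scale_stat))
```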

  4. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    Science.gov (United States)

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  5. Environmental pollutants monitoring network using nuclear techniques

    International Nuclear Information System (INIS)

    Cohen, D.D.

    1994-01-01

    The Australian Nuclear Science and Technology Organisation (ANSTO), in collaboration with the NSW Environment Protection Authority (EPA), Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 60,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and samples for 24 hours using a stretched Teflon filter for each day. Accelerator-based Ion Beam Analysis (IBA) techniques are well suited to analysing the thousands of filter papers a year that originate from such a large scale aerosol sampling network. These techniques are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on a 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. This paper describes the four simultaneous accelerator-based IBA techniques used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. Each analysis requires only a few minutes of accelerator running time to complete. 15 refs., 9 figs

  6. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    International Nuclear Information System (INIS)

    Mattie, Patrick D.; McNeish, Jerry A.; Sevougian, S. David; Andrews, Robert W.

    2001-01-01

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository of high level radioactive waste at Yucca Mountain, Nevada USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility

  7. A Study on the Estimation of the Scale Factor for Precise Point Positioning

    Science.gov (United States)

    Erdogan, Bahattin; Kayacik, Orhan

    2017-04-01

    The Precise Point Positioning (PPP) technique is one of the most important subjects in Geomatic Engineering. The PPP technique needs only one GNSS receiver, and users have preferred it to the traditional relative positioning technique for several applications. Scientific software is used for PPP solutions, and the software may underestimate the formal errors of the estimated coordinates. The formal errors have a major effect on statistical interpretation. The variance-covariance (VCV) matrix derived from GNSS processing software plays an important role in deformation analysis, and scientists sometimes need to scale the VCV matrix. In this study, 10 continuously operating reference stations were considered for 11 days in 2014. All points were analyzed with the Gipsy-OASIS v6.4 scientific software. Solutions were derived for session durations of 2, 4, 6, 8, 12 and 24 hours to obtain the repeatability of the coordinates, and analyses were carried out to estimate a scale factor for the Gipsy-OASIS v6.4 PPP results. According to the first results, the scale factors increase slightly with session duration. Keywords: Precise Point Positioning, Gipsy-OASIS v6.4, Variance-Covariance Matrix, Scale Factor
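
    One common way to estimate such a scale factor is to compare the empirical scatter of repeated solutions with the formal errors reported by the software, as in the minimal sketch below; the numbers are synthetic and the exact estimator used in the study may differ.

```python
# Minimal sketch: a variance scale factor from coordinate repeatability vs formal errors.
import numpy as np

rng = np.random.default_rng(0)
north = rng.normal(0.0, 0.004, size=11)        # 11 daily solutions, metres (synthetic)
formal_sigma = np.full(11, 0.0015)             # formal errors reported by the software

empirical_var = north.var(ddof=1)              # repeatability of the coordinate
mean_formal_var = np.mean(formal_sigma ** 2)
scale_factor = empirical_var / mean_formal_var # multiply the VCV matrix by this factor
print(f"variance scale factor: {scale_factor:.1f}")
```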

  8. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
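
    The sketch below illustrates only the fitness idea described above, evaluating the step-response deviation between a full-order and a candidate reduced-order discrete model; the example system, the reduced parameters and the fitness form are assumptions, and no GA loop is shown.

```python
# Minimal sketch: response-deviation fitness for a candidate reduced-order model.
import numpy as np

def step_response(A, B, C, D, n=50):
    """Simulate a discrete state-space system for a unit step input."""
    x = np.zeros(A.shape[0])
    y = []
    for _ in range(n):
        y.append(C @ x + D * 1.0)
        x = A @ x + B.flatten() * 1.0
    return np.array(y).ravel()

# Full (3rd-order) SISO discrete system (synthetic example)
A = np.array([[0.8, 0.1, 0.0], [0.0, 0.5, 0.2], [0.0, 0.0, 0.3]])
B = np.array([[1.0], [0.5], [0.2]]); C = np.array([[1.0, 0.0, 0.0]]); D = 0.0

# Candidate 1st-order reduced model (the parameters a GA would tune)
Ar, Br, Cr, Dr = np.array([[0.8]]), np.array([[1.05]]), np.array([[1.0]]), 0.0

deviation = np.sum((step_response(A, B, C, D) - step_response(Ar, Br, Cr, Dr)) ** 2)
fitness = 1.0 / (1.0 + deviation)      # the GA maximizes this quantity
print(f"fitness = {fitness:.4f}")
```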

  9. Residence time distribution measurements in a pilot-scale poison tank using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Goswami, Sunil; Samantray, J S; Sharma, V K; Maheshwari, N K

    2015-09-01

    Various types of systems are used to control the reactivity and to shut down a nuclear reactor during emergency and routine shutdown operations. Injection of boron solution (borated water) into the core of a reactor is one of the commonly used methods during emergency operation. A pilot-scale poison tank was designed and fabricated to simulate injection of boron poison into the core of a reactor along with coolant water. In order to design a full-scale poison tank, it was desired to characterize the flow of liquid from the tank. Residence time distribution (RTD) measurement and analysis was adopted to characterize the flow dynamics. A radiotracer technique was applied to measure the RTD of the aqueous phase in the tank using Bromine-82 as a radiotracer. RTD measurements were carried out with two different modes of operation of the tank and at different flow rates. In Mode-1, the radiotracer was instantaneously injected at the inlet and monitored at the outlet, whereas in Mode-2, the tank was filled with radiotracer and its concentration was measured at the outlet. From the measured RTD curves, mean residence times (MRTs), dead volume and the fraction of liquid pumped in with time were determined. The treated RTD curves were modeled using suitable mathematical models. An axial dispersion model with a high degree of backmixing was found suitable to describe the flow when operated in Mode-1, whereas a tanks-in-series model with backmixing was found suitable to describe the flow of the poison in the tank when operated in Mode-2. The results were utilized to scale up and design a full-scale poison tank for a nuclear reactor. Copyright © 2015 Elsevier Ltd. All rights reserved.
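
    The first-moment calculation behind the mean residence time can be sketched as below; the tracer response curve is synthetic and the normalization follows the standard RTD definition rather than the study's specific data treatment.

```python
# Minimal sketch: mean residence time (MRT) from a tracer response curve.
import numpy as np

t = np.linspace(0, 600, 301)                     # time, s
c = np.exp(-(t - 120) ** 2 / (2 * 40.0 ** 2))    # synthetic detector response

e = c / np.trapz(c, t)                           # normalized RTD, E(t)
mrt = np.trapz(t * e, t)                         # first moment = MRT
print(f"MRT = {mrt:.1f} s")
```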

  10. A scaled underwater launch system accomplished by stress wave propagation technique

    International Nuclear Information System (INIS)

    Wei Yanpeng; Wang Yiwei; Huang Chenguang; Fang Xin; Duan Zhuping

    2011-01-01

    A scaled underwater launch system based on stress wave theory and the split Hopkinson pressure bar (SHPB) technique is developed to study the phenomenon of cavitation and other hydrodynamic features of high-speed submerged bodies. The present system can achieve a transient acceleration in the water instead of a long-time acceleration outside the water. The projectile can reach a maximum speed of 30 m/s in about 200 μs with the SHPB launcher. The cavitation characteristics in the stages of acceleration and deceleration are captured by a high-speed camera. The processes of cavitation inception, development and collapse are also simulated with the commercial software FLUENT, and the results are in good agreement with experiment. There is about 20-30% energy loss during the launching process; the mechanism of energy loss is also preliminarily investigated by measuring the energy of the incident bar and the projectile. (authors)

  11. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a means of performing physical simulations of the long-term mechanical response of deep ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the scaling factor squared. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Presently, existing centrifuges are being modified to permit scale model testing. This testing will start next year.

  12. Structural analyses on piping systems of sodium reactors. 2. Eigenvalue analyses of hot-leg pipelines of large scale sodium reactors

    International Nuclear Information System (INIS)

    Furuhashi, Ichiro; Kasahara, Naoto

    2002-01-01

    Two types of finite element models were used to analyse eigenvalues of hot-leg pipelines of a large-scale sodium reactor. One is a beam element model, which is usual for pipe analyses. The other is a shell element model to evaluate particular modes in thin pipes with large diameters. Summary of analysis results: (1) The beam element model and the shell element model gave almost the same first order natural frequency; a beam element model is sufficient to obtain the first order vibration mode. (2) The maximum difference ratio of beam mode natural frequencies was 14% between a beam element model with no shear deformations and a shell element model. However, this difference becomes very small when shear deformations are considered in the beam elements. (3) In the first order horizontal mode, the Y-piece acts like a pendulum, and the elbow acts like a hinge. The natural frequency is strongly affected by the bending and shear rigidities of the outer supporting pipe. (4) In the first order vertical mode, the vertical sections of the outer and inner pipes move in an axial-direction piston mode, the horizontal section of the inner pipe behaves like a cantilever, and the elbow acts like a hinge. The natural frequency is strongly affected by the axial rigidity of the outer supporting pipe. (5) Both effective masses and participation factors were small for particular shell modes. (author)

  13. Development of a novel once-through flow visualization technique for kinetic study of bulk and surface scaling

    Science.gov (United States)

    Sanni, O.; Bukuaghangin, O.; Huggan, M.; Kapur, N.; Charpentier, T.; Neville, A.

    2017-10-01

    There is considerable interest in investigating surface crystallization in order to have a full mechanistic understanding of how layers of sparingly soluble salts (scale) build up on component surfaces. Despite much recent attention, a suitable methodology that improves the understanding of precipitation/deposition systems and enables the construction of an accurate surface deposition kinetic model is still needed. In this work, an experimental flow rig and associated methodology to study mineral scale deposition are developed. The once-through flow rig allows us to follow mineral scale precipitation and surface deposition in situ and in real time. The rig enables us to assess the effects of various parameters such as brine chemistry and scaling indices, temperature, flow rates, and scale inhibitor concentrations on scaling kinetics. Calcium carbonate (CaCO3) scaling at different values of the saturation ratio (SR) is evaluated using image analysis procedures that enable the assessment of surface coverage, nucleation, and growth of the particles with time. The turbidity measured in the flow cell is zero for all the SR values considered. The residence time from the mixing point to the sample is shorter than the induction time for bulk precipitation; therefore, there are no crystals in the bulk solution as the flow passes through the sample. The study shows that surface scaling is not always a result of pre-precipitated crystals in the bulk solution. The technique enables precipitation and surface deposition of scale to be decoupled and the surface deposition process to be studied in real time and assessed under constant conditions.
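
    A minimal sketch of the surface-coverage measure that such image analysis can produce is given below; the frame and threshold are synthetic stand-ins for the actual image-processing procedure.

```python
# Minimal sketch: fractional surface coverage from a thresholded image frame.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((480, 640))            # stand-in for a grayscale camera frame
binary = frame > 0.995                    # threshold: True where a crystal is detected

coverage = binary.mean()                  # fraction of pixels covered by scale
n_pixels = binary.sum()
print(f"surface coverage: {coverage:.2%} ({n_pixels} pixels)")
```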

  14. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems

  15. The Chinese version of the Myocardial Infarction Dimensional Assessment Scale (MIDAS): Mokken scaling

    Directory of Open Access Journals (Sweden)

    Watson Roger

    2012-01-01

    Full Text Available Abstract Background Hierarchical scales are very useful in clinical practice due to their ability to discriminate precisely between individuals, and the original English version of the Myocardial Infarction Dimensional Assessment Scale has been shown to contain a hierarchy of items. The purpose of this study was to analyse a Mandarin Chinese translation of the Myocardial Infarction Dimensional Assessment Scale for a hierarchy of items according to the criteria of Mokken scaling. Data from 180 Chinese participants who completed the Chinese translation of the Myocardial Infarction Dimensional Assessment Scale were analysed using the Mokken Scaling Procedure and the 'R' statistical programme, using the diagnostics available in these programmes. Correlation between the Mandarin Chinese items and a Chinese translation of the Short Form (36) Health Survey was also analysed. Findings Fifteen items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale were retained in a strong and reliable Mokken scale; invariant item ordering was not evident, and the Mokken scaled items of the Chinese Myocardial Infarction Dimensional Assessment Scale correlated with the Short Form (36) Health Survey. Conclusions Items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale form a Mokken scale, and this offers further insight into how the items of the Myocardial Infarction Dimensional Assessment Scale relate to the measurement of health-related quality of life in people with a myocardial infarction.

  16. Different scale land subsidence and ground fissure monitoring with multiple InSAR techniques over Fenwei basin, China

    Directory of Open Access Journals (Sweden)

    C. Zhao

    2015-11-01

    Full Text Available Fenwei basin, China, composed of several sub-basins, has suffered severe geo-hazards in the last 60 years, including large-scale land subsidence and small-scale ground fissures, which have caused serious infrastructure damage and property losses. In this paper, we apply different InSAR techniques with different SAR data to monitor these hazards. Firstly, a combination of the small baseline subset (SBAS) InSAR method and the persistent scatterer (PS) InSAR method is applied to multi-track Envisat ASAR data to retrieve the large-scale land subsidence covering the entire Fenwei basin, from which the different land subsidence magnitudes of the sub-basins are analyzed. Secondly, the PS-InSAR method is used to monitor the small-scale ground fissure deformation in the Yuncheng basin, where different spatial deformation gradients can be clearly discovered. Lastly, SAR data from different tracks are combined to retrieve the two-dimensional deformation in a region with both land subsidence and ground fissures, Xi'an, China, which is beneficial for explaining the occurrence of ground fissures and the correlation between land subsidence and ground fissures.

  17. Statistical analyses in the study of solar wind-magnetosphere coupling

    International Nuclear Information System (INIS)

    Baker, D.N.

    1985-01-01

    Statistical analyses provide a valuable method for initially establishing the existence (or lack of existence) of a relationship between diverse data sets. Statistical methods also allow one to make quantitative assessments of the strengths of observed relationships. This paper reviews the essential techniques and underlying statistical bases for the use of correlative methods in solar wind-magnetosphere coupling studies. Techniques of visual correlation and time-lagged linear cross-correlation analysis are emphasized, but methods of multiple regression, superposed epoch analysis, and linear prediction filtering are also described briefly. The long history of correlation analysis in the area of solar wind-magnetosphere coupling is reviewed, with the assessments organized according to data averaging time scales (minutes to years). It is concluded that these statistical methods can be very useful first steps, but that case studies and various advanced analysis methods should be employed to understand fully the average response of the magnetosphere to solar wind input. It is clear that many workers have not always recognized the underlying assumptions of statistical methods, and thus the significance of correlation results can be in doubt. Long-term averages (greater than or equal to 1 hour) can reveal gross relationships, but only when dealing with high-resolution data (1 to 10 min) can one reach conclusions pertinent to magnetospheric response time scales and substorm onset mechanisms.
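
    The time-lagged linear cross-correlation emphasized above can be sketched as follows; both series are synthetic, and the lag range and noise level are arbitrary assumptions.

```python
# Minimal sketch: time-lagged cross-correlation between a solar-wind driver
# and a magnetospheric response index. Real studies would use measured data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
driver = rng.normal(size=n)                                  # e.g. a coupling parameter
response = np.roll(driver, 6) + 0.5 * rng.normal(size=n)     # lagged, noisy response

lags = range(0, 21)
corr = [np.corrcoef(driver[:n - lag], response[lag:])[0, 1] for lag in lags]
best = int(np.argmax(corr))
print(f"peak correlation {corr[best]:.2f} at lag {best} steps")
```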

  18. Structural analyses of sucrose laurate regioisomers by mass spectrometry techniques

    DEFF Research Database (Denmark)

    Lie, Aleksander; Stensballe, Allan; Pedersen, Lars Haastrup

    2015-01-01

    6- And 6′-O-lauroyl sucrose were isolated and analyzed by matrix-assisted laser desorption/ionisation (MALDI) time-of-flight (TOF) mass spectrometry (MS), Orbitrap high-resolution (HR) MS, and electrospray-ionization (ESI) tandem mass spectrometry (MS/MS). The analyses aimed to explore the physic… …8, respectively, and Orbitrap HRMS confirmed the mass of [M+Na]+ (m/z 547.2712). ESI-MS/MS on the precursor ion [M+Na]+ resulted in product ion mass spectra showing two high-intensity signals for each sample. 6-O-Lauroyl sucrose produced signals located at m/z 547.27 and m/z 385.21, corresponding to the 6-O...

  19. Comparative CO2 flux measurements by eddy covariance technique using open- and closed-path gas analysers over the equatorial Pacific Ocean

    Energy Technology Data Exchange (ETDEWEB)

    Kondo, Fumiyoshi (Graduate School of Natural Science and Technology, Okayama Univ., Okayama (Japan); Atmosphere and Ocean Research Inst., Univ. of Tokyo, Tokyo (Japan)), Email: fkondo@aori.u-tokyo.ac.jp; Tsukamoto, Osamu (Graduate School of Natural Science and Technology, Okayama Univ., Okayama (Japan))

    2012-04-15

    Direct comparison of air-sea CO2 fluxes by open-path eddy covariance (OPEC) and closed-path eddy covariance (CPEC) techniques was carried out over the equatorial Pacific Ocean. Previous studies over oceans have shown that the CO2 flux by OPEC was larger than the bulk CO2 flux using the gas transfer velocity estimated by the mass balance technique, while the CO2 flux by CPEC agreed with the bulk CO2 flux. We investigated this long-standing conflict between the CO2 flux by the eddy covariance technique and the bulk CO2 flux, and whether the CO2 fluctuation attenuated by the closed-path analyser can be measured with a sufficient time response to resolve the small CO2 flux over oceans. Our results showed that a closed-path analyser using a short sampling tube and a high-volume air pump can be used to measure the small CO2 fluctuation over the ocean. Further, the CO2 flux underestimated by CPEC due to the attenuated fluctuation can be corrected by the bandpass covariance method; its contribution was almost identical to that of the H2O flux. The CO2 flux by CPEC agreed with the total CO2 flux by OPEC with density correction; however, both of them are one order of magnitude larger than the bulk CO2 flux.

  20. Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan

    2017-01-01

    Highlights: • Study includes energy, exergy and economic analyses of an absorption heat transformer. • It addresses a multi-objective optimization study using the NSGA-II technique. • Total annual cost and total exergy destruction are simultaneously optimized. • Results with the multi-objective optimized design are more acceptable than others. - Abstract: The present paper addresses the energy, exergy and economic (3E) analyses of an absorption heat transformer (AHT) working with the LiBr-H2O fluid pair. The heat exchangers, namely the absorber, condenser, evaporator, generator and solution heat exchanger, are designed for the size and cost estimation of the AHT. Later, the effect of operating variables on the system performance, size and cost is examined. Simulation studies showed a conflict between the thermodynamic and economic performance of the system. Heat exchangers with lower investment cost showed high irreversible losses and vice versa. Thus, the operating variables of the system are determined economically as well as thermodynamically by implementing the non-dominated sorting genetic algorithm-II (NSGA-II) technique of multi-objective optimization. In the present work, if the cost-based optimized design is chosen, total exergy destruction is 2.4% higher than its minimum possible value; whereas, if the total exergy based optimized design is chosen, total annual cost is 6.1% higher than its minimum possible value. On the other hand, total annual cost and total exergy destruction are only 1.0% and 0.8% more, respectively, than their minimum possible values with the multi-objective optimized design. Thus, the multi-objective optimized design of the AHT is a better outcome than any single-objective optimized design.
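
    The sketch below shows only the Pareto-dominance idea underlying the trade-off that NSGA-II explores between total annual cost and total exergy destruction; candidate designs are random stand-ins and the full NSGA-II machinery (selection, crossover, mutation) is not implemented.

```python
# Minimal sketch: non-dominated (Pareto-optimal) designs for two minimized objectives.
import numpy as np

rng = np.random.default_rng(0)
cost = rng.uniform(50_000, 80_000, 200)        # total annual cost, $/yr (synthetic)
exergy = rng.uniform(20, 60, 200)              # total exergy destruction, kW (synthetic)

def pareto_mask(f1, f2):
    """True for points not dominated by any other point (both objectives minimized)."""
    mask = np.ones(len(f1), dtype=bool)
    for i in range(len(f1)):
        dominated = (f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i]))
        if dominated.any():
            mask[i] = False
    return mask

front = pareto_mask(cost, exergy)
print(f"{front.sum()} Pareto-optimal designs out of {len(cost)}")
```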

  1. Comparative analyses of industrial-scale human platelet lysate preparations.

    Science.gov (United States)

    Pierce, Jan; Benedetti, Eric; Preslar, Amber; Jacobson, Pam; Jin, Ping; Stroncek, David F; Reems, Jo-Anna

    2017-12-01

    Efforts are underway to eliminate fetal bovine serum from mammalian cell cultures for clinical use. An emerging, viable replacement option for fetal bovine serum is human platelet lysate (PL) as either a plasma-based or serum-based product. Nine industrial-scale, serum-based PL manufacturing runs (i.e., lots) were performed, consisting of an average ± standard deviation volume of 24.6 ± 2.2 liters of pooled, platelet-rich plasma units that were obtained from apheresis donors. Manufactured lots were compared by evaluating various biochemical and functional test results. Comprehensive cytokine profiles of PL lots and product stability tests were performed. Global gene expression profiles of mesenchymal stromal cells (MSCs) cultured with plasma-based or serum-based PL were compared to MSCs cultured with fetal bovine serum. Electrolyte and protein levels were relatively consistent among all serum-based PL lots, with only slight variations in glucose and calcium levels. All nine lots were as good as or better than fetal bovine serum in expanding MSCs. Serum-based PL stored at -80°C remained stable over 2 years. Quantitative cytokine arrays showed similarities as well as dissimilarities in the proteins present in serum-based PL. Greater differences in MSC gene expression profiles were attributable to the starting cell source rather than with the use of either PL or fetal bovine serum as a culture supplement. Using a large-scale, standardized method, lot-to-lot variations were noted for industrial-scale preparations of serum-based PL products. However, all lots performed as well as or better than fetal bovine serum in supporting MSC growth. Together, these data indicate that off-the-shelf PL is a feasible substitute for fetal bovine serum in MSC cultures. © 2017 AABB.

  2. The Use of System Codes in Scaling Studies: Relevant Techniques for Qualifying NPP Nodalizations for Particular Scenarios

    Directory of Open Access Journals (Sweden)

    V. Martinez-Quiroga

    2014-01-01

    Full Text Available System codes along with necessary nodalizations are valuable tools for thermal hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed on the small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of Nuclear Power Plant nodalizations by means of scale disquisitions. The techniques that are presented include the so-called Kv-scaled calculation approach as well as the use of “hybrid nodalizations” and “scaled-up nodalizations.” These methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology as well as showing the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed on the PKL-III and ROSA-2 OECD/NEA Projects. Both articles together produce the complete description of the methodology that has been developed in the framework of the use of NPP nodalizations in the support to plant operation and control.

  3. Genome scale engineering techniques for metabolic engineering.

    Science.gov (United States)

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  4. Applicability of laboratory data to large scale tests under dynamic loading conditions

    International Nuclear Information System (INIS)

    Kussmaul, K.; Klenk, A.

    1993-01-01

    The analysis of dynamic loading and subsequent fracture must be based on reliable data for loading and deformation history. This paper describes an investigation to examine the applicability of parameters which are determined by means of small-scale laboratory tests to large-scale tests. The following steps were carried out: (1) Determination of crack initiation by means of strain gauges applied in the crack tip field of compact tension specimens. (2) Determination of dynamic crack resistance curves of CT-specimens using a modified key-curve technique. The key curves are determined by dynamic finite element analyses. (3) Determination of strain-rate-dependent stress-strain relationships for the finite element simulation of small-scale and large-scale tests. (4) Analysis of the loading history for small-scale tests with the aid of experimental data and finite element calculations. (5) Testing of dynamically loaded tensile specimens taken as strips from ferritic steel pipes with thicknesses of 13 mm and 18 mm, respectively. The strips contained slits and surface cracks. (6) Fracture mechanics analyses of the above mentioned tests and of wide plate tests. The wide plates (960 x 608 x 40 mm³) had been tested in a propellant-driven 12 MN dynamic testing facility. For calculating the fracture mechanics parameters of both tests, a dynamic finite element simulation considering the dynamic material behaviour was employed. The finite element analyses showed good agreement with the simulated tests. This prerequisite allowed critical J-integral values to be obtained. Generally the results of the large-scale tests were conservative. 19 refs., 20 figs., 4 tabs

  5. Acupuncture-Related Techniques for Psoriasis: A Systematic Review with Pairwise and Network Meta-Analyses of Randomized Controlled Trials.

    Science.gov (United States)

    Yeh, Mei-Ling; Ko, Shu-Hua; Wang, Mei-Hua; Chi, Ching-Chi; Chung, Yu-Chu

    2017-12-01

    There is a large body of evidence on the pharmacological treatments for psoriasis, but whether nonpharmacological interventions are effective in managing psoriasis remains largely unclear. This systematic review conducted pairwise and network meta-analyses to determine the effects of acupuncture-related techniques for acupoint stimulation on the treatment of psoriasis and to determine the order of effectiveness of these remedies. This study searched the following databases from inception to March 15, 2016: Medline, PubMed, Cochrane Central Register of Controlled Trials, EBSCO (including Academic Search Premier, American Doctoral Dissertations, and CINAHL), Airiti Library, and China National Knowledge Infrastructure. Randomized controlled trials (RCTs) on the effects of acupuncture-related techniques for acupoint stimulation as an intervention for psoriasis were independently reviewed by two researchers. A total of 13 RCTs with 1,060 participants were included. The methodological quality of the included studies was not rigorous. Acupoint stimulation, compared with non-acupoint stimulation, had a significant treatment effect for psoriasis. However, the most common adverse events were thirst and dry mouth. Subgroup analysis further confirmed that the short-term treatment effect was superior to the long-term effect in treating psoriasis. Network meta-analysis identified that acupressure or acupoint catgut embedding, compared with medication, had a significant effect in improving psoriasis, and acupressure was the most effective treatment. Acupuncture-related techniques could be considered as an alternative or adjuvant therapy for psoriasis in the short term, especially acupressure and acupoint catgut embedding. This study recommends further well-designed, methodologically rigorous, and more head-to-head randomized trials to explore the effects of acupuncture-related techniques for treating psoriasis.

  6. X-ray fluorescence in Member States (Italy): Portable EDXRF in a multi-technique approach for the analyses of large paintings

    International Nuclear Information System (INIS)

    Ridolfi, Stefano

    2014-01-01

    Energy-dispersive X-ray fluorescence (EDXRF), with its portable capability, generally characterized by a small X-ray tube and a Si-PIN or Si-drift detector, is particularly useful for analyzing works of art. The main aspect that characterizes the EDXRF technique is its non-invasive character. This characteristic, which makes the technique so powerful and appealing, is on the other hand the main source of uncertainty in XRF measurements on Cultural Heritage. This problem is even more evident when we analyze paintings because of their intrinsic stratigraphic nature. As a matter of fact, a painting is made of several layers: the support, which can be mainly of wood, canvas or paper; the preparation layer, mainly gypsum, white lead or ochre; pigment layers; and finally the protective varnish layer. The penetrating power of X rays means that, most of the time, information from all the layers reaches the detector. Much of the information in the spectrum comes from deep layers of which we have no direct knowledge. In order to better understand this concept, let us use the equation of A. Markowicz, in which the various uncertainties that influence analyses with portable EDXRF are reported, and adjust it for non-invasive portable EDXRF analysis. The second, the third and the fourth term do not exist, for obvious reasons. Only the first and the last term influence the total uncertainty of an EDXRF analysis. The ways to reduce the influence of the fifth term are known to any scientist: good stability of the system, long measuring time, correct standard samples, good energy resolution etc. But what about the first term when we are executing a non-invasive analysis? An example that shows the influence of sample representativeness on the uncertainty of an XRF analysis is the case in which we are asked to determine the original pigments used in a painting. If we have no clue of where restoration areas are located on the painting, the probability of

  7. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena

    2016-08-03

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  8. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  9. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids lagrangian trajectories, and many other of spouted bed related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  10. Constraining the global carbon budget from global to regional scales - The measurement challenge

    International Nuclear Information System (INIS)

    Francey, R.J.; Rayner, P.J.; Allison, C.E.

    2002-01-01

    The Global Carbon Cycle can be modelled by a Bayesian synthesis inversion technique, where measured atmospheric CO2 concentrations and isotopic compositions are analysed by use of an atmospheric transport model and estimates of regional sources and sinks of atmospheric carbon. The uncertainty associated with carbon flux estimates, even on a regional scale, can be reduced considerably using the inversion technique. In this approach, besides the necessary control of the precision of atmospheric transport models and of the constraints for surface fluxes, an important component is the calibration of atmospheric CO2 concentration and isotope measurements. The recently improved situation with respect to data comparability is discussed using results of interlaboratory comparison exercises, and larger scale calibration programs are proposed for the future to further improve the comparability of analytical data. (author)
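
    A minimal sketch of a linear Bayesian synthesis inversion is given below; the transport matrix, prior fluxes and error covariances are synthetic placeholders, and real inversions involve far larger state vectors and carefully constructed transport operators.

```python
# Minimal sketch: Bayesian update of prior regional fluxes with concentration data.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_obs = 4, 30
x_prior = np.array([1.0, -2.0, 0.5, 0.0])          # prior fluxes, GtC/yr (synthetic)
B = np.diag([1.0, 1.0, 0.5, 0.5]) ** 2             # prior error covariance
H = rng.normal(size=(n_obs, n_regions))            # transport/sensitivity matrix (synthetic)
R = 0.2 ** 2 * np.eye(n_obs)                       # observation error covariance
y = H @ np.array([0.8, -1.5, 0.2, 0.3]) + rng.normal(0, 0.2, n_obs)  # "measured" signal

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)       # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)           # posterior flux estimates
P_post = (np.eye(n_regions) - K @ H) @ B           # reduced posterior covariance
print(x_post, np.sqrt(np.diag(P_post)))
```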

  11. Soil analyses by ICP-MS (Review)

    International Nuclear Information System (INIS)

    Yamasaki, Shin-ichi

    2000-01-01

    Soil analyses by inductively coupled plasma mass spectrometry (ICP-MS) are reviewed. The first half of the paper is devoted to the development of techniques applicable to soil analyses, where diverse analytical parameters are carefully evaluated. However, the choice of soil samples is somewhat arbitrary, and only a limited number of samples (mostly reference materials) are examined. In the second half, efforts are mostly concentrated on the introduction of reports, where a large number of samples and/or very precious samples have been analyzed. Although the analytical techniques used in these reports are not necessarily novel, valuable information concerning such topics as background levels of elements in soils, chemical forms of elements in soils and behavior of elements in soil ecosystems and the environment can be obtained. The major topics discussed are total elemental analysis, analysis of radionuclides with long half-lives, speciation, leaching techniques, and isotope ratio measurements. (author)

  12. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Y., E-mail: maekawa.yasunari@jaea.go.jp [Japan Atomic Energy Agency (JAEA), Quantum Beam Science Directorate, High Performance Polymer Group, 1233 Watanuki-Machi, Takasaki, Gunma-ken 370-1292 (Japan)

    2010-07-01

    The introduction of functional regions on the nanometer scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to build nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the obtained polymers with nano structures can be applied to high performance polymer membranes; 2) fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer films. Hydrophilic graft polymers are introduced into a hydrophobic fluorinated polymer, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which are known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analyzed by a small angle neutron scattering (SANS) experiment. From these analyses, different structures and different formation of the graft domains were observed in the fluorinated and hydrocarbon polymer substrates. The grafted domains in the cPTFE film, which work as ion channels, grew so as to cover the crystallites, and the domain size appears to be similar to that of the crystallites. On the other hand, the PEEK-based PEM has a smaller domain size, and the domains seem to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the energy distribution in the radial direction, perpendicular to the ion trajectory, is the main concern. For the penumbra, we re-estimated the effective radius in which radiation-induced grafting took place for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers. The channel sizes were in good agreement with the effective penumbra receiving absorbed doses of more than 1 kGy. (author)

  13. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    International Nuclear Information System (INIS)

    Maekawa, Y.

    2010-01-01

    The introduction of functional regions on the nanometer scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to build nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the obtained polymers with nano structures can be applied to high performance polymer membranes; 2) fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer films. Hydrophilic graft polymers are introduced into a hydrophobic fluorinated polymer, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which are known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analyzed by a small angle neutron scattering (SANS) experiment. From these analyses, different structures and different formation of the graft domains were observed in the fluorinated and hydrocarbon polymer substrates. The grafted domains in the cPTFE film, which work as ion channels, grew so as to cover the crystallites, and the domain size appears to be similar to that of the crystallites. On the other hand, the PEEK-based PEM has a smaller domain size, and the domains seem to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the energy distribution in the radial direction, perpendicular to the ion trajectory, is the main concern. For the penumbra, we re-estimated the effective radius in which radiation-induced grafting took place for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers. The channel sizes were in good agreement with the effective penumbra receiving absorbed doses of more than 1 kGy. (author)

  14. Comparison of residual NAPL source removal techniques in 3D metric scale experiments

    Science.gov (United States)

    Atteia, O.; Jousse, F.; Cohen, G.; Höhener, P.

    2017-07-01

    the contaminant fluxes, which were different for each technique. This paper presents the first comparison of four remediation techniques at the scale of 1 m³ tanks including heterogeneities. Sparging, persulfate and surfactant only remove 50% of the mass, while thermal treatment removes more than 99%. In terms of flux removal, oxidant addition performs better when density effects are exploited.

  15. Inter-subject FDG PET Brain Networks Exhibit Multi-scale Community Structure with Different Normalization Techniques.

    Science.gov (United States)

    Sperry, Megan M; Kartha, Sonia; Granquist, Eric J; Winkelstein, Beth A

    2018-07-01

    Inter-subject networks are used to model correlations between brain regions and are particularly useful for metabolic imaging techniques, like 18F-2-deoxy-2-(18F)fluoro-D-glucose (FDG) positron emission tomography (PET). Since FDG PET typically produces a single image, correlations cannot be calculated over time. Little focus has been placed on the basic properties of inter-subject networks and whether they are affected by group size and image normalization. FDG PET images were acquired from rats (n = 18), normalized by whole brain, visual cortex, or cerebellar FDG uptake, and used to construct correlation matrices. Group size effects on network stability were investigated by systematically adding rats and evaluating local network connectivity (node strength and clustering coefficient). Modularity and community structure were also evaluated in the differently normalized networks to assess meso-scale network relationships. Local network properties are stable regardless of normalization region for groups of at least 10. Whole brain-normalized networks are more modular than visual cortex- or cerebellum-normalized networks (p network resolutions where modularity differs most between brain and randomized networks. Hierarchical analysis reveals consistent modules at different scales and clustering of spatially-proximate brain regions. Findings suggest inter-subject FDG PET networks are stable for reasonable group sizes and exhibit multi-scale modularity.
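
    The sketch below illustrates the general construction of an inter-subject correlation network and a modularity-based community detection step; the data are synthetic, the edge threshold is arbitrary, and networkx's greedy modularity routine is only a stand-in for the multi-scale community analysis used in the study.

```python
# Minimal sketch: region-by-region correlation network and modularity communities.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
uptake = rng.normal(size=(18, 40))          # subjects x brain regions (normalized uptake)
corr = np.corrcoef(uptake.T)                # region-by-region correlation matrix

G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        if corr[i, j] > 0.3:                # keep only moderately strong positive edges
            G.add_edge(i, j, weight=corr[i, j])

communities = greedy_modularity_communities(G, weight="weight")
print([len(c) for c in communities])        # community sizes
```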

  16. Patterns and sources of adult personality development: growth curve analyses of the NEO PI-R scales in a longitudinal twin study.

    Science.gov (United States)

    Bleidorn, Wiebke; Kandler, Christian; Riemann, Rainer; Spinath, Frank M; Angleitner, Alois

    2009-07-01

    The present study examined the patterns and sources of 10-year stability and change of adult personality assessed by the 5 domains and 30 facets of the Revised NEO Personality Inventory. Phenotypic and biometric analyses were performed on data from 126 identical and 61 fraternal twins from the Bielefeld Longitudinal Study of Adult Twins (BiLSAT). Consistent with previous research, LGM analyses revealed significant mean-level changes in domains and facets suggesting maturation of personality. There were also substantial individual differences in the change trajectories of both domain and facet scales. Correlations between age and trait changes were modest and there were no significant associations between change and gender. Biometric extensions of growth curve models showed that 10-year stability and change of personality were influenced by both genetic as well as environmental factors. Regarding the etiology of change, the analyses uncovered a more complex picture than originally stated, as findings suggest noticeable differences between traits with respect to the magnitude of genetic and environmental effects. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  17. Molecular-Scale Electronics: From Concept to Function.

    Science.gov (United States)

    Xiang, Dong; Wang, Xiaolong; Jia, Chuancheng; Lee, Takhee; Guo, Xuefeng

    2016-04-13

    Creating functional electrical circuits using individual or ensemble molecules, often termed "molecular-scale electronics", not only meets the increasing technical demands of the miniaturization of traditional Si-based electronic devices, but also provides an ideal window for exploring the intrinsic properties of materials at the molecular level. This Review covers the major advances with the most general applicability and emphasizes new insights into the development of efficient platform methodologies for building reliable molecular electronic devices with desired functionalities through the combination of programmed bottom-up self-assembly and sophisticated top-down device fabrication. First, we summarize a number of different approaches to forming molecular-scale junctions and discuss various experimental techniques for examining these nanoscale circuits in detail. We then give a full introduction to characterization techniques and theoretical simulations for molecular electronics. Third, we highlight the major contributions and new concepts of integrating molecular functionalities into electrical circuits. Finally, we provide a critical discussion of the limitations and main challenges that still exist for the development of molecular electronics. These analyses should be valuable for deeply understanding charge transport through molecular junctions, the device fabrication process, and the roadmap for future practical molecular electronics.

  18. Static and fatigue experimental tests on a full scale fuselage panel and FEM analyses

    Directory of Open Access Journals (Sweden)

    Raffaele Sepe

    2016-02-01

    A fatigue test on a full scale panel with complex loading condition and geometry configuration has been carried out using a triaxial test machine. The demonstrator is made up of two skins which are linked by a transversal butt-joint, parallel to the stringer direction. A fatigue load was applied in the direction normal to the longitudinal joint, while a constant load was applied in the longitudinal joint direction. The test panel was instrumented with strain gages, and quasi-static tests were conducted beforehand to ensure proper load transfer to the panel. In order to support the tests, geometrically nonlinear shell finite element analyses were conducted to predict strain and stress distributions. The demonstrator failed after about 177,000 cycles. Subsequently, a finite element analysis (FEA) was carried out in order to correlate the failure events; due to the biaxial nature of the fatigue loads, the Sines criterion was used. The analysis was performed taking into account the different materials of which the panel is composed. The numerical results show a good correlation with the experimental data, successfully predicting the failure locations on the panel.
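
    The record states that the Sines criterion was used to correlate failure under the biaxial fatigue loads. As a hedged illustration of that criterion only, the sketch below evaluates sqrt(J2,a) + alpha * sigma_H,mean against a limit beta; the stress tensors and material constants are placeholders, not values from the panel test.

        # Hedged sketch of the Sines multiaxial fatigue criterion; all numbers are placeholders.
        import numpy as np

        def sqrt_j2_amplitude(s_amp):
            """sqrt of the second deviatoric stress invariant for a stress-amplitude tensor (3x3, MPa)."""
            dev = s_amp - np.trace(s_amp) / 3.0 * np.eye(3)
            return np.sqrt(0.5 * np.sum(dev * dev))

        def sines_index(s_amp, s_mean, alpha, beta):
            """Sines criterion sqrt(J2,a) + alpha*sigma_H,mean <= beta; index < 1 means no predicted failure."""
            sigma_h_mean = np.trace(s_mean) / 3.0
            return (sqrt_j2_amplitude(s_amp) + alpha * sigma_h_mean) / beta

        s_amp = np.diag([80.0, 40.0, 0.0])   # placeholder biaxial stress amplitudes (MPa)
        s_mean = np.diag([60.0, 60.0, 0.0])  # placeholder mean stresses (MPa)
        print(f"Sines index = {sines_index(s_amp, s_mean, alpha=0.3, beta=120.0):.2f}")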

  19. Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique.

    Science.gov (United States)

    Chechetkin, V R; Lobzin, V V

    2017-08-07

    The use of state-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools has led to significant progress in detailing the chromosome architecture of various organisms. However, a gap still remains between the rapidly growing structural data on chromosome folding and the large-scale genome organization. Could part of the information on chromosome folding be obtained directly from the underlying genomic DNA sequences abundantly stored in the databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied both to genomic DNA sequences and to corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to replication and transcription and can also be used for the assessment of temperature or supercoiling effects on chromosome folding. We tested the method on the genome of E. coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of the further abilities of DDFT, a study of the large-scale genome organization of bacteriophage PHIX174 and the bacterium Caulobacter crescentus was also added. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge of the chromosome architecture and genome organization. Copyright © 2017 Elsevier Ltd. All rights reserved.
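
    The paper's exact DDFT formulation is not reproduced in this record. The sketch below only illustrates the general idea of a "double" Fourier analysis of a numerically encoded sequence, in which a second transform of the magnitude spectrum exposes regularities among spectral peaks; the GC-indicator encoding and the toy sequence are assumptions.

        # Minimal illustration of a double Fourier analysis of a DNA sequence; the paper's
        # exact DDFT definition may differ.
        import numpy as np

        def double_fourier_spectrum(seq):
            x = np.array([1.0 if b in "GC" else 0.0 for b in seq.upper()])  # GC indicator signal
            x -= x.mean()
            first = np.abs(np.fft.rfft(x))        # first transform: sequence periodicities
            first -= first.mean()
            second = np.abs(np.fft.rfft(first))   # second transform: regularities among spectral peaks
            return first, second

        seq = "ATGCGC" * 2000                     # toy sequence with a 6-bp repeat
        first, second = double_fourier_spectrum(seq)
        k = np.argmax(first[1:]) + 1              # skip the zero-frequency bin
        print(f"dominant first-pass period: {len(seq) / k:.1f} bp")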

  20. VENUS-III two-dimensional multi-component thermal hydraulic techniques

    International Nuclear Information System (INIS)

    Weber, D.P.

    1979-01-01

    In recent analyses of the initiating phase in LMFBR core disruptive accidents, the energy deposition rate may not be nearly so high as originally thought, and the development of material motion and interaction may take place on a time scale considerably longer than the classic disassembly time scale of milliseconds. This introduces a considerably different twist to the problem, and it becomes apparent that processes heretofore ignored, such as differential motion and heat exchange, may become important. In addition, time scales may become long enough that substantial core material motion may take place, and since rearrangement into more critical configurations cannot be absolutely precluded, a capability for extended motion analysis, not easily performed with Lagrangian techniques in multiple dimensions, becomes desirable. Such considerations provided the motivation for developing a hydrodynamic algorithm to resolve these questions, and an Eulerian rather than Lagrangian frame of reference was chosen, primarily to handle extended motion and interpenetration. The results of the study are described

  1. Spraying Techniques for Large Scale Manufacturing of PEM-FC Electrodes

    Science.gov (United States)

    Hoffman, Casey J.

    Fuel cells are highly efficient energy conversion devices that represent one part of the solution to the world's current energy crisis in the midst of global climate change. When supplied with the necessary reactant gasses, fuel cells produce only electricity, heat, and water. The fuel used, namely hydrogen, is available from many sources including natural gas and the electrolysis of water. If the electricity for electrolysis is generated by renewable energy (e.g., solar and wind power), fuel cells represent a completely 'green' method of producing electricity. The prospect of producing electricity to power homes, vehicles, and other portable or stationary equipment with essentially zero environmentally harmful emissions has been driving academic and industrial fuel cell research and development, with the goal of successfully commercializing this technology. Unfortunately, fuel cells cannot achieve any appreciable market penetration at their current costs. The author's hypothesis is that the development of automated, non-contact deposition methods for electrode manufacturing will improve performance and process flexibility, thereby helping to accelerate the commercialization of PEMFC technology. The overarching motivation for this research was to lower the cost of manufacturing fuel cell electrodes and bring the technology one step closer to commercial viability. The author has proven this hypothesis through a detailed study of two non-contact spraying methods. These scalable deposition systems were incorporated into an automated electrode manufacturing system that was designed and built by the author for this research. The electrode manufacturing techniques developed by the author have been shown to produce electrodes that outperform a common lab-scale contact method that was studied as a baseline, as well as several commercially available electrodes. In addition, these scalable, large scale electrode manufacturing processes developed by the author are

  2. Determination of formation heterogeneity at a range of scales using novel multi-electrode resistivity scanning techniques

    International Nuclear Information System (INIS)

    Williams, G.M.; Jackson, P.D.; Ward, R.S.; Sen, M.A.; Meldrum, P.; Lovell, M.

    1991-01-01

    The traditional method of measuring ground resistivity involves passing a current through two outer electrodes, measuring the potential developed across two electrodes in between, and applying Ohm's Law. In the RESCAN system developed by the British Geological Survey, each electrode can be electronically selected and controlled by software to either pass current or measure potential. Thousands of electrodes can be attached to the system either in 2-D surface arrays or along special plastic covered probes driven vertically into the ground or emplaced in boreholes. Under computer control, the resistivity distribution within the emplaced array can be determined automatically with unprecedented detail and speed, and may be displayed as an image. So far, the RESCAN system has been applied at the meso-scale in monitoring the radial migration of an electrolyte introduced into a recharge well in an unconsolidated aquifer; and CORSCAN at the micro-scale on drill cores to evaluate spatial variability in physical properties. The RESCAN technique has considerable potential for determining formation heterogeneity at different scales and provides a basis for developing stochastic models of groundwater and solute flow in heterogeneous systems. 13 figs.; 1 tab.; 12 refs
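
    The four-electrode principle described above (inject current through two electrodes, measure the potential across two others, apply Ohm's law) reduces to an apparent-resistivity calculation once a geometric factor for the electrode array is chosen. The sketch below uses the textbook Wenner-array factor purely as an illustration; RESCAN's actual electrode geometries and processing are not reproduced here.

        # Apparent resistivity from a single four-electrode reading (Wenner geometry assumed).
        import math

        def wenner_apparent_resistivity(delta_v, current, spacing_m):
            """rho_a = 2 * pi * a * (dV / I) for a Wenner array with electrode spacing a."""
            return 2.0 * math.pi * spacing_m * delta_v / current

        # Illustrative reading: 25 mV measured while injecting 10 mA with 0.5 m spacing.
        print(f"{wenner_apparent_resistivity(0.025, 0.010, 0.5):.1f} ohm-m")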

  3. Multi-scale full-field measurements and near-wall modeling of turbulent subcooled boiling flow using innovative experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, Yassin A., E-mail: y-hassan@tamu.edu

    2016-04-01

    Highlights: • Near-wall full-field velocity components under subcooled boiling were measured. • Simultaneous shadowgraphy, infrared thermometry of wall temperature and particle-tracking velocimetry techniques were combined. • Near-wall velocity modifications under subcooled boiling were observed. - Abstract: Multi-phase flows are one of the challenges on which the CFD simulation community has been working extensively, with relatively little success. The momentum and heat transfer mechanisms associated with multi-phase flows are highly complex, requiring multiple scales in time and space to be resolved simultaneously. Part of the reason behind the low predictive capability of CFD when studying multi-phase flows is the scarcity of CFD-grade experimental data for validation. The complexity of the phenomena and their sensitivity to small sources of perturbation make their measurement a difficult task. Non-intrusive and innovative measuring techniques are required to accurately measure multi-phase flow parameters while at the same time satisfying the high resolution required to validate CFD simulations. In this context, this work explores the feasible implementation of innovative measuring techniques that can provide whole-field and multi-scale measurements of two-phase flow turbulence, heat transfer, and boiling parameters. To this end, three visualization techniques are simultaneously implemented to study subcooled boiling flow through a vertical rectangular channel with a single heated wall. These techniques are listed next and are used as follows: (1) High-speed infrared thermometry (IR-T) is used to study the impact of the boiling level on the heat transfer coefficients at the heated wall, (2) Particle Tracking Velocimetry (PTV) is used to analyze the influence that boiling parameters have on the liquid phase turbulence statistics, (3) High-speed shadowgraphy with LED illumination is used to obtain the gas phase dynamics. To account

  4. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.
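
    A minimal sketch of the kind of cost-benefit comparison such analyses rest on is given below: discounted avoided damages from raising a protection standard are set against investment and maintenance costs. All figures, the discount rate and the time horizon are placeholders, not values from the Mexico study.

        # Placeholder cost-benefit calculation for a flood-protection upgrade.
        def npv_of_protection(avoided_annual_damage, investment, annual_maintenance,
                              discount_rate=0.05, horizon_years=100):
            """Net present value of the upgrade (benefits minus costs, same currency units)."""
            npv = -investment
            for t in range(1, horizon_years + 1):
                npv += (avoided_annual_damage - annual_maintenance) / (1.0 + discount_rate) ** t
            return npv

        npv = npv_of_protection(avoided_annual_damage=12e6, investment=150e6, annual_maintenance=1e6)
        print(f"NPV of the illustrative protection upgrade: {npv / 1e6:.1f} million")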

  5. Quantitative study of Xanthosoma violaceum leaf surfaces using RIMAPS and variogram techniques.

    Science.gov (United States)

    Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M

    2006-08-01

    Two new imaging techniques (rotated image with maximum averaged power spectrum (RIMAPS) and variogram) are presented for the study and description of leaf surfaces. Xanthosoma violaceum was analyzed to illustrate the characteristics of both techniques. Both techniques produce a quantitative description of leaf surface topography. RIMAPS combines rotation of digitized images with the Fourier transform, and it is used to detect pattern orientation and characteristics of surface topography. The variogram relates the mathematical variance of a surface to the area of the sample window observed. It gives the typical scale lengths of the surface patterns. RIMAPS detects the morphological variations of the surface topography pattern between fresh and dried (herbarium) samples of the leaf. The variogram method finds the characteristic dimensions of the leaf microstructure, i.e., cell length, papillae diameter, etc., showing that there are no significant differences between dry and fresh samples. The results obtained show the robustness of RIMAPS and variogram analyses to detect, distinguish, and characterize leaf surfaces, as well as to give scale lengths. Both techniques are tools for the biologist to study variations of the leaf surface when different patterns are present. The use of RIMAPS and variogram opens a wide spectrum of possibilities by providing a systematic, quantitative description of the leaf surface topography.
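
    As an illustration of the variogram idea described above (surface variance as a function of the sample-window size, with characteristic scale lengths appearing where the variance levels off), the sketch below computes block-wise variance of a synthetic surface image; the image and the window sizes are placeholders, not the Xanthosoma data.

        # Variance of a synthetic "surface" image over progressively larger sample windows.
        import numpy as np

        rng = np.random.default_rng(1)
        y, x = np.mgrid[0:256, 0:256]
        surface = np.sin(2 * np.pi * x / 32) + 0.3 * rng.normal(size=(256, 256))  # ~32-pixel pattern

        for w in (4, 8, 16, 32, 64, 128):
            # mean variance over non-overlapping w x w windows (256 is divisible by each w)
            blocks = surface.reshape(256 // w, w, 256 // w, w)
            var_w = blocks.var(axis=(1, 3)).mean()
            print(f"window {w:>3} x {w:<3} px   mean variance = {var_w:.3f}")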

  6. Development of triple scale finite element analyses based on crystallographic homogenization methods

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2004-01-01

    A crystallographic homogenization procedure is implemented in the piezoelectric and elastic-crystalline plastic finite element (FE) code to assess the macro-continuum properties of piezoelectric ceramics and of BCC and FCC sheet metals. The triple-scale hierarchical structure consists of an atom cluster, a crystal aggregation and a macro-continuum. In this paper, we focus on a triple-scale numerical analysis for piezoelectric material and apply it to assess a macro-continuum material property. First, we calculate material properties of the perovskite crystal of the piezoelectric material, XYO3 (such as BaTiO3 and PbTiO3), by employing the ab initio molecular analysis code CASTEP. Next, measured results of SEM and EBSD observations of crystal orientation distributions, shapes and boundaries of a real material (BaTiO3) are employed to define the inhomogeneity of the crystal aggregation, which corresponds to a unit cell of the micro-structure and satisfies the periodicity condition. This procedure constitutes the first scaling up, from the molecular level to the crystal aggregation. Finally, the conventional homogenization procedure is implemented in the FE code to evaluate a macro-continuum property. This final procedure constitutes the second scaling up, from the crystal aggregation (unit cell) to the macro-continuum. This triple-scale analysis is applied to design a piezoelectric ceramic and finds an optimum crystal orientation distribution, in which the macroscopic piezoelectric constant d33 has a maximum value

  7. Application of proton-induced X-ray emission technique to gunshot residue analyses

    International Nuclear Information System (INIS)

    Sen, P.; Panigrahi, N.; Rao, M.S.; Varier, K.M.; Sen, S.; Mehta, G.K.

    1982-01-01

    The proton-induced X-ray emission (PIXE) technique was applied to the identification and analysis of gunshot residues. Studies were made of the type of bullet and bullet hole identification, firearm discharge element profiles, the effect of various target backings, and hand swabbings. The discussion of the results reviews the sensitivity of the PIXE technique, its nondestructive nature, and its role in determining the distance from the gun to the victim and identifying the type of bullet used and whether a wound was made by a bullet or not. The high sensitivity of the PIXE technique, which is able to analyze samples as small as 0.1 to 1 ng, and its usefulness for detecting a variety of elements should make it particularly useful in firearms residue investigations

  8. Preparation for data analysis in Virgo: aspects of computing techniques and analysis techniques for the search for binary coalescences

    OpenAIRE

    Buskulic , D.

    2006-01-01

    The Virgo gravitational-wave interferometric detector is in its commissioning phase; it should reach a sensitivity allowing it to take scientific data in the second half of 2006. Preparation for the analysis of these data is under way, and this thesis addresses several aspects: - An analysis environment, VEGA, has been developed. It allows a physicist user to access and manage the data coming from Virgo, and to develop analysis code...

  9. Pico-CSIA: Picomolar Scale Compound-Specific Isotope Analyses

    Science.gov (United States)

    Baczynski, A. A.; Polissar, P. J.; Juchelka, D.; Schwieters, J. B.; Hilkert, A.; Freeman, K. H.

    2016-12-01

    The basic approach to analyzing molecular isotopes has remained largely unchanged since the late 1990s. Conventional compound-specific isotope analyses (CSIA) are conducted using capillary gas chromatography (GC), a combustion interface, and an isotope-ratio mass spectrometer (IRMS). Commercially available GC-IRMS systems are comprised of components with inner diameters ≥0.25 mm and employ helium flow rates of 1-4 mL/min. These flow rates are an order of magnitude larger than what the IRMS can accept. Consequently, ≥90% of the sample is lost through the open split, and 1-10s of nanomoles of carbon are required for analysis. These sample requirements are prohibitive for many biomarkers, which are often present in picomolar concentrations. We utilize the resolving power and low flows of narrow-bore capillary GC to improve the sensitivity of CSIA. Narrow bore capillary columns (<0.25 mm ID) allow low helium flow rates of ≤0.5 mL/min for more efficient sample transfer to the ion source of the IRMS while maintaining the high linear flow rates necessary to preserve narrow peak widths (~250 ms). The IRMS has been fitted with collector amplifiers configured to 25 ms response times for rapid data acquisition across narrow peaks. Previous authors (e.g., Sacks et al., 2007) successfully demonstrated improved sensitivity afforded by narrow-bore GC columns. They reported an accuracy and precision of 1.4‰ for peaks with an average width at half maximum of 720 ms for 100 picomoles of carbon on column. Our method builds on their advances and further reduces peak widths (~600 ms) and the amount of sample lost prior to isotopic analysis. Preliminary experiments with 100 picomoles of carbon on column show an accuracy and standard deviation <1‰. With further improvement, we hope to demonstrate robust isotopic analysis of 10s of picomoles of carbon, more than 2 orders of magnitude lower than commercial systems. The pico-CSIA method affords high-precision isotopic analyses for
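
    Compound-specific isotope results such as these are conventionally reported in delta notation relative to an international standard. The sketch below shows that conversion for carbon, assuming the commonly cited VPDB 13C/12C ratio of about 0.011180; the sample ratio used is a placeholder, not a measurement from this study.

        # Delta notation: delta13C (per mil) = (R_sample / R_standard - 1) * 1000.
        R_VPDB = 0.011180  # commonly cited 13C/12C ratio of the VPDB standard

        def delta13c_permil(r_sample, r_standard=R_VPDB):
            return (r_sample / r_standard - 1.0) * 1000.0

        print(f"{delta13c_permil(0.010880):+.1f} per mil")  # placeholder ratio, roughly plant-lipid-like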

  10. Spectral analyses of the Forel-Ule Ocean colour comparator scale

    NARCIS (Netherlands)

    Wernand, M.; van der Woerd, H.J.

    2010-01-01

    François Alphonse Forel (1890) and Willi Ule (1892) composed a colour comparator scale, with tints varying from indigo-blue to cola brown, to quantify the colour of natural waters, like seas, lakes and rivers. For each measurement, the observer compares the colour of the water above a submersed

  11. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    Science.gov (United States)

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  12. Decomposing the trade-environment nexus for Malaysia: what do the technique, scale, composition, and comparative advantage effect indicate?

    Science.gov (United States)

    Ling, Chong Hui; Ahmed, Khalid; Binti Muhamad, Rusnah; Shahbaz, Muhammad

    2015-12-01

    This paper investigates the impact of trade openness on CO2 emissions using time series data over the period 1970Q1-2011Q4 for Malaysia. We decompose the trade effect into scale, technique, composition, and comparative advantage effects to check the environmental consequence of trade at four different transition points. To achieve this purpose, we employed the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests to examine the stationarity properties of the variables. The long-run association among the variables is then examined by applying the autoregressive distributed lag (ARDL) bounds testing approach to cointegration. Our results confirm the presence of cointegration. Further, we find that the scale effect has a positive and the technique effect a negative impact on CO2 emissions after a threshold income level, forming an inverted U-shaped relationship and hence validating the environmental Kuznets curve hypothesis. Energy consumption adds to CO2 emissions. Trade openness and the composition effect improve environmental quality by lowering CO2 emissions. The comparative advantage effect increases CO2 emissions and impairs environmental quality. The results provide an innovative approach for assessing the impact of trade openness in four sub-dimensions of trade liberalization. Hence, this study offers a more comprehensive policy tool for trade economists to better design environmentally sustainable trade rules and agreements.
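
    As a hedged sketch of the stationarity pre-test step mentioned above, the snippet below runs an augmented Dickey-Fuller test with statsmodels on a synthetic quarterly-length series (not the Malaysian trade/CO2 data); the ARDL bounds test itself is not reproduced here.

        # ADF unit root test on a synthetic random-walk series of quarterly length.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(42)
        level_series = np.cumsum(rng.normal(size=168))          # 168 quarters ~ 1970Q1-2011Q4
        stat, pvalue, *rest = adfuller(level_series)
        print(f"ADF on levels: stat = {stat:.2f}, p = {pvalue:.2f} (unit root not rejected)")

        stat_d, pvalue_d, *rest = adfuller(np.diff(level_series))
        print(f"ADF on first differences: stat = {stat_d:.2f}, p = {pvalue_d:.2f} (stationary)")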

  13. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  14. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  15. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  16. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result, we performed a large spatial scale SBAS analysis of Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms and thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  17. Correlated Amino Acid and Mineralogical Analyses of Milligram and Submilligram Samples of Carbonaceous Chondrite Lonewolf Nunataks 94101

    Science.gov (United States)

    Burton, S.; Berger, E. L.; Locke, D. R.; Lewis, E. K.

    2018-01-01

    Amino acids, the building blocks of proteins, have been found to be indigenous in the eight carbonaceous chondrite groups. The abundances, structural, enantiomeric and isotopic compositions of amino acids differ significantly among meteorites of different groups and petrologic types. These results suggest parent-body conditions (thermal or aqueous alteration), mineralogy, and the preservation of amino acids are linked. Previously, elucidating specific relationships between amino acids and mineralogy was not possible because the samples analyzed for amino acids were much larger than the scale at which petrologic heterogeneity is observed (sub mm-scale differences corresponding to sub-mg samples); for example, Pizzarello and coworkers measured amino acid abundances and performed X-ray diffraction (XRD) on several samples of the Murchison meteorite, but these analyses were performed on bulk samples that were 500 mg or larger. Advances in the sensitivity of amino acid measurements by liquid chromatography with fluorescence detection/time-of-flight mass spectrometry (LC-FD/TOF-MS), and application of techniques such as high resolution X-ray diffraction (HR-XRD) and scanning electron microscopy (SEM) with energy dispersive spectroscopy (EDS) for mineralogical characterizations, have now enabled coordinated analyses on the scale at which mineral heterogeneity is observed. In this work, we have analyzed samples of the Lonewolf Nunataks (LON) 94101 CM2 carbonaceous chondrite. We are investigating the link(s) between parent body processes, mineralogical context, and amino acid compositions in meteorites on bulk samples (approx. 20 mg) and mineral separates (≤3 mg) from several spatial locations within our allocated samples. Preliminary results of these analyses are presented here.

  18. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger

    2014-11-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and Page-Rank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%

  19. Faster Parallel Traversal of Scale Free Graphs at Extreme Scale with Vertex Delegates

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2014-01-01

    © 2014 IEEE. At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices. We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and Page-Rank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%

  20. Technique for large-scale structural mapping at uranium deposits in non-metamorphosed sedimentary cover rocks

    International Nuclear Information System (INIS)

    Kochkin, B.T.

    1985-01-01

    The technique for large-scale map construction (1:1000 - 1:10000), reflecting small-amplitude fracture and plicate structures, is given for uranium deposits in non-metamorphosed sedimentary cover rocks. Structural drill-log sections, as well as a set of maps with the results of areal analysis of hidden disturbances, structural analysis of isopach lines and facies of platform mantle horizons, serve as source materials for structural map plotting. The steps of structural map construction are considered: 1) construction of the structural carcass; 2) reconstruction of the structure contours; 3) determination of the time of structure initiation; 4) plotting of additional geologic load

  1. Viscoplastic-dynamic analyses of small-scale fracture tests to obtain crack arrest toughness values for PTS conditions

    International Nuclear Information System (INIS)

    Kanninen, M.F.; Hudak, S.J. Jr; Dexter, R.J.; Couque, H.; O'Donoghue, P.E.; Polch, E.Z.

    1988-01-01

    Reliable predictions of crack arrest at the high upper-shelf toughness conditions involved in postulated pressurized thermal shock (PTS) events require procedures beyond those utilized in conventional fracture mechanics treatments. To develop such a procedure, viscoplastic-dynamic fracture mechanics finite element analyses, viscoplastic material characterization testing, and small-scale crack propagation and arrest experimentation are being combined in this research. The approach couples SwRI's viscoplastic-dynamic fracture mechanics finite element code VISCRK with experiments using duplex 4340/A533B steel compact specimens. The experiments are simulated by VISCRK computations employing the Bodner-Partom viscoplastic constitutive relation and the nonlinear fracture mechanics parameter T. The goal is to develop temperature-dependent crack arrest toughness values for A533B steel. While only room temperature KIa values have been obtained so far, these have been found to agree closely with those obtained from wide plate tests. (author)

  2. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone

  3. Refining and validating the Social Interaction Anxiety Scale and the Social Phobia Scale.

    Science.gov (United States)

    Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G; McCabe, Randi E; Rowa, Karen; Antony, Martin M

    2009-01-01

    The Social Interaction Anxiety Scale and Social Phobia Scale are companion measures for assessing symptoms of social anxiety and social phobia. The scales have good reliability and validity across several samples; however, exploratory and confirmatory factor analyses have yielded solutions comprising substantially different item content and factor structures. These discrepancies are likely the result of analyzing items from each scale separately or simultaneously. The current investigation sets out to assess items from those scales, both simultaneously and separately, using exploratory and confirmatory factor analyses in an effort to resolve the factor structure. Participants consisted of a clinical sample (n = 353; 54% women) and an undergraduate sample (n = 317; 75% women) who completed the Social Interaction Anxiety Scale and Social Phobia Scale, along with additional fear-related measures to assess convergent and discriminant validity. A three-factor solution with a reduced set of items was found to be most stable, irrespective of whether the items from each scale are assessed together or separately. Items from the Social Interaction Anxiety Scale represented one factor, whereas items from the Social Phobia Scale represented two other factors. Initial support for scale and factor validity, along with implications and recommendations for future research, is provided. (c) 2009 Wiley-Liss, Inc.

  4. High-Resolution Global and Basin-Scale Ocean Analyses and Forecasts

    Science.gov (United States)

    2009-09-01

    Report documentation page for the Naval Research Laboratory, Oceanographic Division, Stennis Space Center, MS 39529-5004. Abstract fragment: "...six weeks, here circling near the center of an anti-cyclonic eddy seen in both analyses. A third drifter is moving southward past Coffs Harbour."

  5. Oil combatting in a cold environment using bioremediation techniques

    International Nuclear Information System (INIS)

    Rytkoenen, J.; Liukkonen, S.; Levchenko, A.; Worthington, T.; Matishov, G.; Petrov, V.

    1995-01-01

    The clean-up of oil spills in the Arctic environment is often limited by severe and cold environmental conditions. Mechanical methods are usually considered to be most favorable for oil spill combatting. However, remote spill sites, long distances, severe environmental conditions and sensitive ecosystems mean that more advanced combatting techniques are also needed to back up conventional recovery and clean-up measures. This paper describes the results of macro-scale tests conducted by VTT Manufacturing Technology to study the effectiveness of biosorbent technology against marine oil spills. The use of biosorbents was studied as a joint research project involving VTT (Finland) and the Murmansk Marine Biological Institute (Russia). Selected biosorbent products of Marine Systems, U.S.A., and the Bios Group, Russia, were used in macro-scale tests conducted in a basin measuring 15.0 x 3.0 m in length and width, respectively. This paper outlines the macro-scale test project, including microbiological and chemical studies, supported by toxicity tests and various analyses to understand better the fate of oil, especially the degree of biodegradation during the test

  6. Scaling up quality care for mothers and newborns around the time of birth: an overview of methods and analyses of intervention-specific bottlenecks and solutions.

    Science.gov (United States)

    Dickson, Kim E; Kinney, Mary V; Moxon, Sarah G; Ashton, Joanne; Zaka, Nabila; Simen-Kapeu, Aline; Sharma, Gaurav; Kerber, Kate J; Daelmans, Bernadette; Gülmezoglu, A; Mathai, Matthews; Nyange, Christabel; Baye, Martina; Lawn, Joy E

    2015-01-01

    The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for

  7. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence the disease outbreaks. Thus, understanding the spatial pattern and possible interrelated factors of the outbreaks is crucial and needs to be explored in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of the disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from place to place or from person to person, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and areas close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher case numbers. GIS proves to be a vital spatial epidemiological tool for determining the distribution pattern of the disease and generating hypotheses about it. The next step of this research will be to apply advanced geo-analysis methods and other disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia.
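
    One common point-pattern statistic behind such clustering analyses is the average nearest-neighbour index (values below 1 indicate clustering). The sketch below computes it for synthetic case coordinates; it is a conceptual illustration only, not the ArcGIS workflow or the Sabah data.

        # Nearest-neighbour index for a synthetic clustered point pattern (R < 1 => clustered).
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        centres = rng.uniform(0, 10_000, size=(5, 2))                        # cluster centres (metres)
        cases = np.vstack([c + rng.normal(scale=300, size=(30, 2)) for c in centres])

        tree = cKDTree(cases)
        d, _ = tree.query(cases, k=2)                                        # k=1 is the point itself
        observed = d[:, 1].mean()
        area = 10_000 * 10_000
        expected = 0.5 / np.sqrt(len(cases) / area)                          # expectation under complete spatial randomness
        print(f"nearest-neighbour index R = {observed / expected:.2f}")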

  8. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    International Nuclear Information System (INIS)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Rémi; Fourquet, Alain

    2012-01-01

    We wanted to evaluate a simplified “field-in-field” technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with the simplified field-in-field technique after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to a total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves. An analysis of early skin toxicity in the population of 15 patients was performed. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347–1472) for SFF vs. 779 mL (range, 349–1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0–63) for SFF and 1.9 mL (range, 0–57) for ESC. Monitor units were on average 25% higher in ESC plans than in SFF plans. No patient treated with SFF had acute side effects greater than grade 1 on the NCI scale. SFF created homogeneous 3D dose distributions equivalent to electronic surface compensation with dynamic leaves. It allowed the integration of a forward-planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, shorter treatment times allowed better radiation protection of the patient. The low-grade acute toxicity evaluated weekly during treatment and 2 months after treatment completion justifies the pursuit of this technique for all breast patients in our department.

  9. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    International Nuclear Information System (INIS)

    Rasam, A R A; Ghazali, R; Noor, A M M; Mohd, W M N W; Hamid, J R A; Bazlan, M J; Ahmad, N

    2014-01-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence the disease outbreaks. Thus, understanding the spatial pattern and possible interrelated factors of the outbreaks is crucial and needs to be explored in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of the disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from place to place or from person to person, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and areas close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher case numbers. GIS proves to be a vital spatial epidemiological tool for determining the distribution pattern of the disease and generating hypotheses about it. The next step of this research will be to apply advanced geo-analysis methods and other disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia

  10. Controlling for Response Bias in Self-Ratings of Personality: A Comparison of Impression Management Scales and the Overclaiming Technique.

    Science.gov (United States)

    Müller, Sascha; Moshagen, Morten

    2018-04-12

    Self-serving response distortions pose a threat to the validity of personality scales. A common approach to deal with this issue is to rely on impression management (IM) scales. More recently, the overclaiming technique (OCT) has been proposed as an alternative and arguably superior measure of such biases. In this study (N = 162), we tested these approaches in the context of self- and other-ratings using the HEXACO personality inventory. To the extent that the OCT and IM scales can be considered valid measures of response distortions, they are expected to account for inflated self-ratings in particular for those personality dimensions that are prone to socially desirable responding. However, the results show that neither the OCT nor IM account for overly favorable self-ratings. The validity of IM as a measure of response biases was further scrutinized by a substantial correlation with other-rated honesty-humility. As such, this study questions the use of both the OCT and IM to assess self-serving response distortions.

  11. The gold analyser: a tool for valuation and a means for improved mining decisions

    International Nuclear Information System (INIS)

    Stewart, J.M.; Nami, M.

    1986-01-01

    The erratic values of gold grade in Witwatersrand placer deposits necessitate the collection of large numbers of samples for accurate valuation and ore reserve estimation. Owing to manpower requirements, current sampling techniques do not allow for the collection of sufficiently large numbers of samples. A portable gold analyser, which is at an advanced stage of development, is expected to alleviate this problem. It is a lightweight instrument, intended for one-man operation, and is based on energy dispersive X-ray fluorescence principles for determining gold and other mineral concentrations. The instrument is designed for in situ face scanning operations and provides a direct readout and internal storage of the measured gold concentration. The immediate availability of an estimate of gold grade should significantly improve the quality of short-term panel-scale mining decisions. Data are presented to show the improved precision in valuation using the gold analyser instead of conventional chip sampling

  12. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection and post-test analysis and results are presented in this paper.
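
    At the core of digital image correlation is subset matching: a small reference subset is located in the deformed image, typically by maximizing a normalized cross-correlation, and the offset gives the in-plane displacement. The sketch below illustrates that single step on synthetic speckle images; it is a conceptual illustration, not the stereo system used on the test stand.

        # Locate a reference subset in a "deformed" image by zero-normalized cross-correlation.
        import numpy as np

        rng = np.random.default_rng(5)
        reference = rng.random((200, 200))                 # synthetic speckle pattern
        shift = (3, 5)                                     # imposed rigid-body displacement (rows, cols)
        deformed = np.roll(reference, shift, axis=(0, 1))

        subset = reference[80:112, 80:112]                 # 32 x 32 reference subset
        best, best_score = (0, 0), -np.inf
        for dr in range(-8, 9):
            for dc in range(-8, 9):
                window = deformed[80 + dr:112 + dr, 80 + dc:112 + dc]
                a, b = subset - subset.mean(), window - window.mean()
                score = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
                if score > best_score:
                    best, best_score = (dr, dc), score
        print(f"measured displacement (rows, cols) = {best}, correlation = {best_score:.3f}")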

  13. Fine scale analyses of a coralline bank mapped using multi-beam backscatter data

    Digital Repository Service at National Institute of Oceanography (India)

    Menezes, A.A.A.; Naik, M.; Fernandes, W.A.; Haris, K.; Chakraborty, B.; Estiberio, S.; Lohani, R.B.

    In this work, we have developed a classification technique to characterize the seafloor of the Gaveshani (coralline) bank area using multi-beam backscatter data. Soft computational techniques like artificial neural networks (ANNs) based...

  14. Multielemental analyses of isomorphous Indian garnet gemstones by XRD and external pixe techniques.

    Science.gov (United States)

    Venkateswarulu, P; Srinivasa Rao, K; Kasipathi, C; Ramakrishna, Y

    2012-12-01

    Garnet gemstones were collected from parts of the Eastern Ghats geological formations of Andhra Pradesh, India, and their gemological studies were carried out. Their chemistry cannot be studied directly because they are isomorphous mixtures, and none of the individual specimens has an independent chemistry. Hence, the non-destructive instrumental methodology of the external PIXE technique was employed to understand their chemistry and identity. A 3 MeV proton beam was employed to excite the samples. In the present study, geochemical characteristics of garnet gemstones were studied by proton induced X-ray emission. The almandine variety of garnet was found to be abundant in the present study, based on the measured chemical contents. The crystal structure and the lattice parameters were estimated using X-ray diffraction studies. The trace and minor elements were estimated using the PIXE technique and the major compositional elements were confirmed by XRD studies. The technique is found to be very useful in characterizing the garnet gemstones. The present work thus establishes the usefulness and versatility of the external-beam PIXE technique for research in geoscientific methodology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A novel household water insecurity scale: Procedures and psychometric analysis among postpartum women in western Kenya.

    Science.gov (United States)

    Boateng, Godfred O; Collins, Shalean M; Mbullo, Patrick; Wekesa, Pauline; Onono, Maricianah; Neilands, Torsten B; Young, Sera L

    2018-01-01

    Our ability to measure household-level food insecurity has revealed its critical role in a range of physical, psychosocial, and health outcomes. Currently, there is no analogous, standardized instrument for quantifying household-level water insecurity, which prevents us from understanding both its prevalence and consequences. Therefore, our objectives were to develop and validate a household water insecurity scale appropriate for use in our cohort in western Kenya. We used a range of qualitative techniques to develop a preliminary set of 29 household water insecurity questions and administered those questions at 15 and 18 months postpartum, concurrent with a suite of other survey modules. These data were complemented by data on quantity of water used and stored, and microbiological quality. Inter-item and item-total correlations were performed to reduce scale items to 20. Exploratory factor and parallel analyses were used to determine the latent factor structure; a unidimensional scale was hypothesized and tested using confirmatory factor and bifactor analyses, along with multiple statistical fit indices. Reliability was assessed using Cronbach's alpha and the coefficient of stability, which produced a coefficient alpha of 0.97 at 15 and 18 months postpartum and a coefficient of stability of 0.62. Predictive, convergent and discriminant validity of the final household water insecurity scale were supported based on relationships with food insecurity, perceived stress, per capita household water use, and time and money spent acquiring water. The resultant scale is a valid and reliable instrument. It can be used in this setting to test a range of hypotheses about the role of household water insecurity in numerous physical and psychosocial health outcomes, to identify the households most vulnerable to water insecurity, and to evaluate the effects of water-related interventions. To extend its applicability, we encourage efforts to develop a cross-culturally valid scale
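
    A minimal sketch of the reliability step reported above (coefficient alpha and corrected item-total correlations), using simulated Likert-type responses rather than the Kenyan cohort data; the item count and scoring range are assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Simulated responses: 200 households, 20 items scored 0-3, driven by one latent factor
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(1.5 + latent + rng.normal(scale=0.7, size=(200, 20))), 0, 3)

print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))
print("Corrected item-total correlations:", np.round(corrected_item_total(scores), 2))
```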

  16. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

    This paper evaluates upscaling techniques for hydraulic conductivity measurements based on accuracy and practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow model grid-block scale. Initial results indicate that analytical techniques provide adequate upscaling of constitutive properties from the point measurement scale to the flow model grid-block scale. However, no single analytic technique proves to be adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy is dependent on knowledge of the local flow regime at every grid-block.
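
    As an illustration of the kind of analytical upscaling rules involved, the sketch below computes the classical harmonic, geometric and arithmetic means of point-scale conductivities within a grid block; the harmonic and arithmetic means bound the effective block conductivity, and the geometric mean is a common estimate for two-dimensional lognormal fields. The point values are invented.

```python
import numpy as np

def upscale_conductivity(k_points):
    """Classical analytical estimates of block-scale hydraulic conductivity
    from point-scale measurements within one grid block."""
    k = np.asarray(k_points, dtype=float)
    arithmetic = k.mean()                    # flow parallel to layering (upper bound)
    harmonic = len(k) / np.sum(1.0 / k)      # flow perpendicular to layering (lower bound)
    geometric = np.exp(np.log(k).mean())     # common estimate for 2-D lognormal fields
    return harmonic, geometric, arithmetic

# Illustrative point-scale measurements (m/s) inside one flow-model grid block
k_points = [1e-7, 5e-7, 2e-6, 8e-6, 3e-5]
h, g, a = upscale_conductivity(k_points)
print(f"harmonic {h:.2e}  <=  geometric {g:.2e}  <=  arithmetic {a:.2e}")
```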

  17. Large-scale Reconstructions and Independent, Unbiased Clustering Based on Morphological Metrics to Classify Neurons in Selective Populations.

    Science.gov (United States)

    Bragg, Elise M; Briggs, Farran

    2017-02-15

    This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus
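
    A minimal sketch of the independent, unbiased clustering step using k-means with a silhouette-score choice of cluster number (scikit-learn); the morphological feature values are simulated placeholders, not the protocol's measurements.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Simulated morphological metrics for reconstructed neurons
# (e.g. soma area, total dendritic length, branch points, maximum extent)
rng = np.random.default_rng(2)
class_a = rng.normal([150, 2000, 25, 300], [20, 300, 5, 50], size=(80, 4))
class_b = rng.normal([90, 3500, 45, 500], [15, 400, 8, 80], size=(80, 4))
features = StandardScaler().fit_transform(np.vstack([class_a, class_b]))

# Choose the number of morphological classes by silhouette score rather than by eye
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    print(k, round(silhouette_score(features, labels), 3))
```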

  18. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques.

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-03-13

    The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). On the basis of title words and citation relations, publications in the period 2000-2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.
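
    A toy sketch of the underlying idea of grouping publications by citation relatedness, here with modularity-based community detection on a small invented citation graph (networkx); the paper's own bibliometric clustering technique differs, so this is only illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy citation network: nodes are publications, an edge means one cites the other
edges = [
    ("pub1", "pub2"), ("pub1", "pub3"), ("pub2", "pub3"),   # e.g. "measuring harm" papers
    ("pub4", "pub5"), ("pub5", "pub6"), ("pub4", "pub6"),   # e.g. "safety culture" papers
    ("pub3", "pub4"),                                       # weak link between the groups
]
G = nx.Graph(edges)

# Modularity-based community detection groups densely inter-cited publications
for i, community in enumerate(greedy_modularity_communities(G), 1):
    print(f"cluster {i}: {sorted(community)}")
```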

  19. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-01-01

    Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). Methods On the basis of title words and citation relations, publications in the period 2000–2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. Results A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. Conclusions A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.

  20. Gold analysis by the gamma absorption technique

    International Nuclear Information System (INIS)

    Kurtoglu, Arzu; Tugrul, A.B.

    2003-01-01

    Gold (Au) analyses are generally performed using destructive techniques. In this study, the gamma absorption technique has been employed for gold analysis. A series of different gold alloys of known gold content were analysed and a calibration curve was obtained. This curve was then used for the analysis of unknown samples. Gold analyses can be made non-destructively, easily and quickly by the gamma absorption technique. The mass attenuation coefficients of the alloys were measured around the K-shell absorption edge of Au. Theoretical mass attenuation coefficient values were obtained using the WinXCom program, and comparison of the experimental results with the theoretical values showed generally good and acceptable agreement.
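
    A minimal sketch of the calibration-curve idea behind such an absorption measurement: fit the measured mass attenuation coefficient against known gold fractions, then invert a Beer-Lambert transmission measurement for an unknown sample. All numerical values below are invented for illustration.

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-mu_m * rho * x), with mu_m the mass attenuation coefficient.
# Near the Au K-edge, mu_m of an alloy varies strongly with gold content, so a calibration
# curve mu_m(gold fraction) built from known alloys can be inverted for unknown samples.

# Hypothetical calibration alloys: (gold mass fraction, measured mu_m in cm^2/g)
calib_fraction = np.array([0.375, 0.585, 0.750, 0.916, 1.000])
calib_mu = np.array([1.95, 2.40, 2.76, 3.12, 3.30])

slope, intercept = np.polyfit(calib_fraction, calib_mu, 1)   # linear calibration curve

def gold_fraction(I, I0, rho, x):
    """Invert a transmission measurement (thickness x in cm, density rho in g/cm^3)."""
    mu_measured = -np.log(I / I0) / (rho * x)   # cm^2/g
    return (mu_measured - intercept) / slope

# Unknown sample: 0.2 cm thick, density 15.5 g/cm^3, transmitting ~0.02 % of the beam
print("estimated gold mass fraction:", round(gold_fraction(2.3e-4, 1.0, 15.5, 0.2), 3))
```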

  1. Trace contaminant determination in fish scale by laser-ablation technique

    International Nuclear Information System (INIS)

    Lee, I.; Coutant, C.C.; Arakawa, E.T.

    1993-01-01

    Laser ablation on rings of fish scale has been used to analyze the historical accumulation of polychlorinated biphenyls (PCB) in striped bass in the Watts Bar Reservoir. Rings on a fish scale grow in a pattern that forms a record of the fish's chemical intake. In conjunction with the migration patterns of fish monitored by ecologists, relative PCB concentrations in the seasonal rings of fish scale can be used to study the PCB distribution in the reservoir. In this study, a tightly-focused laser beam from a XeCl excimer laser was used to ablate and ionize a small portion of a fish scale placed in a vacuum chamber. The ions were identified and quantified by a time-of-flight mass spectrometer. Studies of this type can provide valuable information for the Department of Energy (DOE) off-site clean-up efforts as well as identifying the impacts of other sources to local aquatic populations

  2. Standardized analyses of nuclear shipping containers

    International Nuclear Information System (INIS)

    Parks, C.V.; Hermann, O.W.; Petrie, L.M.; Hoffman, T.J.; Tang, J.S.; Landers, N.F.; Turner, W.D.

    1983-01-01

    This paper describes improved capabilities for analyses of nuclear fuel shipping containers within SCALE -- a modular code system for Standardized Computer Analyses for Licensing Evaluation. Criticality analysis improvements include the new KENO V, a code which contains an enhanced geometry package, and a new control module which uses KENO V and allows a criticality search on optimum pitch (maximum k-effective) to be performed. The SAS2 sequence is a new shielding analysis module which couples fuel burnup, source term generation, and radial cask shielding. The SAS5 shielding sequence allows a multidimensional Monte Carlo analysis of a shipping cask with code-generated biasing of the particle histories. The thermal analysis sequence (HTAS1) provides an easy-to-use tool for evaluating a shipping cask response to accident conditions. Together, these capabilities of the SCALE system provide the cask designer or evaluator with a computational system of automated procedures and easy-to-understand input that leads to standardization.

  3. Photographic and video techniques used in the 1/5-scale Mark I boiling water reactor pressure suppression experiment

    International Nuclear Information System (INIS)

    Dixon, D.; Lord, D.

    1978-01-01

    The report provides a description of the techniques and equipment used for the photographic and video recordings of the air test series conducted on the 1/5 scale Mark I boiling water reactor (BWR) pressure suppression experimental facility at Lawrence Livermore Laboratory (LLL) between March 4, 1977, and May 12, 1977. Lighting and water filtering are discussed in the photographic system section and are also applicable to the video system. The appendices contain information from the photographic and video camera logs

  4. Performance and Vibration Analyses of Lift-Offset Helicopters

    Directory of Open Access Journals (Sweden)

    Jeong-In Go

    2017-01-01

    Full Text Available A validation study on the performance and vibration analyses of the XH-59A compound helicopter is conducted to establish techniques for the comprehensive analysis of lift-offset compound helicopters. This study considers the XH-59A lift-offset compound helicopter using a rigid coaxial rotor system as a verification model. CAMRAD II (Comprehensive Analytical Method of Rotorcraft Aerodynamics and Dynamics II, a comprehensive analysis code, is used as a tool for the performance, vibration, and loads analyses. A general free wake model, which is a more sophisticated wake model than other wake models, is used to obtain good results for the comprehensive analysis. Performance analyses of the XH-59A helicopter with and without auxiliary propulsion are conducted in various flight conditions. In addition, vibration analyses of the XH-59A compound helicopter configuration are conducted in the forward flight condition. The present comprehensive analysis results are in good agreement with the flight test and previous analyses. Therefore, techniques for the comprehensive analysis of lift-offset compound helicopters are appropriately established. Furthermore, the rotor lifts are calculated for the XH-59A lift-offset compound helicopter in the forward flight condition to investigate the airloads characteristics of the ABC™ (Advancing Blade Concept) rotor.

  5. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    Science.gov (United States)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the
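
    A minimal sketch of the coarse-scale (second) approach to vadose-zone travel time, using a piston-flow estimate from water-table depth, water content and recharge; the cell values are invented, and the detailed Richards'-equation approach of the first method is not reproduced here.

```python
import numpy as np

def vadose_travel_time(depth_to_water_m, water_content, recharge_mm_per_yr):
    """Piston-flow travel time (years) through the unsaturated zone:
    t = depth * theta / recharge."""
    recharge_m_per_yr = np.asarray(recharge_mm_per_yr, dtype=float) / 1000.0
    return depth_to_water_m * water_content / recharge_m_per_yr

# Illustrative grid cells on a loess plateau: deep water tables, modest recharge
depth = np.array([20.0, 50.0, 80.0])       # m to the water table
theta = 0.25                               # volumetric water content of the loess (assumed)
recharge = np.array([80.0, 50.0, 30.0])    # mm/yr

print("travel times (yr):", np.round(vadose_travel_time(depth, theta, recharge), 1))
# -> roughly 62, 250 and 667 years: deep profiles with low recharge store nitrate for centuries
```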

  6. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
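
    A minimal sketch of the classification step: Fisher's linear discriminant applied to points in a nine-parameter space, with leave-one-out accuracy as in small-sample validation. The feature vectors below are simulated stand-ins, not the published multifractal parameters.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Simulated 9-parameter feature vectors for two structural classes
rng = np.random.default_rng(3)
alpha_class = rng.normal(0.0, 1.0, size=(25, 9)) + 1.0   # stand-in for alpha proteins
other_class = rng.normal(0.0, 1.0, size=(24, 9)) - 1.0   # stand-in for {beta, alpha+beta, alpha/beta}
X = np.vstack([alpha_class, other_class])
y = np.array([0] * 25 + [1] * 24)

# Leave-one-out discriminant accuracy, mirroring a small-sample validation
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print("leave-one-out discriminant accuracy:", round(accuracy, 3))
```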

  7. Scaling analysis in bepu licensing of LWR

    International Nuclear Information System (INIS)

    D'auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus

    2012-01-01

    'Scaling' plays an important role for safety analyses in the licensing of water cooled nuclear power reactors. Accident analyses, a subset of safety analyses, are mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental data base and, in recent licensing applications, on best estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, either because of economics or because of safety concerns, has to be executed at reduced scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (a lot of work was performed in the 1980s and 1990s), and it is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges' and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is addressing the scaling issue when demonstrating the applicability of the system codes, the 'key-to-scaling', in the licensing process of a nuclear power plant. The proposed 'road map to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.

  8. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    Full text. A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales ranging from the nano-microscale to the mesoscale is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales, the nano-microscale where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  9. Cross-section library and processing techniques within the SCALE system

    International Nuclear Information System (INIS)

    Westfall, R.M.

    1986-01-01

    A summary of each of the SCALE system features involved in problem-dependent cross section processing is presented. These features include criticality libraries, shielding libraries, the Standard Composition Library, the SCALE functional modules: BONAMI-S, NITAWL-S, XSDRNPM-S, ICE-S, and the Material Information Processor. The automated procedure for cross-section processing is described with examples. 15 refs

  10. Multidimensional scaling technique for analysis of magnetic storms ...

    Indian Academy of Sciences (India)


    Multidimensional Scaling (MDS) comprises a set of models and associated methods for constructing a geometrical representation of proximity and dominance relationships between elements in one or more sets of entities. MDS can be applied to data that express two types of relationships: proximity relations and ...
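
    A minimal sketch of metric MDS: given a matrix of pairwise dissimilarities, find low-dimensional coordinates whose distances approximate them (scikit-learn). The dissimilarity matrix below is a toy example, not magnetic-storm data.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy symmetric dissimilarity matrix between five entities (e.g. storm events)
D = np.array([
    [0.0, 1.0, 2.0, 3.0, 4.0],
    [1.0, 0.0, 1.5, 2.5, 3.5],
    [2.0, 1.5, 0.0, 1.2, 2.2],
    [3.0, 2.5, 1.2, 0.0, 1.1],
    [4.0, 3.5, 2.2, 1.1, 0.0],
])

# Metric MDS: find 2-D coordinates whose pairwise distances best match D
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(np.round(coords, 2))
print("stress:", round(mds.stress_, 3))   # lower stress = better geometric representation
```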

  11. Screening for depressed mood in an adolescent psychiatric context by brief self-assessment scales -- testing psychometric validity of WHO-5 and BDI-6 indices by latent trait analyses

    DEFF Research Database (Denmark)

    Blom, Eva Henje; Bech, Per; Högberg, Göran

    2012-01-01

    of two such scales, which may be used in a two-step screening procedure, the WHO-Five Well-being Index (WHO-5) and the six-item version of Beck's Depression Inventory (BDI-6). METHOD: 66 adolescent psychiatric patients with a clinical diagnosis of major depressive disorder (MDD), 60 girls and 6 boys, aged 14--18 years, mean age 16.8 years, completed the WHO-5 scale as well as the BDI-6. Statistical validity was tested by Mokken and Rasch analyses. RESULTS: The correlation between WHO-5 and BDI-6 was -0.49 (p=0.0001). Mokken analyses showed a coefficient of homogeneity for the WHO-5 of 0.52 and for the BDI-6 of 0.46. Rasch analysis also accepted unidimensionality when testing males versus females (p > 0.05). CONCLUSIONS: The WHO-5 is psychometrically valid in an adolescent psychiatric context including both genders to assess the wellness dimension and applicable as a first step in screening for MDD...

  12. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    Science.gov (United States)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established for the seasonal prediction of rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and the interdecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR for the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
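
    A minimal sketch of the overall scheme, assuming simple running means as the time-scale filters and synthetic rainfall and predictors; the operational model's filters and predictor selection are more elaborate than this illustration.

```python
import numpy as np

def running_mean(x, window):
    """Centered running mean with edge padding (a crude low-pass filter)."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(window) / window, mode="valid")

def decompose(rainfall):
    """Split a yearly series into interannual (<8 yr), interdecadal (8-30 yr)
    and long-period (>30 yr) components using running means as filters."""
    low30 = running_mean(rainfall, 31)   # periods longer than ~30 yr
    low8 = running_mean(rainfall, 9)     # periods longer than ~8 yr
    return rainfall - low8, low8 - low30, low30

# Synthetic flood-period rainfall (mm) with interannual, interdecadal and trend parts
rng = np.random.default_rng(4)
years = np.arange(1951, 2015)
rain = (500 + 40 * np.sin(2 * np.pi * years / 3.5) + 25 * np.sin(2 * np.pi * years / 15)
        + 0.5 * (years - years[0]) + rng.normal(0, 10, years.size))

components = dict(zip(["interannual", "interdecadal", "long-period"], decompose(rain)))

# Fit each component against its own (here synthetic) predictors by least squares
for name, comp in components.items():
    predictors = np.column_stack([comp + rng.normal(0, comp.std() * 0.3, comp.size),
                                  np.ones_like(comp)])          # one predictor + intercept
    coef, *_ = np.linalg.lstsq(predictors, comp, rcond=None)
    corr = np.corrcoef(predictors @ coef, comp)[0, 1]
    print(f"{name:12s} fit correlation: {corr:.2f}")
```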

  13. Exploring Chondrule and CAI Rims Using Micro- and Nano-Scale Petrological and Compositional Analysis

    Science.gov (United States)

    Cartwright, J. A.; Perez-Huerta, A.; Leitner, J.; Vollmer, C.

    2017-12-01

    As the major components within chondrites, chondrules (mm-sized droplets of quenched silicate melt) and calcium-aluminum-rich inclusions (CAI, refractory) represent the most abundant and the earliest materials that solidified from the solar nebula. However, the exact formation mechanisms of these clasts, and whether these processes are related, remains unconstrained, despite extensive petrological and compositional study. By taking advantage of recent advances in nano-scale tomographical techniques, we have undertaken a combined micro- and nano-scale study of CAI and chondrule rim morphologies, to investigate their formation mechanisms. The target lithologies for this research are Wark-Lovering rims (WLR), and fine-grained rims (FGR) around CAIs and chondrules respectively, present within many chondrites. The FGRs, which are up to 100 µm thick, are of particular interest as recent studies have identified presolar grains within them. These grains predate the formation of our Solar System, suggesting FGR formation under nebular conditions. By contrast, WLRs are 10-20 µm thick, made of different compositional layers, and likely formed by flash-heating shortly after CAI formation, thus recording nebular conditions. A detailed multi-scale study of these respective rims will enable us to better understand their formation histories and determine the potential for commonality between these two phases, despite reports of an observed formation age difference of up to 2-3 Myr. We are using a combination of complementary techniques on our selected target areas: 1) Micro-scale characterization using standard microscopic and compositional techniques (SEM-EBSD, EMPA); 2) Nano-scale characterization of structures using transmission electron microscopy (TEM) and elemental, isotopic and tomographic analysis with NanoSIMS and atom probe tomography (APT). Preliminary nano-scale APT analysis of FGR morphologies within the Allende carbonaceous chondrite has successfully discerned

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    International Nuclear Information System (INIS)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries

  16. Scaling analysis in bepu licensing of LWR

    Energy Technology Data Exchange (ETDEWEB)

    D'auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus [University of Pisa, Pisa (Italy)

    2012-08-15

    'Scaling' plays an important role for safety analyses in the licensing of water cooled nuclear power reactors. Accident analyses, a subset of safety analyses, are mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental data base and, in recent licensing applications, on best estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, either because of economics or because of safety concerns, has to be executed at reduced scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (a lot of work was performed in the 1980s and 1990s), and it is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges' and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is addressing the scaling issue when demonstrating the applicability of the system codes, the 'key-to-scaling', in the licensing process of a nuclear power plant. The proposed 'road map to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.

  17. The role of the input scale in parton distribution analyses

    International Nuclear Information System (INIS)

    Jimenez-Delgado, Pedro

    2012-01-01

    A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.

  18. Selection, rejection and optimisation of pyrolytic graphite (PG) crystal analysers for use on the new IRIS graphite analyser bank

    International Nuclear Information System (INIS)

    Marshall, P.J.; Sivia, D.S.; Adams, M.A.; Telling, M.T.F.

    2000-01-01

    This report discusses design problems incurred by equipping the IRIS high-resolution inelastic spectrometer at the ISIS pulsed neutron source, UK with a new 4212 piece pyrolytic graphite crystal analyser array. Of the 4212 graphite pieces required, approximately 2500 will be newly purchased PG crystals with the remainder comprising of the currently installed graphite analysers. The quality of the new analyser pieces, with respect to manufacturing specifications, is assessed, as is the optimum arrangement of new PG pieces amongst old to circumvent degradation of the spectrometer's current angular resolution. Techniques employed to achieve these criteria include accurate calliper measurements, FORTRAN programming and statistical analysis. (author)

  19. Application of a 2-D approximation technique for solving stress analyses problem in FEM

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-10-01

    Full Text Available With the advent of computational techniques and methods like the finite element method, complex engineering problems are no longer difficult to solve. These methods have helped engineers and designers to simulate and solve engineering problems in much more detail than is possible with experimental techniques. However, applying these techniques is not a simple task and requires considerable acumen, understanding, and experience to obtain a solution that is as close to an exact solution as possible with minimum computer resources. In this work, stress analyses of the low-pressure turbine of a small turbofan engine are carried out with the finite element (FE) method using two different techniques. Initially, a complete solid model of the turbine is prepared and finite element modelled with eight-node brick elements, and stresses are calculated using this model. Subsequently, the same turbine is modelled with four-node shell elements for the calculation of stresses. Material properties, applied loads (inertial, aerodynamic, and thermal), and constraints are the same in both cases. The authors have developed a "2-D approximation technique" that approximates a 3-D problem as a 2-D problem in order to save computational time and resources. In this technique, the 3-D domain of variable thickness is divided into many small areas of constant thickness, and it is ensured that the thickness value assigned to each sub-area is a representative thickness of that sub-area and lies within the three-sigma limit. The results reveal that the technique developed is accurate and saves time and computational effort; the stresses obtained by the 2-D technique are within five percent of the 3-D results. The solution is obtained in a CPU time six times less than that of the 3-D model, and the numbers of nodes and elements are more than ten times smaller than those of the 3-D model. ANSYS® was used in this work.

  20. Sensitivity and uncertainty analyses applied to criticality safety validation. Volume 2

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies developed in Volume 1 to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the existing S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently in use by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The methods for application of S/U and generalized linear-least-square methodology (GLLSM) tools to the criticality safety validation procedures were described in Volume 1 of this report. Volume 2 of this report presents the application of these procedures to the validation of criticality safety analyses supporting uranium operations where enrichments are greater than 5 wt %. Specifically, the traditional k_eff trending analyses are compared with newly developed k_eff trending procedures, utilizing the D and c_k coefficients described in Volume 1. These newly developed procedures are applied to a family of postulated systems involving U(11)O2 fuel, with H/X values ranging from 0 to 1,000. These analyses produced a series of guidance and recommendations for the general usage of these various techniques. Recommendations for future work are also detailed.
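
    A minimal sketch of a traditional k_eff trending analysis: regress calculated k_eff for benchmark experiments against a trending parameter such as H/X and examine the residual scatter. The benchmark values below are invented, and the D and c_k based procedures of Volume 1 are not reproduced here.

```python
import numpy as np

# Hypothetical benchmark results: calculated k_eff for critical experiments vs. H/X
h_over_x = np.array([0, 50, 100, 200, 400, 600, 800, 1000], dtype=float)
k_eff = np.array([0.9952, 0.9961, 0.9968, 0.9975, 0.9981, 0.9984, 0.9986, 0.9987])

# Linear trend of the calculated k_eff (i.e. of the computational bias) against H/X
slope, intercept = np.polyfit(h_over_x, k_eff, 1)
residuals = k_eff - (slope * h_over_x + intercept)
print(f"trend: k_eff = {intercept:.4f} + {slope:.2e} * (H/X)")
print(f"residual standard deviation: {residuals.std(ddof=2):.5f}")

# The fitted trend plus a statistical margin would then define an upper subcritical
# limit at a given H/X; the margin itself requires a tolerance-band treatment.
```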

  1. Replicating the microbial community and water quality performance of full-scale slow sand filters in laboratory-scale filters.

    Science.gov (United States)

    Haig, Sarah-Jane; Quince, Christopher; Davies, Robert L; Dorea, Caetano C; Collins, Gavin

    2014-09-15

    Previous laboratory-scale studies to characterise the functional microbial ecology of slow sand filters have suffered from methodological limitations that could compromise their relevance to full-scale systems. Therefore, to ascertain if laboratory-scale slow sand filters (L-SSFs) can replicate the microbial community and water quality production of industrially operated full-scale slow sand filters (I-SSFs), eight cylindrical L-SSFs were constructed and were used to treat water from the same source as the I-SSFs. Half of the L-SSF sand beds were composed of sterilized sand (sterile) from the industrial filters, and the other half of sand taken directly from the same industrial filter (non-sterile). All filters were operated for 10 weeks, with the microbial community and water quality parameters sampled and analysed weekly. To characterize the microbial community, phyla-specific qPCR assays and 454 pyrosequencing of the 16S rRNA gene were used in conjunction with an array of statistical techniques. The results demonstrate that it is possible to mimic both the water quality production and the structure of the microbial community of full-scale filters in the laboratory - at all levels of taxonomic classification except OTU - thus allowing comparison of L-SSF experiments with full-scale units. Further, it was found that the sand type composing the filter bed (non-sterile or sterile), the water quality produced, the age of the filters and the depth of sand samples were all significant factors in explaining observed differences in the structure of the microbial consortia. This study is the first to the authors' knowledge that demonstrates that scaled-down slow sand filters can accurately reproduce the water quality and microbial consortia of full-scale slow sand filters. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Comparing short forms of the Social Interaction Anxiety Scale and the Social Phobia Scale.

    Science.gov (United States)

    Carleton, R Nicholas; Thibodeau, Michel A; Weeks, Justin W; Teale Sapach, Michelle J N; McEvoy, Peter M; Horswill, Samantha C; Heimberg, Richard G

    2014-12-01

    The Social Interaction Anxiety Scale (SIAS) and the Social Phobia Scale (SPS; Mattick & Clarke, 1998) are companion scales developed to measure anxiety in social interaction and performance situations, respectively. The measures have strong discriminant and convergent validity; however, their factor structures remain debated, and furthermore, the combined administration length (i.e., 39 items) can be prohibitive for some settings. There have been 4 attempts to assess the factor structures of the scales and reduce the item content: the 14-item Social Interaction Phobia Scale (SIPS; Carleton et al., 2009), the 12-item SIAS-6/SPS-6 (Peters, Sunderland, Andrews, Rapee, & Mattick, 2012), the 21-item abbreviated SIAS/SPS (ASIAS/ASPS; Kupper & Denollet, 2012), and the 12-item Readability SIAS and SPS (RSIAS/RSPS; Fergus, Valentiner, McGrath, Gier-Lonsway, & Kim, 2012). The current study compared the short forms on (a) factor structure, (b) ability to distinguish between clinical and non-clinical populations, (c) sensitivity to change following therapy, and (d) convergent validity with related measures. Participants included 3,607 undergraduate students (55% women) and 283 patients with social anxiety disorder (43% women). Results of confirmatory factor analyses, sensitivity analyses, and correlation analyses support the robust utility of items in the SIPS and the SPS-6 and SIAS-6 relative to the other short forms; furthermore, the SIPS and the SPS-6 and SIAS-6 were also supported by convergent validity analyses within the undergraduate sample. The RSIAS/RSPS and the ASIAS/ASPS were least supported, based on the current results and the principle of parsimony. Accordingly, researchers and clinicians should consider carefully which of the short forms will best suit their needs. (c) 2014 APA, all rights reserved.

  3. [More than 10 years of follow up of the stop screw technique].

    Science.gov (United States)

    Calvo Calvo, S; Marti Ciruelos, R; Rasero Ponferrada, M; González de Orbe, G; Viña Fernández, R

    2016-01-01

    Infantile flexible flatfoot does not require treatment in most cases. Symptomatic flexible flat feet are treated orthopaedically and surgery is only indicated when orthosis fails. Cases who underwent surgical treatment with the stop screw technique at the 12 de Octubre Hospital between 1995 and 2002 are reported. Patient progress is also analysed. Six angles are measured on the x-ray prior to surgery and those same x-ray angles are measured again before material extraction. They are then compared to see if the correction achieved is statistically significant. A more reduced sample is currently being assessed with the same radiological measurements and two clinical assessment scales: Likert, and Smith and Millar. The latest x-rays are analysed by two radiologists to determine if there is subtalar arthrosis. In the short term, statistically significant differences are observed in all angles. The comparison between the post-surgery angles and the current angles does not show differences, except for the Giannestras angle, which has statistically significantly worsened. Clinical results and patient satisfaction are good. Incipient subtalar arthrosis is present in 68.5% of current patient x-rays. The stop screw method is a cheap, simple and effective technique to correct symptomatic flexible flatfoot that has not improved with conservative treatment. This technique provides short-term foot correction which can be maintained over time. Copyright © 2015 SECOT. Published by Elsevier España. All rights reserved.

  4. Computer simulations for the nano-scale

    International Nuclear Information System (INIS)

    Stich, I.

    2007-01-01

    A review of methods for computations for the nano-scale is presented. The paper should provide a convenient starting point into computations for the nano-scale as well as a more in-depth presentation for those already working in the field of atomic/molecular-scale modeling. The argument is divided into chapters covering the methods for description of the (i) electrons, (ii) ions, and (iii) techniques for efficient solving of the underlying equations. A fairly broad view is taken covering the Hartree-Fock approximation, density functional techniques and quantum Monte-Carlo techniques for electrons. The customary quantum chemistry methods, such as post Hartree-Fock techniques, are only briefly mentioned. Description of both classical and quantum ions is presented. The techniques cover Ehrenfest, Born-Oppenheimer, and Car-Parrinello dynamics. The strong and weak points, of both principal and technical nature, are analyzed. In the second part we introduce a number of applications to demonstrate the different approximations and techniques introduced in the first part. They cover a wide range of applications such as non-simple liquids, surfaces, molecule-surface interactions, applications in nano technology, etc. These more in-depth presentations, while certainly not exhaustive, should provide information on technical aspects of the simulations, typical parameters used, and ways of analysis of the huge amounts of data generated in these large-scale supercomputer simulations. (author)

  5. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. The multianalyser system of the three axes neutron spectrometer PUMA: Pilot experiments with the innovative multiplex technique

    Energy Technology Data Exchange (ETDEWEB)

    Sobolev, Oleg; Hoffmann, Ron; Gibhardt, Holger [Institute for Physical Chemistry, Georg-August-University of Göttingen, Tammannstr. 6, D-37077 Göttingen (Germany); Jünke, Norbert [Forschungs-Neutronenquelle Heinz-Maier-Leibnitz, Technical University of Munich, Lichtenbergstr. 1, D-85748 Garching (Germany); Knorr, Andreas; Meyer, Volker [Institute for Physical Chemistry, Georg-August-University of Göttingen, Tammannstr. 6, D-37077 Göttingen (Germany); Eckold, Götz, E-mail: geckold@gwdg.de [Institute for Physical Chemistry, Georg-August-University of Göttingen, Tammannstr. 6, D-37077 Göttingen (Germany)

    2015-02-01

    A new type of multiplex technique for three axes neutron spectrometers has been realized and successfully commissioned at the PUMA spectrometer at FRM II. Consisting of eleven analyser-detector channels which can be configured individually, this technique is especially suitable for kinetic experiments where a single excitation spectrum is recorded as a function of time without the need to move the spectrometer. On a time-scale of seconds an entire spectrum can be recorded thus allowing users to monitor changes during fast kinetic processes in single shot experiments without the need for stroboscopic techniques. Moreover, the multianalyser system provides an efficient and rapid tool for mapping excitations in (Q,ω)-space. The results of pilot experiments demonstrate the performance of this new technique and a user-friendly software is presented which assists users during their experiments.

  7. A new look at the psychometrics of the parenting scale through the lens of item response theory.

    Science.gov (United States)

    Lorber, Michael F; Xu, Shu; Slep, Amy M Smith; Bulling, Lisanne; O'Leary, Susan G

    2014-01-01

    The psychometrics of the Parenting Scale's Overreactivity and Laxness subscales were evaluated using item response theory (IRT) techniques. The IRT analyses were based on 2 community samples of cohabiting parents of 3- to 8-year-old children, combined to yield a total sample size of 852 families. The results supported the utility of the Overreactivity and Laxness subscales, particularly in discriminating among parents in the mid to upper reaches of each construct. The original versions of the Overreactivity and Laxness subscales were more reliable than alternative, shorter versions identified in replicated factor analyses from previously published research and in IRT analyses in the present research. Moreover, in several cases, the original versions of these subscales, in comparison with the shortened versions, exhibited greater 6-month stabilities and correlations with child externalizing behavior and couple relationship satisfaction. Reliability was greater for the Laxness than for the Overreactivity subscale. Item performance on each subscale was highly variable. Together, the present findings are generally supportive of the psychometrics of the Parenting Scale, particularly for clinical research and practice. They also suggest areas for further development.
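
    A minimal sketch of the IRT quantities behind such an evaluation: a two-parameter-logistic item characteristic curve and its information function, which show where on the latent trait an item discriminates well. The item parameters are invented, and a graded-response model would be the natural extension for Likert-type Parenting Scale items.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2-parameter-logistic item characteristic curve: P(endorse | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = icc_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)   # latent over-reactivity / laxness

# Two hypothetical items: one discriminates sharply at high trait levels,
# one discriminates weakly near the middle of the distribution
for a, b in [(2.0, 1.0), (0.8, 0.0)]:
    print(f"a={a}, b={b}")
    print("  P(theta):", np.round(icc_2pl(theta, a, b), 2))
    print("  info    :", np.round(item_information(theta, a, b), 2))
```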

  8. Comparison of automated ribosomal intergenic spacer analysis (ARISA) and denaturing gradient gel electrophoresis (DGGE) techniques for analysing the influence of diet on ruminal bacterial diversity.

    Science.gov (United States)

    Saro, Cristina; Molina-Alcaide, Eduarda; Abecia, Leticia; Ranilla, María José; Carro, María Dolores

    2018-04-01

    The objective of this study was to compare the automated ribosomal intergenic spacer analysis (ARISA) and the denaturing gradient gel electrophoresis (DGGE) techniques for analysing the effects of diet on diversity in bacterial pellets isolated from the liquid (liquid-associated bacteria (LAB)) and solid (solid-associated bacteria (SAB)) phase of the rumen. The four experimental diets contained forage to concentrate ratios of 70:30 or 30:70 and had either alfalfa hay or grass hay as forage. Four rumen-fistulated animals (two sheep and two goats) received the diets in a Latin square design. Bacterial pellets (LAB and SAB) were isolated at 2 h post-feeding for DNA extraction and analysed by ARISA and DGGE. The number of peaks in individual samples ranged from 48 to 99 for LAB and from 41 to 95 for SAB with ARISA, and values of DGGE-bands ranged from 27 to 50 for LAB and from 18 to 45 for SAB. The LAB samples from high concentrate-fed animals tended (p forage-fed animals with ARISA, but no differences were identified with DGGE. The SAB samples from high concentrate-fed animals had lower (p forage diets with ARISA, but only a trend was noticed for these parameters with DGGE (p forage type on LAB diversity was detected by any technique. In this study, ARISA detected some changes in ruminal bacterial communities that were not detected by DGGE, and therefore ARISA was considered more appropriate for assessing bacterial diversity of ruminal bacterial pellets. The results highlight the impact of the fingerprinting technique used to draw conclusions on dietary factors affecting bacterial diversity in ruminal bacterial pellets.

  9. A QUANTITATIVE METHOD FOR ANALYSING 3-D BRANCHING IN EMBRYONIC KIDNEYS: DEVELOPMENT OF A TECHNIQUE AND PRELIMINARY DATA

    Directory of Open Access Journals (Sweden)

    Gabriel Fricout

    2011-05-01

    Full Text Available The normal human adult kidney contains between 300,000 and 1 million nephrons (the functional units of the kidney. Nephrons develop at the tips of the branching ureteric duct, and therefore ureteric duct branching morphogenesis is critical for normal kidney development. Current methods for analysing ureteric branching are mostly qualitative and those quantitative methods that do exist do not account for the 3-dimensional (3D shape of the ureteric "tree". We have developed a method for measuring the total length of the ureteric tree in 3D. This method is described and preliminary data are presented. The algorithm allows for performing a semi-automatic segmentation of a set of grey level confocal images and an automatic skeletonisation of the resulting binary object. Measurements of length are automatically obtained, and numbers of branch points are manually counted. The final representation can be reconstructed by means of 3D volume rendering software, providing a fully rotating 3D perspective of the skeletonised tree, making it possible to identify and accurately measure branch lengths. Preliminary data shows the total length estimates obtained with the technique to be highly reproducible. Repeat estimates of total tree length vary by just 1-2%. We will now use this technique to further define the growth of the ureteric tree in vitro, under both normal culture conditions, and in the presence of various levels of specific molecules suspected of regulating ureteric growth. The data obtained will provide fundamental information on the development of renal architecture, as well as the regulation of nephron number.
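
    A minimal sketch of the skeletonisation-and-length step on a synthetic binary volume, assuming recent scikit-image (whose skeletonize handles 3-D input) and an isotropic voxel size; it is not the authors' exact algorithm, and a voxel count only approximates the true branch length.

```python
import numpy as np
from skimage.morphology import skeletonize   # recent scikit-image accepts 3-D volumes

# Synthetic binary volume standing in for a segmented ureteric tree:
# a thick vertical trunk with one side branch
volume = np.zeros((60, 40, 40), dtype=bool)
volume[5:55, 18:22, 18:22] = True            # trunk, ~50 voxels long
volume[30:34, 22:35, 18:22] = True           # side branch, ~13 voxels long

skeleton = skeletonize(volume)

voxel_size_um = 2.0                          # isotropic voxel edge length (assumed)
total_length_um = skeleton.sum() * voxel_size_um
print("skeleton voxels:", int(skeleton.sum()))
print("approximate total tree length (um):", total_length_um)
# A voxel count ignores diagonal steps; a graph-based traversal of the skeleton
# would give a more accurate length and the branch points directly.
```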

  10. Muon reconstruction efficiency, momentum scale and resolution in pp collisions at 8TeV with ATLAS

    CERN Document Server

    Dimitrievska, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment identifies and reconstructs muons with two high precision tracking systems, the Inner Detector and the Muon Spectrometer, which provide independent measurements of the muon momentum. This poster summarizes the performance of the combined muon reconstruction in terms of reconstruction efficiency, momentum scale and resolution. Data-driven techniques are used to derive corrections to be applied to simulation in order to reproduce the reconstruction efficiency, momentum scale and resolution as observed in experimental data, and to assess systematic uncertainties on these quantities. The analysed dataset corresponds to an integrated luminosity of 20.4 fb−1 from 8 TeV pp collisions recorded in 2012.

  11. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  12. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. The approach exploits the merits of the optimization framework while incorporating the idea of the PNET method. The analytical process involves minimizing the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential solution of a series of NLP (Nonlinear Programming) problems, where the correlation condition of the PNET method pertaining to the representative mode is taken as an additional constraint in the next analysis. Over succeeding iterations, the final analysis is reached when the collapse probability of the subsequent mode is much smaller than that of the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes, classified by the extent of correlation. In order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)
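
    A minimal illustration of the quantities the PNET-based procedure works with, under the simplifying assumption of a linear collapse margin g = R - S with independent normal resistance R and load S for each mode; the numbers are invented. The actual paper minimises the safety index under mechanism constraints, which is not reproduced here; only the per-mode safety index, its collapse probability, and a crude summed system estimate over representative modes are shown.

```python
from math import sqrt
from scipy.stats import norm

modes = [  # (mean R, std R, mean S, std S) for a few hypothetical collapse modes
    (500.0, 50.0, 300.0, 60.0),
    (450.0, 40.0, 300.0, 60.0),
    (520.0, 55.0, 310.0, 65.0),
]

for i, (mr, sr, ms, ss) in enumerate(modes, 1):
    beta = (mr - ms) / sqrt(sr**2 + ss**2)      # safety index of the mode
    pf = norm.cdf(-beta)                        # collapse probability of the mode
    print(f"mode {i}: safety index = {beta:.2f}, collapse probability = {pf:.2e}")

# Crude system estimate: sum of the representative-mode probabilities (upper-bound flavour)
p_sys = sum(norm.cdf(-(mr - ms) / sqrt(sr**2 + ss**2)) for mr, sr, ms, ss in modes)
print(f"approximate system collapse probability <= {p_sys:.2e}")
```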

  13. Principal Components Analyses of the MMPI-2 PSY-5 Scales. Identification of Facet Subscales

    Science.gov (United States)

    Arnau, Randolph C.; Handel, Richard W.; Archer, Robert P.

    2005-01-01

    The Personality Psychopathology Five (PSY-5) is a five-factor personality trait model designed for assessing personality pathology using quantitative dimensions. Harkness, McNulty, and Ben-Porath developed Minnesota Multiphasic Personality Inventory-2 (MMPI-2) scales based on the PSY-5 model, and these scales were recently added to the standard…

  14. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The aim of this state-of-the-art report on multi-scale modelling of nuclear fuels is to document the development of multi-scale modelling approaches for fuels in support of current fuel optimisation programmes and innovative fuel designs. The objectives of the effort are: - assess international multi-scale modelling approaches devoted to nuclear fuels from the atomic to the macroscopic scale in order to share and promote such approaches; - address all types of fuels: both current (mainly oxide fuels) and advanced fuels (such as minor actinide containing oxide, carbide, nitride, or metal fuels); - address key engineering issues associated with each type of fuel; - assess the quality of existing links between the various scales and list needs for strengthening multi-scale modelling approaches; - identify the most relevant experimental data or experimental characterisation techniques that are missing for validation of fuel multi-scale modelling; - promote exchange between the actors involved at various scales; - promote exchange between multi-scale modelling experts and experimentalists; - exchange information with other expert groups of the WPMM. This report is organised as follows: - Part I lays out the different classes of phenomena relevant to nuclear fuel behaviour. Each chapter is further divided into topics relevant for each class of phenomena. - Part II is devoted to a description of the techniques used to obtain material properties necessary for describing the phenomena and their assessment. - Part III covers details relative to the principles and limits behind each modelling/computational technique as a reference for more detailed information. Included within the appropriate sections are critical analyses of the mid- and long-term challenges for the future (i.e., approximations, methods, scales, key experimental data, characterisation techniques missing or to be strengthened).

  15. Developing Techniques for Small Scale Indigenous Molybdenum-99 Production Using LEU Fission at Tajoura Research Center-Libya [Country report: Libya

    International Nuclear Information System (INIS)

    Alwaer, Sami M.

    2015-01-01

    The objective of this work was to assist the IAEA by providing the Libyan country report on the Coordinated Research Project (CRP) on the subject of “Developing techniques for small scale indigenous Mo-99 production using LEU-foil”, which took place over five years and four RCMs. A CRP on this subject was approved in early 2005. The objectives of this CRP are to: transfer know-how in the area of 99Mo production using LEU targets based on reference technologies from leading laboratories in the field to the participating laboratories in the CRP; develop national work plans based on various stages of technical development and objectives in this field; establish the procedures and protocols to be employed, including quality control and assurance procedures; establish the coordinated activities and programme for preparation, irradiation, and processing of LEU targets; and compare results obtained in the implementation of the technique in order to provide follow-up advice and assistance. Technetium-99m (99mTc), the daughter product of molybdenum-99 (99Mo), is the most commonly utilized medical radioisotope in the world, used for approximately 20-25 million medical diagnostic procedures annually, comprising some 80% of all diagnostic nuclear medicine procedures. National and international efforts are underway to shift the production of medical isotopes from highly enriched uranium (HEU) to low enriched uranium (LEU) targets. A small but growing amount of the current global 99Mo production is derived from the irradiation of LEU targets. The IAEA became aware of the interest of a number of developing Member States that are seeking to become small scale, indigenous producers of 99Mo to meet local nuclear medicine requirements. The IAEA initiated Coordinated Research Project (CRP) T.1.20.18 “Developing techniques for small-scale indigenous production of Mo-99 using LEU or neutron activation” in order to assist countries in this field. The more

  16. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint-based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power such complicated multi-physics problems are becoming progressively tractable; hence, affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive, Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random-sampling-based approaches. At TU Delft such PC methods have been studied for a number of years, and this paper presents a large-scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account, amounting to an unusually high number of stochastic input parameters (42), and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC
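
    A minimal one-dimensional sketch of the non-intrusive spectral projection idea behind Polynomial Chaos, not the adaptive multi-dimensional FANISP algorithm itself: assuming a standard-normal uncertain input and an invented toy model, the PC coefficients are computed by Gauss quadrature and the output mean and variance are recovered from them, then cross-checked against Monte Carlo sampling.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def model(x):
    # toy stand-in for the expensive code output as a function of the uncertain input
    return np.exp(0.3 * x) + 0.5 * x**2

order = 6                            # highest polynomial degree retained
nodes, weights = He.hermegauss(20)   # Gauss quadrature for the weight exp(-x^2/2)
norm = sqrt(2.0 * pi)                # integral of that weight over the real line

# Spectral projection: c_n = E[f(X) He_n(X)] / E[He_n(X)^2], with E[He_n^2] = n!
coeffs = []
for n in range(order + 1):
    basis = He.hermeval(nodes, [0] * n + [1])      # He_n evaluated at the quadrature nodes
    c_n = np.sum(weights * model(nodes) * basis) / (norm * factorial(n))
    coeffs.append(c_n)

mean = coeffs[0]
variance = sum(c**2 * factorial(n) for n, c in enumerate(coeffs) if n > 0)
print(f"PC mean ~ {mean:.4f}, PC variance ~ {variance:.4f}")

# Brute-force Monte Carlo cross-check
samples = model(np.random.default_rng(0).standard_normal(200_000))
print(f"MC mean ~ {samples.mean():.4f}, MC variance ~ {samples.var():.4f}")
```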

  17. Self-concept in preadolescence: A brief version of AF5 scale

    Directory of Open Access Journals (Sweden)

    Pau García-Grau

    2014-06-01

    Full Text Available The purpose of this study was to analyze the psychometric properties of a brief version of the AF5 questionnaire (García & Musitu, 2001) using exploratory and confirmatory techniques on a preadolescent population in the Valencian community (Spain). The sample was made up of 541 participants between 10 and 12 years old: 55.1% (298) boys and 44.9% (243) girls. After observing the results of different reliability and validity analyses (exploratory factor analysis (EFA) and confirmatory factor analysis (CFA)), it was found that the reduced scale consisting of 20 items showed a similar reliability and validity to the original scale. The factorial structure also fits that of the original model established a priori. According to the results of the study, the use of this diagnostic tool with Spanish children seems justified.

  18. Stuttering, induced fluency, and natural fluency: a hierarchical series of activation likelihood estimation meta-analyses.

    Science.gov (United States)

    Budde, Kristin S; Barron, Daniel S; Fox, Peter T

    2014-12-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Probing Mantle Heterogeneity Across Spatial Scales

    Science.gov (United States)

    Hariharan, A.; Moulik, P.; Lekic, V.

    2017-12-01

    Inferences of mantle heterogeneity in terms of temperature, composition, grain size, melt and crystal structure may vary across local, regional and global scales. Probing these scale-dependent effects requires quantitative comparisons and reconciliation of tomographic models that vary in their regional scope, parameterization, regularization and observational constraints. While a range of techniques like radial correlation functions and spherical harmonic analyses have revealed global features like the dominance of long-wavelength variations in mantle heterogeneity, they have limited applicability for specific regions of interest like subduction zones and continental cratons. Moreover, issues like discrepant 1-D reference Earth models and related baseline corrections have impeded the reconciliation of heterogeneity between various regional and global models. We implement a new wavelet-based approach that allows for structure to be filtered simultaneously in both the spectral and spatial domain, allowing us to characterize heterogeneity on a range of scales and in different geographical regions. Our algorithm extends a recent method that expanded lateral variations into the wavelet domain constructed on a cubed sphere. The isolation of reference velocities in the wavelet scaling function facilitates comparisons between models constructed with arbitrary 1-D reference Earth models. The wavelet transformation allows us to quantify the scale-dependent consistency between tomographic models in a region of interest and investigate the fits to data afforded by heterogeneity at various dominant wavelengths. We find substantial and spatially varying differences in the spectrum of heterogeneity between two representative global Vp models constructed using different data and methodologies. Applying the orthonormality of the wavelet expansion, we isolate detailed variations in velocity from models and evaluate additional fits to data afforded by adding such complexities to long
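
    A generic flat-grid sketch of the scale-separation idea, not the cubed-sphere wavelet construction used in the abstract: a synthetic 2-D velocity-anomaly map (an invented stand-in) is decomposed with PyWavelets, the fine-scale detail coefficients are zeroed, and the long-wavelength part is reconstructed. The 'db4' wavelet and decomposition level are illustrative choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 128)
lon, lat = np.meshgrid(x, x)
# long-wavelength pattern plus small-scale "noise" heterogeneity
dv = np.sin(lon) * np.cos(lat) + 0.2 * rng.standard_normal((128, 128))

coeffs = pywt.wavedec2(dv, wavelet="db4", level=3)
# keep the coarse approximation, zero all finer-scale detail coefficients
coarse = [coeffs[0]] + [tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]]
dv_long = pywt.waverec2(coarse, wavelet="db4")[: dv.shape[0], : dv.shape[1]]

# fraction of the field's variance carried by the long-wavelength component
print(f"long-wavelength variance fraction: {dv_long.var() / dv.var():.2f}")
```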

  20. Development and Validation of Academic Dishonesty Scale (ADS): Presenting a Multidimensional Scale

    Science.gov (United States)

    Bashir, Hilal; Bala, Ranjan

    2018-01-01

    The purpose of the study was to develop a scale measuring academic dishonesty of undergraduate students. The sample of the study comprised nine hundred undergraduate students selected via a random sampling technique. After receiving experts' opinions on the face and content validity of the scale, the exploratory factor analysis (EFA) and…

  1. Comparisons of Particle Tracking Techniques and Galerkin Finite Element Methods in Flow Simulations on Watershed Scales

    Science.gov (United States)

    Shih, D.; Yeh, G.

    2009-12-01

    This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most commonly used approaches for numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thus significantly save computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to examine the work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of our preliminary cooperation is also sketched in this poster.
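
    A toy 1-D sketch of the particle-tracking (semi-Lagrangian) idea described above, assuming pure advection of a water-depth pulse at constant velocity; WASH123D itself solves the diffusive wave equation and is not reproduced here. Tracking each node back along its characteristic and interpolating lets the time step exceed the explicit CFL limit, which is the efficiency argument made in the abstract.

```python
import numpy as np

nx, L, u = 200, 100.0, 1.0                      # grid size, domain length, velocity (assumed)
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
h = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)      # initial water-depth pulse

dt = 5.0 * dx / u                               # 5x the explicit CFL limit
for _ in range(20):
    x_departure = x - u * dt                    # back-track each node along its characteristic
    h = np.interp(x_departure, x, h, left=0.0)  # single-velocity track + linear interpolation

print(f"peak after tracking: {h.max():.3f} at x = {x[h.argmax()]:.1f}")
```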

  2. The ENIGMA Consortium : large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Boen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Hartman, Catharina A.; Hoekstra, Pieter J.; Penninx, Brenda W.; Schmaal, Lianne; van Tol, Marie-Jose

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,

  3. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Cannon, Dara M.; Cantor, Rita M.; Carless, Melanie A.; Caseras, Xavier; Cavalleri, Gianpiero L.; Chakravarty, M. Mallar; Chang, Kiki D.; Ching, Christopher R. K.; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P.; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E.; Czisch, Michael; Deary, Ian J.; de Geus, Eco J. C.; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I.; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D.; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E.; Foroud, Tatiana; Fox, Peter T.; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C.; Godlewska, Beata; Goldstein, Rita Z.; Gollub, Randy L.; Grabe, Hans J.; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E.; Gur, Ruben C.; Göring, Harald H. H.; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B.; Hall, Jeremy; Hardy, John; Hartman, Catharina A.; Hass, Johanna; Hatton, Sean N.; Haukvik, Unn K.; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B.; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J.; Hollinshead, Marisa; Holmes, Avram J.; Homuth, Georg; Hoogman, Martine; Hong, L. Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E.; Hwang, Kristy S.; Jack, Clifford R.; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G.; Kahn, René S.; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B. J.; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A.; Lauriello, John; Lawrie, Stephen M.; Lee, Phil H.; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D.; Li, Chiang-Shan; Liberg, Benny; Liewald, David C.; Liu, Xinmin; Lopez, Lorna M.; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W. 
J.; Macqueen, Glenda M.; Malt, Ulrik F.; Mandl, René; Manoach, Dara S.; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A.; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M.; McMahon, Francis J.; McMahon, Katie L.; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W.; Morris, Derek W.; Moses, Eric K.; Mueller, Bryon A.; Muñoz Maniega, Susana; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E.; Nilsson, Lars-Göran; Nugent, Allison C.; Nyberg, Lars; Olvera, Rene L.; Oosterlaan, Jaap; Ophoff, Roel A.; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D.; Penninx, Brenda W.; Peterson, Charles P.; Pfennig, Andrea; Phillips, Mary; Pike, G. Bruce; Poline, Jean-Baptiste; Potkin, Steven G.; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L.; Roffman, Joshua L.; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J.; Royle, Natalie A.; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S.; Salami, Alireza; Satterthwaite, Theodore D.; Savitz, Jonathan; Saykin, Andrew J.; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G.; Schork, Andrew J.; Schulz, S. Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M.; Simmons, Andrew; Sisodiya, Sanjay M.; Smith, Colin; Smoller, Jordan W.; Soares, Jair C.; Sponheim, Scott R.; Sprooten, Emma; Starr, John M.; Steen, Vidar M.; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G.; Teumer, Alexander; Toga, Arthur W.; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; van den Heuvel, Martijn; van der Wee, Nic J.; van Eijk, Kristel; van Erp, Theo G. M.; van Haren, Neeltje E. M.; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C.; Veltman, Dick J.; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M.; Weale, Michael E.; Weiner, Michael W.; Wen, Wei; Westlye, Lars T.; Whalley, Heather C.; Whelan, Christopher D.; White, Tonya; Winkler, Anderson M.; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P.; Thalamuthu, Anbupalam; Schofield, Peter R.; Freimer, Nelson B.; Lawrence, Natalia S.; Drevets, Wayne

    2014-01-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,

  4. The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    P.M. Thompson (Paul); J.L. Stein; S.E. Medland (Sarah Elizabeth); D.P. Hibar (Derrek); A.A. Vásquez (Arias); M.E. Rentería (Miguel); R. Toro (Roberto); N. Jahanshad (Neda); G. Schumann (Gunter); B. Franke (Barbara); M.J. Wright (Margaret); N.G. Martin (Nicholas); I. Agartz (Ingrid); M. Alda (Martin); S. Alhusaini (Saud); L. Almasy (Laura); K. Alpert (Kathryn); N.C. Andreasen; O.A. Andreassen (Ole); L.G. Apostolova (Liana); K. Appel (Katja); N.J. Armstrong (Nicola); B. Aribisala (Benjamin); M.E. Bastin (Mark); M. Bauer (Michael); C.E. Bearden (Carrie); Ø. Bergmann (Ørjan); E.B. Binder (Elisabeth); J. Blangero (John); H.J. Bockholt; E. Bøen (Erlend); M. Bois (Monique); D.I. Boomsma (Dorret); T. Booth (Tom); I.J. Bowman (Ian); L.B.C. Bralten (Linda); R.M. Brouwer (Rachel); H.G. Brunner; D.G. Brohawn (David); M. Buckner; J.K. Buitelaar (Jan); K. Bulayeva (Kazima); J. Bustillo; V.D. Calhoun (Vince); D.M. Cannon (Dara); R.M. Cantor; M.A. Carless (Melanie); X. Caseras (Xavier); G. Cavalleri (Gianpiero); M.M. Chakravarty (M. Mallar); K.D. Chang (Kiki); C.R.K. Ching (Christopher); A. Christoforou (Andrea); S. Cichon (Sven); V.P. Clark; P. Conrod (Patricia); D. Coppola (Domenico); B. Crespo-Facorro (Benedicto); J.E. Curran (Joanne); M. Czisch (Michael); I.J. Deary (Ian); E.J.C. de Geus (Eco); A. den Braber (Anouk); G. Delvecchio (Giuseppe); C. Depondt (Chantal); L. de Haan (Lieuwe); G.I. de Zubicaray (Greig); D. Dima (Danai); R. Dimitrova (Rali); S. Djurovic (Srdjan); H. Dong (Hongwei); D.J. Donohoe (Dennis); A. Duggirala (Aparna); M.D. Dyer (Matthew); S.M. Ehrlich (Stefan); C.J. Ekman (Carl Johan); T. Elvsåshagen (Torbjørn); L. Emsell (Louise); S. Erk; T. Espeseth (Thomas); J. Fagerness (Jesen); S. Fears (Scott); I. Fedko (Iryna); G. Fernandez (Guillén); S.E. Fisher (Simon); T. Foroud (Tatiana); P.T. Fox (Peter); C. Francks (Clyde); S. Frangou (Sophia); E.M. Frey (Eva Maria); T. Frodl (Thomas); V. Frouin (Vincent); H. Garavan (Hugh); S. Giddaluru (Sudheer); D.C. Glahn (David); B. Godlewska (Beata); R.Z. Goldstein (Rita); R.L. Gollub (Randy); H.J. Grabe (Hans Jörgen); O. Grimm (Oliver); O. Gruber (Oliver); T. Guadalupe (Tulio); R.E. Gur (Raquel); R.C. Gur (Ruben); H.H.H. Göring (Harald); S. Hagenaars (Saskia); T. Hajek (Tomas); G.B. Hall (Garry); J. Hall (Jeremy); J. Hardy (John); C.A. Hartman (Catharina); J. Hass (Johanna); W. Hatton; U.K. Haukvik (Unn); K. Hegenscheid (Katrin); J. Heinz (Judith); I.B. Hickie (Ian); B.C. Ho (Beng ); D. Hoehn (David); P.J. Hoekstra (Pieter); M. Hollinshead (Marisa); A.J. Holmes (Avram); G. Homuth (Georg); M. Hoogman (Martine); L.E. Hong (L.Elliot); N. Hosten (Norbert); J.J. Hottenga (Jouke Jan); H.E. Hulshoff Pol (Hilleke); K.S. Hwang (Kristy); C.R. Jack Jr. (Clifford); S. Jenkinson (Sarah); C. Johnston; E.G. Jönsson (Erik); R.S. Kahn (René); D. Kasperaviciute (Dalia); S. Kelly (Steve); S. Kim (Shinseog); P. Kochunov (Peter); L. Koenders (Laura); B. Krämer (Bernd); J.B.J. Kwok (John); J. Lagopoulos (Jim); G. Laje (Gonzalo); M. Landén (Mikael); B.A. Landman (Bennett); J. Lauriello; S. Lawrie (Stephen); P.H. Lee (Phil); S. Le Hellard (Stephanie); H. Lemaître (Herve); C.D. Leonardo (Cassandra); C.-S. Li (Chiang-shan); B. Liberg (Benny); D.C. Liewald (David C.); X. Liu (Xinmin); L.M. Lopez (Lorna); E. Loth (Eva); A. Lourdusamy (Anbarasu); M. Luciano (Michelle); F. MacCiardi (Fabio); M.W.J. Machielsen (Marise); G.M. MacQueen (Glenda); U.F. Malt (Ulrik); R. Mandl (René); D.S. Manoach (Dara); J.-L. Martinot (Jean-Luc); M. Matarin (Mar); R. Mather; M. 
Mattheisen (Manuel); M. Mattingsdal (Morten); A. Meyer-Lindenberg; C. McDonald (Colm); A.M. McIntosh (Andrew); F.J. Mcmahon (Francis J); K.L. Mcmahon (Katie); E. Meisenzahl (Eva); I. Melle (Ingrid); Y. Milaneschi (Yuri); S. Mohnke (Sebastian); G.W. Montgomery (Grant); D.W. Morris (Derek W); E.K. Moses (Eric); B.A. Mueller (Bryon ); S. Muñoz Maniega (Susana); T.W. Mühleisen (Thomas); B. Müller-Myhsok (Bertram); B. Mwangi (Benson); M. Nauck (Matthias); K. Nho (Kwangsik); T.E. Nichols (Thomas); L.G. Nilsson; A.C. Nugent (Allison); L. Nyberg (Lisa); R.L. Olvera (Rene); J. Oosterlaan (Jaap); R.A. Ophoff (Roel); M. Pandolfo (Massimo); M. Papalampropoulou-Tsiridou (Melina); M. Papmeyer (Martina); T. Paus (Tomas); Z. Pausova (Zdenka); G. Pearlson (Godfrey); B.W.J.H. Penninx (Brenda); C.P. Peterson (Charles); A. Pfennig (Andrea); M. Phillips (Mary); G.B. Pike (G Bruce); J.B. Poline (Jean Baptiste); S.G. Potkin (Steven); B. Pütz (Benno); A. Ramasamy (Adaikalavan); J. Rasmussen (Jerod); M. Rietschel (Marcella); M. Rijpkema (Mark); S.L. Risacher (Shannon); J.L. Roffman (Joshua); R. Roiz-Santiañez (Roberto); N. Romanczuk-Seiferth (Nina); E.J. Rose (Emma); N.A. Royle (Natalie); D. Rujescu (Dan); M. Ryten (Mina); P.S. Sachdev (Perminder); A. Salami (Alireza); T.D. Satterthwaite (Theodore); J. Savitz (Jonathan); A.J. Saykin (Andrew); C. Scanlon (Cathy); L. Schmaal (Lianne); H. Schnack (Hugo); N.J. Schork (Nicholas); S.C. Schulz (S.Charles); R. Schür (Remmelt); L.J. Seidman (Larry); L. Shen (Li); L. Shoemaker (Lawrence); A. Simmons (Andrew); S.M. Sisodiya (Sanjay); C. Smith (Colin); J.W. Smoller; J.C. Soares (Jair); S.R. Sponheim (Scott); R. Sprooten (Roy); J.M. Starr (John); V.M. Steen (Vidar); S. Strakowski (Stephen); L.T. Strike (Lachlan); J. Sussmann (Jessika); P.G. Sämann (Philipp); A. Teumer (Alexander); A.W. Toga (Arthur); D. Tordesillas-Gutierrez (Diana); D. Trabzuni (Danyah); S. Trost (Sarah); J. Turner (Jessica); M. van den Heuvel (Martijn); N.J. van der Wee (Nic); K.R. van Eijk (Kristel); T.G.M. van Erp (Theo G.); N.E.M. van Haren (Neeltje E.); D. van 't Ent (Dennis); M.J.D. van Tol (Marie-José); M.C. Valdés Hernández (Maria); D.J. Veltman (Dick); A. Versace (Amelia); H. Völzke (Henry); R. Walker (Robert); H.J. Walter (Henrik); L. Wang (Lei); J.M. Wardlaw (J.); M.E. Weale (Michael); M.W. Weiner (Michael); W. Wen (Wei); L.T. Westlye (Lars); H.C. Whalley (Heather); C.D. Whelan (Christopher); T.J.H. White (Tonya); A.M. Winkler (Anderson); K. Wittfeld (Katharina); G. Woldehawariat (Girma); A. Björnsson (Asgeir); D. Zilles (David); M.P. Zwiers (Marcel); A. Thalamuthu (Anbupalam); J.R. Almeida (Jorge); C.J. Schofield (Christopher); N.B. Freimer (Nelson); N.S. Lawrence (Natalia); D.A. Drevets (Douglas)

    2014-01-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in

  5. Use of acoustic emission technique to study the spalling behaviour of oxide scales on Ni-10Cr-8Al containing sulphur and/or yttrium impurity

    International Nuclear Information System (INIS)

    Khanna, A.S.; Quadakkers, W.J.; Jonas, H.

    1989-01-01

    It is now well established that the presence of a small amount of sulphur impurity in a NiCrAl-based alloy has a deleterious effect on its high temperature oxidation behaviour. It is, however, not clear whether the adverse effect is due to a decrease in the spalling resistance of the oxide scale or due to an enhanced scale growth. In order to confirm which of the factors is dominating, two independent experimental techniques were used in the investigation of the oxidation behaviour of Ni-10Cr-8Al containing sulphur and/or yttrium additions: conventional thermogravimetry, to study the scale growth rates, and acoustic emission analysis, to study the scale adherence. The results indicated that the dominant factor responsible for the deleterious effect of sulphur impurity on the oxidation of a Ni-10Cr-8Al alloy was a significant change in the growth rate and the composition of the scale. Addition of yttrium improved the oxidation behaviour, not only by increasing the scale adherence, but also by reducing the scale growth due to gettering of sulphur. (orig.)

  6. Application of the Particle Swarm Optimization (PSO) technique to the thermal-hydraulics project of a PWR reactor core in reduced scale; Aplicacao da tecnica de otimizacao por enxame de particulas no projeto termo-hidraulico em escala reduzida do nucleo de um reator PWR

    Energy Technology Data Exchange (ETDEWEB)

    Lima Junior, Carlos Alberto de Souza

    2008-09-15

    The design of reduced-scale models has been employed by engineers from several industries, such as the offshore, space, oil extraction and nuclear industries. Reduced-scale models are used in experiments because they are economically attractive compared with the prototype (real scale): in many cases they are cheaper and easier to build, providing a way to guide the real-scale design and allowing indirect investigation and analysis of the real-scale system (prototype). A reduced-scale model (or experiment) must be able to represent all the physical phenomena that occur in the real-scale system under operational conditions; in this case the reduced-scale model is called similar. There are several methods for designing a reduced-scale model, of which two are basic: the empirical method, based on the expert's skill in determining which physical quantities are relevant to the desired model; and the differential equation method, based on a mathematical description of the prototype (real-scale system) to be modelled. By applying a mathematical technique to the differential equations that describe the prototype and highlighting the relevant physical quantities, the reduced-scale model design problem may be treated as an optimization problem. Many optimization techniques, such as the Genetic Algorithm (GA), have been developed to solve this class of problems and have also been applied to the reduced-scale model design problem. In this work, the Particle Swarm Optimization (PSO) technique is investigated as an alternative optimization tool for such a problem. A computational approach based on the particle swarm optimization technique (PSO) is used to design a reduced-scale two-loop Pressurized Water Reactor (PWR) core, considering operation at 100% of nominal power with forced cooling circulation and non-accidental operating conditions. A performance
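
    A minimal particle swarm optimisation sketch in Python, assuming a generic bounded objective function; the thesis applies PSO to the reduced-scale PWR core design problem, whose actual objective and constraints are not reproduced here, so the sphere function below is only a placeholder.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Classic global-best PSO over box-bounded continuous variables."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                 # keep particles inside the bounds
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# usage example on a toy objective (sphere function)
best_x, best_f = pso(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 3)
print(best_x, best_f)
```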

  7. Investigation of flow behaviour of coal particles in a pilot-scale fluidized bed gasifier (FBG) using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K; Kamudu, M Vidya; Prakash, S G; Krishanamoorthy, S; Anandam, G; Rao, P Seshubabu; Ramani, N V S; Singh, Gursharan; Sonde, R R

    2009-09-01

    Knowledge of the residence time distribution (RTD), mean residence time (MRT) and degree of axial mixing of the solid phase is required for efficient operation of the coal gasification process. The radiotracer technique was used to measure the RTD of coal particles in a pilot-scale fluidized bed gasifier (FBG). Two different radiotracers, i.e. lanthanum-140 and gold-198 labelled coal particles (100 g), were used independently. The radiotracer was instantaneously injected into the coal feed line and monitored at the ash extraction line at the bottom and the gas outlet at the top of the gasifier using collimated scintillation detectors. The measured RTD data were treated and the MRTs of coal/ash particles were determined. The treated data were simulated using a tanks-in-series model. The simulation of the RTD data indicated a good degree of mixing, with a small fraction of the feed material bypassing/short-circuiting from the bottom of the gasifier. The results of the investigation were found useful for optimizing the design and operation of the FBG, and for scale-up of the gasification process.
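
    A small sketch of how a tracer curve like the one described above might be reduced to a mean residence time and a tanks-in-series parameter by the method of moments; the time grid and "detector counts" below are synthetic stand-ins, not the pilot-plant data.

```python
import numpy as np
from math import factorial

t = np.linspace(0.0, 600.0, 601)                   # time, s
dt = t[1] - t[0]
N_true, tau = 4, 180.0                             # synthetic "true" tank number and MRT
E_true = (N_true / tau) * (N_true * t / tau) ** (N_true - 1) \
         * np.exp(-N_true * t / tau) / factorial(N_true - 1)
counts = E_true + 1e-4 * np.random.default_rng(0).random(t.size)   # noisy detector signal

E = counts / (counts.sum() * dt)                   # normalised RTD, E(t)
mrt = (t * E).sum() * dt                           # mean residence time
var = ((t - mrt) ** 2 * E).sum() * dt              # variance of the RTD
n_tanks = mrt ** 2 / var                           # tanks-in-series estimate (method of moments)

print(f"MRT ~ {mrt:.1f} s, tanks-in-series N ~ {n_tanks:.1f}")
```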

  8. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller bearing assembly process, carried out in a large company with integrated bearing manufacturing processes. In these analyses the delay sampling technique was used to identify and divide all the bearing assemblers' activities, and to obtain information about how much of the 480-minute working day the workers devote to each activity. The study shows some ways to increase process productivity without supplementary investment and also indicates that process automation could be the solution for gaining maximum productivity.
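
    A small illustration of the delay (work) sampling arithmetic described above: random observation counts per activity are converted into estimated minutes of a 480-minute shift. The activity names and counts are invented for the example, not taken from the paper.

```python
# hypothetical tallies from random-moment observations of the assemblers
observations = {"assembly": 212, "material handling": 48, "inspection": 30,
                "idle / delays": 70, "other": 40}
shift_minutes = 480
total = sum(observations.values())

for activity, n in observations.items():
    share = n / total                      # observed proportion of the working day
    minutes = share * shift_minutes        # estimated time devoted to the activity
    print(f"{activity:18s} {share:6.1%}  ~ {minutes:5.1f} min per shift")
```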

  9. Factor validation of the portuguese version of the social skills scale of the Preschool and Kindergarten Behavior Scales

    Directory of Open Access Journals (Sweden)

    Maria João Seabra-Santos

    2014-05-01

    Full Text Available The assessment of preschoolers' social skills represents a topic of growing importance in research recently developed in the field. The purpose of this article is to present confirmatory factor analysis studies for the Social Skills scale of the Preschool and Kindergarten Behavior Scales – Second Edition (PKBS-2), a behavior rating scale that evaluates social skills and problem behaviors, adapted and validated for Portuguese preschool children. The 34 items of the Social Skills scale, distributed on three subscales (Social Cooperation/Adjustment, Social Interaction/Empathy and Social Independence/Assertiveness), were grouped into item-parcels. Model adjustment was analyzed for the total sample (N = 2000) and the analyses were replicated for the subsamples collected in the home (n = 1000) and school (n = 1000) settings. The factor structure was very stable for the three samples, with high internal consistency levels and correlations between parcels/scales. The results highlight the utility/validity of the Social Skills scale of the PKBS-2 (Portuguese version).

  10. Muon reconstruction efficiency, momentum scale and resolution in pp collisions at 8TeV with ATLAS

    CERN Document Server

    Dimitrievska, A; The ATLAS collaboration; Sforza, F

    2014-01-01

    The ATLAS experiment identifies and reconstructs muons with two high precision tracking systems, the inner detector and the muon spectrometer, which provide independent measurements of the muon momentum. This poster summarizes the performance of the combined muon reconstruction in terms of reconstruction efficiency, momentum scale and resolution. Data-driven techniques are used to derive corrections to be applied to simulation in order to reproduce the reconstruction efficiency, momentum scale and resolution as observed in experimental data, and to assess systematic uncertainties on these quantities. The analysed dataset corresponds to an integrated luminosity of 20.4 fb−1 from pp collisions at a center of mass energy of 8 TeV recorded in 2012.

  11. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  12. Impact and fracture analysis of fish scales from Arapaima gigas.

    Science.gov (United States)

    Torres, F G; Malásquez, M; Troncoso, O P

    2015-06-01

    Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour at three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of collagen layers. The energy absorbed during impact tests was measured for each sample and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display brittle behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Scaling model for prediction of radionuclide activity in cooling water using a regression triplet technique

    International Nuclear Information System (INIS)

    Silvia Dulanska; Lubomir Matel; Milan Meloun

    2010-01-01

    The decommissioning of the nuclear power plant (NPP) A1 Jaslovske Bohunice (Slovakia) is a complicated set of problems that is highly demanding both technically and financially. The basic goal of the decommissioning process is the total elimination of radioactive materials from the nuclear power plant area, and radwaste treatment to a form suitable for its safe disposal. The initial conditions of decommissioning also include elimination of the operational events, preparation and transport of the fuel from the plant territory, and radiochemical and physical-chemical characterization of the radioactive wastes. One of the problems was, and still is, the processing of the liquid radioactive wastes. One such medium is the cooling water of the long-term storage of spent fuel. A suitable scaling model for predicting the activity of the hard-to-detect radionuclides 239,240Pu and 90Sr and of summary beta in cooling water has been built using the regression triplet technique, regression triplet analysis and regression diagnostics. (author)
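
    A hedged sketch of the general scaling-model idea, not the authors' regression triplet procedure itself: regress the logarithm of a hard-to-detect nuclide's activity on the logarithm of an easy-to-measure key quantity and inspect the residuals as a basic diagnostic. All data below are synthetic placeholders, and the choice of variables is an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
log_key = rng.uniform(1, 4, 30)                              # e.g. log10 of an easy-to-measure activity
log_hard = 0.9 * log_key - 1.2 + rng.normal(0, 0.15, 30)     # e.g. log10 of a hard-to-detect activity

fit = stats.linregress(log_key, log_hard)                    # scaling relation in log-log space
pred = fit.intercept + fit.slope * log_key
residuals = log_hard - pred

print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.3f}")
print(f"residual std = {residuals.std(ddof=2):.3f}")          # basic regression diagnostic
```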

  14. The Development and Validation of the Comprehensive Intellectual Humility Scale.

    Science.gov (United States)

    Krumrei-Mancuso, Elizabeth J; Rouse, Steven V

    2016-01-01

    A series of studies was conducted to create the 22-item Comprehensive Intellectual Humility Scale on the basis of theoretical descriptions of intellectual humility, expert reviews, pilot studies, and exploratory and confirmatory factor analyses. The scale measures 4 distinct but intercorrelated aspects of intellectual humility, including independence of intellect and ego, openness to revising one's viewpoint, respect for others' viewpoints, and lack of intellectual overconfidence. Internal consistency and test-retest analyses provided reliable scale and subscale scores within numerous independent samples. Validation data were obtained from multiple, independent samples, supporting appropriate levels of convergent, discriminant, and predictive validity. The analyses suggest that the scale has utility as a self-report measure for future research.
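
    A minimal sketch of one internal-consistency statistic mentioned above, Cronbach's alpha, computed from an item-score matrix; the 5-item, 6-respondent matrix is invented and is not data from the scale-development studies.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

scores = [[4, 5, 4, 3, 4],
          [2, 3, 3, 2, 2],
          [5, 5, 4, 5, 4],
          [3, 3, 2, 3, 3],
          [4, 4, 5, 4, 5],
          [1, 2, 2, 1, 2]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```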

  15. Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2017-04-01

    The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Challenging the assumptions for thermal sensation scales

    DEFF Research Database (Denmark)

    Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne

    2016-01-01

    Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non

  17. Preparation of Kepler light curves for asteroseismic analyses

    NARCIS (Netherlands)

    García, R.A.; Hekker, S.; Stello, D.; Gutiérrez-Soto, J.; Handberg, R.; Huber, D.; Karoff, C.; Uytterhoeven, K.; Appourchaux, T.; Chaplin, W.J.; Elsworth, Y.; Mathur, S.; Ballot, J.; Christensen-Dalsgaard, J.; Gilliland, R.L.; Houdek, G.; Jenkins, J.M.; Kjeldsen, H.; McCauliff, S.; Metcalfe, T.; Middour, C.K.; Molenda-Zakowicz, J.; Monteiro, M.J.P.F.G.; Smith, J.C.; Thompson, M.J.

    2011-01-01

    The Kepler mission is providing photometric data of exquisite quality for the asteroseismic study of different classes of pulsating stars. These analyses place particular demands on the pre-processing of the data, over a range of time-scales from minutes to months. Here, we describe processing

  18. Check-all-that-apply data analysed by Partial Least Squares regression

    DEFF Research Database (Denmark)

    Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

    2015-01-01

    are analysed by multivariate techniques. CATA data can be analysed with the CATA table either as the X or as the Y of a PLS model: the former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding
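
    A small sketch of the two ways a CATA table can enter a PLS model, assuming the scikit-learn implementation of PLS; the binary CATA matrix and sample labels are synthetic, and using PLSRegression for both variants is an illustrative choice rather than the authors' software. PLS-DA regresses a dummy-coded sample matrix on the CATA counts (CATA as X); A-PLS reverses the roles (CATA as Y).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_consumers, n_terms, n_samples = 90, 8, 3
cata = rng.integers(0, 2, size=(n_consumers, n_terms)).astype(float)   # checked / not checked
labels = rng.integers(0, n_samples, n_consumers)
dummy = np.eye(n_samples)[labels]                                      # dummy-coded samples

plsda = PLSRegression(n_components=2).fit(cata, dummy)     # PLS-DA: CATA as X
apls = PLSRegression(n_components=2).fit(dummy, cata)      # A-PLS:  CATA as Y
print("PLS-DA scores shape:", plsda.transform(cata).shape)
print("A-PLS  scores shape:", apls.transform(dummy).shape)
```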

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    International Nuclear Information System (INIS)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR

  1. The development of an on-line gold analyser

    International Nuclear Information System (INIS)

    Robert, R.V.D.; Ormrod, G.T.W.

    1982-01-01

    An on-line analyser to monitor the gold in solutions from the carbon-in-pulp process is described. The automatic system is based on the delivery of filtered samples of the solutions to a distribution valve for measurement by flameless atomic-absorption spectrophotometry. The sample is introduced by the aerosol-deposition method. Operation of the analyser on a pilot plant and on a full-scale carbon-in-pulp plant has shown that the system is economically feasible and capable of providing a continuous indication of the efficiency of the extraction process

  2. Three-dimensional micro-scale strain mapping in living biological soft tissues.

    Science.gov (United States)

    Moo, Eng Kuan; Sibole, Scott C; Han, Sang Kuy; Herzog, Walter

    2018-04-01

    Non-invasive characterization of the mechanical micro-environment surrounding cells in biological tissues at multiple length scales is important for the understanding of the role of mechanics in regulating the biosynthesis and phenotype of cells. However, there is a lack of imaging methods that allow for characterization of the cell micro-environment in three-dimensional (3D) space. The aims of this study were (i) to develop a multi-photon laser microscopy protocol capable of imprinting 3D grid lines onto living tissue at a high spatial resolution, and (ii) to develop image processing software capable of analyzing the resulting microscopic images and performing high resolution 3D strain analyses. Using articular cartilage as the biological tissue of interest, we present a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning length scales from the tissue to the cell level. Using custom image processing software, we provide accurate and robust 3D micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. This novel technique preserves tissue structural integrity post-scanning, therefore allowing for multiple strain measurements at different time points in the same specimen. The proposed technique is versatile and opens doors for experimental and theoretical investigations on the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues. We presented a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning from tissue length scale to cellular length scale. Using a custom image processing software (lsmgridtrack), we provide accurate and robust micro

  3. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  4. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  5. Comparing effects of land reclamation techniques on water pollution and fishery loss for a large-scale offshore airport island in Jinzhou Bay, Bohai Sea, China.

    Science.gov (United States)

    Yan, Hua-Kun; Wang, Nuo; Yu, Tiao-Lan; Fu, Qiang; Liang, Chen

    2013-06-15

    Plans are being made to construct Dalian Offshore Airport in Jinzhou Bay with a reclamation area of 21 km². The large-scale reclamation can be expected to have negative effects on the marine environment, and these effects vary depending on the reclamation techniques used. Water quality mathematical models were developed and biology resource investigations were conducted to compare the effects of an underwater explosion sediment removal and rock dumping technique and a silt dredging and rock dumping technique on water pollution and fishery loss. The findings show that creation of the artificial island with the underwater explosion sediment removal technique would greatly impact the marine environment. However, the impact for the silt dredging technique would be less. The conclusions from this study provide an important foundation for the planning of Dalian Offshore Airport and can be used as a reference for similar coastal reclamation and marine environment protection. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Evaluating the factor structure, item analyses, and internal consistency of hospital anxiety and depression scale in Iranian infertile patients

    Directory of Open Access Journals (Sweden)

    Payam Amini

    2017-09-01

    Full Text Available Background: The hospital anxiety and depression scale (HADS) is a common screening tool designed to measure the level of anxiety and depression in different factor structures and has been extensively used in non-psychiatric populations and individuals experiencing fertility problems. Objective: The aims of this study were to evaluate the factor structure, item analyses, and internal consistency of the HADS in Iranian infertile patients. Materials and Methods: This cross-sectional study included 651 infertile patients (248 men and 403 women) referred to a referral infertility center in Tehran, Iran between January 2014 and January 2015. Confirmatory factor analysis was used to determine the underlying factor structure of the HADS among one-, two-, and three-factor models. Several goodness-of-fit indices were utilized, such as the comparative, normed and goodness-of-fit indices, the Akaike information criterion, and the root mean squared error of approximation. In addition to the HADS, the Satisfaction with Life Scale (SWLS) questionnaire as well as demographic and clinical information were administered to all patients. Results: The goodness-of-fit indices from the CFAs showed that the three- and one-factor models provided the best and worst fit, respectively, to the total, male and female datasets compared with the other factor structure models for the infertile patients. The Cronbach's alpha values for the anxiety and depression subscales were 0.866 and 0.753, respectively. The HADS subscales significantly correlated with the SWLS, indicating an acceptable convergent validity. Conclusion: The HADS was found to be a three-factor structure screening instrument in the field of infertility.

  7. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  8. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
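
    A minimal sketch of the first three pattern tests listed above (linear relationships via correlation coefficients, monotonic relationships via rank correlations, and trends in central tendency via the Kruskal-Wallis statistic), applied to a single input/output scatterplot; the sampled input and model output below are synthetic stand-ins, not the two-phase flow model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.uniform(0.0, 1.0, 300)                   # sampled input variable
        y = np.exp(3.0 * x) + rng.normal(0.0, 1.0, 300)  # model output: monotonic but nonlinear in x

        r, p_r = stats.pearsonr(x, y)                    # (1) linear relationship
        rho, p_rho = stats.spearmanr(x, y)               # (2) monotonic relationship
        # (3) trend in central tendency: Kruskal-Wallis over quantile classes of x
        classes = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
        h, p_h = stats.kruskal(*[y[classes == c] for c in np.unique(classes)])

        print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f}, Kruskal-Wallis H={h:.1f}")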

  9. Analyses of PWR spent fuel composition using SCALE and SWAT code systems to find correction factors for criticality safety applications adopting burnup credit

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Suyama, Kenya; Mochizuki, Hiroki; Okuno, Hiroshi; Nomura, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    The isotopic composition calculations were performed for 26 spent fuel samples from the Obrigheim PWR reactor and 55 spent fuel samples from 7 PWR reactors using the SAS2H module of the SCALE4.4 code system with 27-, 44- and 238-group cross-section libraries and the SWAT code system with the 107-group cross-section library. For the analyses of samples from the Obrigheim PWR reactor, geometrical models were constructed for each of SCALE4.4/SAS2H and SWAT. For the analyses of samples from the 7 PWR reactors, the geometrical model already adopted in SCALE/SAS2H was directly converted to the model of SWAT. The four kinds of calculation results were compared with the measured data. For convenience, the ratio of the measured to calculated values was used as a parameter. When the ratio is less than unity, the calculation overestimates the measurement, and as the ratio becomes closer to unity, the agreement improves. For many nuclides important to burnup credit criticality safety evaluation, the four methods applied in this study showed good agreement with measurements in general. More precise observations showed, however: (1) ratios less than unity were found for Pu-239 and -241 for 16 samples selected out of the 26 samples from the Obrigheim reactor (10 samples were excluded because their burnups were measured with the Cs-137 non-destructive method, which is less reliable than the Nd-148 method used for the remaining 16 samples); (2) ratios larger than unity were found for Am-241 and Cm-242 for both the 16 and 55 samples; (3) ratios larger than unity were found for Sm-149 for the 55 samples; (4) SWAT was generally accompanied by larger ratios than those of SAS2H, with some exceptions. Based on the measured-to-calculated ratios for the 71 samples of a combined set containing the 16 selected samples and the 55 samples, correction factors by which the calculated isotopic compositions should be multiplied were generated for a conservative estimate of the neutron multiplication factor.
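
    A minimal sketch of the measured-to-calculated (M/C) bookkeeping described above: the ratios for one nuclide are collected over the sample set and summary statistics are used to bound a correction factor. The ratio values and the two-standard-deviation convention are illustrative assumptions, not the factors generated in the paper.

        import numpy as np

        # Illustrative M/C ratios for one nuclide over a set of spent-fuel samples.
        mc_ratios = np.array([0.97, 1.02, 0.94, 0.99, 1.05, 0.96, 1.01, 0.93])

        mean, std = mc_ratios.mean(), mc_ratios.std(ddof=1)
        # Assumed convention: shift the mean by two sample standard deviations in the
        # direction that is conservative for the nuclide in question (i.e. towards a
        # higher calculated neutron multiplication factor); both bounds are shown here.
        print(f"mean M/C = {mean:.3f}, bounds = {mean - 2*std:.3f} / {mean + 2*std:.3f}")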

  10. Large scale distribution monitoring of FRP-OF based on BOTDR technique for infrastructures

    Science.gov (United States)

    Zhou, Zhi; He, Jianping; Yan, Kai; Ou, Jinping

    2007-04-01

    The BOTDA(R) sensing technique is considered one of the most practical solutions for instrumenting large-sized structures. However, a major obstacle remains to applying BOTDA(R) over large-scale areas: the high cost and the reliability of the sensing head, which depend on sensor installation and survival. In this paper, we report a novel low-cost and highly reliable BOTDA(R) sensing head using an FRP (Fiber Reinforced Polymer)-bare optical fiber rebar, named BOTDA(R)-FRP-OF. We investigated the surface bonding and its mechanical strength by SEM and intensity experiments. Considering the strain difference between the OF and the host matrix, which may result in measurement error, the strain transfer from host to OF has been theoretically studied. Furthermore, the strain and temperature sensing properties of GFRP-OFs at different gauge lengths were tested under different spatial and readout resolutions using a commercial BOTDA instrument. A dual FRP-OF temperature compensation method has also been proposed and analyzed. Finally, BOTDA(R)-OFs have been applied to the Tiyu West Road civil structure in Guangzhou and to the Daqing Highway. This novel FRP-OF rebar shows both high strength and good sensing properties and can be used for long-term SHM of civil infrastructure.

  11. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample.

    Science.gov (United States)

    Heeren, Alexandre; Ceschi, Grazia; Valentiner, David P; Dethier, Vincent; Philippot, Pierre

    2013-01-01

    The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as Speaker (PRCS), one of the most promising measurements of public speaking fear. A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach's alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample.

  12. Scale-up of miscible flood processes for heterogeneous reservoirs. 1993 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Orr, F.M. Jr.

    1994-05-01

    Progress is reported for a comprehensive investigation of the scaling behavior of gas injection processes in heterogeneous reservoirs. The interplay of phase behavior, viscous fingering, gravity segregation, capillary imbibition and drainage, and reservoir heterogeneity is examined in a series of simulations and experiments. Compositional and first-contact miscible simulations of viscous fingering and gravity segregation are compared to show that the two techniques can give very different results. Also analyzed are two-dimensional and three-dimensional flows in which gravity segregation and viscous fingering interact. The simulations show that 2D and 3D flows can differ significantly. A comparison of analytical solutions for three-component two-phase flow with experimental results for oil/water/alcohol systems is reported. While the experiments and theory show reasonable agreement, some differences remain to be explained. The scaling behavior of the interaction of gravity segregation and capillary forces is investigated through simulations and through scaling arguments based on analysis of the differential equations. The simulations show that standard approaches do not agree well with results of low IFT displacements. The scaling analyses, however, reveal flow regimes where capillary, gravity, or viscous forces dominate the flow.

  13. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six-degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.

  14. Kinematics and strain analyses of the eastern segment of the Pernicana Fault (Mt. Etna, Italy) derived from geodetic techniques (1997-2005)

    Directory of Open Access Journals (Sweden)

    M. Mattia

    2006-06-01

    Full Text Available This paper analyses the ground deformations occurring on the eastern part of the Pernicana Fault from 1997 to 2005. This segment of the fault was monitored with three local networks based on GPS and EDM techniques. More than seventy GPS and EDM surveys were carried out during the considered period, in order to achieve a higher temporal detail of ground deformation affecting the structure. We report the comparisons among GPS and EDM surveys in terms of absolute horizontal displacements of each GPS benchmark and in terms of strain parameters for each GPS and EDM network. Ground deformation measurements detected a continuous left-lateral movement of the Pernicana Fault. We conclude that, on the easternmost part of the Pernicana Fault, where it branches out into two segments, the deformation is transferred entirely SE-wards by a splay fault.

  15. Stargate: Energy Management Techniques

    OpenAIRE

    Vijay Raghunathan; Mani Srivastava; Trevor Pering; Roy Want

    2004-01-01

    This poster presents techniques for energy efficient operation of the Stargate wireless platform. In addition to conventional power management techniques such as dynamic voltage scaling and processor shutdown, the Stargate features several mechanisms for energy efficient operation of the communication subsystem, such as support for hierarchical radios, Bluetooth based remote wakeup, mote based wakeup, etc. Finally, design optimizations including the use of power gating, and provision for ...

  16. Cardiac Depression Scale: Mokken scaling in heart failure patients

    Directory of Open Access Journals (Sweden)

    Ski Chantal F

    2012-11-01

    Full Text Available Abstract Background There is a high prevalence of depression in patients with heart failure (HF) that is associated with worsening prognosis. The value of using a reliable and valid instrument to measure depression in this population is therefore essential. We validated the Cardiac Depression Scale (CDS) in heart failure patients using a model of ordinal unidimensional measurement known as Mokken scaling. Findings We administered the CDS in face-to-face interviews to 603 patients with HF. Data were analysed using Mokken scale analysis. Items of the CDS formed a statistically significant unidimensional Mokken scale of low strength (H < 0.4) and high reliability (> 0.8). Conclusions The CDS has a hierarchy of items which can be interpreted in terms of the increasingly serious effects of depression occurring as a result of HF. Identifying an appropriate instrument to measure depression in patients with HF allows for early identification and better medical management.

  17. A simplified technique for shakedown load determination

    International Nuclear Information System (INIS)

    Abdalla, H.F.; Younan, M.Y.A.; Megahed, M.M.

    2005-01-01

    In this paper a simple technique is presented to determine the limit shakedown load of a structure or a component using the finite element method. Through the proposed technique, the limit shakedown load is determined without performing time-consuming cyclic loading simulations or iterative elastic techniques. Instead, it is determined by performing only two analyses, namely an elastic analysis and an elastic-plastic analysis. By extracting the results of the two analyses, the limit shakedown load of the structure is determined through the calculation of the residual stresses. The technique is applied and verified using two benchmark shakedown problems, namely the two-bar structure subjected to constant axial force and cyclic thermal loading, and the Bree cylinder subjected to constant internal pressure and cyclic high heat fluxes across its wall. The results of the proposed technique showed very good correlation with the analytically determined Bree diagrams of both structures. Moreover, the outcomes of the proposed technique showed very good agreement with full cyclic loading elasto-plastic finite element simulations of both structures. (authors)

  18. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is studied. The description of all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that can cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources combined with the intrinsic non-linearity of the processes involved can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5° European domain, run A) and fine (0.1° Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller because the remapping procedure removes a large portion of variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
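
    The nudging step described above, i.e. relaxing the coarse-grid concentrations towards the high-resolution fields remapped onto the coarse grid over the Po Valley, can be sketched as a Newtonian-relaxation term applied at every time step. The grid sizes, relaxation time and mask below are illustrative assumptions, not BOLCHEM internals.

        import numpy as np

        def nudge(coarse, fine_remapped, mask, dt, tau):
            """One Newtonian-relaxation step: pull coarse-grid concentrations towards
            the remapped high-resolution field, but only inside the masked area."""
            increment = (fine_remapped - coarse) * (dt / tau)
            return np.where(mask, coarse + increment, coarse)

        # Illustrative 0.5-degree O3 field, a remapped 0.1-degree run, and a Po Valley box.
        rng = np.random.default_rng(0)
        coarse = np.full((40, 60), 60.0)                             # ppb
        fine_remapped = coarse + rng.normal(0.0, 5.0, coarse.shape)
        mask = np.zeros_like(coarse, dtype=bool)
        mask[20:28, 10:25] = True                                    # hypothetical Po Valley area

        updated = nudge(coarse, fine_remapped, mask, dt=600.0, tau=3600.0)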

  19. Evaluation of a modified 16-item Readiness for Interprofessional Learning Scale (RIPLS): Exploratory and confirmatory factor analyses.

    Science.gov (United States)

    Yu, Tzu-Chieh; Jowsey, Tanisha; Henning, Marcus

    2018-04-18

    The Readiness for Interprofessional Learning Scale (RIPLS) was developed to assess undergraduate readiness for engaging in interprofessional education (IPE). It has become an accepted and commonly used instrument. To determine utility of a modified 16-item RIPLS instrument, exploratory and confirmatory factor analyses were performed. Data used were collected from a pre- and post-intervention study involving 360 New Zealand undergraduate students from one university. Just over half of the participants were enrolled in medicine (51%) while the remainder were in pharmacy (27%) and nursing (22%). The intervention was a two-day simulation-based IPE course focused on managing unplanned acute medical problems in hospital wards ("ward calls"). Immediately prior to the course, 288 RIPLS were collected and immediately afterwards, 322 (response rates 80% and 89%, respectively). Exploratory factor analysis involving principal axis factoring with an oblique rotation method was conducted using pre-course data. The scree plot suggested a three-factor solution over two- and four-factor solutions. Subsequent confirmatory factor analysis performed using post-course data demonstrated partial goodness-of-fit for this suggested three-factor model. Based on these findings, further robust psychometric testing of the RIPLS or modified versions of it is recommended before embarking on its use in evaluative research in various healthcare education settings.

  20. Discovery and characterisation of dietary patterns in two Nordic countries. Using non-supervised and supervised multivariate statistical techniques to analyse dietary survey data

    DEFF Research Database (Denmark)

    Edberg, Anna; Freyhult, Eva; Sand, Salomon

    - and inter-national data excerpts. For example, major PCA loadings helped deciphering both shared and disparate features, relating to food groups, across Danish and Swedish preschool consumers. Data interrogation, reliant on the above-mentioned composite techniques, disclosed one outlier dietary prototype...... prototype with the latter property was identified also in the Danish data material, but without low consumption of Vegetables or Fruit & berries. The second MDA-type of data interrogation involved Supervised Learning, also known as Predictive Modelling. These exercises involved the Random Forest (RF...... not elaborated on in-depth, output from several analyses suggests a preference for energy-based consumption data for Cluster Analysis and Predictive Modelling, over those appearing as weight....

  1. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    International Nuclear Information System (INIS)

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding improvements available with Version 4.0 of the SCALE system and discuss the future of SCALE within the current computing and regulatory environment. The emphasis focuses on the improvements in SCALE-4 over that available in SCALE-3. 10 refs., 1 fig., 1 tab

  2. How to analyse a Big Bang of data: the mammoth project at the Cern physics laboratory in Geneva to recreate the conditions immediately after the universe began requires computing power on an unprecedented scale

    CERN Multimedia

    Thomas, Kim

    2005-01-01

    How to analyse a Big Bang of data: the mammoth project at the Cern physics laboratory in Geneva to recreate the conditions immediately after the universe began requires computing power on an unprecedented scale

  3. Red, Straight, no bends: primordial power spectrum reconstruction from CMB and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Ravenni, Andrea [Dipartimento di Fisica e Astronomia ' ' G. Galilei' ' , Università degli Studi di Padova, via Marzolo 8, I-35131, Padova (Italy); Verde, Licia; Cuesta, Antonio J., E-mail: andrea.ravenni@pd.infn.it, E-mail: liciaverde@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu [Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Martí i Franquès 1, E08028 Barcelona (Spain)

    2016-08-01

    We present a minimally parametric, model independent reconstruction of the shape of the primordial power spectrum. Our smoothing spline technique is well-suited to search for smooth features such as deviations from scale invariance, and deviations from a power law such as running of the spectral index or small-scale power suppression. We use a comprehensive set of state-of-the-art cosmological data: Planck observations of the temperature and polarisation anisotropies of the cosmic microwave background, WiggleZ and Sloan Digital Sky Survey Data Release 7 galaxy power spectra and the Canada-France-Hawaii Lensing Survey correlation function. This reconstruction strongly supports the evidence for a power law primordial power spectrum with a red tilt and disfavours deviations from a power law power spectrum including small-scale power suppression such as that induced by significantly massive neutrinos. This offers a powerful confirmation of the inflationary paradigm, justifying the adoption of the inflationary prior in cosmological analyses.

  4. Next generation initiation techniques

    Science.gov (United States)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The

  5. Uncertainties in the simulation of groundwater recharge at different scales

    Directory of Open Access Journals (Sweden)

    H. Bogena

    2005-01-01

    Full Text Available Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties are accumulated. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the slope and aspect parameters using a Monte Carlo approach. Landcover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
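
    A minimal sketch of the Monte Carlo treatment of distributed input errors mentioned above: a terrain attribute (here the DEM-derived slope) is perturbed within an assumed error distribution and the spread of the resulting recharge estimate is examined. The recharge relation is a stand-in, not the GROWA model.

        import numpy as np

        rng = np.random.default_rng(42)

        def recharge(precip, slope_deg):
            """Stand-in recharge relation (illustrative only, not GROWA)."""
            runoff_fraction = np.clip(0.1 + 0.01 * slope_deg, 0.0, 0.9)
            return precip * (1.0 - runoff_fraction) * 0.5

        precip = 800.0                     # mm/yr, held fixed in this sketch
        slope_mean, slope_sd = 6.0, 1.5    # assumed error model for the DEM-derived slope

        samples = recharge(precip, rng.normal(slope_mean, slope_sd, 10_000))
        print(f"recharge = {samples.mean():.0f} +/- {samples.std(ddof=1):.0f} mm/yr")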

  6. Developing an Individual Instrument Performance Anxiety Scale: Validity-Reliability Study

    Directory of Open Access Journals (Sweden)

    Esra DALKIRAN

    2016-07-01

    Full Text Available In this study, it is intended to develop a scale unique to our culture concerning the individual instrument performance anxiety of students receiving instrument training in Departments of Music Education. In the study, the descriptive research model is used and qualitative research techniques are utilized. The study population consists of the students attending the 23 universities which have a Music Education Department. The sample of the study consists of 438 female and 312 male students, 750 in total, who are studying in the Departments of Music Education of 10 randomly selected universities. As a result of the exploratory and confirmatory factor analyses that were performed, a one-dimensional structure consisting of 14 items was obtained. Concerning the discriminating power of the items, item-total correlation coefficients and t-scores for the difference between the lower and upper 27% groups were calculated, and both analyses showed that the items are discriminating. The Cronbach's alpha coefficient of internal consistency of the scale was calculated as .94, and the test-retest reliability coefficient was calculated as .93. As a result, a valid and reliable assessment and evaluation instrument that measures the exam performance anxiety of students studying in the Departments of Music Education has been developed.

  7. A Scale of Mobbing Impacts

    Science.gov (United States)

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability. The sample of the study consisted of 509 teachers from Sakarya. In this study, construct validity, internal consistency, test-retest reliability and item analysis of the scale were examined. As a result of factor analysis for…

  8. Maintaining SCALE as a reliable computational system for criticality safety analysis

    International Nuclear Information System (INIS)

    Bowman, S.M.; Parks, C.V.; Martin, S.K.

    1995-01-01

    Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management, which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines. Changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation.

  9. Scale-dependent Patterns in One-dimensional Fracture Spacing and Aperture Data

    Science.gov (United States)

    Roy, A.; Perfect, E.

    2013-12-01

    One-dimensional scanline data about fracture spacing and size attributes such as aperture or length are mostly considered in separate studies that compute the cumulative frequency of these attributes without regard to their actual spatial sequence. In a previous study, we showed that spacing data can be analyzed using lacunarity to identify whether fractures occur in clusters. However, to determine if such clusters also contain the largest fractures in terms of a size attribute such as aperture, it is imperative that data about the size attribute be integrated with information about fracture spacing. While for example, some researchers have considered aperture in conjunction with spacing, their analyses were either applicable only to a specific type of data (e.g. multifractal) or failed to characterize the data at different scales. Lacunarity is a technique for analyzing multi-scale non-binary data and is ideally-suited for characterizing scanline data with spacing and aperture values. We present a technique that can statistically delineate the relationship between size attributes and spatial clustering. We begin by building a model scanline that has complete partitioning of fractures with small and large apertures between the intercluster regions and clusters. We demonstrate that the ratio of lacunarity for this model to that of its counterpart for a completely randomized sequence of apertures can be used to determine whether large-aperture fractures preferentially occur next to each other. The technique is then applied to two natural fracture scanline datasets, one with most of the large apertures occurring in fracture clusters, and the other with more randomly-spaced fractures, without any specific ordering of aperture values. The lacunarity ratio clearly discriminates between these two datasets and, in the case of the first example, it is also able to identify the range of scales over which the widest fractures are clustered. The technique thus developed for
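
    A minimal sketch of the lacunarity-ratio idea described above: gliding-box lacunarity is computed for the observed sequence of apertures along the scanline and for a randomly reshuffled copy, and a ratio above one indicates that the wide fractures cluster. The box sizes and the synthetic scanline are illustrative, not the two field datasets.

        import numpy as np

        def lacunarity(series, box):
            """Gliding-box lacunarity <M^2>/<M>^2 of a 1-D mass series for one box size."""
            masses = np.array([series[i:i + box].sum()
                               for i in range(len(series) - box + 1)])
            return masses.var() / masses.mean() ** 2 + 1.0

        rng = np.random.default_rng(0)
        # Illustrative scanline of apertures (mm): two clusters of wide fractures plus
        # scattered narrow ones.
        apertures = np.zeros(200)
        apertures[40:50] = rng.uniform(2.0, 4.0, 10)
        apertures[120:130] = rng.uniform(2.0, 4.0, 10)
        apertures[rng.choice(200, 20, replace=False)] += rng.uniform(0.1, 0.5, 20)

        for box in (5, 10, 20, 40):
            ratio = lacunarity(apertures, box) / lacunarity(rng.permutation(apertures), box)
            print(f"box {box:3d}: lacunarity ratio = {ratio:.2f}")   # > 1 suggests clustering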

  10. Stable isotope analyses of feather amino acids identify penguin migration strategies at ocean basin scales.

    Science.gov (United States)

    Polito, Michael J; Hinke, Jefferson T; Hart, Tom; Santos, Mercedes; Houghton, Leah A; Thorrold, Simon R

    2017-08-01

    Identifying the at-sea distribution of wide-ranging marine predators is critical to understanding their ecology. Advances in electronic tracking devices and intrinsic biogeochemical markers have greatly improved our ability to track animal movements on ocean-wide scales. Here, we show that, in combination with direct tracking, stable carbon isotope analysis of essential amino acids in tail feathers provides the ability to track the movement patterns of two, wide-ranging penguin species over ocean basin scales. In addition, we use this isotopic approach across multiple breeding colonies in the Scotia Arc to evaluate migration trends at a regional scale that would be logistically challenging using direct tracking alone. © 2017 The Author(s).

  11. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
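
    The two-stage idea described above can be sketched as follows: the discrete-valued inputs are relaxed to continuous "virtual" inputs, a cost over the prediction horizon is minimised jointly, and only the current discrete input is then recovered (here by rounding the virtual value). The toy system, horizon and rounding rule are illustrative assumptions, not the paper's algorithm.

        import numpy as np
        from scipy.optimize import minimize

        # Toy subsystem: x+ = a*x + b*u_c + c*delta, with delta in {0, 1} relaxed to [0, 1].
        a, b, c, x0 = 0.9, 0.5, 0.3, 2.0
        N = 5  # prediction horizon

        def cost(z):
            u_c, delta = z[:N], z[N:]          # continuous inputs and virtual (relaxed) inputs
            x, J = x0, 0.0
            for k in range(N):
                x = a * x + b * u_c[k] + c * delta[k]
                J += x**2 + 0.1 * u_c[k]**2 + 0.1 * delta[k]**2
            return J

        bounds = [(-1.0, 1.0)] * N + [(0.0, 1.0)] * N   # virtual inputs live in [0, 1]
        res = minimize(cost, np.zeros(2 * N), bounds=bounds)

        u_now = res.x[0]                   # continuous input applied at the current time
        delta_now = int(round(res.x[N]))   # discrete input recovered from the virtual one
        print(u_now, delta_now)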

  12. CPN Tools for Editing, Simulating, and Analysing Coloured Petri Nets

    DEFF Research Database (Denmark)

    Ratzer, Anne Vinter; Wells, Lisa Marie; Lassen, Henry Michael

    2003-01-01

    CPN Tools is a tool for editing, simulating and analysing Coloured Petri Nets. The GUI is based on advanced interaction techniques, such as toolglasses, marking menus, and bi-manual interaction. Feedback facilities provide contextual error messages and indicate dependency relationships between net elements. The state space facilities report information such as boundedness properties and liveness properties. The functionality of the simulation engine and state space facilities are similar to the corresponding components in Design/CPN, which is a widespread tool for Coloured Petri Nets.

  13. Scaling dimensions in spectroscopy of soil and vegetation

    Science.gov (United States)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling

  14. Microcomputer-controlled thermoluminescent analyser IJS MR-200

    International Nuclear Information System (INIS)

    Mihelic, M.; Miklavzic, U.; Rupnik, Z.; Satalic, P.; Spreizer, F.; Zerovnik, I.

    1985-01-01

    The performance and concept of the multipurpose, microcomputer-controlled thermoluminescent analyser, designed for laboratory work with TL dosemeters as well as for routine dose readings in the range from ecological to accident doses, are described. The main features of the analyser are: time-linear sampling, digitalisation, storage, and subsequent display on the monitor, against a time scale, of the glow and temperature curves of the TL material; digital stabilization, control and diagnostics of the analog unit; the ability to store 7 different 8-parameter heating programs; and the ability to store 15 evaluation programs defined by 2 or 4 parameters and 3 different algorithms (altogether 5 types of evaluations). The analyser has several features intended for routine work: 9 function keys and possibilities of file forming on cassette or display disc, of dose calculation and averaging, of printing reports with names, and of additional programming in Basic. (author)

  15. Phase-relationships between scales in the perturbed turbulent boundary layer

    Science.gov (United States)

    Jacobi, I.; McKeon, B. J.

    2017-12-01

    The phase-relationship between large-scale motions and small-scale fluctuations in a non-equilibrium turbulent boundary layer was investigated. A zero-pressure-gradient flat plate turbulent boundary layer was perturbed by a short array of two-dimensional roughness elements, both statically, and under dynamic actuation. Within the compound, dynamic perturbation, the forcing generated a synthetic very-large-scale motion (VLSM) within the flow. The flow was decomposed by phase-locking the flow measurements to the roughness forcing, and the phase-relationship between the synthetic VLSM and remaining fluctuating scales was explored by correlation techniques. The general relationship between large- and small-scale motions in the perturbed flow, without phase-locking, was also examined. The synthetic large scale cohered with smaller scales in the flow via a phase-relationship that is similar to that of natural large scales in an unperturbed flow, but with a much stronger organizing effect. Cospectral techniques were employed to describe the physical implications of the perturbation on the relative orientation of large- and small-scale structures in the flow. The correlation and cospectral techniques provide tools for designing more efficient control strategies that can indirectly control small-scale motions via the large scales.

  16. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables the processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
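
    Of the heuristics listed above, loop perforation is the simplest to illustrate: a fraction of the iterations of a loop is skipped, trading accuracy for time. The sketch below perforates the PageRank power iteration on a random graph; the graph, skip rate and iteration count are illustrative, not the paper's implementation.

        import numpy as np

        def pagerank_perforated(adj, d=0.85, iters=50, skip_every=0):
            """Power-iteration PageRank; if skip_every > 0, perforate the loop by
            skipping every skip_every-th iteration (approximate computing)."""
            n = adj.shape[0]
            out_deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
            M = (adj / out_deg).T                    # column-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for k in range(iters):
                if skip_every and k % skip_every == 0:
                    continue                         # perforated (skipped) iteration
                r = (1.0 - d) / n + d * (M @ r)
            return r

        adj = (np.random.default_rng(3).random((200, 200)) < 0.05).astype(float)
        exact = pagerank_perforated(adj)
        approx = pagerank_perforated(adj, skip_every=3)
        print("max abs difference:", np.abs(exact - approx).max())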

  17. Towards improved hydrologic predictions using data assimilation techniques for water resource management at the continental scale

    Science.gov (United States)

    Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan

    2017-04-01

    More accurate and reliable hydrologic simulations are important for many applications such as water resource management, future water availability projections and predictions of extreme events. However, simulation of spatial and temporal variations in the critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques, combining in-situ measurements with remotely sensed information, to improve the predictability of continental-scale water fluxes and thereby hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM), integrated with the Parallel Data Assimilation Framework (PDAF), was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis system COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and ERA-Interim datasets for the time period 1994-2014. A series of data assimilation experiments were conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into the CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
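
    A minimal sketch of the kind of ensemble-based analysis step carried out by assimilation frameworks such as PDAF: an ensemble of model states (here a soil-moisture value and one model parameter) is corrected towards a single observation using the ensemble covariance. It is a textbook one-observation EnKF update under assumed numbers, not the CLM-PDAF configuration.

        import numpy as np

        def enkf_update(ensemble, obs, obs_err_var, H, seed=0):
            """One ensemble Kalman filter analysis step for a single observation.
            ensemble: (n_members, n_state); H: (n_state,) linear observation operator."""
            rng = np.random.default_rng(seed)
            Hx = ensemble @ H                                # simulated observations
            x_mean, Hx_mean = ensemble.mean(axis=0), Hx.mean()
            P_xy = (ensemble - x_mean).T @ (Hx - Hx_mean) / (len(ensemble) - 1)
            P_yy = Hx.var(ddof=1) + obs_err_var
            K = P_xy / P_yy                                  # Kalman gain, (n_state,)
            perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), len(ensemble))
            return ensemble + np.outer(perturbed_obs - Hx, K)

        # Illustrative: 32 members, state = [soil moisture, a model parameter].
        ens = np.random.default_rng(1).normal([0.30, 1.0], [0.05, 0.2], (32, 2))
        updated = enkf_update(ens, obs=0.22, obs_err_var=0.02**2, H=np.array([1.0, 0.0]))
        print(updated.mean(axis=0))   # both the state and the parameter are updated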

  18. Effect of Variable Spatial Scales on USLE-GIS Computations

    Science.gov (United States)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessing annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scale affect USLE-GIS computations, using simulation and statistical variability. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in the Shakkar River watershed, situated in the Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote sensing and GIS techniques were integrated with the Universal Soil Loss Equation (USLE) to predict the spatial distribution of soil erosion in the study area at four different spatial scales, viz., 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four grid sizes. The statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68-18.91%), and the highest coefficient of determination (R2 = 0.874) among all four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for the use of finer spatial scales in spatially distributed soil erosion modelling.
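
    The USLE-GIS computation itself is a cell-by-cell multiplication of the factor rasters, A = R * K * LS * C * P, repeated at each grid size; the small arrays below are placeholders for the thematic maps prepared in the study.

        import numpy as np

        def usle(R, K, LS, C, P):
            """Cell-wise USLE soil loss A from the five factor rasters."""
            return R * K * LS * C * P

        shape = (4, 4)                        # stand-in for a 30 m / 50 m / 100 m / 200 m grid
        rng = np.random.default_rng(7)
        R = np.full(shape, 550.0)             # rainfall erosivity
        K = rng.uniform(0.20, 0.35, shape)    # soil erodibility
        LS = rng.uniform(0.5, 4.0, shape)     # slope length-steepness (from the DEM)
        C = rng.uniform(0.05, 0.4, shape)     # cover management
        P = np.ones(shape)                    # support practice

        A = usle(R, K, LS, C, P)
        print(f"mean annual soil loss: {A.mean():.1f} (units follow those of R and K)")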

  19. A Simple, Reliable Precision Time Analyser

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, B. V.; Nargundkar, V. R.; Subbarao, K.; Kamath, M. S.; Eligar, S. K. [Atomic Energy Establishment Trombay, Bombay (India)

    1966-06-15

    A 30-channel time analyser is described. The time analyser was designed and built for pulsed neutron research but can be applied to other uses. Most of the logic is performed by means of ferrite memory core and transistor switching circuits. This leads to great versatility, low power consumption, extreme reliability and low cost. The analyser described provides channel widths from 10 µs to 10 ms; arbitrarily wider channels are easily obtainable. It can handle counting rates up to 2000 counts/min in each channel with less than 1% dead time loss. There is a provision for an initial delay equal to 100 channel widths. An input pulse de-randomizer unit using tunnel diodes ensures exactly equal channel widths. A brief description of the principles involved in core switching circuitry is given. The core-transistor transfer loop is compared with the usual core-diode loops and is shown to be more versatile and better adapted to the making of a time analyser. The circuits derived from the basic loop are described. These include the scale of ten, the frequency dividers and the delay generator. The current drivers developed for driving the cores are described. The crystal-controlled clock which controls the width of the time channels and synchronizes the operation of the various circuits is described. The detector pulse derandomizer unit using tunnel diodes is described. The scheme of the time analyser is then described showing how the various circuits can be integrated together to form a versatile time analyser. (author)

  20. New SCALE graphical interface for criticality safety

    International Nuclear Information System (INIS)

    Bowman, Stephen M.; Horwedel, James E.

    2003-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a and KENO-VI three-dimensional (3-D) Monte Carlo criticality computer codes. One of the current development efforts aimed at making SCALE easier to use is the SCALE Graphically Enhanced Editing Wizard (GeeWiz). GeeWiz is compatible with SCALE 5 and runs on Windows personal computers. GeeWiz provides input menus and context-sensitive help to guide users through the setup of their input. It includes a direct link to KENO3D to allow the user to view the components of their geometry model as it is constructed. Once the input is complete, the user can click a button to run SCALE and another button to view the output. KENO3D has also been upgraded for compatibility with SCALE 5 and interfaces directly with GeeWiz. GeeWiz and KENO3D for SCALE 5 are planned for release in late 2003. The presentation of this paper is designed as a live demonstration of GeeWiz and KENO3D for SCALE 5. (author)

  1. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination

  2. Clinical and molecular analyses of Beckwith-Wiedemann syndrome: Comparison between spontaneous conception and assisted reproduction techniques.

    Science.gov (United States)

    Tenorio, Jair; Romanelli, Valeria; Martin-Trujillo, Alex; Fernández, García-Moya; Segovia, Mabel; Perandones, Claudia; Pérez Jurado, Luis A; Esteller, Manel; Fraga, Mario; Arias, Pedro; Gordo, Gema; Dapía, Irene; Mena, Rocío; Palomares, María; Pérez de Nanclares, Guiomar; Nevado, Julián; García-Miñaur, Sixto; Santos-Simarro, Fernando; Martinez-Glez, Víctor; Vallespín, Elena; Monk, David; Lapunzina, Pablo

    2016-10-01

    Beckwith-Wiedemann syndrome (BWS) is an overgrowth syndrome characterized by excessive prenatal and postnatal growth, macrosomia, macroglossia, and hemihyperplasia. The molecular basis of this syndrome is complex and heterogeneous, involving genes located at 11p15.5. BWS is correlated with assisted reproductive techniques: BWS has been found to occur four to nine times more frequently in individuals born following assisted reproductive techniques than in children with BWS born after spontaneous conception. Here, we report a series of 187 patients with BWS born either after assisted reproductive techniques or conceived naturally. Eighty-eight percent of BWS patients born via assisted reproductive techniques had hypomethylation of KCNQ1OT1:TSS-DMR, in comparison with 49% of patients with BWS conceived naturally. None of the patients with BWS born via assisted reproductive techniques had hypermethylation of H19/IGF2:IG-DMR, CDKN1C mutations, or patUPD11. We did not find differences in the frequency of multi-locus imprinting disturbances between groups. Patients with BWS born via assisted reproductive techniques had an increased frequency of advanced bone age and congenital heart disease and a decreased frequency of earlobe anomalies, but these differences may be explained by the different molecular background compared to those with BWS and spontaneous fertilization. We conclude that there is a correlation between the molecular etiology of BWS and the type of conception. © 2016 Wiley Periodicals, Inc.

  3. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  4. IRT analyses of the Swedish Dark Triad Dirty Dozen

    Directory of Open Access Journals (Sweden)

    Danilo Garcia

    2018-03-01

    Full Text Available Background: The Dark Triad (i.e., Machiavellianism, narcissism, and psychopathy) can be captured quickly with 12 items using the Dark Triad Dirty Dozen (Jonason and Webster, 2010). Previous Item Response Theory (IRT) analyses of the original English Dark Triad Dirty Dozen have shown that all three subscales adequately tap into the dark domains of personality. The aim of the present study was to analyze the Swedish version of the Dark Triad Dirty Dozen using IRT. Method: 570 individuals (n males = 326, n females = 242, and 2 unreported), including university students and white-collar workers with an age range between 19 and 65 years, responded to the Swedish version of the Dark Triad Dirty Dozen (Garcia et al., 2017a,b). Results: Contrary to previous research, we found that the narcissism scale provided the most information, followed by psychopathy, and finally Machiavellianism. Moreover, the psychopathy scale required a higher level of the latent trait for endorsement of its items than the narcissism and Machiavellianism scales. Overall, all items provided reasonable amounts of information and are thus effective for discriminating between individuals. The mean item discriminations (alphas) were 1.92 for Machiavellianism, 2.31 for narcissism, and 1.99 for psychopathy. Conclusion: This is the first study to provide IRT analyses of the Swedish version of the Dark Triad Dirty Dozen. Our findings add to a growing literature on the Dark Triad Dirty Dozen scale in different cultures and highlight psychometric characteristics which can be used for comparative studies. Items tapping into psychopathy showed higher thresholds for endorsement than those of the other two scales. Importantly, the narcissism scale seems to provide more information about a lack of narcissism, perhaps mirroring cultural conditions. Keywords: Psychology, Psychiatry, Clinical psychology

  5. Hydraulic and thermal conduction phenomena in soils at the particle-scale: Towards realistic FEM simulations

    International Nuclear Information System (INIS)

    Narsilio, G A; Yun, T S; Kress, J; Evans, T M

    2010-01-01

    This paper summarizes a method to characterize conduction properties in soils at the particle scale. The method sets the basis for an alternative way to estimate conduction parameters such as thermal conductivity and hydraulic conductivity, with potential application to hard-to-obtain samples, where traditional experimental testing on large enough specimens becomes much more expensive. The technique is exemplified using 3D synthetic grain packings generated with discrete element methods, from which 3D granular images are constructed. The images are then imported into finite element analyses to solve the corresponding governing partial differential equations of hydraulic and thermal conduction. High performance computing is implemented to meet the demanding 3D numerical calculations of the complex geometrical domains. The effects of void ratio and inter-particle contacts on hydraulic and thermal conduction are explored. Laboratory measurements support the numerically obtained results and validate the viability of the new methods used herein. The integration of imaging with rigorous numerical simulations at the pore scale also enables fundamental observation of particle-scale mechanisms of macro-scale manifestation.

  6. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  7. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
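
    One standard GIS building block behind such automatically generated flow paths is a D8 flow-direction pass over an elevation grid. The sketch below is a generic illustration of that step, not the authors' toolchain, and the elevation values are made up.

    ```python
    # Illustrative D8 flow-direction computation on a tiny elevation grid:
    # each cell drains to its steepest downslope neighbour.
    import numpy as np

    dem = np.array([
        [10.0, 9.5, 9.0, 8.5],
        [ 9.8, 9.2, 8.6, 8.0],
        [ 9.6, 8.9, 8.2, 7.5],
        [ 9.4, 8.6, 7.8, 7.0],
    ])

    # Eight neighbour offsets and their distances (diagonals are sqrt(2) away).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    dists = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

    rows, cols = dem.shape
    flow_dir = -np.ones(dem.shape, dtype=int)         # -1 marks pits or edge outlets
    for r in range(rows):
        for c in range(cols):
            best_slope, best_k = 0.0, -1
            for idx, ((dr, dc), d) in enumerate(zip(offsets, dists)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    slope = (dem[r, c] - dem[rr, cc]) / d
                    if slope > best_slope:
                        best_slope, best_k = slope, idx
            flow_dir[r, c] = best_k                   # index of steepest downslope neighbour

    print(flow_dir)
    ```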

  8. Fractal scaling behavior of heart rate variability in response to meditation techniques

    International Nuclear Information System (INIS)

    Alvarez-Ramirez, J.; Rodríguez, E.; Echeverría, J.C.

    2017-01-01

    Highlights: • The scaling properties of heart rate variability in premeditation and meditation states were studied. • Mindfulness meditation induces a decrement of the HRV long-range scaling correlations. • Mindfulness meditation can be regarded as a type of induced deep sleep-like dynamics. - Abstract: The rescaled range (R/S) analysis was used for analyzing the fractal scaling properties of heart rate variability (HRV) of subjects undergoing premeditation and meditation states. Eight novice subjects and four advanced practitioners were considered. The corresponding pre-meditation and meditation HRV data were obtained from the Physionet database. The results showed that mindfulness meditation induces a decrement of the HRV long-range scaling correlations as quantified with the time-variant Hurst exponent. The Hurst exponent for advanced meditation practitioners decreases up to values of 0.5, reflecting uncorrelated (e.g., white noise-like) HRV dynamics. Some parallelisms between mindfulness meditation and deep sleep (Stage 4) are discussed, suggesting that the former can be regarded as a type of induced deep sleep-like dynamics.
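
    The rescaled-range calculation itself is compact enough to sketch: for each window size, the range of the cumulative mean-corrected series is divided by its standard deviation, and the Hurst exponent is the log-log slope. The RR-interval series below is synthetic white noise standing in for the Physionet HRV data.

    ```python
    # Minimal rescaled-range (R/S) estimate of the Hurst exponent for an
    # RR-interval series; synthetic noise replaces the real HRV recordings.
    import numpy as np

    def hurst_rs(x, window_sizes):
        """Slope of log(R/S) versus log(window size) over non-overlapping windows."""
        log_rs, log_n = [], []
        for n in window_sizes:
            rs = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())        # cumulative deviation from the mean
                r = dev.max() - dev.min()            # range
                s = w.std(ddof=1)                    # standard deviation
                if s > 0:
                    rs.append(r / s)
            if rs:
                log_rs.append(np.log(np.mean(rs)))
                log_n.append(np.log(n))
        slope, _ = np.polyfit(log_n, log_rs, 1)
        return slope

    rng = np.random.default_rng(1)
    rr = 0.8 + 0.05 * rng.standard_normal(4096)      # synthetic RR intervals (seconds)
    h = hurst_rs(rr, window_sizes=[16, 32, 64, 128, 256, 512])
    print(f"Hurst exponent ~ {h:.2f} (white noise should sit near 0.5)")
    ```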

  9. Scale-up from microtiter plate to laboratory fermenter: evaluation by online monitoring techniques of growth and protein expression in Escherichia coli and Hansenula polymorpha fermentations

    Directory of Open Access Journals (Sweden)

    Engelbrecht Christoph

    2009-12-01

    Full Text Available Abstract Background In the past decade, an enormous number of new bioprocesses have evolved in the biotechnology industry. These bioprocesses have to be developed fast and at a maximum productivity. Up to now, only a few microbioreactors have been developed to fulfill these demands and to facilitate sample processing. One predominant reaction platform is the shaken microtiter plate (MTP), which provides high-throughput at minimal expenses in time, money and work effort. By taking advantage of this simple and efficient microbioreactor array, a new online monitoring technique for biomass and fluorescence, called BioLector, has been recently developed. The combination of high-throughput and high information content makes the BioLector a very powerful tool in bioprocess development. Nevertheless, the scalability of results from the micro-scale to laboratory or even larger scales is very important for short development times. Therefore, engineering parameters regarding the reactor design and its operation conditions play an important role even on a micro-scale. In order to evaluate the scale-up from a microtiter plate scale (200 μL) to a stirred tank fermenter scale (1.4 L), two standard microbial expression systems, Escherichia coli and Hansenula polymorpha, were fermented in parallel at both scales and compared with regard to the biomass and protein formation. Results Volumetric mass transfer coefficients (kLa) ranging from 100 to 350 1/h were obtained in 96-well microtiter plates. Even with a suboptimal mass transfer condition in the microtiter plate compared to the stirred tank fermenter (kLa = 370-600 1/h), identical growth and protein expression kinetics were attained in bacteria and yeast fermentations. The bioprocess kinetics were evaluated by optical online measurements of biomass and protein concentrations exhibiting the same fermentation times and maximum signal deviations below 10% between the scales. In the experiments, the widely applied green
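
    The comparison above hinges on the volumetric mass transfer coefficient kLa. One common way to estimate it (the dynamic gassing-in method, shown here only as an illustration and not necessarily the procedure used in the study) is to fit the exponential approach of dissolved oxygen to saturation; the data below are synthetic.

    ```python
    # Dynamic gassing-in estimate of kLa: after switching aeration on,
    # C(t) = C* - (C* - C0) * exp(-kLa * t), so a linear fit of
    # ln((C* - C)/(C* - C0)) against t gives -kLa. Numbers are made up.
    import numpy as np

    kla_true = 250.0 / 3600.0                        # 250 1/h expressed in 1/s
    c_star, c0 = 7.5, 0.5                            # mg/L saturation and starting DO
    t = np.linspace(0, 60, 31)                       # seconds
    rng = np.random.default_rng(2)
    c = c_star - (c_star - c0) * np.exp(-kla_true * t) + 0.03 * rng.standard_normal(t.size)

    mask = c < c_star                                # keep points strictly below saturation
    y = np.log((c_star - c[mask]) / (c_star - c0))
    slope, _ = np.polyfit(t[mask], y, 1)
    print(f"estimated kLa = {-slope * 3600:.0f} 1/h (true value 250 1/h)")
    ```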

  10. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  11. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King's College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
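
    The recursive idea can be illustrated for a chain of nearest-neighbour couplings: a d-dimensional integral of a product of pair factors reduces to repeated applications of a one-dimensional Gauss-Legendre rule, so the cost grows linearly rather than exponentially in d. The coupling function below is a toy choice, not the topological rotor or anharmonic oscillator of the record.

    ```python
    # Sketch of recursive numerical integration for
    # I = int dx1..dxd  prod_k f(x_k, x_{k+1}) over [-1, 1]^d.
    import numpy as np
    from itertools import product

    def f(x, y):
        return np.exp(-2.0 * (x - y) ** 2)           # toy Boltzmann-like pair weight

    m = 16                                           # quadrature points per dimension
    nodes, weights = np.polynomial.legendre.leggauss(m)

    # Transfer matrix T[i, j] = w_j * f(x_i, x_j); the chain integral is then
    # weights . (T^(d-1) applied to the all-ones vector).
    T = weights[None, :] * f(nodes[:, None], nodes[None, :])

    def chain_integral(d):
        v = np.ones(m)
        for _ in range(d - 1):
            v = T @ v
        return weights @ v

    # Check the recursion against a brute-force tensor-product quadrature for d = 3.
    d = 3
    brute = sum(
        np.prod([weights[i] for i in idx]) * f(nodes[idx[0]], nodes[idx[1]]) * f(nodes[idx[1]], nodes[idx[2]])
        for idx in product(range(m), repeat=d)
    )
    print(f"recursive: {chain_integral(d):.10f}   brute force: {brute:.10f}")
    ```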

  12. The use of a resource-based relative value scale (RBRVS) to determine practice expense costs: a novel technique of practice management for the vascular surgeon.

    Science.gov (United States)

    Mabry, C D

    2001-03-01

    Vascular surgeons have had to contend with rising costs while their reimbursements have undergone steady reductions. The use of newer accounting techniques can help vascular surgeons better manage their practices, plan for future expansion, and control costs. This article reviews traditional accounting methods, together with activity-based costing (ABC) principles that have been used in the past for practice expense analysis. The main focus is on a new technique, resource-based costing (RBC), which uses the widely available Resource-Based Relative Value Scale (RBRVS) as its basis. The RBC technique promises easier implementation as well as more flexibility in determining true costs of performing various procedures, as opposed to more traditional accounting methods. It is hoped that RBC will assist vascular surgeons in coping with decreasing reimbursement. Copyright 2001 by W.B. Saunders Company
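
    The arithmetic behind an RVU-based costing of this kind can be sketched as allocating total practice expense in proportion to practice-expense relative value units times procedure volume. The procedure names, RVU values and dollar figures below are hypothetical placeholders, not RBRVS data from the article.

    ```python
    # Toy resource-based costing allocation: spread total practice expense
    # across procedures in proportion to RVUs x annual volume.
    total_practice_expense = 600_000.0               # annual overhead, dollars (assumed)

    procedures = {                                   # name: (practice-expense RVUs, annual volume)
        "carotid endarterectomy": (28.0, 120),
        "femoral-popliteal bypass": (35.0, 80),
        "office visit, established": (1.2, 3000),
    }

    total_rvus = sum(rvu * vol for rvu, vol in procedures.values())
    cost_per_rvu = total_practice_expense / total_rvus

    for name, (rvu, vol) in procedures.items():
        print(f"{name:28s} estimated cost per case = ${rvu * cost_per_rvu:8.2f}")
    ```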

  13. Scaling law systematics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Duechs, D.F.

    1985-01-01

    A number of statistical implications of empirical scaling laws in the form of power products obtained by linear regression are analysed. The sensitivity of the error to a change of exponents is described by a sensitivity factor, and the uncertainty of predictions by a "range of predictions factor". Inner relations in the statistical material are discussed, as well as the consequences of discarding variables. A recipe is given for the computations to be done. The whole is exemplified by considering scaling laws for the electron energy confinement time of ohmically heated tokamak plasmas. (author)
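
    The basic fitting step behind such power-product scaling laws is ordinary least squares in log space; the sketch below fits tau = C * I^a * n^b to synthetic tokamak-like data and reports the standard errors of the exponents. The data and the "true" exponents are invented for illustration.

    ```python
    # Fitting a power-product scaling law tau = C * I^a * n^b by linear
    # regression in log space, with standard errors of the exponents.
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs = 200
    current = rng.uniform(0.5, 2.0, n_obs)           # plasma current (arbitrary units)
    density = rng.uniform(1.0, 8.0, n_obs)           # line-averaged density
    tau = 0.05 * current**0.9 * density**0.5 * np.exp(0.05 * rng.standard_normal(n_obs))

    # log tau = log C + a log I + b log n  ->  ordinary least squares
    X = np.column_stack([np.ones(n_obs), np.log(current), np.log(density)])
    coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
    log_c, a, b = coef
    print(f"C = {np.exp(log_c):.3f}, exponents a = {a:.2f}, b = {b:.2f}")

    # Crude sensitivity measure: standard errors of the exponents from residuals.
    resid = np.log(tau) - X @ coef
    sigma2 = resid @ resid / (n_obs - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    print("standard errors of exponents:", np.sqrt(np.diag(cov))[1:])
    ```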

  14. A FIRST LOOK AT CREATING MOCK CATALOGS WITH MACHINE LEARNING TECHNIQUES

    International Nuclear Information System (INIS)

    Xu Xiaoying; Ho, Shirley; Trac, Hy; Schneider, Jeff; Ntampaka, Michelle; Poczos, Barnabas

    2013-01-01

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ∼0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ∼5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
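
    The kNN half of the approach can be sketched in a few lines: train a nearest-neighbour regressor from halo properties to N_gal and score it on held-out halos. The "catalog" below is a synthetic stand-in for the Millennium halos, and the use of scikit-learn's regressor is an illustrative choice rather than the paper's exact pipeline.

    ```python
    # k-nearest-neighbour regression from halo properties to galaxy occupation,
    # evaluated by mean-squared error on held-out halos. Synthetic data only.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)
    n_halos = 5000
    log_mass = rng.uniform(11.0, 15.0, n_halos)      # stand-in for log10 M_200
    sigma_v = 100.0 * 10 ** (0.3 * (log_mass - 13.0)) * (1 + 0.05 * rng.standard_normal(n_halos))
    spin = rng.lognormal(-3.2, 0.5, n_halos)
    n_gal = np.maximum(0, np.round(10 ** (0.9 * (log_mass - 12.5)) + rng.standard_normal(n_halos)))

    X = np.column_stack([log_mass, np.log10(sigma_v), np.log10(spin)])
    train, test = slice(0, 4000), slice(4000, None)

    knn = KNeighborsRegressor(n_neighbors=10).fit(X[train], n_gal[train])
    pred = knn.predict(X[test])
    mse = np.mean((pred - n_gal[test]) ** 2)
    print(f"kNN mean-squared error on held-out halos: {mse:.2f}")
    ```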

  15. The USANS technique for the investigation of structure from hydrated gels to porous rock

    International Nuclear Information System (INIS)

    Crompton, Kylie; Forsythe, John; Bertram, Willem; Knott, R.B.; Barker, John

    2005-01-01

    Full text: The Ultra Small Angle Neutron Scattering (USANS) technique extends the range of the Small Angle Neutron Scattering (SANS) technique into the tens-of-microns size range. This is extremely useful for many systems, particularly those where sample preparation for optical or electron microscopy can cause major changes to the microstructure under investigation. Two examples will be presented to highlight different aspects of the technique. Firstly, the structure of a fully hydrated polymer scaffold for stem cells, constructed from chitosan, was investigated. Stem cells interact with the scaffold on the micron scale; however, information on the nanoscale (i.e., individual chitosan polymer chains) is also required in order to tailor the scaffold structure. The soft, hydrated gel is unsuitable for optical or electron microscopy. Secondly, the structure of natural oil-bearing and synthetic rock was investigated. The scattering data from different thicknesses of rock were analysed using a Fourier Transform method to remove multiple scattering effects and to simulate scattering from a thin rock. In this case bulk properties such as porosity are of interest. (authors)

  16. Validation of the Work-Life Balance Culture Scale (WLBCS).

    Science.gov (United States)

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.

  17. Planning for Increased Bioenergy use - Strategies for Minimising Environmental Impacts and Analysing the Consequences

    International Nuclear Information System (INIS)

    Jonsson, Anna

    2006-08-01

    There are several goals aimed at increasing the use of renewable energy in the Swedish energy system. Bioenergy is one important renewable energy source and there is a potential to increase its use in the future. This thesis aimed to develop and analyse strategies and tools that could be used when planning for conversion to bioenergy-based heating systems and the building of new residential areas with bioenergy-based heating. The goal was to enable the increase of bioenergy and simultaneously minimise the negative health effects caused by emissions associated with the combustion of bioenergy. The thesis consists of two papers. Paper I concerned existing residential areas and conversion from electric heating and individual heating systems, such as firewood and oil boilers, to more modern and low-emitting pellet techniques and small-scale district heating. Paper II concerned new residential areas and how to integrate bioenergy-based heating systems that cause impacts on local air quality into the physical planning process through using Geographical Information Systems (GIS) and a meteorological dispersion model, ALARM. The results from Paper I indicated that it was possible to convert areas currently using electric heating to pellet techniques and small-scale district heating without degrading local air quality. Furthermore, it was possible to decrease high emissions caused by firewood boilers by replacing them with pellet boilers. The results from Paper II highlighted that GIS and ALARM were advantageous for analysing local air quality characteristics when planning for new residential areas and before a residential area is built: thus, avoiding negative impacts caused by bioenergy-based combustion. In conclusion, the work procedures developed in this thesis can be used to counteract negative impacts on local air quality with increasing use of bioenergy in the heating system. Analysis of potentially negative aspects before conversion to bioenergy-based heating

  18. Research and realization of ten-print data quality control techniques for imperial scale automated fingerprint identification system

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2017-01-01

    Full Text Available As the first individualization-information processing equipment put into practical service worldwide, the Automated Fingerprint Identification System (AFIS) has always been regarded as the first choice for the individualization of criminal suspects or of those who died in mass disasters. By integrating data from the existing regional large-scale AFIS databases, many countries are constructing ultra-large, state-of-the-art AFIS (or Imperial Scale AFIS) systems. It is therefore very important to develop a series of ten-print data quality control processes for systems of this type, in order to ensure substantial matching efficiency as data pour into an imperial-scale system. Because the image quality of ten-print data is closely related to AFIS matching proficiency, many police departments have allocated huge amounts of human and financial resources to this issue by carrying out manual verification work for years. Unfortunately, this quality control approach has always proved inadequate, because it involves an astronomical workload and has remained problematic and less reliable with respect to potential errors. Moreover, when quality control is implemented through the above procedure, a supplementary-acquisition effect is caused by the delay of feedback instructions sent from the human verification teams. In this article, a series of fingerprint image quality supervision techniques is put forward, which makes it possible for computer programs to supervise ten-print image quality in real time and in a more accurate manner, as a substitute for traditional manual verification. Besides its prominent advantages in human and financial expenditure, it has also been shown to clearly improve the image quality of the AFIS ten-print database, which leads to a dramatic improvement in AFIS matching accuracy as well.

  19. Full-scale vibration tests of Atucha II N.P.P. Part I: objectives, instrumentation and test description

    International Nuclear Information System (INIS)

    Konno, T.; Tsugawa, T.; Sala, G.; Friebe, T.M.; Prato, C.A.; Godoy, A.R.

    1995-01-01

    The main purpose of the tests was to provide experimental data on the dynamic characteristics of the main reactor building and adjacent structures of a full-scale nuclear power plant built on deep Quaternary soil deposits. Test results were intended to provide a benchmark case for control and calibration of state-of-the-art numerical techniques used for engineering design of new plants and assessment of existing facilities. Interpretation of test results and calibration of numerical analyses are described in other associated papers. (author). 5 figs

  20. SU-G-IeP3-14: Updating Tools for Radiographic Technique Charts

    Energy Technology Data Exchange (ETDEWEB)

    Walz-Flannigan, A; Lucas, J; Buchanan, K; Schueler, B [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: Manual technique selection in radiography is needed for imaging situations where there is difficulty in proper positioning for AEC, for prostheses, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast for various kV and tissue thicknesses is needed to create manual technique charts, and is relevant for physicists involved in technique chart optimization. The guidance on technique combinations and the rules of thumb for providing consistent image signal that are still in use today are based on optical density measurements of screen-film combinations and older-generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools on modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure calibrated signal change for CR and DR (with grid) at various beam energies. Tube current values were calculated that would yield a consistent image signal response. Data were fit to provide sufficient granularity of detail to compose a technique-scale chart. Tissue thickness was approximated as equivalent to 80% of water depth. Results: We created updated technique-scale charts, providing mAs and kV combinations to achieve consistent signal for CR and DR for various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled, size-based manual technique charts. Conclusion: Relative scaling of mAs and kV for constant signal (i.e., the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors in relative (not absolute) changes in mAs with varying kV. However, as demonstrated, these difficult-to-find detailed technique scales are useful tools for manual chart optimization.

  1. Deconfinement phase transition and finite-size scaling in SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Mogilevskij, O.A.

    1988-01-01

    A calculation technique for deconfinement phase transition parameters, based on the application of finite-size scaling theory, is suggested. The essence of the technique lies in constructing the universal scaling function from numerical data obtained on finite lattices of different sizes and extracting the phase transition parameters for the infinite-lattice system. The finite-size scaling technique was originally developed in the theory of spin systems. The β critical index for the Polyakov loop and the deconfinement temperature of SU(2) lattice gauge theory are calculated on the basis of the finite-size scaling technique. The obtained value agrees with the critical index of the magnetization in the three-dimensional Ising model.
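
    The core of the method, rescaling data from different lattice sizes so that they fall on one universal curve, can be sketched with toy data. The exponents and the scaling function below are assumptions chosen for illustration; this is not an SU(2) gauge-theory calculation.

    ```python
    # Toy finite-size scaling collapse: data generated from the scaling form
    # O(t, L) = L**(-beta/nu) * F(t * L**(1/nu)) should collapse onto one curve
    # only when rescaled with the correct exponents.
    import numpy as np

    beta_exp, nu = 0.33, 0.63                        # assumed (Ising-3D-like) exponents
    F = lambda x: 1.0 / (1.0 + np.exp(x))            # toy universal scaling function

    rng = np.random.default_rng(6)
    sizes = [8, 16, 32]
    t = np.linspace(-0.2, 0.2, 41)                   # reduced coupling (beta - beta_c)
    data = {L: L**(-beta_exp / nu) * F(t * L**(1 / nu)) * (1 + 0.01 * rng.standard_normal(t.size))
            for L in sizes}

    def collapse_spread(b, n):
        """Scatter of the rescaled curves; small values mean a good collapse."""
        xs = np.concatenate([t * L**(1 / n) for L in sizes])
        ys = np.concatenate([data[L] * L**(b / n) for L in sizes])
        ys = ys[np.argsort(xs)]
        return np.mean(np.abs(np.diff(ys))) / (ys.max() - ys.min())

    print("spread with correct exponents:", round(collapse_spread(beta_exp, nu), 4))
    print("spread with wrong exponents:  ", round(collapse_spread(0.8, 1.0), 4))
    ```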

  2. Techniques for precise mapping of 226Ra and 228Ra in the ocean

    International Nuclear Information System (INIS)

    Moore, W.S.; Key, R.M.; Sarmiento, J.L.

    1985-01-01

    Improvements in the analyses of 226Ra and 228Ra in seawater made possible by better extraction and processing techniques reduce significantly the errors associated with these measurements. These improvements and the extensive sampling for Ra isotopes conducted on the TTO North Atlantic Study should enable us to use the distribution of 228Ra to study mixing processes on a 3-15 year time scale in both the upper and deep North Atlantic. The 228Ra profiles already analyzed show a closer resemblance to GEOSECS tritium data than to TTO tritium data in the upper ocean. This is because the transient tracer tritium was responding on a 10-year time scale during GEOSECS and a 20-year time scale during TTO. The steady state tracer 228Ra should always respond on a time scale of 8 years. Thus the 228Ra data obtained on TTO should provide a means to extend the features of the GEOSECS tritium field to the regions of the TTO study. The 226Ra data are of high enough quality to identify features associated with different water masses. Changes in the positions of the deep-water masses since the GEOSECS cruise are revealed by the 226Ra data.

  3. Investigation of the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV): exploratory and higher order factor analyses.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W

    2010-12-01

    The present study examined the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV; D. Wechsler, 2008a) standardization sample using exploratory factor analysis, multiple factor extraction criteria, and higher order exploratory factor analysis (J. Schmid & J. M. Leiman, 1957) not included in the WAIS-IV Technical and Interpretation Manual (D. Wechsler, 2008b). Results indicated that the WAIS-IV subtests were properly associated with the theoretically proposed first-order factors, but all but one factor-extraction criterion recommended extraction of one or two factors. Hierarchical exploratory analyses with the Schmid and Leiman procedure found that the second-order g factor accounted for large portions of total and common variance, whereas the four first-order factors accounted for small portions of total and common variance. It was concluded that the WAIS-IV provides strong measurement of general intelligence, and clinical interpretation should be primarily at that level.

  4. NM-Scale Anatomy of an Entire Stardust Carrot Track

    Science.gov (United States)

    Nakamura-Messenger, K.; Keller, L. P.; Clemett, S. J.; Messenger, S.

    2009-01-01

    Comet Wild-2 samples collected by NASA's Stardust mission are extremely complex, heterogeneous, and have experienced wide ranges of alteration during the capture process. There are two major types of track morphologies: "carrot" and "bulbous," that reflect different structural/compositional properties of the impactors. Carrot type tracks are typically produced by compact or single mineral grains which survive essentially intact as a single large terminal particle. Bulbous tracks are likely produced by fine-grained or organic-rich impactors [1]. Owing to their challenging nature and especially high value of Stardust samples, we have invested considerable effort in developing both sample preparation and analytical techniques tailored for Stardust sample analyses. Our report focuses on our systematic disassembly and coordinated analysis of Stardust carrot track #112 from the mm to nm-scale.

  5. Systems Analyses Reveal Shared and Diverse Attributes of Oct4 Regulation in Pluripotent Cells

    DEFF Research Database (Denmark)

    Ding, Li; Paszkowski-Rogacz, Maciej; Winzi, Maria

    2015-01-01

    We combine a genome-scale RNAi screen in mouse epiblast stem cells (EpiSCs) with genetic interaction, protein localization, and "protein-level dependency" studies (a systematic technique that uncovers post-transcriptional regulation) to delineate the network of factors that control the expression of Oct4, a key regulator of pluripotency. Our data signify that there are similarities, but also fundamental differences in Oct4 regulation in EpiSCs versus embryonic stem cells (ESCs). Through multiparametric data analyses, we predict that Tox4 is associating with the Paf1C complex, which maintains cell identity in both cell types, and validate that this protein-protein interaction exists in ESCs and EpiSCs. We also identify numerous knockdowns that increase Oct4 expression in EpiSCs, indicating that, in stark contrast to ESCs, Oct4 is under active repressive control in EpiSCs. These studies provide

  6. Scale invariance from phase transitions to turbulence

    CERN Document Server

    Lesne, Annick

    2012-01-01

    During a century, from the Van der Waals mean field description of gases (1874) to the introduction of renormalization group (RG) techniques (1970), thermodynamics and statistical physics were simply unable to account for the incredible universality observed in numerous critical phenomena. The great success of RG techniques is not only that they perfectly solve this challenge of critical behaviour in thermal transitions, but that they introduce extremely useful tools for a wide range of everyday situations in which a system exhibits scale invariance. The introduction of scaling, scale invariance and universality concepts has been a significant turn in modern physics and more generally in natural sciences. Since then, a new "physics of scaling laws and critical exponents", rooted in scaling approaches, allows quantitative descriptions of numerous phenomena, ranging from phase transitions to earthquakes, polymer conformations, heartbeat rhythm, diffusion, interface growth and roughening, DNA sequence, dynamical systems, chaos ...

  7. A Study for Visual Realism of Designed Pictures on Computer Screens by Investigation and Brain-Wave Analyses.

    Science.gov (United States)

    Wang, Lan-Ting; Lee, Kun-Chou

    2016-08-01

    In this article, the visual realism of designed pictures on computer screens is studied by investigation and brain-wave analyses. The practical electroencephalogram (EEG) measurement is always time-varying and fluctuating so that conventional statistical techniques are not adequate for analyses. This study proposes a new scheme based on "fingerprinting" to analyze the EEG. Fingerprinting is a technique of probabilistic pattern recognition used in electrical engineering, very like the identification of human fingerprinting in a criminal investigation. The goal of this study was to assess whether subjective preference for pictures could be manifested physiologically by EEG fingerprinting analyses. The most important advantage of the fingerprinting technique is that it does not require accurate measurement. Instead, it uses probabilistic classification. Participants' preference for pictures can be assessed using fingerprinting analyses of physiological EEG measurements. © The Author(s) 2016.

  8. Mokken scaling of the Myocardial Infarction Dimensional Assessment Scale (MIDAS).

    Science.gov (United States)

    Thompson, David R; Watson, Roger

    2011-02-01

    The purpose of this study was to examine the hierarchical and cumulative nature of the 35 items of the Myocardial Infarction Dimensional Assessment Scale (MIDAS), a disease-specific health-related quality of life measure. Data from 668 participants who completed the MIDAS were analysed using the Mokken Scaling Procedure, which is a computer program that searches polychotomous data for hierarchical and cumulative scales on the basis of a range of diagnostic criteria. Fourteen MIDAS items were retained in a Mokken scale and these items included physical activity, insecurity, emotional reaction and dependency items but excluded items related to diet, medication or side-effects. Item difficulty, in item response theory terms, ran from physical activity items (low difficulty) to insecurity, suggesting that the most severe quality of life effect of myocardial infarction is loneliness and isolation. Items from the MIDAS form a strong and reliable Mokken scale, which provides new insight into the relationship between items in the MIDAS and the measurement of quality of life after myocardial infarction. © 2010 Blackwell Publishing Ltd.

  9. Techniques for inventorying manmade impacts in roadway environments.

    Science.gov (United States)

    Dale R. Potter; J. Alan. Wagar

    1971-01-01

    Four techniques for inventorying manmade impacts along roadway corridors were devised and compared. Ground surveillance and ground photography techniques recorded impacts within the corridor visible from the road. Techniques on large- and small-scale aerial photography recorded impacts within a more complete corridor that included areas screened from the road by...

  10. Radiochemical determination of 210Pb and 226Ra in petroleum sludges and scales

    International Nuclear Information System (INIS)

    Araujo, Andressa Arruda de

    2005-01-01

    Oil extraction and production, both onshore and offshore, can generate different types of residues, such as sludge, which is deposited in the water/oil separators, valves and storage tanks, and scale, which forms on the inner surface of ducts and equipment. Analyses already carried out through gamma spectrometry indicated the existence of high radioisotope concentrations. However, radionuclides emitting low-energy gamma rays, such as 210Pb, are hardly detected by that technique. Consequently, there is a need to test alternative techniques to determine this and other radionuclides from the 238U series. This work, therefore, focuses on the radiochemical determination of the concentrations of 210Pb and 226Ra in samples of sludge and scale from the oil processing stations of UN-SEAL, a PETROBRAS unit responsible for the exploration and production of petroleum in Sergipe and Alagoas. The sludge and scale samples went through a preliminary process of oil extraction in order to separate the solid phase, where the largest fraction of the radioactivity is concentrated. After oil removal, the samples were digested using alkaline fusion as an option for dissolution. Finally, the activity concentration of the sludge and scale samples was determined using an alternative radiochemical method based on ion exchange. The activity concentration found for 210Pb varied from 1.14 to 507.3 kBq kg-1. The values for 226Ra were higher, varying from 4.36 to 3,445 kBq kg-1. The results for 226Ra were then compared with the ones found for the same samples of sludge and scale using gamma spectrometry. The results of the comparison confirm the efficiency of the methodology used in this work, that is, radiochemical determination by means of ion exchange. (author)

  11. Development of proton-induced x-ray emission techniques with application to multielement analyses of human autopsy tissues and obsidian artifacts

    International Nuclear Information System (INIS)

    Nielson, K.K.

    1975-01-01

    A method of trace element analysis using proton-induced x-ray emission (PIXE) techniques with energy dispersive x-ray detection methods is described. Data were processed using the computer program ANALEX. PIXE analysis methods were applied to the analysis of liver, spleen, aorta, kidney medulla, kidney cortex, abdominal fat, pancreas, and hair from autopsies of Pima Indians. Tissues were freeze dried and low temperature ashed before analysis. Concentrations were tabulated for K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Pb, Se, Br, Rb, Sr, Cd, and Cs and examined for significant differences related to diabetes. Concentrations of Ca and Sr in aorta, Fe and Rb in spleen and Mn in liver had different patterns in diabetics than in nondiabetics. High Cs concentrations were also observed in the kidneys of two subjects who died of renal disorders. Analyses by atomic absorption and PIXE methods were compared. PIXE methods were also applied to elemental analysis of obsidian artifacts from Campeche, Mexico. Based on K, Ba, Mn, Fe, Rb, Sr and Zr concentrations, the artifacts were related to several Guatemalan sources. (Diss. Abstr. Int., B)

  12. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  13. Stepwise integral scaling method and its application to severe accident phenomena

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.

    1993-10-01

    Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and its use of the integral response of the system make this method relatively simple to apply to very complicated multi-phase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in BWR MARK-I containment, and (3) the in-core boil-off and heating process. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

  14. Scaling satan.

    Science.gov (United States)

    Wilson, K M; Huff, J L

    2001-05-01

    The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others.

  15. Impact of speculator's expectations of returns and time scales of investment on crude oil price behaviors

    International Nuclear Information System (INIS)

    He, Ling-Yun; Fan, Ying; Wei, Yi-Ming

    2009-01-01

    Based on time series of crude oil prices (daily spot), this paper analyses price fluctuation with two significant parameters τ (speculators' time scales of investment) and ε (speculators' expectations of return) by using Zipf analysis technique, specifically, by mapping τ-returns of prices into 3-alphabeted sequences (absolute frequencies) and 2-alphabeted sequences (relative frequencies), containing the fundamental information of price fluctuations. This paper empirically explores parameters and identifies various types of speculators' cognition patterns of price behavior. In order to quantify the degree of distortion, a feasible reference is proposed: an ideal speculator. Finally, this paper discusses the similarities and differences between those cognition patterns of speculators' and those of an ideal speculator. The resultant analyses identify the possible distortion of price behaviors by their patterns. (author)
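
    The mapping described above, coding τ-returns into a small alphabet relative to an expected return ε and then examining rank-frequency statistics, can be sketched as below. The price series is a synthetic random walk standing in for the crude oil spot data, and the word length is an illustrative choice.

    ```python
    # Zipf-type coding of tau-returns: u (up), d (down) or f (flat) relative to
    # an expectation eps, followed by rank-frequency counts of short "words".
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(7)
    price = 70.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(5000)))  # synthetic spot prices

    tau, eps = 5, 0.005                              # investment horizon (days), expected return
    returns = price[tau:] / price[:-tau] - 1.0
    letters = np.where(returns > eps, "u", np.where(returns < -eps, "d", "f"))

    word_len = 3                                     # look at 3-letter words
    words = ["".join(letters[i:i + word_len]) for i in range(len(letters) - word_len + 1)]
    ranked = Counter(words).most_common()

    for rank, (word, freq) in enumerate(ranked[:5], start=1):
        print(f"rank {rank}: word {word}  frequency {freq}")
    ```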

  16. Optimized evaporation technique for leachate treatment: Small scale implementation.

    Science.gov (United States)

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature. Copyright © 2016. Published by Elsevier Ltd.

  17. Static analysis: from theory to practice; Static analysis of large-scale embedded code, generation of abstract domains; Analyse statique: de la theorie a la pratique; analyse statique de code embarque de grande taille, generation de domaines abstraits

    Energy Technology Data Exchange (ETDEWEB)

    Monniaux, D.

    2009-06-15

    Software operating critical systems (aircraft, nuclear power plants) should not fail, whereas most computerised systems of daily life (personal computers, ticket vending machines, cell phones) fail from time to time. This is not a simple engineering problem: it has been known, since the works of Turing and Cook, that proving that programs work correctly is intrinsically hard. In order to solve this problem, one needs methods that are, at the same time, efficient (moderate costs in time and memory), safe (all possible failures should be found), and precise (few warnings about nonexistent failures). In order to reach a satisfactory compromise between these goals, one can draw on research fields as diverse as formal logic, numerical analysis or 'classical' algorithmics. From 2002 to 2007 I participated in the development of the Astree static analyser. This suggested to me a number of side projects, both theoretical and practical (use of formal proof techniques, analysis of numerical filters...). More recently, I became interested in modular analysis of numerical properties and in applications of constraint solving techniques (semi-definite programming, SAT and SAT modulo theory) to program analysis. (author)

  18. A simplified technique for shakedown limit load determination

    International Nuclear Information System (INIS)

    Abdalla, Hany F.; Megahed, Mohammad M.; Younan, Maher Y.A.

    2007-01-01

    In this paper, a simplified technique is presented to determine the shakedown limit load of a structure using the finite element method. The simplified technique determines the shakedown limit load without performing lengthy, time-consuming full elastic-plastic cyclic loading simulations or conventional iterative elastic techniques. Instead, the shakedown limit load is determined by performing two analyses, namely an elastic analysis and an elastic-plastic analysis. By extracting the results of the two analyses, the shakedown limit load is determined through the calculation of the residual stresses developed within the structure. The simplified technique is applied and verified using two benchmark shakedown problems, namely the two-bar structure subjected to constant axial force and cyclic thermal loading, and the Bree cylinder subjected to constant internal pressure and cyclic high temperature variation across its wall. The results of the simplified technique showed very good correlation with the analytically determined Bree diagrams of both structures. In order to gain confidence in the simplified technique, the shakedown limit loads output by the simplified technique are used to perform full elastic-plastic cyclic loading simulations to check for shakedown behavior of both structures.
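
    The check at the heart of such a two-analysis procedure can be illustrated with a few numbers: take the residual stress field as the difference between the elastic-plastic and purely elastic solutions, and require that this residual field plus the cyclic elastic stresses stays within yield everywhere. The one-dimensional stress values below are hypothetical and are not taken from the paper.

    ```python
    # Minimal illustration of a residual-stress shakedown check:
    # residual = elastic-plastic stress - elastic stress, and shakedown requires
    # |residual + elastic stress at each load extreme| <= yield at every point.
    import numpy as np

    yield_stress = 250.0                                           # MPa (assumed)
    sigma_elastic = np.array([180.0, 260.0, 300.0, 220.0])         # fictitious elastic solution
    sigma_elastic_plastic = np.array([180.0, 245.0, 250.0, 215.0]) # from the elastic-plastic run

    residual = sigma_elastic_plastic - sigma_elastic               # self-equilibrating residual field
    extremes = [0.0 * sigma_elastic, sigma_elastic]                # cyclic load swings between zero and full load

    shakedown = all(np.all(np.abs(residual + s) <= yield_stress + 1e-9) for s in extremes)
    print("residual stresses:", residual)
    print("structure shakes down at this load level:", shakedown)
    ```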

  19. POC-scale testing of an advanced fine coal dewatering equipment/technique

    Energy Technology Data Exchange (ETDEWEB)

    Groppo, J.G.; Parekh, B.K. [Univ. of Kentucky, Lexington, KY (United States); Rawls, P. [Department of Energy, Pittsburgh, PA (United States)

    1995-11-01

    The froth flotation technique is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a moisture level of 20 percent will be an important step in the successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal could be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is on proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use the advanced fine coal recovery technology due to the non-availability of an economical dewatering process. In fact, in a recent survey conducted by U.S. DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economic fine clean coal slurry dewatering process.

  20. Construction of a Scale-Questionnaire on the Attitude of the Teaching Staff as Opposed to the Educative Innovation by Means of Techniques of Cooperative Work (CAPIC

    Directory of Open Access Journals (Sweden)

    Joan Andrés Traver Martí

    2007-05-01

    Full Text Available In the present work, the construction process of a scale-questionnaire to measure the attitude of teaching staff towards educational innovation by means of cooperative work techniques (CAPIC) is described. In order to carry out its design and elaboration, we need, on the one hand, a model for analysing attitudes and, on the other, an instrument for measuring them that is capable of guiding its practical dynamics. The Theory of Reasoned Action of Fishbein and Ajzen (1975, 1980) and summative (Likert) scales have fulfilled this role in both cases.

  1. Comparison of digital and conventional impression techniques: evaluation of patients' perception, treatment comfort, effectiveness and clinical outcomes.

    Science.gov (United States)

    Yuzbasioglu, Emir; Kurt, Hanefi; Turunc, Rana; Bilir, Halenur

    2014-01-30

    The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impression participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3 M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects' attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon Rank test, and p < 0.05 was considered significant. There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques.

  2. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    Science.gov (United States)

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the big data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing the Amazon Web Services (AWS). Our system won the first prize on the Wind and Cloud challenge held by Genomics and Cloud Technology Alliance conference (GCTA) committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluate the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it only took 18.4 min to finish the analysis and the AWS cost of the whole process is only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS performance on a real-world dataset provided by the XiangYa hospital, which consists of 5× whole-genome dataset with 500 samples, and on average GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excelled as an efficient and affordable WGS analyses tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  3. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  4. Lichen techniques of pollution assessment

    Energy Technology Data Exchange (ETDEWEB)

    O' Hare, G

    1973-01-01

    Available techniques for determining air pollution by sulfur dioxide using lichens are described. An application of these methods - species distributions, zone mapping and analyses of total sulfur content - in the west central Scotland area, is briefly reported.

  5. Experimental investigations of micro-scale flow and heat transfer phenomena by using molecular tagging techniques

    International Nuclear Information System (INIS)

    Hu, Hui; Jin, Zheyan; Lum, Chee; Nocera, Daniel; Koochesfahani, Manoochehr

    2010-01-01

    Recent progress made in the development of novel molecule-based flow diagnostic techniques, including molecular tagging velocimetry (MTV) and lifetime-based molecular tagging thermometry (MTT), to achieve simultaneous measurements of multiple important flow variables for micro-flows and micro-scale heat transfer studies is reported in this study. The focus of the work described here is the particular class of molecular tagging tracers that relies on phosphorescence. Instead of using tiny particles, especially designed phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, are used as tracers for both flow velocity and temperature measurements. A pulsed laser is used to 'tag' the tracer molecules in the regions of interest, and the tagged molecules are imaged at two successive times within the photoluminescence lifetime of the tracer molecules. The measured Lagrangian displacement of the tagged molecules provides the estimate of the fluid velocity. The simultaneous temperature measurement is achieved by taking advantage of the temperature dependence of phosphorescence lifetime, which is estimated from the intensity ratio of the tagged molecules in the acquired two phosphorescence images. The implementation and application of the molecular tagging approach for micro-scale thermal flow studies are demonstrated by two examples. The first example is to conduct simultaneous flow velocity and temperature measurements inside a microchannel to quantify the transient behavior of electroosmotic flow (EOF) to elucidate underlying physics associated with the effects of Joule heating on electrokinematically driven flows. The second example is to examine the time evolution of the unsteady heat transfer and phase changing process inside micro-sized, icing water droplets, which is pertinent to the ice formation and accretion processes as water droplets impinge onto cold wind turbine blades
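
    The lifetime-based thermometry step rests on simple algebra for a single-exponential phosphorescence decay: two gated intensities acquired a delay Δt apart give the lifetime as τ = Δt / ln(I1/I2), and temperature then follows from a calibration of τ against T. The calibration curve and pixel intensities below are invented numbers, not data from the study.

    ```python
    # Lifetime from two gated phosphorescence images, then temperature from an
    # assumed linear tau-vs-T calibration (illustrative values only).
    import numpy as np

    dt = 2.0e-3                                           # s, delay between the two gated images
    i1 = np.array([[1200.0, 1100.0], [950.0, 900.0]])    # first-gate intensities (counts)
    i2 = np.array([[ 610.0,  540.0], [430.0, 390.0]])    # second-gate intensities

    tau = dt / np.log(i1 / i2)                            # phosphorescence lifetime per pixel (s)

    # Hypothetical linear calibration: lifetime drops as temperature rises.
    tau_ref, t_ref, slope = 3.2e-3, 20.0, -0.08e-3        # s, deg C, s per deg C
    temperature = t_ref + (tau - tau_ref) / slope

    print("lifetime (ms):\n", np.round(tau * 1e3, 2))
    print("temperature (deg C):\n", np.round(temperature, 1))
    ```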

  6. Coalescent-based genome analyses resolve the early branches of the euarchontoglires.

    Directory of Open Access Journals (Sweden)

    Vikas Kumar

    Full Text Available Despite numerous large-scale phylogenomic studies, certain parts of the mammalian tree are extraordinarily difficult to resolve. We used the coding regions from 19 completely sequenced genomes to study the relationships within the super-clade Euarchontoglires (Primates, Rodentia, Lagomorpha, Dermoptera and Scandentia) because the placement of Scandentia within this clade is controversial. The difficulty in resolving this issue is due to the short time spans between the early divergences of Euarchontoglires, which may cause incongruent gene trees. The conflict in the data can be depicted by network analyses and the contentious relationships are best reconstructed by coalescent-based analyses. This method is expected to be superior to analyses of concatenated data in reconstructing a species tree from numerous gene trees. The total concatenated dataset used to study the relationships in this group comprises 5,875 protein-coding genes (9,799,170 nucleotides) from all orders except Dermoptera (flying lemurs). Reconstruction of the species tree from 1,006 gene trees using coalescent models placed Scandentia as sister group to the primates, which is in agreement with maximum likelihood analyses of concatenated nucleotide sequence data. Additionally, both analytical approaches favoured the Tarsier to be sister taxon to Anthropoidea, thus belonging to the Haplorrhine clade. When divergence times are short such as in radiations over periods of a few million years, even genome scale analyses struggle to resolve phylogenetic relationships. On these short branches processes such as incomplete lineage sorting and possibly hybridization occur and make it preferable to base phylogenomic analyses on coalescent methods.

  7. Does technique matter; a pilot study exploring weighting techniques for a multi-criteria decision support framework.

    Science.gov (United States)

    van Til, Janine; Groothuis-Oudshoorn, Catharina; Lieferink, Marijke; Dolan, James; Goetghebeur, Mireille

    2014-01-01

    There is an increased interest in the use of multi-criteria decision analysis (MCDA) to support regulatory and reimbursement decision making. The EVIDEM framework was developed to provide pragmatic multi-criteria decision support in health care, to estimate the value of healthcare interventions, and to aid in priority-setting. The objectives of this study were to test 1) the influence of different weighting techniques on the overall outcome of an MCDA exercise, 2) the discriminative power of such techniques in weighting different criteria, and 3) whether different techniques result in similar weights when weighting the criteria set proposed by the EVIDEM framework. A sample of 60 Dutch and Canadian students participated in the study. Each student used an online survey to provide weights for 14 criteria with two different techniques: a five-point rating scale and one of the following techniques, selected randomly: ranking, point allocation, pairwise comparison and best-worst scaling. The results of this study indicate that there is no effect of differences in weights on value estimates at the group level. On an individual level, considerable differences in criteria weights and rank order occur as a result of the weight elicitation method used and of the differing ability of the techniques to discriminate in criteria importance. Of the five techniques tested, the pairwise comparison of criteria has the highest ability to discriminate in weights when fourteen criteria are compared. When weights are intended to support group decisions, the choice of elicitation technique has negligible impact on criteria weights and the overall value of an innovation. However, when weights are used to support individual decisions, the choice of elicitation technique influences outcome and studies that use dissimilar techniques cannot be easily compared. Weight elicitation through pairwise comparison of criteria is preferred when taking into account its superior ability to discriminate between
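
    As a generic illustration of one of the elicitation techniques named in this record (pairwise comparison of criteria), the sketch below derives normalized criteria weights from a reciprocal pairwise comparison matrix using the geometric-mean (logarithmic least squares) approximation common in AHP-style weighting; the matrix values are invented for illustration and are not taken from the EVIDEM study.

```python
import numpy as np

def weights_from_pairwise(A):
    """Criteria weights from a reciprocal pairwise comparison matrix A.

    A[i, j] expresses how much more important criterion i is than j
    (with A[j, i] = 1 / A[i, j]).  The geometric mean of each row,
    normalized to sum to one, approximates the principal eigenvector.
    """
    A = np.asarray(A, dtype=float)
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return gm / gm.sum()

# Hypothetical comparisons for three criteria (e.g. efficacy, safety, cost)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(weights_from_pairwise(A))   # roughly [0.65, 0.23, 0.12]
```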

  8. Scale-up of miscible flood processes. Quarterly report, July 1, 1993--September 30, 1993

    Energy Technology Data Exchange (ETDEWEB)

    Orr, F.M. Jr.

    1993-12-31

    Progress is reported for a comprehensive investigation of the scaling behavior of gas injection processes in heterogeneous reservoirs. The interplay of phase behavior, viscous fingering, gravity segregation, capillary imbibition and drainage, and reservoir heterogeneity is examined in a series of simulations and experiments. Compositional and first-contact miscible simulations of viscous fingering and gravity segregation are compared to show that the two techniques can give very different results. Also, analyzed are two-dimensional and three-dimensional flows in which gravity segregation and viscous fingering interact. The simulations show that 2D and 3D flows can differ significantly. A comparison of analytical solutions for three-component two-phase flow with experimental results for oil/water/alcohol systems is reported. While the experiments and theory show reasonable agreement, some differences remain to be explained. The scaling behavior of the interaction of gravity segregation and capillary forces is investigated through simulations and through scaling arguments based on analysis of the differential equations. The simulations show that standard approaches do not agree well with results of low IFT displacements. The scaling analyses, however, reveal flow regimes where capillary, gravity, or viscous forces dominate the flow.

  9. Symbolic Multidimensional Scaling

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); Y. Terada

    2015-01-01

    Abstract: Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,
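
    For readers unfamiliar with the underlying technique, the following sketch runs ordinary (non-symbolic) metric MDS on a small precomputed dissimilarity matrix with scikit-learn; symbolic MDS, where each dissimilarity is an interval or histogram rather than a single value, generalizes this idea but is not implemented here. The dissimilarity values are made up for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

# A small symmetric dissimilarity matrix between four objects (invented values)
D = np.array([[0.0, 2.0, 4.0, 5.0],
              [2.0, 0.0, 3.0, 4.5],
              [4.0, 3.0, 0.0, 1.5],
              [5.0, 4.5, 1.5, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
X = mds.fit_transform(D)          # 2-D coordinates whose distances mimic D
print(X)
print("stress:", mds.stress_)     # badness-of-fit of the embedding
```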

  10. Development of the Teacher Candidates’ Level of being Affected from Public Personnel Selection Examination Scale

    Directory of Open Access Journals (Sweden)

    Fatma SUSAR KIRMIZI

    2016-07-01

    Full Text Available This study aimed to develop a scale to evaluate teacher candidates' level of being affected by the public personnel selection examination. The participants of the study consisted of final-year students at Pamukkale University Education Faculty. The participants were 207 teacher candidates, of whom 143 were female and 64 were male. The validity and reliability study of the scale was conducted on data gathered from teacher candidates studying at the Art Teaching, Music Teaching, Turkish Language Teaching, Social Studies Education, Science Teaching, Psychological Counseling and Guidance Education, Elementary Education and Preschool Education departments of Pamukkale University Education Faculty. The Lawshe technique was used in the evaluation of the scale by experts. To determine the construct validity, factor analysis was performed on the data, and two sub-scales were identified. The factor loading values of the items in the first sub-scale ranged between 0.65 and 0.35, and those in the second sub-scale between 0.75 and 0.39. As a result of the analyses, the "Teacher Candidates' Level of Being Affected From Public Personnel Selection Examination Scale" (TCLBAPPSES), including 33 items, 23 negative and 10 positive, and two sub-scales, was produced. The Cronbach's alpha reliability coefficient was found to be 0.86 for the first sub-dimension, 0.73 for the second sub-dimension, and 0.91 for the whole scale. As a result, it can be argued that the scale is reliable.
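
    The internal consistency coefficient reported in this record can be reproduced for any item-response matrix with a few lines of code; a minimal sketch of Cronbach's alpha (respondents in rows, items in columns) is shown below with made-up data.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Tiny illustrative example: 5 respondents, 4 Likert-type items
X = [[4, 5, 4, 4],
     [2, 2, 3, 2],
     [3, 3, 3, 4],
     [5, 4, 5, 5],
     [1, 2, 2, 1]]
print(round(cronbach_alpha(X), 3))
```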

  11. Innovative SU-8 Lithography Techniques and Their Applications

    Directory of Open Access Journals (Sweden)

    Jeong Bong Lee

    2014-12-01

    Full Text Available SU-8 has been widely used in a variety of applications for creating structures at the micro-scale as well as sub-micron scales for more than 15 years. One of the most common structures made of SU-8 is a tall (up to millimeters), high-aspect-ratio (up to 100:1) 3D microstructure, which is far better than that made of any other photoresist. There has been a great deal of effort in developing innovative unconventional lithography techniques to fully utilize the thick, high-aspect-ratio nature of the SU-8 photoresist. Those unconventional lithography techniques include inclined ultraviolet (UV) exposure, back-side UV exposure, drawing lithography, and moving-mask UV lithography. In addition, since SU-8 is a negative-tone photoresist, it has been a popular choice of material for multiple-photon interference lithography for periodic structures at scales down to deep sub-microns, such as photonic crystals. These innovative lithography techniques for SU-8 have led to many unprecedented capabilities for creating unique micro- and nano-structures. This paper reviews such innovative lithography techniques developed in the past 15 years or so.

  12. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    Science.gov (United States)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques and differences in the data studied.
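
    For reference, the Rayleigh (Norden-Putnam) model referred to in this record expresses staffing effort per unit time as y(t) = 2*K*a*t*exp(-a*t^2), where K is the total life-cycle effort and the peak occurs at t_d = 1/sqrt(2a). The sketch below fits this curve to a hypothetical monthly effort series with SciPy; the numbers are illustrative only and are not data from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_effort(t, K, a):
    """Rayleigh staffing curve: effort per unit time at project time t."""
    return 2.0 * K * a * t * np.exp(-a * t ** 2)

# Hypothetical monthly effort data (person-months per month)
t = np.arange(1, 25, dtype=float)
y = np.array([3, 6, 9, 12, 13, 14, 14, 13, 12, 11, 9, 8,
              7, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1], dtype=float)

(K, a), _ = curve_fit(rayleigh_effort, t, y, p0=(y.sum(), 0.01))
print(f"total effort K ~ {K:.0f} person-months, "
      f"peak staffing at t_d ~ {1 / np.sqrt(2 * a):.1f} months")
```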

  13. Failure Mechanism of Rock Bridge Based on Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Guoqing Chen

    2015-01-01

    Full Text Available The acoustic emission (AE) technique is widely used in various fields as a reliable nondestructive examination technology. Two experimental tests were carried out in a rock mechanics laboratory: (1) small scale direct shear tests of rock bridges with different lengths and (2) a large scale landslide model with a locked section. The relationship between AE event count and recording time was analyzed during the tests. AE source location was performed and compared with the actual failure mode. In both the small scale tests and the large scale landslide model test, the AE technique accurately located the AE source points, reflecting the generation and expansion of internal cracks in the rock samples. The large scale landslide model test with a locked section showed that the rock bridge in a rocky slope exhibits typical brittle failure behavior. The two AE-based tests revealed the rock failure mechanism in rocky slopes and clarified the cause of high speed, long distance sliding of rocky slopes.

  14. Scaling of differential equations

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...
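
    As a concrete illustration of the kind of scaling the book teaches (a standard textbook example, not taken from the book verbatim), consider nondimensionalizing a damped linear oscillator:

```latex
% Dimensional model:  m u'' + b u' + k u = 0,  u(0) = U_0,  u'(0) = 0.
% Introduce dimensionless variables \bar{u} = u/U_0 and \bar{t} = t/t_c
% with the time scale t_c = \sqrt{m/k}. Substituting and dividing by k U_0:
\[
  \frac{d^{2}\bar{u}}{d\bar{t}^{2}}
  + 2\zeta\,\frac{d\bar{u}}{d\bar{t}}
  + \bar{u} = 0,
  \qquad
  \bar{u}(0) = 1,\quad \bar{u}'(0) = 0,
  \qquad
  \zeta = \frac{b}{2\sqrt{mk}} .
\]
% Four physical parameters (m, b, k, U_0) collapse into a single
% dimensionless damping ratio \zeta, which is exactly the kind of
% simplification of simulation inputs that scaling aims for.
```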

  15. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  16. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    OpenAIRE

    S Safinaz; A V Ravi Kumar

    2017-01-01

    In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower resolution frames ...
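
    The record does not specify the network, so as a generic point of reference the sketch below implements the classic three-layer SRCNN-style architecture in PyTorch: frames are first upsampled bicubically to the target size and the network then learns to restore sharpness. This is an assumed, minimal stand-in, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Classic 3-layer super-resolution CNN (patch extraction, mapping, reconstruction)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x, scale: int = 2):
        # Bicubic upscaling to the target resolution, then learned refinement
        x = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

# One low-resolution grayscale frame (batch, channel, height, width)
frame = torch.rand(1, 1, 120, 160)
upscaled = SRCNN()(frame)          # -> torch.Size([1, 1, 240, 320])
print(upscaled.shape)
```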

  17. Transcorporeal cervical foraminotomy: description of technique and results

    Directory of Open Access Journals (Sweden)

    Guilherme Pereira Corrêa Meyer

    2014-09-01

    Full Text Available OBJECTIVE: Retrospective analysis of 216 patients undergoing foraminal decompression with a transcorporeal approach and review of the surgical technique. METHOD: 216 patients with a minimum follow-up of 2 years and an average of 41.8 months were included in the study. The clinical records of these patients were reviewed for complications, NDI (neck disability index) and VAS (visual analogue scale). Pre- and post-operative radiographs were used to evaluate the disc height. RESULTS: At the end of follow-up, patients had significant clinical improvement, with a reduction of 88.3% in the NDI and of 86.5% and 68.3% in the VAS for the neck and upper limb, respectively (p<0.05). A reduction of 8.8% in disc height was observed, without other associated complications (p<0.05). CONCLUSION: Radicular decompression through a transcorporeal approach is an alternative that provides good clinical results without the need for fusion and with few complications.

  18. Fundamental issues in finite element analyses of localization of deformation

    NARCIS (Netherlands)

    Borst, de R.; Sluys, L.J.; Mühlhaus, H.-B.; Pamin, J.

    1993-01-01

    Classical continuum models, i.e. continuum models that do not incorporate an internal length scale, suffer from excessive mesh dependence when strain-softening models are used in numerical analyses and cannot reproduce the size effect commonly observed in quasi-brittle failure. In this contribution

  19. Effective combination of DIC, AE, and UPV nondestructive techniques on a scaled model of the Belgian nuclear waste container

    Science.gov (United States)

    Iliopoulos, Sokratis N.; Areias, Lou; Pyl, Lincy; Vantomme, John; Van Marcke, Philippe; Coppens, Erik; Aggelis, Dimitrios G.

    2015-03-01

    Protecting the environment and future generations against the potential hazards arising from high-level and heat emitting radioactive waste is a worldwide concern. Following this direction, the Belgian Agency for Radioactive Waste and Enriched Fissile Materials has come up with the reference design which considers the geological disposal of the waste in purely indurated clay. In this design the wastes are first post-conditioned in massive concrete structures called Supercontainers before being transported to the underground repositories. The Supercontainers are cylindrical structures which consist of four engineering barriers that from the inner to the outer surface are namely: the overpack, the filler, the concrete buffer and possibly the envelope. The overpack, which is made of carbon steel, is the place where the vitrified wastes and spent fuel are stored. The buffer, which is made of concrete, creates a highly alkaline environment ensuring slow and uniform overpack corrosion as well as radiological shielding. In order to evaluate the feasibility of constructing such Supercontainers, two scaled models have so far been designed and tested. The first scaled model indicated crack formation on the surface of the concrete buffer, but the absence of a crack detection and monitoring system precluded defining the exact time of crack initiation, as well as the origin, the penetration depth, the crack path and the propagation history. For this reason, the second scaled model test was performed to obtain further insight by answering the aforementioned questions using the Digital Image Correlation, Acoustic Emission and Ultrasonic Pulse Velocity nondestructive testing techniques.

  20. Insights into SCP/TAPS proteins of liver flukes based on large-scale bioinformatic analyses of sequence datasets.

    Directory of Open Access Journals (Sweden)

    Cinzia Cantacessi

    Full Text Available BACKGROUND: SCP/TAPS proteins of parasitic helminths have been proposed to play key roles in fundamental biological processes linked to the invasion of and establishment in their mammalian host animals, such as the transition from free-living to parasitic stages and the modulation of host immune responses. Despite the evidence that SCP/TAPS proteins of parasitic nematodes are involved in host-parasite interactions, there is a paucity of information on this protein family for parasitic trematodes of socio-economic importance. METHODOLOGY/PRINCIPAL FINDINGS: We conducted the first large-scale study of SCP/TAPS proteins of a range of parasitic trematodes of both human and veterinary importance (including the liver flukes Clonorchis sinensis, Opisthorchis viverrini, Fasciola hepatica and F. gigantica, as well as the blood flukes Schistosoma mansoni, S. japonicum and S. haematobium). We mined all current transcriptomic and/or genomic sequence datasets from public databases, predicted secondary structures of full-length protein sequences, undertook systematic phylogenetic analyses and investigated the differential transcription of SCP/TAPS genes in O. viverrini and F. hepatica, with an emphasis on those that are up-regulated in the developmental stages infecting the mammalian host. CONCLUSIONS: This work, which sheds new light on SCP/TAPS proteins, guides future structural and functional explorations of key SCP/TAPS molecules associated with diseases caused by flatworms. Future fundamental investigations of these molecules in parasites and the integration of structural and functional data could lead to new approaches for the control of parasitic diseases.

  1. Scaling trajectories in civil aircraft (1913-1997)

    NARCIS (Netherlands)

    Frenken, K.; Leydesdorff, L.

    2000-01-01

    Using entropy statistics we analyse scaling patterns in terms of changes in the ratios among product characteristics of 143 designs in civil aircraft. Two allegedly dominant designs, the piston propeller DC3 and the turbofan Boeing 707, are shown to have triggered a scaling trajectory at the level

  2. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample

    Directory of Open Access Journals (Sweden)

    Heeren A

    2013-05-01

    Full Text Available Alexandre Heeren,1,2 Grazia Ceschi,3 David P Valentiner,4 Vincent Dethier,1 Pierre Philippot1; 1Université Catholique de Louvain, Louvain-la-Neuve, Belgium; 2National Fund for Scientific Research, Brussels, Belgium; 3Department of Psychology, University of Geneva, Geneva, Switzerland; 4Department of Psychology, Northern Illinois University, DeKalb, IL, USA. Background: The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as a Speaker (PRCS), one of the most promising measures of public speaking fear. Methods: A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Results: Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach's alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). Conclusion: The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample. Keywords: social phobia, public speaking, confirmatory

  3. Making systems with mutually exclusive events analysable by standard fault tree analysis tools

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    2001-01-01

    Methods are developed for analysing systems that comprise mutually exclusive events by fault tree techniques that accept only statistically independent basic events. Techniques based on equivalent models and numerical transformations are presented for phased missions and for systems with component-caused system-level common cause failures. Numerical examples illustrate the methods.

  4. Masonry structures built with fictile tubules: Experimental and numerical analyses

    Science.gov (United States)

    Tiberti, Simone; Scuro, Carmelo; Codispoti, Rosamaria; Olivito, Renato S.; Milani, Gabriele

    2017-11-01

    Masonry construction with fictile tubules was a distinctive building technique of the Mediterranean area. This technique dates back to Roman and early Christian times and was used to build vaulted constructions and domes with various geometrical forms by virtue of their modular structure. In the present work, experimental tests were carried out to identify the mechanical properties of hollow clay fictile tubules and a possible reinforcing technique for existing buildings employing such elements. The experimental results were then validated by devising and analyzing numerical models with the FE software Abaqus, also aimed at investigating the structural behavior of an arch via linear and nonlinear static analyses.

  5. Non-invasive PGAA, PIXE and ToF-ND analyses on Hungarian Bronze Age defensive armour

    International Nuclear Information System (INIS)

    Marianne Moedlinger; Imre Kovacs; Zoltan Szoekefalvi-Nagy; Ziad El Morr

    2014-01-01

    Non-invasive, archaeometric analyses of selected Hungarian Bronze Age defensive armour are presented here: three greaves, three helmets, two shields, as well as one vessel fragment were analysed with PIXE, PGAA and ToF-ND. The detected elemental and phase composition of the alloys, as well as their intergranular and spatial concentration distributions, reveals important insights into the alloys used and the manufacturing techniques applied c. 1200-950 BC, and allows reconstruction of the production techniques used during the Late Bronze Age. (author)

  6. Developments in functional neuroimaging techniques

    International Nuclear Information System (INIS)

    Aine, C.J.

    1995-01-01

    A recent review of neuroimaging techniques indicates that new developments have primarily occurred in the area of data acquisition hardware/software technology. For example, new pulse sequences on standard clinical imagers and high-powered, rapidly oscillating magnetic field gradients used in echo planar imaging (EPI) have advanced MRI into the functional imaging arena. Significant developments in tomograph design have also been achieved for monitoring the distribution of positron-emitting radioactive tracers in the body (PET). Detector sizes, which pose a limit on spatial resolution, have become smaller (e.g., 3--5 mm wide) and a new emphasis on volumetric imaging has emerged which affords greater sensitivity for determining locations of positron annihilations and permits smaller doses to be utilized. Electromagnetic techniques have also witnessed growth in the ability to acquire data from the whole head simultaneously. EEG techniques have increased their electrode coverage (e.g., 128 channels rather than 16 or 32) and new whole-head systems are now in use for MEG. But the real challenge now is in the design and implementation of more sophisticated analyses to effectively handle the tremendous amount of physiological/anatomical data that can be acquired. Furthermore, such analyses will be necessary for integrating data across techniques in order to provide a truly comprehensive understanding of the functional organization of the human brain

  7. A FIRST LOOK AT CREATING MOCK CATALOGS WITH MACHINE LEARNING TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Xu Xiaoying; Ho, Shirley; Trac, Hy; Schneider, Jeff; Ntampaka, Michelle [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Poczos, Barnabas [School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States)

    2013-08-01

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, sigma_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ~0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ~5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
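
    A minimal version of the kNN branch of this approach can be sketched with scikit-learn; the halo catalogue below is randomly generated stand-in data, since the Millennium features and the authors' exact preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Stand-in halo catalogue: 6 properties per halo (e.g. particle count, M_200,
# sigma_v, v_max, half-mass radius, spin), plus a fake N_gal that grows with mass.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(5000, 6))
n_gal = np.rint(0.5 * X[:, 1] + rng.normal(scale=0.3, size=5000)).clip(min=0)

X_tr, X_te, y_tr, y_te = train_test_split(np.log10(X + 1), n_gal,
                                          test_size=0.25, random_state=0)

knn = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)
print("MSE:", mean_squared_error(y_te, knn.predict(X_te)))
```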

  8. Analyses of Effects of Cutting Parameters on Cutting Edge Temperature Using Inverse Heat Conduction Technique

    Directory of Open Access Journals (Sweden)

    Marcelo Ribeiro dos Santos

    2014-01-01

    Full Text Available During machining, energy is transformed into heat due to plastic deformation of the workpiece surface and friction between tool and workpiece. High temperatures are generated in the region of the cutting edge, which have a very important influence on the wear rate of the cutting tool and on tool life. This work proposes the estimation of heat flux at the chip-tool interface using inverse techniques. Factors which influence the temperature distribution at the rake face of an AISI M32C high speed steel tool during machining of an ABNT 12L14 steel workpiece were also investigated. The temperature distribution was predicted using finite volume elements. A transient 3D numerical code using an irregular and nonstaggered mesh was developed to solve the nonlinear heat diffusion equation. To validate the software, experimental tests were made. The inverse problem was solved using the function specification method. Heat fluxes at the tool-workpiece interface were estimated using inverse problem techniques and experimental temperatures. Tests were performed to study the effect of cutting parameters on cutting edge temperature. The results were compared with those of the tool-work thermocouple technique and a fair agreement was obtained.

  9. Diagnostic Comparison of Meteorological Analyses during the 2002 Antarctic Winter

    Science.gov (United States)

    Manney, Gloria L.; Allen, Douglas R.; Kruger, Kirstin; Naujokat, Barbara; Santee, Michelle L.; Sabutis, Joseph L.; Pawson, Steven; Swinbank, Richard; Randall, Cora E.; Simmons, Adrian J.

    2005-01-01

    Several meteorological datasets, including U.K. Met Office (MetO), European Centre for Medium-Range Weather Forecasts (ECMWF), National Centers for Environmental Prediction (NCEP), and NASA's Goddard Earth Observation System (GEOS-4) analyses, are being used in studies of the 2002 Southern Hemisphere (SH) stratospheric winter and Antarctic major warming. Diagnostics are compared to assess how these studies may be affected by the meteorological data used. While the overall structure and evolution of temperatures, winds, and wave diagnostics in the different analyses provide a consistent picture of the large-scale dynamics of the SH 2002 winter, several significant differences may affect detailed studies. The NCEP-NCAR reanalysis (REAN) and NCEP-Department of Energy (DOE) reanalysis-2 (REAN-2) datasets are not recommended for detailed studies, especially those related to polar processing, because of lower-stratospheric temperature biases that result in underestimates of polar processing potential, and because their winds and wave diagnostics show increasing differences from other analyses between ~30 and 10 hPa (their top level). Southern Hemisphere polar stratospheric temperatures in the ECMWF 40-Yr Re-analysis (ERA-40) show unrealistic vertical structure, so this long-term reanalysis is also unsuited for quantitative studies. The NCEP/Climate Prediction Center (CPC) objective analyses give an inferior representation of the upper-stratospheric vortex. Polar vortex transport barriers are similar in all analyses, but there is large variation in the amount, patterns, and timing of mixing, even among the operational assimilated datasets (ECMWF, MetO, and GEOS-4). The higher-resolution GEOS-4 and ECMWF assimilations provide significantly better representation of filamentation and small-scale structure than the other analyses, even when fields gridded at reduced resolution are studied. The choice of which analysis to use is most critical for detailed transport

  10. Three-dimensional nanometer scale analyses of precipitate structures and local compositions in titanium aluminide engineering alloys

    Science.gov (United States)

    Gerstl, Stephan S. A.

    Titanium aluminide (TiAl) alloys are among the fastest developing class of materials for use in high temperature structural applications. Their low density and high strength make them excellent candidates for both engine and airframe applications. Creep properties of TiAl alloys, however, have been a limiting factor in applying the material to a larger commercial market. In this research, nanometer scale compositional and structural analyses of several TiAl alloys, ranging from model Ti-Al-C ternary alloys to putative commercial alloys with 10 components, are investigated utilizing three dimensional atom probe (3DAP) and transmission electron microscopies. Nanometer sized boride, silicide, and carbide precipitates are involved in strengthening TiAl alloys; however, chemical partitioning measurements reveal oxygen concentrations up to 14 at. % within the precipitate phases, resulting in the realization of oxycarbide formation contributing to the precipitation strengthening of TiAl alloys. The local compositions of lamellar microstructures and a variety of precipitates in the TiAl system, including boride, silicide, binary carbides, and intermetallic carbides are investigated. Chemical partitioning of the microalloying elements between the alpha2/gamma lamellar phases, and the precipitate/gamma-matrix phases are determined. Both W and Hf have been shown to exhibit a near interfacial excess of 0.26 and 0.35 atoms nm^-2, respectively, within ca. 7 nm of lamellar interfaces in a complex TiAl alloy. In the case of needle-shaped perovskite Ti3AlC carbide precipitates, periodic domain boundaries are observed 5.3 ± 0.8 nm apart along their growth axis parallel to the TiAl[001] crystallographic direction with concomitant composition variations after 24 hrs. at 800°C.

  11. Comparative analyses of population-scale phenomic data in electronic medical records reveal race-specific disease networks

    Science.gov (United States)

    Glicksberg, Benjamin S.; Li, Li; Badgeley, Marcus A.; Shameer, Khader; Kosoy, Roman; Beckmann, Noam D.; Pho, Nam; Hakenberg, Jörg; Ma, Meng; Ayers, Kristin L.; Hoffman, Gabriel E.; Dan Li, Shuyu; Schadt, Eric E.; Patel, Chirag J.; Chen, Rong; Dudley, Joel T.

    2016-01-01

    Motivation: Underrepresentation of racial groups represents an important challenge and major gap in phenomics research. Most of the current human phenomics research is based primarily on European populations; hence it is an important challenge to expand it to consider other population groups. One approach is to utilize data from EMR databases that contain patient data from diverse demographics and ancestries. The implications of this racial underrepresentation of data can be profound regarding effects on the healthcare delivery and actionability. To the best of our knowledge, our work is the first attempt to perform comparative, population-scale analyses of disease networks across three different populations, namely Caucasian (EA), African American (AA) and Hispanic/Latino (HL). Results: We compared susceptibility profiles and temporal connectivity patterns for 1988 diseases and 37 282 disease pairs represented in a clinical population of 1 025 573 patients. Accordingly, we revealed appreciable differences in disease susceptibility, temporal patterns, network structure and underlying disease connections between EA, AA and HL populations. We found 2158 significantly comorbid diseases for the EA cohort, 3265 for AA and 672 for HL. We further outlined key disease pair associations unique to each population as well as categorical enrichments of these pairs. Finally, we identified 51 key ‘hub’ diseases that are the focal points in the race-centric networks and of particular clinical importance. Incorporating race-specific disease comorbidity patterns will produce a more accurate and complete picture of the disease landscape overall and could support more precise understanding of disease relationships and patient management towards improved clinical outcomes. Contacts: rong.chen@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307606

  12. Evolution of feeding specialization in Tanganyikan scale-eating cichlids: a molecular phylogenetic approach

    Directory of Open Access Journals (Sweden)

    Nishida Mutsumi

    2007-10-01

    Full Text Available Background: Cichlid fishes in Lake Tanganyika exhibit remarkable diversity in their feeding habits. Among them, seven species in the genus Perissodus are known for their unique feeding habit of scale eating with specialized feeding morphology and behaviour. Although the origin of the scale-eating habit has long been questioned, its evolutionary process is still unknown. In the present study, we conducted interspecific phylogenetic analyses for all nine known species in the tribe Perissodini (seven Perissodus and two Haplotaxodon species) using amplified fragment length polymorphism (AFLP) analyses of the nuclear DNA. On the basis of the resultant phylogenetic frameworks, the evolution of their feeding habits was traced using data from analyses of stomach contents, habitat depths, and observations of oral jaw tooth morphology. Results: AFLP analyses resolved the phylogenetic relationships of the Perissodini, strongly supporting monophyly for each species. The character reconstruction of feeding ecology based on the AFLP tree suggested that scale eating evolved from general carnivorous feeding to highly specialized scale eating. Furthermore, scale eating is suggested to have evolved in deepwater habitats in the lake. Oral jaw tooth shape was also estimated to have diverged in step with specialization for scale eating. Conclusion: The present evolutionary analyses of feeding ecology and morphology based on the obtained phylogenetic tree demonstrate for the first time the evolutionary process leading from generalised to highly specialized scale eating, with diversification in feeding morphology and behaviour among species.

  13. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale... Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  14. Application of the thermal plasma technique in the treatment of stone surfaces

    International Nuclear Information System (INIS)

    Gonzalez A, Z.I.

    2000-01-01

    The stone materials which form part of the cultural heritage of Mexico are degraded under the combined action of water, atmospheric gases, air pollution, temperature changes and the action of microorganisms, provoking fissures, crevices, scaling, fragmentation and pulverization of the stone, among other effects. Therefore, the purpose of this work is to study the possibility of applying a protective coating on stone surfaces, previously cleaned and consolidated, through the thermal plasma technique. The aim is to analyse the physical and chemical properties of three types of stone materials: quarry stone, tezontle and chiluca, usually used in constructions of cultural interest such as historical monuments, churches and sculptures, before and after being submitted to the action of the thermal plasma, in order to examine the feasibility of using this coating technique in such applications. The application of conventional techniques to determine porosity, density, absorption, low pressure water absorption and crystallization by total immersion, of nuclear techniques such as neutron activation analysis, X-ray diffraction and scanning electron microscopy, and of instrumental techniques such as optical microscopy, mechanical assays of compression and flexure, and surface area calculations, allowed the chemical and physical properties of the stone material to be characterized before and after treatment by the thermal plasma technique, projecting quartz onto the stone surface at different distances and current intensities, and showing the effect of the modifications or surface alterations caused by the application of that coating. The obtained results provide a general panorama of the application of this technique as an alternative for the maintenance of the architectural heritage built in stone. (Author)

  15. Scaling images using their background ratio. An application in statistical comparisons of images

    International Nuclear Information System (INIS)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-01-01

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases
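
    The procedure described in this record (ratio image, frequency histogram, peak of the background mode as scaling factor) is straightforward to prototype; a minimal NumPy sketch is given below, with the histogram binning chosen arbitrarily.

```python
import numpy as np

def background_ratio_scale(img_a, img_b, bins=200):
    """Scaling factor between two images from the peak of their ratio histogram.

    Assumes the most frequent ratio values come from the common background
    structure, so the modal ratio is the factor that puts img_b on the
    intensity scale of img_a.
    """
    mask = (img_a > 0) & (img_b > 0)              # avoid division by zero
    ratio = img_a[mask] / img_b[mask]
    counts, edges = np.histogram(ratio, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])  # centre of the modal bin

# Illustration: img_b is img_a scaled down by 0.4 plus noise
rng = np.random.default_rng(1)
img_a = rng.poisson(100, size=(128, 128)).astype(float)
img_b = 0.4 * img_a + rng.normal(scale=1.0, size=(128, 128))
factor = background_ratio_scale(img_a, img_b)
print(factor)      # ~2.5, i.e. img_b * factor is comparable to img_a
```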

  16. Scaling images using their background ratio. An application in statistical comparisons of images.

    Science.gov (United States)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases.

  17. The role of CFD computer analyses in hydrogen safety management

    International Nuclear Information System (INIS)

    Komen, E.M.J; Visser, D.C; Roelofs, F.; Te Lintelo, J.G.T

    2014-01-01

    The risks of hydrogen release and combustion during a severe accident in a light water reactor have attracted considerable attention after the Fukushima accident in Japan. Reliable computer analyses are needed for the optimal design of hydrogen mitigation systems, e.g. passive autocatalytic recombiners (PARs), and for the assessment of the associated residual risk of hydrogen combustion. Traditionally, so-called Lumped Parameter (LP) computer codes have been used for these purposes. In the last decade, significant progress has been made in the development, validation, and application of more detailed, three-dimensional Computational Fluid Dynamics (CFD) simulations for hydrogen safety analyses. The objective of the current paper is to address the following questions: - When are CFD computer analyses needed complementary to the traditional LP code analyses for hydrogen safety management? - What is the validation status of CFD computer codes for hydrogen distribution, mitigation, and combustion analyses? - Can CFD computer analyses nowadays be executed in a practical and reliable way for full scale containments? The validation status and reliability of CFD code simulations are illustrated by validation analyses performed for experiments executed in the PANDA, THAI, and ENACCEF facilities. (authors)

  18. Magnetic Field Studies in BL Lacertae through Faraday Rotation and a Novel Astrometric Technique

    Directory of Open Access Journals (Sweden)

    Sol N. Molina

    2017-12-01

    Full Text Available It is thought that dynamically important helical magnetic fields, twisted by the differential rotation of the black hole’s accretion disk or ergosphere, play an important role in the launching, acceleration, and collimation of active galactic nuclei (AGN) jets. We present multi-frequency astrometric and polarimetric Very Long Baseline Array (VLBA) images at 15, 22, and 43 GHz, as well as Faraday rotation analyses of the jet in BL Lacertae, as part of a sample of AGN jets aimed at probing the magnetic field structure at the innermost scales to test jet formation models. The novel astrometric technique applied allows us to obtain the absolute position at mm wavelengths without any external calibrator.

  19. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    Energy Technology Data Exchange (ETDEWEB)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-15

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment. It is a priority to develop methods for scale inhibition. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase. Here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method where the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling of liquids with saturation ratios as low as SR=1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg CaCO3 in a 1 cm section of the tube packed with silica sand SiO2. A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)
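
    The gamma-transmission principle can be summarized with the narrow-beam attenuation law: if I0 is the count rate through the clean column and I the rate after scale has deposited, the added areal density is ln(I0/I)/mu_m, with mu_m the mass attenuation coefficient of the deposit at the source energy. The sketch below applies this relation; the attenuation coefficient used is a rough illustrative value, not one quoted in the report.

```python
import numpy as np

def scale_mass_per_area(I0, I, mu_m):
    """Deposited mass per unit area (g/cm^2) from a gamma transmission ratio.

    Narrow-beam Beer-Lambert attenuation:  I = I0 * exp(-mu_m * m_A),
    hence  m_A = ln(I0 / I) / mu_m,  with mu_m in cm^2/g.
    """
    return np.log(I0 / I) / mu_m

# Illustrative numbers: count rate drops from 10000 to 9950 counts/s;
# assumed mass attenuation coefficient of roughly 0.1 cm^2/g for CaCO3
# near the 356 keV line of 133Ba (approximate value for illustration).
m_A = scale_mass_per_area(10000.0, 9950.0, mu_m=0.10)
print(f"{m_A * 1000:.1f} mg/cm^2 of deposited scale")
```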

  20. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    International Nuclear Information System (INIS)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-01

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment. It is a priority to develop methods for scale inhibition. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase. Here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method where the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling of liquids with saturation ratios as low as SR=1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg CaCO3 in a 1 cm section of the tube packed with silica sand SiO2. A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)

  1. Results and analysis of high heat flux tests on a full-scale vertical target prototype of ITER divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Escourbiac, F.; Merola, M.; Bobin-Vastra, I.; Schlosser, J.; Durocher, A.

    2005-01-01

    After an extensive R and D development program, a full-scale divertor target prototype, manufactured with all the main features of the corresponding ITER divertor, was intensively tested in the high heat flux FE200 facility. The prototype consists of four units having a full monoblock geometry. The lower part (CFC armour) and the upper part (W armour) of each monoblock were joined to the solution annealed, quenched and cold worked CuCrZr tube by HIP technique. This paper summarises and analyses the main test results obtained on this prototype

  2. Comprehending Adverbs Of Doubt And Certainty In Health Communication: A Multidimensional Scaling Approach

    Directory of Open Access Journals (Sweden)

    Norman S. Segalowitz

    2016-05-01

    Full Text Available This research explored the feasibility of using multidimensional scaling (MDS analysis in novel combination with other techniques to study comprehension of epistemic adverbs expressing doubt and certainty (e.g., evidently, obviously, probably as they relate to health communication in clinical settings. In Study 1, Australian English speakers performed a dissimilarity-rating task with sentence pairs containing the target stimuli, presented as doctors' opinions. Ratings were analyzed using a combination of cultural consensus analysis (factor analysis across participants, weighted-data classical-MDS, and cluster analysis. Analyses revealed strong within-community consistency for a 3-dimensional semantic space solution that took into account individual differences, strong statistical acceptability of the MDS results in terms of stress and explained variance, and semantic configurations that were interpretable in terms of linguistic analyses of the target adverbs. The results confirmed the feasibility of using MDS in this context. Study 2 replicated the results with Canadian English speakers on the same task. Semantic analyses and stress decomposition analysis were performed on the Australian and Canadian data sets, revealing similarities and differences between the two groups. Overall, the results support using MDS to study comprehension of words critical for health communication, including in future studies, for example, second language speaking patients and/or practitioners. More broadly, the results indicate that the techniques described should be promising for comprehension studies in many communicative domains, in both clinical settings and beyond, and including those targeting other aspects of language and focusing on comparisons across different speech communities.

  3. Developing scale for colleague solidarity among nurses in Turkey.

    Science.gov (United States)

    Uslusoy, Esin Cetinkaya; Alpar, Sule Ecevit

    2013-02-01

    There is a need for an appropriate instrument to measure colleague solidarity among nurses. This study was carried out to develop a Colleague Solidarity of Nurses' Scale (CSNS). This study was planned to be descriptive and methodological. The CSNS was examined for content validity, construct validity, test-retest reliability and internal consistency reliability. The trial form of the CSNS, which was composed of 44 items, was given to 200 nurses, followed by validity and reliability analyses. Following the analyses, 21 items were excluded from the scale, leaving an attitude scale made up of 23 items. Factor analysis of the data showed that the scale has a three sub-factor structure: emotional solidarity, academic solidarity and negative opinions about solidarity. The Cronbach's alpha reliability of the whole scale was 0.80. This study provides evidence that the CSNS is a valid and reliable measure of colleague solidarity among nurses. © 2013 Wiley Publishing Asia Pty Ltd.

  4. Scaling up Copy Detection

    OpenAIRE

    Li, Xian; Dong, Xin Luna; Lyons, Kenneth B.; Meng, Weiyi; Srivastava, Divesh

    2015-01-01

    Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale for large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study how to improve the scalability of copy detection on structured data. Ou...

  5. Instruments and techniques for analysing the time-resolved transverse phase space distribution of high-brightness electron beams

    International Nuclear Information System (INIS)

    Rudolph, Jeniffa

    2012-01-01

    This thesis deals with the instruments and techniques used to characterise the transverse phase space distribution of high-brightness electron beams. In particular, methods are considered that allow measurement of the emittance as a function of the longitudinal coordinate within the bunch (slice emittance) with a resolution in the ps to sub-ps range. The main objective of this work is the analysis of techniques applicable to the time-resolved phase space characterisation of future high-brightness electron beam sources and single-pass accelerators based on these. The competence built up by understanding and comparing different techniques is to be used for the design and operation of slice diagnostic systems for the Berlin Energy Recovery Linac Project (BERLinPro). In the framework of the thesis, two methods applicable for slice emittance measurements are considered, namely the zero-phasing technique and the use of a transverse deflector. These methods combine the conventional quadrupole scan technique with a transfer of the longitudinal distribution into a transverse distribution. Measurements were performed within different collaborative projects. The experimental setup, the measurement itself and the data analysis are discussed as well as measurement results and simulations. In addition, the phase space tomography technique is introduced. In contrast to quadrupole scan-based techniques, tomography is model-independent and can reconstruct the phase space distribution from simple projected measurements. The developed image reconstruction routine based on the Maximum Entropy algorithm is introduced. The quality of the reconstruction is tested using different model distributions, simulated data and measurement data. The results of the tests are presented. The adequacy of the investigated techniques, the experimental procedures as well as the developed data analysis tools could be verified. The experimental and practical experience gathered during this work, the
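
    The quadrupole scan that underlies both conventional and slice emittance measurements has a compact thin-lens form: with a drift of length d from the quadrupole to the observation screen, the measured beam size squared is quadratic in the integrated quadrupole strength K, and the fitted parabola yields the beam-matrix elements and the rms emittance. The sketch below illustrates this reduction on synthetic data; it is a simplified illustration under the stated thin-lens assumption, not the analysis code of the thesis.

```python
import numpy as np

def emittance_from_quad_scan(K, sigma2, d):
    """Fit sigma_x^2(K) = a*K^2 + b*K + c for a thin-lens quad plus drift d (m).

    From sigma_x^2 = (1 - d*K)^2 s11 + 2*d*(1 - d*K) s12 + d^2 s22:
        s11 = a / d^2
        s12 = -(b + 2*d*s11) / (2*d^2)
        s22 = (c - s11 - 2*d*s12) / d^2
    and the rms emittance is sqrt(s11*s22 - s12^2).
    """
    a, b, c = np.polyfit(K, sigma2, 2)
    s11 = a / d**2
    s12 = -(b + 2 * d * s11) / (2 * d**2)
    s22 = (c - s11 - 2 * d * s12) / d**2
    return np.sqrt(s11 * s22 - s12**2)

# Synthetic scan: a beam with known Sigma matrix, drift d = 2 m
d = 2.0
s11, s12, s22 = 1e-6, -2e-7, 1e-7           # m^2, m*rad, rad^2 (made-up values)
K = np.linspace(-1.0, 1.0, 11)              # integrated quad strengths, 1/m
sigma2 = (1 - d*K)**2 * s11 + 2*d*(1 - d*K)*s12 + d**2 * s22
print(emittance_from_quad_scan(K, sigma2, d))   # recovers sqrt(s11*s22 - s12^2)
```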

  6. Radiocarbon analyses along the EDML ice core in Antarctica

    NARCIS (Netherlands)

    van de Wal, R.S.W.; Meijer, H.A.J.; van Rooij, M.; van der Veen, C.

    2007-01-01

    Samples, 17 in total, from the EDML core drilled at Kohnen station Antarctica are analysed for 14CO and 14CO2 with a dry-extraction technique in combination with accelerator mass spectrometry. Results of the in situ produced 14CO fraction show a very low concentration of in situ produced 14CO.

  7. Radiocarbon analyses along the EDML ice core in Antarctica

    NARCIS (Netherlands)

    Van de Wal, R. S. W.; Meijer, H. A. J.; De Rooij, M.; Van der Veen, C.

    Samples, 17 in total, from the EDML core drilled at Kohnen station, Antarctica, are analysed for 14CO and 14CO2 with a dry-extraction technique in combination with accelerator mass spectrometry. Results of the in situ produced 14CO fraction show a very low concentration of in situ

  8. The development and validation of the Incivility from Customers Scale.

    Science.gov (United States)

    Wilson, Nicole L; Holmvall, Camilla M

    2013-07-01

    Scant research has examined customers as sources of workplace incivility, despite evidence suggesting that mistreatment is more common from organizational outsiders, including customers, than from organizational members (Grandey, Kern, & Frone, 2007; Schat & Kelloway, 2005). As an important step in extending the literature on customer incivility, we conducted two studies to develop and validate a measure of this construct. Study 1 used focus groups of retail and restaurant employees (n = 30) to elicit a list of uncivil customer behaviors, based on which we wrote initial scale items. Study 2 used a correlational survey design (n = 439) to pare down the number of scale items to 10 and to garner reliability and validity evidence for the scale. Exploratory and confirmatory factor analyses show that the scale is unidimensional and distinguishable from measures of the related, but distinct, constructs of interpersonal justice and psychological aggression from customers. Reliability analyses show that the scale is internally consistent. Significant correlations between the scale and individuals' job satisfaction, turnover intentions, and general and job-specific psychological strain provide evidence of criterion-related validity. Hierarchical regression analyses show that the scale significantly predicts three of four organizational and personal strain outcomes over and above a workplace incivility measure adapted for customer incivility, providing some evidence of incremental validity. Limitations and future research directions are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. Frost Multidimensional Perfectionism Scale: the portuguese version

    Directory of Open Access Journals (Sweden)

    Ana Paula Monteiro Amaral

    2013-01-01

    Full Text Available BACKGROUND: The Frost Multidimensional Perfectionism Scale is one of the most widely used measures of perfectionism worldwide. OBJECTIVE: To analyze the psychometric properties of the Portuguese version of the Frost Multidimensional Perfectionism Scale. METHODS: Two hundred and seventeen students (178 females) from two Portuguese universities filled in the scale, and a subgroup (n = 166) completed a retest after a four-week interval. RESULTS: The scale reliability was good (Cronbach alpha = .857). Corrected item-total correlations ranged from .019 to .548. The scale test-retest reliability suggested good temporal stability, with a test-retest correlation of .765. A principal component analysis with Varimax rotation was performed and, based on the scree plot, two robust factorial structures were found (four and six factors). The principal component analyses, using Monte Carlo PCA for parallel analysis, confirmed the six-factor solution. The concurrent validity with the Hewitt and Flett MPS was high, as was the discriminant validity with positive and negative affect (Profile of Mood States - POMS). DISCUSSION: The two factorial structures (of four and six dimensions) of the Portuguese version of the Frost Multidimensional Perfectionism Scale replicate the results of different authors, with different samples and cultures. This suggests the scale is a robust instrument to assess perfectionism in several clinical and research settings as well as in transcultural studies.

  10. The Prosocial and Antisocial Behavior in Sport Scale.

    Science.gov (United States)

    Kavussanu, Maria; Boardley, Ian D

    2009-02-01

    This research aimed to (a) develop a measure of prosocial and antisocial behavior in sport, (b) examine its invariance across sex and sport, and (c) provide evidence for its discriminant and concurrent validity. We conducted two studies. In study 1, team sport athletes (N=1,213) recruited from 103 teams completed questionnaires assessing demographics and prosocial and antisocial behaviors in sport. Factor analyses revealed two factors representing prosocial behavior and two factors representing antisocial behavior. The model had a very good fit to the data and showed configural, metric, and scalar invariance across sex and sport. The final scale consisted of 20 items. In Study 2, team-sport athletes (N=106) completed the scale and measures of empathy and goal orientation. Analyses provided support for the discriminant and concurrent validity of the scale. In conclusion, the new scale can be used to measure prosocial and antisocial behaviors in team sport.

  11. Kinetic stability analyses in a bumpy cylinder

    International Nuclear Information System (INIS)

    Dominguez, R.R.; Berk, H.L.

    1981-01-01

    Recent interest in the ELMO Bumpy Torus (EBT) has prompted a number of stability analyses of both the hot electron rings and the toroidal plasma. Typically these works employ the local approximation, neglecting radial eigenmode structure and ballooning effects to perform the stability analysis. In the present work we develop a fully kinetic formalism for performing nonlocal stability analyses in a bumpy cylinder. We show that the Vlasov-Maxwell integral equations (with one ignorable coordinate) are self-adjoint and hence amenable to analysis using numerical techniques developed for self-adjoint systems of equations. The representation we obtain for the kernel of the Vlasov-Maxwell equations is a differential operator of arbitrarily high order. This form leads to a manifestly self-adjoint system of differential equations for long wavelength modes

  12. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test

  13. Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience

    Science.gov (United States)

    Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander

    2018-01-01

    This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.

  14. Recent developments in complex scaling

    International Nuclear Information System (INIS)

    Rescigno, T.N.

    1980-01-01

    Some recent developments in the use of complex basis function techniques to study resonance as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible

  15. Post-placement temperature reduction techniques

    DEFF Research Database (Denmark)

    Liu, Wei; Nannarelli, Alberto

    2010-01-01

    With technology scaled to deep submicron era, temperature and temperature gradient have emerged as important design criteria. We propose two post-placement techniques to reduce peak temperature by intelligently allocating whitespace in the hotspots. Both methods are fully compliant with commercial...

  16. Factor solutions of the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) in a Swedish population.

    Science.gov (United States)

    Mörtberg, Ewa; Reuterskiöld, Lena; Tillfors, Maria; Furmark, Tomas; Öst, Lars-Göran

    2017-06-01

    Culturally validated rating scales for social anxiety disorder (SAD) are of significant importance when screening for the disorder, as well as for evaluating treatment efficacy. This study examined construct validity and additional psychometric properties of two commonly used scales, the Social Phobia Scale and the Social Interaction Anxiety Scale, in a clinical SAD population (n = 180) and in a normal population (n = 614) in Sweden. Confirmatory factor analyses of previously reported factor solutions did not reveal acceptable fit. Exploratory factor analyses (EFA) of the joint structure of the scales in the total population yielded a two-factor model (performance anxiety and social interaction anxiety), whereas EFA in the clinical sample revealed a three-factor solution, a social interaction anxiety factor and two performance anxiety factors. The SPS and SIAS showed good to excellent internal consistency, and discriminated well between patients with SAD and a normal population sample. Both scales showed good convergent validity with an established measure of SAD, whereas the discriminant validity of symptoms of social anxiety and depression could not be confirmed. The optimal cut-off scores for the SPS and SIAS were 18 and 22 points, respectively. It is concluded that the factor structure and the additional psychometric properties of the SPS and SIAS support the use of the scales for assessment in a Swedish population.

  17. Core calculational techniques and procedures

    International Nuclear Information System (INIS)

    Romano, J.J.

    1977-10-01

    Described are the procedures and techniques employed by B and W in core design analyses of power peaking, control rod worths, and reactivity coefficients. Major emphasis has been placed on current calculational tools and the most frequently performed calculations over the operating power range

  18. Standardizing Scale Height Computation of MAVEN NGIMS Neutral Data and Variations Between Exobase and Homopause Scale Heights

    Science.gov (United States)

    Elrod, M. K.; Slipski, M.; Curry, S.; Williamson, H. N.; Benna, M.; Mahaffy, P. R.

    2017-12-01

    The MAVEN NGIMS team produces a level 3 product which includes the computation of the Ar scale height and the atmospheric temperature at 200 km. In the latest version (v05_r01) this has been revised to include scale height fits for CO2, N2, O and CO. Members of the MAVEN team have used various methods to compute scale heights, leading to significant variations in scale height values depending on fits and techniques within a few orbits or even, occasionally, the same pass. Additionally, fitting scale heights in a very stable atmosphere such as the day side versus the night side can give different results depending on boundary conditions. Currently, most methods compute only Ar scale heights, as Ar is most stable and reacts least with the instrument. The NGIMS team has chosen to expand these fitting techniques to include fitted scale heights for CO2, N2, CO, and O. Having compared multiple techniques, the method found to be most reliable for most conditions was a simple fit method. We have focused this into a fitting method that determines the exobase altitude of the CO2 atmosphere as the maximum altitude for the highest point of the fit, uses periapsis as the lowest point, and then fits log(density) versus altitude. The slope of this fit is -1/H, where H is the scale height of the atmosphere for each species. Since this is between the homopause and the exobase, each species will have a different scale height at this point. This is being released as a new standardization for the level 3 product, with the understanding that scientists and team members will continue to compute more precise scale heights and temperatures as needed based on science and model demands. It is being released in the PDS NGIMS level 3 v05 files for August 2017. Additionally, we are examining these scale heights for variations seasonally, diurnally, and above and below the exobase. The atmosphere is significantly more stable on the dayside than on the nightside. We have also found
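
    The fit described above can be sketched in a few lines; the following is an illustrative reconstruction (not the NGIMS team's code), with synthetic data standing in for measured densities:

        # Simple scale-height fit: between periapsis and an assumed exobase altitude,
        # fit log(density) against altitude; the slope of the fit is -1/H.
        import numpy as np

        def fit_scale_height(altitude_km, density, exobase_km):
            """Return the scale height H (km) from a log-linear fit below the exobase."""
            alt = np.asarray(altitude_km, dtype=float)
            den = np.asarray(density, dtype=float)
            mask = (alt <= exobase_km) & (den > 0)
            slope, _ = np.polyfit(alt[mask], np.log(den[mask]), 1)
            return -1.0 / slope  # slope = -1/H for an exponential density profile

        # Synthetic Ar-like profile with H = 10 km plus measurement noise.
        rng = np.random.default_rng(0)
        alt = np.linspace(150.0, 220.0, 80)                      # km
        n = 1e8 * np.exp(-(alt - 150.0) / 10.0) * rng.lognormal(0.0, 0.05, alt.size)
        print(f"fitted H = {fit_scale_height(alt, n, exobase_km=200.0):.2f} km")

    The atmospheric temperature then follows from H = kT/(mg) for the species mass m and the local gravitational acceleration g.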

  19. Development and Validation of the Mathematical Resilience Scale

    Science.gov (United States)

    Kooken, Janice; Welsh, Megan E.; McCoach, D. Betsy; Johnston-Wilder, Sue; Lee, Clare

    2016-01-01

    The Mathematical Resilience Scale measures students' attitudes toward studying mathematics, using three correlated factors: Value, Struggle, and Growth. The Mathematical Resilience Scale was developed and validated using exploratory and confirmatory factor analyses across three samples. Results provide a new approach to gauge the likelihood of…

  20. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence

    Science.gov (United States)

    Linkmann, Moritz; McComb, W. David; Yoffe, Samuel; Berera, Arjun

    2014-11-01

    The pseudospectral method, in conjunction with a new technique for obtaining scaling exponents ζn from the structure functions Sn(r), is presented as an alternative to the extended self-similarity (ESS) method and the use of generalized structure functions. We propose plotting the ratio |Sn(r)/S3(r)| against the separation r, in accordance with a standard technique for analysing experimental data. This method differs from the ESS technique, which plots the generalized structure functions Gn(r) against G3(r), where G3(r) ~ r. Using our method for the particular case of S2(r) we obtain the new result that the exponent ζ2 decreases as the Taylor-Reynolds number increases, with ζ2 → 0.679 ± 0.013 as Rλ → ∞. This supports the idea of finite-viscosity corrections to the K41 prediction for S2, and is the opposite of the result obtained by ESS. The pseudospectral method permits the forcing to be taken into account exactly through the calculation of the energy input in real space from the work spectrum of the stirring forces. The combination of the viscous and the forcing corrections as calculated by the pseudospectral method is shown to account for the deviation of S3 from Kolmogorov's "four-fifths" law at all scales. This work has made use of the resources provided by the UK supercomputing service HECToR, made available through the Edinburgh Compute and Data Facility (ECDF). A. B. is supported by STFC, S. R. Y. and M. F. L. are funded by EPSRC.
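
    The ratio method described above can be illustrated with a short script; the following is a hedged sketch on a synthetic signal (ordinary Brownian increments, not real turbulence data), showing how the slope of log(Sn(r)/S3(r)) against log(r) estimates the exponent differences ζn - ζ3:

        import numpy as np

        rng = np.random.default_rng(1)
        u = np.cumsum(rng.standard_normal(2**16))   # synthetic "velocity" record

        def structure_function(u, n, separations):
            # S_n(r) = <|u(x + r) - u(x)|^n>, averaged over the record
            return np.array([np.mean(np.abs(u[r:] - u[:-r])**n) for r in separations])

        seps = np.unique(np.logspace(0.5, 3, 25).astype(int))
        S3 = structure_function(u, 3, seps)
        for n in (2, 4, 6):
            Sn = structure_function(u, n, seps)
            # Slope of log(S_n/S_3) vs log(r) estimates zeta_n - zeta_3 in the scaling range.
            diff = np.polyfit(np.log(seps), np.log(Sn / S3), 1)[0]
            print(f"zeta_{n} - zeta_3 ≈ {diff:.3f}   (Brownian check: {(n - 3) / 2})")

    For the Brownian test signal ζn = n/2 exactly, so the printed differences should cluster around (n - 3)/2; with real turbulence data the exponents and their deviations from the Kolmogorov values are what the paper quantifies.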

  1. Protein-material interactions: From micro-to-nano scale

    International Nuclear Information System (INIS)

    Tsapikouni, Theodora S.; Missirlis, Yannis F.

    2008-01-01

    The article presents a survey on the significance of protein-material interactions, the mechanisms which control them and the techniques used for their study. Protein-surface interactions play a key role in regenerative medicine, drug delivery, biosensor technology and chromatography, while they are also related to various undesired effects such as biofouling and bio-prosthetic malfunction. Although the effects of protein-surface interaction concern the micro-scale, being sometimes obvious even to the naked eye, they derive from biophysical events at the nano-scale. The sequential steps of protein adsorption involve events at the single-biomolecule level, and the forces driving or inhibiting protein adsorption act at the molecular level too. Following the scaling of protein-surface interactions, various techniques have been developed for their study at both the micro- and nano-scale. Protein labelling with radioisotopes or fluorescent probes, colorimetric assays and the quartz crystal microbalance were the first techniques used to monitor protein adsorption isotherms, while the surface force apparatus was used to measure the interaction forces between protein layers at the micro-scale. Recently, more elaborate techniques like total internal reflection fluorescence (TIRF), Fourier transform infrared spectroscopy (FTIR), surface plasmon resonance, Raman spectroscopy, ellipsometry and time of flight secondary ion mass spectrometry (ToF-SIMS) have been applied for the investigation of protein density, structure or orientation at the interfaces. However, a turning point in the study of protein interactions with surfaces was the invention and the widespread use of atomic force microscopy (AFM), which can both image single protein molecules on surfaces and directly measure the interaction force

  2. Construct validation of emotional labor scale for a sample of Pakistani corporate employees.

    Science.gov (United States)

    Akhter, Noreen

    2017-02-01

    To translate, adapt and validate the emotional labour scale for Pakistani corporate employees. This study was conducted in the locale of Rawalpindi and Islamabad from October 2014 to December 2015, and comprised customer service employees of commercial banks and telecommunication companies. It comprised two independent parts. Part one had two steps. Step one involved translation and adaptation of the instrument. In the second step, the psychometric properties of the translated scale were established by administering it to customer services employees from commercial banks and the telecommunication sector. Data from the pilot study were analysed using exploratory factor analysis to extract the initial factor structure of emotional labour. Part two comprised the main study. Commercial bank employees were included in the sample using a convenience sampling technique. SPSS 20 was used for data analysis. There were 145 participants in the first study and 495 in the second study. Exploratory factor analysis initially generated a three-factor model of emotional labour, which was further confirmed by confirmatory factor analysis, suggesting that emotional labour has three distinct dimensions, i.e. surface acting, deep acting and genuine expression of emotions. The emotional labour scale was found to be a valid and reliable measure.

  3. Data concerning the psychometric properties of the Behavioral Inhibition/Behavioral Activation Scales for the Portuguese population.

    Science.gov (United States)

    Moreira, Diana; Almeida, Fernando; Pinto, Marta; Segarra, Pilar; Barbosa, Fernando

    2015-09-01

    The behavioral inhibition/behavioral activation (BIS/BAS) scales (Carver & White, 1994), which allow rating of Gray's motivational systems, were translated and adapted into Portuguese. In this study, the authors present the procedure and the psychometric analyses of the Portuguese version of the scales, which included basic item and scale psychometric characteristics, as well as confirmatory and exploratory factor analyses. After the psychometric analyses provided evidence for the quality of the Portuguese version of the scales, normative data were provided by age and school grade. The confirmatory factor analysis of the BIS/BAS scales that the authors performed did not demonstrate satisfactory fit for the 2- or 4-factor solution. The authors also tested the more recent 5-factor model, but the fit indices remained inadequate. As fit indices were not satisfactory, they proceeded with an exploratory factor analysis to examine the structure of the Portuguese scales. These psychometric analyses provided evidence of a successful translation of the original scales. Therefore these scales can now be used in future research with Portuguese or Brazilian populations. (c) 2015 APA, all rights reserved.

  4. Comparison of digital and conventional impression techniques: evaluation of patients’ perception, treatment comfort, effectiveness and clinical outcomes

    Science.gov (United States)

    2014-01-01

    Background The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Methods Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impressions participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects' attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time, etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon Rank test, and p < 0.05 was considered significant. Results There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Conclusions Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques. PMID:24479892

  5. A Study on Nondestructive Technique Using Laser Technique for Evaluation of Carbon fiber Reinforced Plastic

    International Nuclear Information System (INIS)

    Choi, Sang Woo; Lee, Joon Hyun; Seo, Kyeong Cheol; Byun, Joon Hyung

    2005-01-01

    Fiber-reinforced plastic materials should be inspected during the fabrication process in order to enhance quality by preventing defects such as delamination and voids. Generally, ultrasonic techniques are widely used to evaluate FRP. In conventional ultrasonic techniques, the transducer must be in contact with the FRP. However, conventional contact methods cannot be applied during the fabrication process, and a novel non-contact evaluation technique is required. A laser-based ultrasonic technique was applied to evaluate a CFRP plate. Laser-based ultrasonic waves propagating in the CFRP were received with various transducers, such as an accelerometer and an AE sensor, in order to evaluate the properties of the waves as a function of frequency. Velocities of the laser-based ultrasonic waves were evaluated for various fiber orientations. In addition, laser interferometry was used to receive the ultrasonic waves in the CFRP, and the frequency content was analysed

  6. Controls on Mississippi Valley-Type Zn-Pb mineralization in Behabad district, Central Iran: Constraints from spatial and numerical analyses

    Science.gov (United States)

    Parsa, Mohammad; Maghsoudi, Abbas

    2018-04-01

    The Behabad district, located in the central Iranian microcontinent, contains numerous epigenetic stratabound carbonate-hosted Zn-Pb ore bodies. The mineralizations formed as fault, fracture and karst fillings in the Permian-Triassic formations, especially in Middle Triassic dolostones, and comprise mainly non-sulfide zinc ores. These are all interpreted as Mississippi Valley-type (MVT) base metal deposits. From an economic geological point of view, it is imperative to recognize the processes that have plausibly controlled the emplacement of MVT Zn-Pb mineralization in the Behabad district. To address the foregoing issue, analyses of the spatial distribution of mineral deposits, comprising Fry and fractal techniques, and analysis of the spatial association of mineral deposits with geological features, using distance distribution analysis, were applied to assess the regional-scale processes that could have operated in the distribution of MVT Zn-Pb deposits in the district. The results obtained with these analytical techniques show that the main trends of the occurrences are NW-SE and NE-SW, which are parallel or subparallel to the major northwest- and northeast-trending faults, supporting the idea that these particular faults could have acted as the main conduits for transport of mineral-bearing fluids. The results of these analyses also suggest that Permian-Triassic brittle carbonate sedimentary rocks have served as the lithological controls on MVT mineralization in the Behabad district, as they are spatially and temporally associated with mineralization.
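
    The Fry technique mentioned above can be sketched compactly: all pairwise translation vectors between deposit locations are computed and their azimuths histogrammed, so dominant structural trends show up as peaks. The coordinates below are invented for illustration, not the Behabad data:

        import numpy as np

        def fry_azimuths(easting, northing):
            x = np.asarray(easting, float)
            y = np.asarray(northing, float)
            dx = x[:, None] - x[None, :]
            dy = y[:, None] - y[None, :]
            keep = ~np.eye(x.size, dtype=bool)                     # drop self-pairs
            # Azimuth measured clockwise from north, folded onto 0-180 degrees.
            return np.degrees(np.arctan2(dx[keep], dy[keep])) % 180.0

        # Hypothetical deposit coordinates (km) scattered along a NW-SE trend.
        rng = np.random.default_rng(2)
        t = rng.uniform(0.0, 30.0, 60)
        east = t + rng.normal(0.0, 1.5, 60)
        north = 30.0 - t + rng.normal(0.0, 1.5, 60)

        hist, edges = np.histogram(fry_azimuths(east, north), bins=np.arange(0, 181, 15))
        peak = edges[np.argmax(hist)]
        print(f"dominant azimuth bin: {peak:.0f}-{peak + 15:.0f} degrees (expected near 135, i.e. NW-SE)")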

  7. Column-Oriented Storage Techniques for MapReduce

    OpenAIRE

    Floratou, Avrilia; Patel, Jignesh; Shekita, Eugene; Tata, Sandeep

    2011-01-01

    Users of MapReduce often run into performance problems when they scale up their workloads. Many of the problems they encounter can be overcome by applying techniques learned from over three decades of research on parallel DBMSs. However, translating these techniques to a MapReduce implementation such as Hadoop presents unique challenges that can lead to new design choices. This paper describes how column-oriented storage techniques can be incorporated in Hadoop in a way that preserves its pop...

  8. Innovative Techniques for Large-Scale Collection, Processing, and Storage of Eelgrass (Zostera marina) Seeds

    National Research Council Canada - National Science Library

    Orth, Robert J; Marion, Scott R

    2007-01-01

    .... Although methods for hand-collecting, processing and storing eelgrass seeds have advanced to match the scale of collections, the number of seeds collected has limited the scale of restoration efforts...

  9. Scale issues in remote sensing

    CERN Document Server

    Weng, Qihao

    2014-01-01

    This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.

  10. Systemic perspectives on scaling agricultural innovations. A review

    NARCIS (Netherlands)

    Wigboldus, Seerp; Klerkx, Laurens; Leeuwis, Cees; Schut, Marc; Muilerman, Sander; Jochemsen, Henk

    2016-01-01

    Agricultural production involves the scaling of agricultural innovations such as disease-resistant and drought-tolerant maize varieties, zero-tillage techniques, permaculture cultivation practices based on perennial crops and automated milking systems. Scaling agricultural innovations should take

  11. Rating scale for psychogenic nonepileptic seizures: scale development and clinimetric testing.

    Science.gov (United States)

    Cianci, Vittoria; Ferlazzo, Edoardo; Condino, Francesca; Mauvais, Hélène Somma; Farnarier, Guy; Labate, Angelo; Latella, Maria Adele; Gasparini, Sara; Branca, Damiano; Pucci, Franco; Vazzana, Francesco; Gambardella, Antonio; Aguglia, Umberto

    2011-06-01

    Our aim was to develop a clinimetric scale evaluating motor phenomena, associated features, and severity of psychogenic nonepileptic seizures (PNES). Sixty video/EEG-recorded PNES induced by suggestion maneuvers were evaluated. We examined the relationship between results from this scale and results from the Clinical Global Impression (CGI) scale to validate this technique. Interrater reliabilities of the PNES scale for three raters were analyzed using the AC1 statistic, Kendall's coefficient of concordance (KCC), and intraclass correlation coefficients (ICCs). The relationship between the CGI and PNES scales was evaluated with Spearman correlations. The AC1 statistic demonstrated good interrater reliability for each phenomenon analyzed (tremor/oscillation, tonic, clonic/jerking, hypermotor/agitation, atonic/akinetic, automatisms, associated features). KCC and the ICC showed moderate interrater agreement for phenomenology, associated phenomena, and total PNES scores. Spearman's correlation of mean CGI score with mean total PNES score was 0.69 (statistically significant). The scale described here accurately evaluates the phenomenology of PNES and could be used to assess and compare subgroups of patients with PNES. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of the Zipf's law and the Heaps' law motivates different understandings on the dependence between these two scalings, which has still hardly been clarified. In this article, we observe an evolution process of the scalings: the Zipf's law and the Heaps' law are naturally shaped to coexist at the initial time, while the crossover comes with the emergence of their inconsistency at the larger time before reaching a stable state, where the Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating the coexistence of the Zipf's law and the Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early time of a pandemic disease.
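
    A toy illustration of the two laws (on a synthetic stream of item identifiers, not the epidemic data of the study) makes the definitions concrete: Zipf's law concerns the decay of frequency with rank, Heaps' law the sub-linear growth of the number of distinct items with the number of observations:

        import numpy as np

        rng = np.random.default_rng(3)
        stream = rng.zipf(a=2.0, size=100_000)        # synthetic sequence of item ids

        # Zipf: slope of log(frequency) versus log(rank).
        _, counts = np.unique(stream, return_counts=True)
        freq = np.sort(counts)[::-1]
        ranks = np.arange(1, freq.size + 1)
        zipf_slope = np.polyfit(np.log(ranks), np.log(freq), 1)[0]

        # Heaps: slope of log(distinct items) versus log(items observed so far).
        checkpoints = np.logspace(2, 5, 12).astype(int)
        distinct = [np.unique(stream[:n]).size for n in checkpoints]
        heaps_slope = np.polyfit(np.log(checkpoints), np.log(distinct), 1)[0]

        print(f"Zipf exponent  ≈ {-zipf_slope:.2f}")
        print(f"Heaps exponent ≈ {heaps_slope:.2f}")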

  13. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL BASIS AND EXPERIMENTAL RESULTS FOR A CONSTANT DIFFUSIVITY

    Science.gov (United States)

    Granular activated carbon (GAC) is an effective treatment technique for the removal of some toxic organics from drinking water or wastewater, however, it can be a relatively expensive process, especially if it is designed improperly. A rapid method for the design of large-scale f...

  14. Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations

    Science.gov (United States)

    Mantz, A.; Allen, S. W.

    2011-01-01

    Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.

  15. The benefits of global scaling in multi-criteria decision analysis

    Directory of Open Access Journals (Sweden)

    Jamie P. Monat

    2009-10-01

    Full Text Available When there are multiple competing objectives in a decision-making process, Multi-Attribute Choice scoring models are excellent tools, permitting the incorporation of both subjective and objective attributes. However, their accuracy depends upon the subjective techniques used to construct the attribute scales and their concomitant weights. Conventional techniques using local scales tend to overemphasize small differences in attribute measures, which may yield erroneous conclusions. The Range Sensitivity Principle (RSP is often invoked to adjust attribute weights when local scales are used. In practice, however, decision makers often do not follow the prescriptions of the Range Sensitivity Principle and under-adjust the weights, resulting in potentially poor decisions. Examples are discussed as is a proposed solution: the use of global scales instead of local scales.
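
    A small invented numerical example (not taken from the article) illustrates the point about local scales: when two options differ only slightly on an attribute, a local scale anchored on the observed minimum and maximum stretches that small difference to the full 0-1 range, whereas a global scale anchored on plausible extreme values keeps it in proportion, unless the local-scale weight is re-adjusted as the Range Sensitivity Principle prescribes:

        def rescale(value, low, high):
            # Linear 0-1 rescaling of an attribute value between two anchors.
            return (value - low) / (high - low)

        prices = {"Option A": 495.0, "Option B": 505.0}   # hypothetical attribute values

        # Local scale: anchored on the observed range (495-505).
        local_scores = {k: 1.0 - rescale(v, 495.0, 505.0) for k, v in prices.items()}
        # Global scale: anchored on a plausible global range (here 300-900).
        global_scores = {k: 1.0 - rescale(v, 300.0, 900.0) for k, v in prices.items()}

        print("local scale: ", local_scores)    # A scores 1.0, B scores 0.0
        print("global scale:", global_scores)   # A ≈ 0.675, B ≈ 0.658

    On the local scale the 2% price difference looks decisive; on the global scale it contributes almost nothing, which is usually the intended behaviour when the weights are not range-adjusted.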

  16. Comparative analyses of population-scale phenomic data in electronic medical records reveal race-specific disease networks.

    Science.gov (United States)

    Glicksberg, Benjamin S; Li, Li; Badgeley, Marcus A; Shameer, Khader; Kosoy, Roman; Beckmann, Noam D; Pho, Nam; Hakenberg, Jörg; Ma, Meng; Ayers, Kristin L; Hoffman, Gabriel E; Dan Li, Shuyu; Schadt, Eric E; Patel, Chirag J; Chen, Rong; Dudley, Joel T

    2016-06-15

    Underrepresentation of racial groups represents an important challenge and major gap in phenomics research. Most of the current human phenomics research is based primarily on European populations; hence it is an important challenge to expand it to consider other population groups. One approach is to utilize data from EMR databases that contain patient data from diverse demographics and ancestries. The implications of this racial underrepresentation of data can be profound regarding effects on the healthcare delivery and actionability. To the best of our knowledge, our work is the first attempt to perform comparative, population-scale analyses of disease networks across three different populations, namely Caucasian (EA), African American (AA) and Hispanic/Latino (HL). We compared susceptibility profiles and temporal connectivity patterns for 1988 diseases and 37 282 disease pairs represented in a clinical population of 1 025 573 patients. Accordingly, we revealed appreciable differences in disease susceptibility, temporal patterns, network structure and underlying disease connections between EA, AA and HL populations. We found 2158 significantly comorbid diseases for the EA cohort, 3265 for AA and 672 for HL. We further outlined key disease pair associations unique to each population as well as categorical enrichments of these pairs. Finally, we identified 51 key 'hub' diseases that are the focal points in the race-centric networks and of particular clinical importance. Incorporating race-specific disease comorbidity patterns will produce a more accurate and complete picture of the disease landscape overall and could support more precise understanding of disease relationships and patient management towards improved clinical outcomes. rong.chen@mssm.edu or joel.dudley@mssm.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  17. Large-scale genome-wide association studies and meta-analyses of longitudinal change in adult lung function.

    Directory of Open Access Journals (Sweden)

    Wenbo Tang

    Full Text Available Genome-wide association studies (GWAS) have identified numerous loci influencing cross-sectional lung function, but less is known about genes influencing longitudinal change in lung function. We performed GWAS of the rate of change in forced expiratory volume in the first second (FEV1) in 14 longitudinal, population-based cohort studies comprising 27,249 adults of European ancestry, using linear mixed effects models, and combined cohort-specific results using fixed-effect meta-analysis to identify novel genetic loci associated with longitudinal change in lung function. Gene expression analyses were subsequently performed for identified genetic loci. As a secondary aim, we estimated the mean rate of decline in FEV1 by smoking pattern, irrespective of genotypes, across these 14 studies using meta-analysis. The overall meta-analysis produced suggestive evidence for association at the novel IL16/STARD5/TMC3 locus on chromosome 15 (P = 5.71 × 10⁻⁷). In addition, meta-analysis using the five cohorts with ≥3 FEV1 measurements per participant identified the novel ME3 locus on chromosome 11 (P = 2.18 × 10⁻⁸) at genome-wide significance. Neither locus was associated with FEV1 decline in two additional cohort studies. We confirmed gene expression of IL16, STARD5, and ME3 in multiple lung tissues. Publicly available microarray data confirmed differential expression of all three genes in lung samples from COPD patients compared with controls. Irrespective of genotypes, the combined estimate for FEV1 decline was 26.9, 29.2 and 35.7 mL/year in never, former, and persistent smokers, respectively. In this large-scale GWAS, we identified two novel genetic loci in association with the rate of change in FEV1 that harbor candidate genes with biologically plausible functional links to lung function.

  18. Multi-scale Analysis of MEMS Sensors Subject to Drop Impacts

    Directory of Open Access Journals (Sweden)

    Sarah Zerbini

    2007-09-01

    Full Text Available The effects of accidental drops on MEMS sensors are examined within the framework of a multi-scale finite element approach. With specific reference to a polysilicon MEMS accelerometer supported by a naked die, the analysis is decoupled into macro-scale (at die length-scale) and meso-scale (at MEMS length-scale) simulations, accounting for the very small inertial contribution of the sensor to the overall dynamics of the device. Macro-scale analyses are adopted to gain insight into the link between shock waves caused by the impact against a target surface and propagating inside the die, and the displacement/acceleration histories at the MEMS anchor points. Meso-scale analyses are adopted to detect the most stressed details of the sensor and to assess whether the impact can lead to possible localized failures. Numerical results show that the acceleration at the sensor anchors cannot be considered an objective indicator of drop severity. Instead, accurate analyses at the sensor level are necessary to establish how MEMS can fail because of drops.

  19. Nuclear scales

    Energy Technology Data Exchange (ETDEWEB)

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  20. Nuclear scales

    International Nuclear Information System (INIS)

    Friar, J.L.

    1998-01-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted

  1. Analysing E-Services and Mobile Applications with Companied Conjoint Analysis and fMRI Technique

    OpenAIRE

    Heinonen, Jarmo

    2015-01-01

    Previous research has shown that neuromarketing and conjoint analysis have been used in many areas of consumer research to provide further understanding of consumer behaviour. Together these two methods may reveal more information about the hidden desires, expectations and restraints of the consumer's brain. This paper attempts to examine these two research methods together as a companied analysis. More specifically, this study utilizes fMRI and conjoint analysis as a tool for analysing consum...

  2. INTERSPINOUS SPACER IN PERSISTENT DISCOGENIC PAIN: PERCUTANEOUS APPROACH OR OPEN TECHNIQUE

    Directory of Open Access Journals (Sweden)

    José Antonio Cruz Ricardez

    Full Text Available ABSTRACT Objective: To compare the postoperative clinical course of placement of an interspinous spacer with the open technique (ISO) with the percutaneous interspinous spacer (PIS). Methods: A quasi-experimental, longitudinal study of 42 patients (21 women, 21 men), aged 35-55 years, with discogenic pain uncontrolled with analgesics. Clinical history, location of pain, the VAS scale before and after surgery, the Oswestry Disability Index and the modified Macnab scale at 6 months were used. Results: In the quantitative analysis, statistical significance (p = 0.0478, 0.0466, 0.0399) was demonstrated with Student's t test between the results on the VAS scale; the qualitative analysis with the Oswestry index and the modified Macnab scale supported the hypothesis that the results depend on the surgical technique. Conclusions: According to the results, we can conclude that there is a statistically significant difference depending on the surgical technique used with respect to the rate of disability and functionality in daily life as well as in the improvement of pain symptoms.

  3. Self-adapted sliding scale spectroscopy ADC

    International Nuclear Information System (INIS)

    Xu Qichun; Wang Jingjin

    1992-01-01

    The traditional sliding scale technique causes a disabled range that is equal to the sliding length, thus reducing the analysis range of an MCA. A method to reduce the ADC's DNL, called the self-adapted sliding scale method, has been designed and tested. With this method, the disabled range caused by the traditional sliding scale method can be eliminated by a random trial scale, and there is no need for an additional amplitude discriminator with a swinging threshold. A special trial-and-correct logic is presented. The tested DNL of the spectroscopy ADC described here is less than 0.5%
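
    The conventional sliding-scale idea that this ADC builds on can be simulated in a few lines (an illustrative sketch, not the self-adapted circuit itself): a random offset is added before conversion and subtracted digitally afterwards, so every analog amplitude is digitised by many different channels and the channel-width errors that cause DNL average out:

        import numpy as np

        rng = np.random.default_rng(4)
        n_bits, slide = 8, 32
        n_channels = 2**n_bits

        # Non-ideal ADC: channel widths vary by a few percent (the source of DNL).
        widths = 1.0 + 0.05 * rng.standard_normal(n_channels)
        edges = np.concatenate(([0.0], np.cumsum(widths)))
        full_scale = edges[-1]

        def convert(v):
            # Return the channel index containing each analog value v.
            return np.searchsorted(edges, v, side="right") - 1

        samples = rng.uniform(0.0, full_scale - slide, 2_000_000)   # flat input spectrum
        direct = convert(samples)                                   # plain conversion
        offset = rng.integers(0, slide, samples.size)               # random sliding offset
        slid = convert(samples + offset) - offset                   # convert, then subtract

        def dnl_spread(codes, lo, hi):
            hist = np.bincount(np.clip(codes, 0, n_channels - 1), minlength=n_channels)[lo:hi]
            return np.std(hist / hist.mean() - 1.0)

        lo, hi = 1, n_channels - slide - 4
        print("relative channel-content spread, direct :", round(dnl_spread(direct, lo, hi), 3))
        print("relative channel-content spread, sliding:", round(dnl_spread(slid, lo, hi), 3))

    The sliding conversion flattens the recorded spectrum considerably, at the cost of a disabled range of `slide` channels at the top of the scale, which is exactly the drawback the self-adapted variant removes.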

  4. Selected methods of waste monitoring using modern analytical techniques

    International Nuclear Information System (INIS)

    Hlavacek, I.; Hlavackova, I.

    1993-11-01

    Issues of the inspection and control of bituminized and cemented waste are discussed, and some methods of their nondestructive testing are described. Attention is paid to the inspection techniques, non-nuclear spectral techniques in particular, as employed for quality control of the wastes, waste concentrates, spent waste leaching solutions, as well as for the examination of environmental samples (waters and soils) from the surroundings of nuclear power plants. Some leaching tests used abroad for this purpose and practical analyses by the ICP-AES technique are given by way of example. The ICP-MS technique, which is unavailable in the Czech Republic, is routinely employed abroad for alpha nuclide measurements; examples of such analyses are also given. The next topic discussed includes the monitoring of organic acids and complexants to determine the degree of their thermal decomposition during the bituminization of wastes on an industrial line. All of the methods and procedures highlighted can be used as technical support during the monitoring of radioactive waste properties in industrial conditions, in the chemical and radiochemical analyses of wastes and related matter, in the calibration of nondestructive testing instrumentation, in the monitoring of contamination of the surroundings of nuclear facilities, and in trace analysis. (author). 10 tabs., 1 fig., 14 refs

  5. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  6. Diffusion Experiments with Opalinus and Callovo-Oxfordian Clays: Laboratory, Large-Scale Experiments and Microscale Analysis by RBS

    International Nuclear Information System (INIS)

    Garcia-Gutierrez, M.; Alonso, U.; Missana, T.; Cormenzana, J.L.; Mingarro, M.; Morejon, J.; Gil, P.

    2009-01-01

    Consolidated clays are potential host rocks for deep geological repositories for high-level radioactive waste. Diffusion is the main transport process for radionuclides (RN) in these clays. Radionuclide (RN) diffusion coefficients are the most important parameters for Performance Assessment (PA) calculations of clay barriers. Different diffusion methodologies were applied at a laboratory scale to analyse the diffusion behaviour of a wide range of RN. The main aims were to understand the transport properties of different RNs in two different clays and to contribute feasible methodologies to improve in-situ diffusion experiments, using samples of larger scale. Classical laboratory assays and a novel experimental set-up for large-scale diffusion experiments were performed, together with a novel application of the nuclear ion beam technique Rutherford Backscattering Spectrometry (RBS) for diffusion analyses at the micrometer scale. The main experimental and theoretical characteristics of the different methodologies, and their advantages and limitations, are discussed here. Experiments were performed with the Opalinus and the Callovo-Oxfordian clays. Both clays are studied as potential host rocks for a repository. Effective diffusion coefficients ranged between 1×10⁻¹⁰ and 1×10⁻¹² m²/s for neutral, low-sorbing cations (such as Na and Sr) and anions. Apparent diffusion coefficients for strongly sorbing elements, such as Cs and Co, are on the order of 1×10⁻¹³ m²/s; europium presents the lowest diffusion coefficient (5×10⁻¹⁵ m²/s). The results obtained by the different approaches gave a comprehensive database of diffusion coefficients for RN with different transport behaviour within both clays. (Author) 42 refs.
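
    As a simple illustration of how an apparent diffusion coefficient is extracted from such measurements (a sketch on synthetic data, not the actual Opalinus or Callovo-Oxfordian profiles), an in-diffusion profile can be fitted with the semi-infinite-medium solution C(x, t) = C0·erfc(x / (2·sqrt(Da·t))):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erfc

        t = 30 * 24 * 3600.0                       # 30-day in-diffusion experiment, s
        x = np.linspace(0.0, 5e-3, 40)             # depth into the clay sample, m

        def profile(x, c0, log10_da):
            da = 10.0**log10_da                    # fit log10(Da) for numerical stability
            return c0 * erfc(x / (2.0 * np.sqrt(da * t)))

        # Synthetic "measured" profile for Da = 1e-13 m^2/s with 5% multiplicative noise.
        rng = np.random.default_rng(5)
        c_meas = profile(x, 1.0, -13.0) * (1.0 + 0.05 * rng.standard_normal(x.size))

        (c0_fit, log10_da_fit), _ = curve_fit(profile, x, c_meas, p0=(1.0, -12.0))
        print(f"fitted Da = {10.0**log10_da_fit:.2e} m^2/s (true 1e-13)")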

  7. Analyses of inks and papers in historical documents through external beam PIXE techniques

    International Nuclear Information System (INIS)

    Cahill, T.A.; Kusko, B.; California Univ., Davis; Schwab, R.N.

    1981-01-01

    PIXE analyses of documents can be carried out to high sensitivity in an external beam configuration designed to protect historical materials from damage. Test runs have shown that a properly designed system with high solid angle can operate at less than 1% of the flux necessary to cause any discoloration whatsoever on papers of the 17th and 18th centuries. The composition of these papers is surprisingly complex, yet retains distinct association with the historical period, paper source, and even the individual sheets of paper that are folded and cut to make groups of pages. Early studies are planned on historical forgeries. (orig.)

  8. Integrated Waste Treatment Unit (IWTU) Input Coal Analyses and Off-Gass Filter (OGF) Content Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, Carol M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Missimer, David M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Guenther, Chris P. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Shekhawat, Dushyant [National Energy Technology Lab. (NETL), Morgantown, WV (United States); VanEssendelft, Dirk T. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Means, Nicholas C. [AECOM Technology Corp., Oak Ridge, TN (United States)

    2015-04-23

    in process piping and materials, in excessive off-gas absorbent loading, and in undesired process emissions. The ash content of the coal is important as the ash adds to the DMR and other vessel products which affect the final waste product mass and composition. The amount and composition of the ash also affect the reaction kinetics. Thus ash content and composition contribute to the mass balance. In addition, sodium, potassium, calcium, sulfur, and possibly silica and alumina in the ash may contribute to wall-scale formation. Sodium, potassium, and alumina in the ash will be overwhelmed by the sodium, potassium, and alumina from the feed, but the impact from the other ash components needs to be quantified. A maximum coal particle size is specified so the feed system does not plug, and a minimum particle size is specified to prevent excess elutriation from the DMR to the Process Gas Filter (PGF). A vendor specification was used to procure the calcined coal for IWTU processing. While the vendor supplied a composite analysis for the 22 tons of coal (Appendix A), this study compares independent analyses of the coal performed at the Savannah River National Laboratory (SRNL) and at the National Energy Technology Laboratory (NETL). Three supersacks were sampled at three different heights within the sack in order to determine within-bag and between-bag variability of the coal. These analyses were also compared to the vendor’s composite analyses and to the coal specification. These analyses were also compared to historic data on Bestac coal analyses that had been performed at Hazen Research Inc. (HRI) between 2004 and 2011.

  9. Technical standards and local practices of producers in the ...

    African Journals Online (AJOL)

    ACSS

    aims to analyse the evolution of the technical standards of rice production and the local practices in the perimeters .... of agricultural and rice production systems in the ..... recommendations of the local technical supervision.

  10. Improvements in technique for determining the surfactant penetration in hair fibres using scanning ion beam analyses

    International Nuclear Information System (INIS)

    Hollands, R.; Clough, A.S.; Meredith, P.

    1999-01-01

    The penetration abilities of surfactants need to be known by companies manufacturing hair-care products. In this work three complementary techniques were used simultaneously - PIXE, NRA and RBS - to measure the penetration of a surfactant, which had been deuterated, into permed hair fibres. Using a scanning micro-beam of 2 MeV 3He ions, 2-dimensional concentration maps were obtained which showed whether the surfactant penetrated the fibre or just stayed on the surface. This is the first report of the use of three simultaneous scattering techniques with a scanning micro-beam. (author)

  11. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method--the transfer matrix technique and the Hamiltonian formalism--are discussed in this paper. The method is presented, with equations deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and indices ν estimated from finite-size scaling are given
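
    A compact illustration of the Hamiltonian finite-size-scaling idea (using the simpler transverse-field Ising chain rather than the 3-state Z3 model studied in the paper) is exact diagonalization of short chains: at the critical field g = 1 the gap closes as 1/L, so the scaled gap L·Δ(g, L) is nearly independent of L at criticality and the curves for different L cross there:

        import numpy as np
        from functools import reduce

        sx = np.array([[0.0, 1.0], [1.0, 0.0]])
        sz = np.array([[1.0, 0.0], [0.0, -1.0]])
        id2 = np.eye(2)

        def site_product(ops, L):
            # Kronecker product placing the operators {site: matrix} on an L-site chain.
            return reduce(np.kron, [ops.get(i, id2) for i in range(L)])

        def gap(L, g):
            # Energy gap of H = -sum_i sz_i sz_{i+1} - g sum_i sx_i (periodic chain).
            H = np.zeros((2**L, 2**L))
            for i in range(L):
                H -= site_product({i: sz, (i + 1) % L: sz}, L)
                H -= g * site_product({i: sx}, L)
            e = np.linalg.eigvalsh(H)
            return e[1] - e[0]

        for g in (0.8, 0.9, 1.0, 1.1, 1.2):
            scaled = [L * gap(L, g) for L in (6, 8, 10)]
            print(f"g = {g:.1f}   L*gap: " + "  ".join(f"{s:.3f}" for s in scaled))

    The L·Δ values approximately coincide near g = 1 (the known critical point of this model) and fan out on either side; in practice the crossing points and their drift with L are what give the critical coupling and exponent estimates.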

  12. Nuclear analytical techniques applied to the large scale measurements of atmospheric aerosols in the amazon region

    International Nuclear Information System (INIS)

    Gerab, Fabio

    1996-03-01

    This work presents the characterization of the atmospheric aerosol collected at different places in the Amazon Basin. We studied both the biogenic emission from the forest and the particulate material which is emitted to the atmosphere by the large-scale man-made burning during the dry season. The samples were collected during a three-year period at two different locations in the Amazon, namely the Alta Floresta (MT) and Serra do Navio (AP) regions, using stacked filter units. These regions represent two different atmospheric compositions: the aerosol is dominated by the natural biogenic emission of the forest at Serra do Navio, while at Alta Floresta it presents an important contribution from man-made burning during the dry season. At Alta Floresta we also took samples in gold shops in order to characterize mercury emission to the atmosphere related to the gold prospecting activity in the Amazon. Airplanes were used for aerosol sampling during the 1992 and 1993 dry seasons to characterize the atmospheric aerosol content from man-made burning over large Amazonian areas. The samples were analyzed using several nuclear analytical techniques: Particle Induced X-ray Emission for the quantitative analysis of trace elements with atomic number above 11; Particle Induced Gamma-ray Emission for the quantitative analysis of Na; and a Proton Microprobe for the characterization of individual particles of the aerosol. A reflectance technique was used for black carbon quantification, gravimetric analysis to determine the total atmospheric aerosol concentration, and Cold Vapor Atomic Absorption Spectroscopy for the quantitative analysis of mercury in the particulate matter from the Alta Floresta gold shops. Ion chromatography was used to quantify the ionic content of aerosols from the fine-mode particulate samples from Serra do Navio. Multivariate statistical analysis was used in order to identify and characterize the sources of the atmospheric aerosol present in the sampled regions. (author)

  13. Islands Climatology at Local Scale. Downscaling with CIELO model

    Science.gov (United States)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

    Islands with horizontal scales of the order of tens of km, as is the case of the Atlantic islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even Regional Climate Models (RCMs) reveal limitations when they are forced to reproduce the climate of small islands, mainly because they flatten and lower the elevation of the islands, reducing the capacity of the model to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, play a prominent role in the spatial differentiation of climate. Advective transport of air - and the consequent adiabatic cooling induced by orography - leads to transformations of the state parameters of the air that shape the spatial configuration of the pressure, temperature and humidity fields. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation that affects the local energy balance. Also, the saturation (or near-saturation) conditions that these clouds provide constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic mask also have significant importance in the local energy balance. Therefore, the simulation of the local-scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to locally adjust outputs obtained at larger scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores

  14. Natural Scales in Geographical Patterns

    Science.gov (United States)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
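    A hedged sketch of the analysis idea, not the authors' code: build movement networks restricted to increasing percentiles of the trip-distance distribution, run community detection on each, and flag discontinuities in how the partitions change. The trip data, the modularity-based community detector and the simple jump criterion below are illustrative stand-ins for whatever the study actually used.

```python
# Sketch: community structure of movement networks thresholded at increasing
# distance percentiles, with a crude discontinuity check across thresholds.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# trips: (origin_cell, destination_cell, distance_km), hypothetical data
rng = np.random.default_rng(0)
trips = [(rng.integers(50), rng.integers(50), float(rng.exponential(30.0)))
         for _ in range(5000)]

distances = np.array([d for _, _, d in trips])
partition_sizes = []
for p in range(10, 101, 10):                      # increasing distance percentiles
    cutoff = np.percentile(distances, p)
    g = nx.Graph()
    for o, d, dist in trips:
        if dist <= cutoff and o != d:
            w = g.get_edge_data(o, d, {"weight": 0})["weight"]
            g.add_edge(o, d, weight=w + 1)        # accumulate trip counts as edge weights
    comms = greedy_modularity_communities(g, weight="weight")
    partition_sizes.append(len(comms))

# Parameter-free-style discontinuity check: flag unusually large jumps
jumps = np.abs(np.diff(partition_sizes))
print(partition_sizes, "candidate scale breaks at percentiles:",
      [(i + 2) * 10 for i in np.argwhere(jumps > jumps.mean()).ravel()])
```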

  15. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    Full Text Available In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality when scaling large datasets from low-resolution to high-resolution frames. We compare our outcomes with multiple existing algorithms. The extensive results of the proposed technique, RemCNN (Reconstruction error minimization Convolution Neural Network), show that our model outperforms existing techniques such as bicubic, bilinear and MCResNet and provides better reconstructed motion images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3 and 36.24503 for upscale-4 on the Myanmar dataset, which is very high in comparison with other existing techniques. These results demonstrate the high efficiency and better performance of the proposed real-time video scaling model based on a convolutional neural network architecture.
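    The paper's exact RemCNN architecture is not given in the abstract; the sketch below is a generic bicubic-upscale-then-refine super-resolution CNN in the same spirit (SRCNN-style layer widths and an MSE reconstruction loss), written in PyTorch for illustration only.

```python
# Rough sketch of an upscale-then-refine super-resolution CNN; layer widths,
# kernel sizes and the loss are generic choices, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSRNet(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # upscale with bicubic interpolation, then let the CNN reduce the
        # remaining reconstruction (blurring) error
        up = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)               # residual refinement

model = SimpleSRNet(scale=2)
low_res = torch.rand(1, 3, 90, 120)             # one low-resolution frame (placeholder)
high_res = model(low_res)                       # shape (1, 3, 180, 240)
loss = F.mse_loss(high_res, torch.rand_like(high_res))  # placeholder target
```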

  16. Tissue strands as "bioink" for scale-up organ printing.

    Science.gov (United States)

    Yu, Yin; Ozbolat, Ibrahim T

    2014-01-01

    Organ printing takes tissue spheroids as building blocks, together with additive manufacturing techniques, to engineer tissue or organ replacement parts. Although a wide array of cell aggregation techniques has been investigated and has gained noticeable success, the application of tissue spheroids to scale-up tissue fabrication still merits investigation. In this paper, we introduce a new micro-fabrication technique to create tissue strands at the scale of 500-700 μm as a "bioink" for future robotic tissue printing. Printable alginate micro-conduits are used as semi-permeable capsules for tissue strand fabrication. Mouse insulinoma beta TC3 cell tissue strands had formed by 4 days post-fabrication with reasonable mechanical strength, high cell viability close to 90%, and expression of tissue-specific markers. Fusion between strands was readily observed as early as 24 h after placing them together. In addition, tissue strands were co-deposited with human umbilical vein smooth muscle cell (HUVSMC) vascular conduits to fabricate a miniature pancreatic tissue analog. Our study provides a novel technique using tissue strands as a "bioink" for scale-up bioprinting of tissues or organs.

  17. Scaling and particulate fouling in membrane filtration systems

    NARCIS (Netherlands)

    Boerlage, S.F.E.

    2001-01-01

    Membrane filtration technologies have emerged as cost competitive and viable techniques in drinking and industrial water production. Despite advancements in membrane manufacturing and technology, membrane scaling and fouling remain major problems and may limit future growth in the industry. Scaling

  18. Local Geographic Variation of Public Services Inequality: Does the Neighborhood Scale Matter?

    Science.gov (United States)

    Wei, Chunzhu; Cabrera-Barona, Pablo; Blaschke, Thomas

    2016-01-01

    This study aims to explore the effect of the neighborhood scale when estimating public services inequality based on the aggregation of social, environmental, and health-related indicators. Inequality analyses were carried out at three neighborhood scales: the original census blocks and two aggregated neighborhood units generated by the spatial 'K'luster analysis by tree edge removal (SKATER) algorithm and the self-organizing map (SOM) algorithm. Then, we combined a set of health-related public services indicators with geographically weighted principal components analysis (GWPCA) and principal components analysis (PCA) to measure the public services inequality across all multi-scale neighborhood units. Finally, a statistical test was applied to evaluate the scale effects in inequality measurements by combining all available field survey data. We chose Quito as the case study area. All of the aggregated neighborhood units performed better than the original census blocks in terms of the social indicators extracted from a field survey. The SKATER and SOM algorithms can help to define the neighborhoods in inequality analyses. Moreover, GWPCA performs better than PCA in multivariate spatial inequality estimation. Understanding the scale effects is essential to sustain a social neighborhood organization, which, in turn, positively affects social determinants of public health and public quality of life. PMID:27706072
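    For illustration only, a minimal PCA-based aggregation of standardised indicators into a composite index per neighbourhood unit; the study additionally uses the geographically weighted variant (GWPCA), which is not reproduced here. The indicator names and values are hypothetical.

```python
# Sketch: composite inequality index from standardised indicators via PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = ["health_access", "water_quality", "green_space", "crime_rate"]  # hypothetical
X = np.random.rand(300, len(indicators))        # one row per neighbourhood unit (placeholder)

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=1).fit(Z)
index = pca.transform(Z).ravel()                # first-component score per unit
# note: the sign/orientation of a PCA score is arbitrary and must be checked
# against the indicator loadings before interpreting "high" vs. "low" values

print("variance explained:", pca.explained_variance_ratio_[0])
print("extreme units:", index.argmin(), index.argmax())
```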

  19. A brief fatigue inventory of shoulder health developed by quality function deployment technique.

    Science.gov (United States)

    Liu, Shuo-Fang; Lee, Yannlong; Huang, Yiting

    2009-01-01

    The purpose of this study was to develop a diagnostic outcome instrument that has high reliability and low cost. The scale, called the Shoulder Fatigue Scale-30 Items (SFS-30) risk assessment, was used to determine the severity of patient neck and shoulder discomfort. The quality function deployment (QFD) technique was used in designing and developing a simple medical diagnostic scale with a high degree of accuracy. Research data can be used to divide the common causes of neck and shoulder discomfort into 6 core categories: occupation, cumulative, psychologic, diseases, diet, and sleep quality. The SFS-30 was validated using a group of individuals who had previously been diagnosed with different levels of neck and shoulder symptoms. The SFS-30 assessment determined that 78.57% of the participants experienced a neck and shoulder discomfort level above the SFS-30 risk curve and required immediate medical attention. The QFD technique can improve the accuracy and reliability of an assessment outcome instrument, mainly because it is effective in prioritizing and assigning weights to the items in the scale. This research successfully developed a reliable risk assessment scale for diagnosing neck and shoulder symptoms using the QFD technique. The scale was shown to have high accuracy and to closely represent reality.

  20. The SCALE Web site: Resources for the worldwide nuclear criticality safety community

    International Nuclear Information System (INIS)

    Bowman, S.M.

    2000-01-01

    The Standardized Computer Analyses for Licensing Evaluations (SCALE) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a and KENO VI three-dimensional Monte Carlo criticality computer codes. For several years, the SCALE staff at ORNL has maintained a Web site to provide information and support to sponsors and users in the worldwide criticality safety community. The SCALE Web site is located at www.cped.ornl.gov/scale and provides information in the following areas: 1. important notices to users; 2. SCALE Users Electronic Notebook; 3. current and past issues of the SCALE Newsletter; 4. verification and validation (V and V) and benchmark reports; 5. download updates, utilities, and V and V input files; 6. SCALE training course information; 7. SCALE Manual on-line; 8. overview of SCALE system; 9. how to install and run SCALE; 10. SCALE quality assurance documents; and 11. nuclear resources on the Internet

  1. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
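    The abstract does not detail the sizing procedure, so the following is only a back-of-the-envelope sketch of the top-down idea: for each data type, the link must at minimum drain the offered volume within its latency budget, and the sized bandwidth is driven by the worst-case overlap of those flows plus a margin. The data types, volumes, latencies and the 25% margin below are made up for illustration.

```python
# Coarse-grain sizing sketch: per-data-type minimum rate = volume / latency budget.
data_types = [
    # (name, volume per pass in megabits, allowed delivery latency in seconds) - hypothetical
    ("housekeeping telemetry",      200.0,    60.0),
    ("science quick-look",        4_000.0,   600.0),
    ("bulk science archive",    500_000.0, 86_400.0),
]

required = {name: volume / latency for name, volume, latency in data_types}
total_mbps = sum(required.values())             # conservative: assume all flows concurrent

for name, bw in required.items():
    print(f"{name:>24s}: {bw:8.2f} Mb/s")
print(f"{'sized WAN bandwidth':>24s}: {total_mbps * 1.25:8.2f} Mb/s (25% margin)")
```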

  2. Precision ring rolling technique and application in high-performance bearing manufacturing

    Directory of Open Access Journals (Sweden)

    Hua Lin

    2015-01-01

    Full Text Available High-performance bearings have significant applications in many important industrial fields, such as automobiles, precision machine tools and wind power. Precision ring rolling is an advanced rotary forming technique for manufacturing high-performance seamless bearing rings, and thus can improve the working life of bearings. In this paper, three kinds of precision ring rolling techniques adapted to different dimensional ranges of bearings are introduced: cold ring rolling for small-scale bearings, hot radial ring rolling for medium-scale bearings and hot radial-axial ring rolling for large-scale bearings. The forming principles, technological features and forming equipment for the three kinds of precision ring rolling techniques are summarized, the technological development and industrial application in China are introduced, and the main trend of technological development is described.

  3. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also the temporal evolution of external events and component/system ageing. Thus, the problem being addressed is not only multi-physics, but also multi-scale (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analyses by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (microseconds instead of hours/days).
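    A minimal sketch of the surrogate-model idea described above, assuming a Gaussian-process regressor stands in for whatever reduced order model the project actually uses: train on a modest number of code runs, then evaluate the surrogate for the many samples a risk analysis needs. The "simulation" here is a placeholder analytic function, not a thermal-hydraulic code.

```python
# Sketch: replace an expensive simulation with a cheap Gaussian-process surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Placeholder for a thermal-hydraulic code run (hours per call in reality)."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(42)
X_train = rng.uniform(0.0, 1.0, size=(40, 2))      # a few tens of actual code runs
y_train = expensive_simulation(X_train)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
surrogate.fit(X_train, y_train)

X_query = rng.uniform(0.0, 1.0, size=(100_000, 2)) # cheap to evaluate in bulk
y_pred, y_std = surrogate.predict(X_query, return_std=True)
print("mean prediction uncertainty:", y_std.mean())  # flags where more code runs are needed
```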

  4. A Stent-Retrieving into an Aspiration Catheter with Proximal Balloon (ASAP) Technique: A Technique of Mechanical Thrombectomy.

    Science.gov (United States)

    Goto, Shunsaku; Ohshima, Tomotaka; Ishikawa, Kojiro; Yamamoto, Taiki; Shimato, Shinji; Nishizawa, Toshihisa; Kato, Kyozo

    2018-01-01

    The best technique for the first attempt at mechanical thrombectomy for acute ischemic stroke is still a matter of debate. In this study, we evaluate the efficacy of a stent-retrieving into an aspiration catheter with proximal balloon (ASAP) technique, which combines thrombus extraction by withdrawing the stent retriever into the aspiration catheter with continuous aspiration from the aspiration catheter at the first attempt. We performed a retrospective analysis of 42 consecutive patients with acute ischemic stroke caused by occlusions in the anterior circulation who were treated with the ASAP technique at our institution. Preoperative patient characteristics, including age, thrombus location, Alberta Stroke Program Early CT Score, National Institutes of Health Stroke Scale, and time from onset to puncture; postoperative Thrombolysis in Cerebral Infarction score; modified Rankin Scale score after 3 months; time from puncture to recanalization; the number of passes to achieve recanalization; and procedural complications, including intracranial hemorrhage, embolization to new territory, and distal embolization, were assessed. A Thrombolysis in Cerebral Infarction score of 2B or 3 was achieved in 40/42 patients (95.2%). Average time from puncture to final recanalization was 21.5 minutes. Recanalization was achieved in a single attempt in 31 patients (77.5%). Embolization to new territory was observed in only 2 patients (4.8%); no patient developed distal embolization or intracranial hemorrhage, including asymptomatic subarachnoid hemorrhage. Thirty-two patients (76.2%) achieved modified Rankin Scale scores of 0-2 at 3 months postoperatively. Our ASAP technique showed fast recanalization, minimal complications, and good clinical outcomes in this case series. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    International Nuclear Information System (INIS)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables
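    A hedged sketch of the replica-scaling relations commonly used to relate scale-model results to full-scale behaviour for geometrically similar models of the same material: deformations scale with the geometric factor, crush forces with its square, and absorbed energies with its cube. The report's own correlation may differ; the numbers below are purely illustrative.

```python
# Replica-scaling sketch: convert quarter- or eighth-scale drum measurements
# to full scale. Same-material, geometrically similar models assumed.
def to_full_scale(model_result, length_ratio):
    """length_ratio is the model-to-prototype ratio, e.g. 0.25 for quarter scale."""
    lam = 1.0 / length_ratio                    # geometric scale factor, e.g. 4
    return {
        "deformation_mm": model_result["deformation_mm"] * lam,        # ~ lambda
        "crush_force_kN": model_result["crush_force_kN"] * lam ** 2,   # ~ lambda^2
        "crush_energy_kJ": model_result["crush_energy_kJ"] * lam ** 3, # ~ lambda^3
    }

quarter_scale = {"deformation_mm": 12.0, "crush_force_kN": 9.5, "crush_energy_kJ": 0.4}  # made-up data
print(to_full_scale(quarter_scale, 0.25))
```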

  6. Bridging the Gap between the Nanometer-Scale Bottom-Up and Micrometer-Scale Top-Down Approaches for Site-Defined InP/InAs Nanowires.

    Science.gov (United States)

    Zhang, Guoqiang; Rainville, Christophe; Salmon, Adrian; Takiguchi, Masato; Tateno, Kouta; Gotoh, Hideki

    2015-11-24

    This work presents a method that bridges the gap between the nanometer-scale bottom-up and micrometer-scale top-down approaches for site-defined nanostructures, which has long been a significant challenge for applications that require low-cost and high-throughput manufacturing processes. We realized the bridging by controlling the seed indium nanoparticle position through a self-assembly process. Site-defined InP nanowires were then grown from the indium-nanoparticle array in the vapor-liquid-solid mode through a "seed and grow" process. The nanometer-scale indium particles do not always occupy the same locations within the micrometer-scale open window of an InP exposed substrate due to the scale difference. We developed a technique for aligning the nanometer-scale indium particles on the same side of the micrometer-scale window by structuring the surface of a misoriented InP (111)B substrate. Finally, we demonstrated that the developed method can be used to grow a uniform InP/InAs axial-heterostructure nanowire array. The ability to form a heterostructure nanowire array with this method makes it possible to tune the emission wavelength over a wide range by employing the quantum confinement effect and thus expand the application of this technology to optoelectronic devices. Successfully pairing a controllable bottom-up growth technique with a top-down substrate preparation technique greatly improves the potential for the mass-production and widespread adoption of this technology.

  7. Item Response Theory Analyses of the Parent and Teacher Ratings of the DSM-IV ADHD Rating Scale

    Science.gov (United States)

    Gomez, Rapson

    2008-01-01

    The graded response model (GRM), which is based on item response theory (IRT), was used to evaluate the psychometric properties of the inattention and hyperactivity/impulsivity symptoms in an ADHD rating scale. To accomplish this, parents and teachers completed the DSM-IV ADHD Rating Scale (DARS; Gomez et al., "Journal of Child Psychology and…
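    For readers unfamiliar with the graded response model, the sketch below computes category probabilities for a single polytomous item from a discrimination parameter and ordered thresholds, via differences of cumulative logistic curves. The parameter values and the 4-category rating format are illustrative assumptions, not estimates from the study.

```python
# Sketch of the graded response model: P(X = k | theta) from cumulative logistics.
import numpy as np

def grm_category_probabilities(theta, a, thresholds):
    """Category probabilities for k = 0..K under the graded response model."""
    # cumulative probabilities P(X >= k), with P(X >= 0) = 1 and P(X >= K+1) = 0
    cum = [1.0] + [1.0 / (1.0 + np.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return np.diff(cum) * -1.0                  # adjacent differences give category probabilities

# A hypothetical 4-category symptom item rated 0-3 ("never" ... "very often")
probs = grm_category_probabilities(theta=0.5, a=1.8, thresholds=[-1.0, 0.2, 1.3])
print(probs, probs.sum())                        # probabilities sum to 1
```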

  8. Motivasyonel Dil (MD) Teorisi ve Ölçme Aracının Türkçede Geçerlik ve Güvenilirlik Analizi = The Reliability and Validity Analyses of Motivational Language Theory and Scale

    Directory of Open Access Journals (Sweden)

    Türker BAŞ

    2011-08-01

    Full Text Available When the literature on leadership and communication is examined, it can be seen that until the 1990s there was not enough research on the effects of a leader's language and its content on the motivation and performance of employees. This gap was filled in the theoretical dimension by the Motivating Language Theory of Sullivan (1988) and, in the practical dimension, by the Motivating Language Scale developed by Mayfield, Mayfield and Kopf (1995) as an extension of that theory. In this study, the scale developed by Mayfield, Mayfield and Kopf (1995) has been tested for its validity and reliability. As a result of the analyses carried out, it has been determined that the scale has a high level of validity and reliability. It is therefore assessed that this scale can contribute to future empirical studies.

  9. Network-derived inhomogeneity in monthly rainfall analyses over western Tasmania

    International Nuclear Information System (INIS)

    Fawcett, Robert; Trewin, Blair; Barnes-Keoghan, Ian

    2010-01-01

    Monthly rainfall in the wetter western half of Tasmania was relatively poorly observed in the early to middle parts of the 20th century, and this causes a marked inhomogeneity in the operational gridded monthly rainfall analyses generated by the Australian Bureau of Meteorology up until the end of 2009. These monthly rainfall analyses were generated for the period 1900 to 2009 in two forms: a national analysis at 0.25° latitude-longitude resolution, and a southeastern Australia regional analysis at 0.1° resolution. For any given month, they used all the monthly data from the standard Bureau rainfall gauge network available in the Australian Data Archive for Meteorology. Since this network has changed markedly since Federation (1901), there is obvious scope for network-derived inhomogeneities in the analyses. In this study, we show that the topography-resolving techniques of the new Australian Water Availability Project analyses, adopted as the official operational analyses from the start of 2010, substantially diminish those inhomogeneities, while using largely the same observation network. One result is an improved characterisation of recent rainfall declines across Tasmania. The new analyses are available at two resolutions, 0.25° and 0.05°.

  10. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  11. [Methods, challenges and opportunities for big data analyses of microbiome].

    Science.gov (United States)

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    The microbiome is a novel research field related to a variety of chronic inflammatory diseases. Technically, there are two major approaches to microbiome analysis: metataxonomics, by sequencing 16S rRNA variable regions, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analyses further include gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big data analysis will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for future exploitation of these data. Meanwhile, the informational properties of these data, and functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promise rich research opportunities.
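    As a small, self-contained illustration of one step in the 16S pipeline mentioned above, the sketch below computes per-sample alpha diversity (Shannon index) from an OTU/ASV count table; the table itself is a random stand-in for real sequencing counts.

```python
# Sketch: Shannon alpha diversity per sample from an OTU/ASV count table.
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

otu_table = np.random.poisson(lam=5.0, size=(10, 200))   # 10 samples x 200 taxa (placeholder)
for sample_id, row in enumerate(otu_table):
    print(f"sample {sample_id}: H' = {shannon_index(row):.2f}")
```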

  12. Hyphenated analytical techniques for materials characterisation

    International Nuclear Information System (INIS)

    Armstrong, Gordon; Kailas, Lekshmi

    2017-01-01

    This topical review will provide a survey of the current state of the art in ‘hyphenated’ techniques for characterisation of bulk materials, surface, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample to better characterise the sample than can be achieved by conducting separate analyses in series using different instruments. It is intended for final year undergraduates and recent graduates, who may have some background knowledge of standard analytical techniques, but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review will begin by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is not a broad consensus among analytical scientists as to what each term means. The motivating factors driving increased development of hyphenated analytical methods will also be discussed. This introduction will conclude with a brief discussion of gas chromatography-mass spectroscopy and energy dispersive x-ray analysis in electron microscopy as two examples, in the context that combining complementary techniques for chemical analysis were among the earliest examples of hyphenated characterisation methods. The emphasis of the main review will be on techniques which are sufficiently well-established that the instrumentation is commercially available, to examine physical properties including physical, mechanical, electrical and thermal, in addition to variations in composition, rather than methods solely to identify and quantify chemical species. Therefore, the proposed topical review will address three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy based techniques, scanning probe-based techniques, and thermal analysis based techniques. Examples drawn from recent literature, and a concluding case study, will be used to explain the

  13. Hyphenated analytical techniques for materials characterisation

    Science.gov (United States)

    Armstrong, Gordon; Kailas, Lekshmi

    2017-09-01

    This topical review will provide a survey of the current state of the art in ‘hyphenated’ techniques for characterisation of bulk materials, surface, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample to better characterise the sample than can be achieved by conducting separate analyses in series using different instruments. It is intended for final year undergraduates and recent graduates, who may have some background knowledge of standard analytical techniques, but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review will begin by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is not a broad consensus among analytical scientists as to what each term means. The motivating factors driving increased development of hyphenated analytical methods will also be discussed. This introduction will conclude with a brief discussion of gas chromatography-mass spectroscopy and energy dispersive x-ray analysis in electron microscopy as two examples, in the context that combining complementary techniques for chemical analysis were among the earliest examples of hyphenated characterisation methods. The emphasis of the main review will be on techniques which are sufficiently well-established that the instrumentation is commercially available, to examine physical properties including physical, mechanical, electrical and thermal, in addition to variations in composition, rather than methods solely to identify and quantify chemical species. Therefore, the proposed topical review will address three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy based techniques, scanning probe-based techniques, and thermal analysis based techniques. Examples drawn from recent literature, and a concluding case study, will be used to explain the

  14. Phylogenomic analyses data of the avian phylogenomics project

    DEFF Research Database (Denmark)

    Jarvis, Erich D; Mirarab, Siavash; Aberer, Andre J

    2015-01-01

    BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae...... and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides......ML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is the largest...

  15. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright

  16. Rolling at small scales

    DEFF Research Database (Denmark)

    Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.

    2016-01-01

    The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it recently has been adapted to very small scales where conventional plasticity theory cannot accurately predict the material response. It is well....... Metals are known to be stronger when large strain gradients appear over a few microns; hence, the forces involved in the rolling process are expected to increase relatively at these smaller scales. In the present numerical analysis, a steady-state modeling technique that enables convergence without...

  17. KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz

    International Nuclear Information System (INIS)

    Bowman, Stephen M.

    2008-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of

  18. From a meso- to micro-scale connectome: Array Tomography and mGRASP

    Directory of Open Access Journals (Sweden)

    Jinhyun eKim

    2015-06-01

    Full Text Available Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single-synapse resolution and large-scale capacity, especially at the multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde viruses, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools for mapping brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors.

  19. A New Technique for Scanning the Pancreas

    Energy Technology Data Exchange (ETDEWEB)

    Ephraiem, K. H. [Rotterdamsch Radio-Therapeutisch Instituut, Rotterdam (Netherlands)

    1969-05-15

    The difficulties in visualizing the pancreas are partly caused by the high uptake of seleno-methionine in the liver. A simple technique has been developed to prevent data registration during the time the detector is moving above the liver. The technique is based on the fact that both Se-75 and Tc-99m emit gamma rays of 140-keV energy. The pulses, normally going from the single-channel analyser to the recording units, are routed through a ratemeter to an API contactless optical meter relay (model API-compack I) and then passed on to the recording units. The patient is given the normal dose of Se-methionine and everything is prepared for normal pancreas scanning with only one exception: the window of the single-channel analyser is tuned to the 140-keV photopeak. The patient is given 2 mCi of Tc-99m colloid intravenously and the controls on the meter relay are adjusted in such a way that no pulse from the single-channel analyser passes to the recording units unless the activity is below the activity level in the liver. Then the scanning machine is started. The author developed this inexpensive technique to help smaller clinical isotope laboratories which cannot afford the combination of a gamma camera with a special-purpose computer. (author)
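    Expressed in modern terms, the gating logic described above simply blanks the recorder whenever the local count rate exceeds a threshold set just under the liver activity level, so liver uptake does not swamp the pancreas image. The sketch below is only an illustration of that idea; the rates and the threshold are made up.

```python
# Sketch of count-rate gating: record a position only while the rate stays
# below the liver-level threshold; otherwise blank it.
def gate_scan(count_rates, liver_threshold):
    """Return the recorded value (or 0 when blanked) for each scan position."""
    return [rate if rate < liver_threshold else 0 for rate in count_rates]

# counts per second along one scan line crossing liver then pancreas (hypothetical)
scan_line = [120, 450, 900, 1400, 1600, 800, 300, 220, 180]
print(gate_scan(scan_line, liver_threshold=700))
```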

  20. Nuclear activation techniques in the life sciences

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1978-08-15

    The analysis of the elemental composition of biological materials is presently undertaken on a large scale in many countries around the world. One recent estimate puts the number of such analyses at six thousand million single-element determinations per year, of which about sixteen million are for the so-called trace elements. Since many of these elements are known to play an important role in relation to health and disease, there is considerable interest in learning more about the ways in which they function in living organisms. Nuclear activation techniques, generally referred to collectively as 'activation analysis', constitute an important group of methods for the analysis of the elemental composition of biological materials. Generally they rely on the use of a research nuclear reactor as a source of neutrons for bombarding small samples of biological material, followed by a measurement of the induced radioactivity to provide an estimate of the concentrations of elements. Other methods of activation with Bremsstrahlung and charged particles may also be used, and have their own special applications. These methods of in vitro analysis are particularly suitable for the study of trace elements. Another important group of methods makes use of neutrons from isotopic neutron sources or neutron generators to activate the whole body, or a part of the body, of a living patient. They are generally used for the study of major elements such as Ca, Na and N. All these techniques have previously been the subject of two symposia organised by the IAEA in 1967 and 1972. The present meeting was held to review some of the more recent developments in this field and also to provide a viewpoint on the current status of nuclear activation techniques vis-a-vis other competing non-nuclear methods of analysis.