WorldWideScience

Sample records for scale analyse technique

  1. Techniques for Scaling Up Analyses Based on Pre-interpretations

    DEFF Research Database (Denmark)

    Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

    2005-01-01

    …a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method…

  2. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked
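
    The five-step screening sequence in this abstract maps directly onto standard statistical tests. Below is a minimal Python sketch of that sequence using SciPy; the synthetic data, bin counts, and variable names are illustrative stand-ins for the two-phase-flow model's inputs and outcomes, not the paper's actual procedure or data.

```python
# Sketch of the scatterplot-screening sequence described above, applied to one
# sampled input x and one model outcome y. Synthetic data stand in for the
# two-phase-flow model outputs; thresholds and bin counts are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)                   # sampled input variable
y = np.exp(2.0 * x) + rng.normal(0.0, 0.5, 300)  # outcome (monotonic, nonlinear)

# (i) linear relationship: ordinary correlation coefficient
r, p_r = stats.pearsonr(x, y)
# (ii) monotonic relationship: rank (Spearman) correlation coefficient
rho, p_rho = stats.spearmanr(x, y)
# (iii) trend in central tendency: Kruskal-Wallis across quantile bins of x
edges = np.quantile(x, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
h, p_h = stats.kruskal(*groups)
# (iv) trend in variability: compare variances across the same bins
variances = [g.var(ddof=1) for g in groups]
# (v) deviation from randomness: chi-square test on a 5x5 grid of (x, y) counts
counts, _, _ = np.histogram2d(x, y, bins=5)
chi2, p_chi2, _, _ = stats.chi2_contingency(counts)

print(f"pearson r={r:.2f}, spearman rho={rho:.2f}, "
      f"kruskal p={p_h:.3g}, chi2 p={p_chi2:.3g}")
```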

  3. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
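
    A minimal sketch of the replication check discussed above: rank the inputs by rank-correlation strength in several independent Latin hypercube samples and see whether the ranking stays stable. The test function and sample sizes are invented for illustration, and scipy.stats.qmc is assumed available (SciPy 1.7+).

```python
# Sketch of the stability check described above: generate independent Latin
# hypercube samples, rank input variables by |Spearman correlation| with the
# outcome, and see whether the top-ranked variables stay stable. The analytic
# test function stands in for the two-phase-flow model.
import numpy as np
from scipy import stats
from scipy.stats import qmc

def model(X):
    # illustrative outcome: x0 dominant, x1 moderate, x2 noise-like
    return 5.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.1 * X[:, 2]

for replicate in range(3):                       # independent LHS replicates
    sampler = qmc.LatinHypercube(d=3, seed=replicate)
    X = sampler.random(n=100)                    # 100 runs, 3 inputs in [0,1)
    y = model(X)
    rhos = []
    for j in range(3):
        rho, _ = stats.spearmanr(X[:, j], y)
        rhos.append(abs(rho))
    order = np.argsort(rhos)[::-1]
    print(f"replicate {replicate}: importance order "
          f"x{order[0]}, x{order[1]}, x{order[2]}")
```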

  4. Techniques for Analysing Problems in Engineering Projects

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe

    1998-01-01

    Description of how a CPM network can be used for analysing complex problems in engineering projects.

  5. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved the use of ML techniques, with a CMS framework called DCAFPilot, to process this new data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the information necessary to proactively identify and possibly fix latency-affected transfers over the WLCG.
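
    The report names the CMS DCAFPilot framework but gives no code; the sketch below only illustrates the general shape of such a latency predictor, using scikit-learn on synthetic FTS-like metrics. All feature names and the data-generating formula are assumptions, not the actual FTS schema.

```python
# Hedged sketch of a per-transfer latency model of the kind described above:
# train a regressor on FTS-style metrics and predict latency. Feature names
# and synthetic data are illustrative; the real work used DCAFPilot on
# HDFS-resident FTS records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
size_gb = rng.lognormal(1.0, 1.0, n)        # transfer size (GB), assumed
rate_mbps = rng.uniform(50, 1000, n)        # observed link rate (Mbps), assumed
queue_len = rng.integers(0, 200, n)         # pending files on link, assumed
# latency in seconds: size (megabits) / rate, plus queueing delay and noise
latency = size_gb * 8000 / rate_mbps + 0.5 * queue_len + rng.normal(0, 5, n)

X = np.column_stack([size_gb, rate_mbps, queue_len])
X_tr, X_te, y_tr, y_te = train_test_split(X, latency, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out transfers: {model.score(X_te, y_te):.3f}")
```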

  6. Analysing human genomes at different scales

    DEFF Research Database (Denmark)

    Liu, Siyang

    The thriving of the Next-Generation sequencing (NGS) technologies in the past decade has dramatically revolutionized the field of human genetics. We are experiencing a wave of several large-scale whole genome sequencing studies of humans in the world. Those studies vary greatly regarding cohort … will be reflected by the analysis of real data. This thesis covers studies in two human genome sequencing projects that distinctly differ in terms of studied population, sample size and sequencing depth. In the first project, we sequenced 150 Danish individuals from 50 trio families to 78x coverage. … The sophisticated experimental design enables high-quality de novo assembly of the genomes and provides a good opportunity for mapping the structural variations in the human population. We developed the AsmVar approach to discover, genotype and characterize the structural variations from the assemblies. Our …

  7. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  8. Iterative categorization (IC): a systematic technique for analysing qualitative data

    Science.gov (United States)

    2016-01-01

    Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  9. Iterative categorization (IC): a systematic technique for analysing qualitative data.

    Science.gov (United States)

    Neale, Joanne

    2016-06-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  10. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
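
    A bare-bones sketch of the Boolean visibility test that underlies a viewshed, for readers unfamiliar with the computation. Production analyses on UAV-derived surface models add interpolation along the ray and Earth curvature and refraction corrections; none of that is attempted here.

```python
# Minimal sketch of a Boolean line-of-sight test on a gridded surface model:
# walk the ray from observer to target and check whether any intermediate
# cell rises above the sight line. Grid, heights, and observer offset are
# illustrative.
import numpy as np

def is_visible(dem, obs, tgt, obs_height=1.7):
    """dem: 2-D elevation array; obs, tgt: (row, col) cells."""
    (r0, c0), (r1, c1) = obs, tgt
    z0 = dem[r0, c0] + obs_height
    z1 = dem[r1, c1]
    n = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, n):                    # sample along the line of sight
        t = i / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        if dem[r, c] > z0 + t * (z1 - z0):   # terrain above the sight line
            return False
    return True

dem = np.zeros((50, 50))
dem[25, 10:40] = 20.0                        # a wall-like visibility barrier
print(is_visible(dem, (10, 25), (40, 25)))   # False: the barrier blocks the ray
```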

  11. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  12. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  13. SCALE Graphical Developments for Improved Criticality Safety Analyses

    International Nuclear Information System (INIS)

    Barnett, D.L.; Bowman, S.M.; Horwedel, J.E.; Petrie, L.M.

    1999-01-01

    New computer graphic developments at Oak Ridge National Laboratory (ORNL) are being used to provide visualization of criticality safety models and calculational results as well as tools for criticality safety analysis input preparation. The purpose of this paper is to present the status of current development efforts to continue to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system. Applications for criticality safety analysis in the areas of three-dimensional (3-D) model visualization, input preparation and execution via a graphical user interface (GUI), and two-dimensional (2-D) plotting of results are discussed

  14. Risk and reliability analyses (LURI) and expert judgement techniques

    International Nuclear Information System (INIS)

    Pyy, P.; Pulkkinen, U.

    1998-01-01

    Probabilistic safety analysis (PSA) is currently used as a regulatory licensing tool in risk-informed and plant-performance-based regulation. Increasingly, utility safety improvements are also based on PSA calculations as one criterion. PSA attempts to comprehensively identify all important risk contributors, compare them with each other, assess the safety level and suggest improvements based on its findings. The strength of PSA is that it is capable of providing decision makers with numerical estimates of risks. This makes decision making easier than the comparison of purely qualitative results. PSA is the only comprehensive tool that compactly attempts to include all the important risk contributors in its scope. Despite the demonstrated strengths of PSA, there are some features that have reduced its use. For example, the PSA scope has been limited to power operation and process internal events (transients and LOCAs). Only lately have areas such as shutdown, external events and severe accidents been included in PSA models in many countries. Problems related to modelling include, e.g., that rather static fault and event tree models are commonly used in PSA to model dynamic event sequences. Even if a valid model can be generated, there may be no data sources available other than expert judgement. Furthermore, there is a variety of different techniques for human reliability assessment (HRA), giving varying results. In the project Reliability and Risk Analyses (LURI) these limitations and shortcomings have been studied. In the decision making area, case studies on the application of decision analysis and a doctoral thesis have been published. Further, practical aid has been given to utility and regulatory decision making. The effect of model uncertainty on PSA results has been demonstrated by two case studies. Human reliability has been studied both in the integrated safety analysis study and in the study of maintenance-originated NPP component faults based on the …

  15. Experimental technique of stress analyses by neutron diffraction

    International Nuclear Information System (INIS)

    Sun, Guangai; Chen, Bo; Huang, Chaoqiang

    2009-09-01

    The structure and main components of the neutron diffraction stress analysis spectrometer SALSA, as well as the functions and parameters of each component, are presented. The technical characteristics and structural parameters of SALSA are described. On this basis, the choice of the gauge volume, the method of positioning the sample, the determination of the diffraction plane, and the measurement of the stress-free lattice spacing d0 are discussed. Combined with practical experiments, the basic experimental measurements and related settings are introduced, including the adjustment of components, pattern scattering, data recording and checking, etc. The above can serve as a guide for stress analysis experiments using neutron diffraction and for neutron stress spectrometer construction. (authors)
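
    For orientation, the core relations behind diffraction-based stress analysis are short enough to state in a few lines: lattice strain from the measured and stress-free spacings, then stress via Hooke's law. The numbers below are illustrative, not SALSA measurements.

```python
# Sketch of the basic relations behind neutron-diffraction stress analysis:
# strain from the shift of the lattice spacing relative to the stress-free
# value d0, and stress from Hooke's law. All values are illustrative.
E = 200e9                   # Young's modulus of a generic steel (Pa), assumed
d0 = 1.17020e-10            # stress-free lattice spacing (m), assumed
d_meas = 1.17095e-10        # measured spacing in one direction (m), assumed

strain = (d_meas - d0) / d0                 # epsilon = (d - d0) / d0
stress_uniaxial = E * strain                # sigma = E * epsilon (uniaxial case)
print(f"strain = {strain:.3e}, uniaxial stress = {stress_uniaxial/1e6:.1f} MPa")
```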

  16. Application of digital-image-correlation techniques in analysing ...

    Indian Academy of Sciences (India)

    Basis theory of strain analysis using the digital image correlation method … Type 304N Stainless Steel (Modulus of Elasticity = 193 GPa, Tensile Yield … also proves the accuracy of the qualitative analyses by using the DIC … We thank the National Science Council of Taiwan for supporting this research through grant No.

  17. Novel Space Exploration Technique for Analysing Planetary Atmospheres

    OpenAIRE

    Dekoulis, George

    2010-01-01

    The chapter presents a new reconfigurable wide-beam radio interferometer system for analysing planetary atmospheres. The system operates at frequencies where the ionisation of the planetary plasma regions induces strong attenuation. For Earth, the attenuation is indistinguishable from the CMB at frequencies over 50 MHz. The system introduces a set of advanced specifications to this field of science, previously unseen in similar suborbital experiments. The reprogrammable dynamic range of the …

  18. Structural analyses of sucrose laurate regioisomers by mass spectrometry techniques

    DEFF Research Database (Denmark)

    Lie, Aleksander; Stensballe, Allan; Pedersen, Lars Haastrup

    2015-01-01

    6- and 6′-O-lauroyl sucrose were isolated and analyzed by matrix-assisted laser desorption/ionisation (MALDI) time-of-flight (TOF) mass spectrometry (MS), Orbitrap high-resolution (HR) MS, and electrospray-ionization (ESI) tandem mass spectrometry (MS/MS). The analyses aimed to explore the physic… …8, respectively, and Orbitrap HRMS confirmed the mass of [M+Na]+ (m/z 547.2712). ESI-MS/MS on the precursor ion [M+Na]+ resulted in product ion mass spectra showing two high-intensity signals for each sample. 6-O-Lauroyl sucrose produced signals located at m/z 547.27 and m/z 385.21, corresponding to the 6-O…

  19. Scaling Transformation in the Rembrandt Technique

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Leleur, Steen

    2013-01-01

    This paper examines a decision support system (DSS) for the appraisal of complex decision problems using multi-criteria decision analysis (MCDA). The DSS makes use of a structured hierarchical approach featuring the multiplicative AHP, also known as the REMBRANDT technique. The paper addresses … of a conventional AHP calculation in order to examine what impact the choice of progression factors as well as the choice of technique have on the decision making. Based on this, a modified progression factor for the calculation of scores for the alternatives in REMBRANDT is suggested, while the progression factor…

  20. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  1. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  2. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Science.gov (United States)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, in a temporal scaling and spatial statistical analysis. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., the fractal non-Gaussian property) of groundwater levels, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics may therefore provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
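
    A toy sketch of the flavor of analysis described: estimate a Hurst-type scaling exponent from the log-log growth of increment variability with lag, and check the increments for heavy tails. This is a simplified stand-in for the paper's TS-LHE and Lévy-stable fitting, run on synthetic data rather than well records.

```python
# Simplified scaling analysis in the spirit of the study: the Hurst exponent
# from the power-law growth of increment standard deviation with lag, plus a
# heavy-tail check on the increments. Synthetic Brownian-like data stand in
# for the groundwater level series.
import numpy as np

rng = np.random.default_rng(2)
level = np.cumsum(rng.standard_normal(5000))     # Brownian-like series, H ~ 0.5

lags = np.array([1, 2, 4, 8, 16, 32, 64])
sd = [np.std(level[lag:] - level[:-lag]) for lag in lags]
H, _ = np.polyfit(np.log(lags), np.log(sd), 1)   # log-log slope estimates H
print(f"estimated Hurst exponent: {H:.2f}")      # ~0.5 for Brownian motion

# heavy-tail check: sample kurtosis well above 3 would suggest a non-Gaussian
# (e.g., Levy-stable) model for the fluctuations
inc = np.diff(level)
kurt = ((inc - inc.mean()) ** 4).mean() / inc.var() ** 2
print(f"increment kurtosis: {kurt:.2f} (Gaussian = 3)")
```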

  3. Genome scale engineering techniques for metabolic engineering.

    Science.gov (United States)

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  4. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and a two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The development of the timber-post placement scheme was based upon yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab-with-beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that the required hardness may be achieved through simple upgrading techniques

  5. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  6. Comparative analyses of industrial-scale human platelet lysate preparations.

    Science.gov (United States)

    Pierce, Jan; Benedetti, Eric; Preslar, Amber; Jacobson, Pam; Jin, Ping; Stroncek, David F; Reems, Jo-Anna

    2017-12-01

    Efforts are underway to eliminate fetal bovine serum from mammalian cell cultures for clinical use. An emerging, viable replacement option for fetal bovine serum is human platelet lysate (PL) as either a plasma-based or serum-based product. Nine industrial-scale, serum-based PL manufacturing runs (i.e., lots) were performed, consisting of an average ± standard deviation volume of 24.6 ± 2.2 liters of pooled, platelet-rich plasma units that were obtained from apheresis donors. Manufactured lots were compared by evaluating various biochemical and functional test results. Comprehensive cytokine profiles of PL lots and product stability tests were performed. Global gene expression profiles of mesenchymal stromal cells (MSCs) cultured with plasma-based or serum-based PL were compared to MSCs cultured with fetal bovine serum. Electrolyte and protein levels were relatively consistent among all serum-based PL lots, with only slight variations in glucose and calcium levels. All nine lots were as good as or better than fetal bovine serum in expanding MSCs. Serum-based PL stored at -80°C remained stable over 2 years. Quantitative cytokine arrays showed similarities as well as dissimilarities in the proteins present in serum-based PL. Greater differences in MSC gene expression profiles were attributable to the starting cell source rather than with the use of either PL or fetal bovine serum as a culture supplement. Using a large-scale, standardized method, lot-to-lot variations were noted for industrial-scale preparations of serum-based PL products. However, all lots performed as well as or better than fetal bovine serum in supporting MSC growth. Together, these data indicate that off-the-shelf PL is a feasible substitute for fetal bovine serum in MSC cultures. © 2017 AABB.

  7. The role of the input scale in parton distribution analyses

    International Nuclear Information System (INIS)

    Jimenez-Delgado, Pedro

    2012-01-01

    A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.

  8. Pico-CSIA: Picomolar Scale Compound-Specific Isotope Analyses

    Science.gov (United States)

    Baczynski, A. A.; Polissar, P. J.; Juchelka, D.; Schwieters, J. B.; Hilkert, A.; Freeman, K. H.

    2016-12-01

    The basic approach to analyzing molecular isotopes has remained largely unchanged since the late 1990s. Conventional compound-specific isotope analyses (CSIA) are conducted using capillary gas chromatography (GC), a combustion interface, and an isotope-ratio mass spectrometer (IRMS). Commercially available GC-IRMS systems are comprised of components with inner diameters ≥0.25 mm and employ helium flow rates of 1-4 mL/min. These flow rates are an order of magnitude larger than what the IRMS can accept. Consequently, ≥90% of the sample is lost through the open split, and 1-10s of nanomoles of carbon are required for analysis. These sample requirements are prohibitive for many biomarkers, which are often present in picomolar concentrations. We utilize the resolving power and low flows of narrow-bore capillary GC to improve the sensitivity of CSIA. Narrow bore capillary columns (<0.25 mm ID) allow low helium flow rates of ≤0.5 mL/min for more efficient sample transfer to the ion source of the IRMS while maintaining the high linear flow rates necessary to preserve narrow peak widths (~250 ms). The IRMS has been fitted with collector amplifiers configured to 25 ms response times for rapid data acquisition across narrow peaks. Previous authors (e.g., Sacks et al., 2007) successfully demonstrated improved sensitivity afforded by narrow-bore GC columns. They reported an accuracy and precision of 1.4‰ for peaks with an average width at half maximum of 720 ms for 100 picomoles of carbon on column. Our method builds on their advances and further reduces peak widths (~600 ms) and the amount of sample lost prior to isotopic analysis. Preliminary experiments with 100 picomoles of carbon on column show an accuracy and standard deviation <1‰. With further improvement, we hope to demonstrate robust isotopic analysis of 10s of picomoles of carbon, more than 2 orders of magnitude lower than commercial systems. The pico-CSIA method affords high-precision isotopic analyses for …

  9. Novel hybrid Monte Carlo/deterministic technique for shutdown dose rate analyses of fusion energy systems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2014-01-01

    Highlights: •Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. •Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. •Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two step calculations of the shutdown dose rate in fusion reactors. -- Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000

  10. Lightweight and Statistical Techniques for Petascale Debugging

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis

  11. Development of a Scaling Technique for Sociometric Data.

    Science.gov (United States)

    Peper, John B.; Chansky, Norman M.

    This study explored the stability and interjudge agreements of a sociometric scaling device to which children could easily respond, which teachers could easily administer and score, and which provided scores that researchers could use in parametric statistical analyses. Each student was paired with every other member of his class. He voted on each…

  12. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method and a Dynamic Kriging (DKG) method, is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
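
    The convergence experiment can be reproduced in miniature: fit two of the compared metamodel families to a known two-dimensional function at increasing numbers of training points and track the reconstruction error. The libraries and test function below are stand-ins for the study's implementations, not the paper's code.

```python
# Sketch of the convergence study described above: reconstruct a known 2-D
# response surface from N training points with an RBF interpolant and a
# Kriging-style Gaussian process, and track test error as N grows.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(X):                                    # known "micro-scale" response
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X_test = rng.uniform(0, 1, (500, 2))
for n in (25, 50, 100, 200):
    X_tr = rng.uniform(0, 1, (n, 2))
    y_tr = f(X_tr)
    rbf = RBFInterpolator(X_tr, y_tr)        # radial basis function surrogate
    gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True)
    gp.fit(X_tr, y_tr)                       # Kriging-style surrogate
    err_rbf = np.sqrt(np.mean((rbf(X_test) - f(X_test)) ** 2))
    err_gp = np.sqrt(np.mean((gp.predict(X_test) - f(X_test)) ** 2))
    print(f"n={n:4d}  RMSE  RBF: {err_rbf:.4f}  Kriging/GP: {err_gp:.4f}")
```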

  13. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2013-01-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash

  14. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  15. Ionizing radiation effects in Acai oil analysed by gas chromatography coupled to mass spectrometry technique

    International Nuclear Information System (INIS)

    Valli, Felipe; Fernandes, Carlos Eduardo; Moura, Sergio; Machado, Ana Carolina; Furasawa, Helio Akira; Pires, Maria Aparecida Faustino; Bustillos, Oscar Vega

    2007-01-01

    The Acai fruit is a well-known Brazilian seed plant used on a large scale as a feedstock source, especially in the Brazilian North-east region. Acai oil is used for many purposes, from fuel sources to medicine. The scope of this paper is to analyze the chemical structure modifications of the Acai oil after ionizing radiation. The radiation doses were set in the range of 10 to 25 kGy for the extracted Acai oil. The analyses were made by gas chromatography coupled to mass spectrometry techniques. A GC/MS Shimadzu QP-5000 equipped with a 30-meter DB-5 capillary column with an internal diameter of 0.25 mm and a 0.25 μm film thickness was used. Helium was used as carrier gas and gave a column head pressure of 12 p.s.i. (1 p.s.i. = 6894.76 Pa) and an average flux of 1 ml/min. The temperature program for the GC column consisted of a 4-minute hold at 75 deg C, a 15 deg C/min ramp to 200 deg C, 8 minutes isothermal, a 20 deg C/min ramp to 250 deg C, and 2 minutes isothermal. The extraction of the fatty acids was based on a liquid-liquid method using chloroform as solvent. The resulting chromatograms show the presence of oleic acid and other fatty acids identified by the mass spectral library (NIST-92). The ionizing radiation depletes the fatty acids present in the Acai oil. Details on the qualitative chemical analysis are also presented in this work. (author)

  16. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as the basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results, and how to make use of various kinds of stored search results to address aspects of comparative genomic analysis.
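
    A hedged sketch of the pattern this unit describes, using an in-memory SQLite database rather than the unit's actual seqdb_demo schema; all table and column names below are invented for illustration.

```python
# Illustrative version of the workflow above: store a protein library, carve
# out a taxon-limited subset to search against, and persist similarity-search
# hits for later queries. The schema is invented, not seqdb_demo's.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
CREATE TABLE hit (query TEXT, subject TEXT, evalue REAL,
                  FOREIGN KEY (subject) REFERENCES protein(acc));
""")
db.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P1", "E. coli", "MKTAY"), ("P2", "H. sapiens", "MADQL"),
    ("P3", "E. coli", "MLSRA"),
])
# library subset most likely to contain homologs (here: one taxon)
subset = db.execute(
    "SELECT acc FROM protein WHERE taxon = ?", ("E. coli",)).fetchall()
# store search results (e-values would come from an external search program)
db.executemany("INSERT INTO hit VALUES (?, ?, ?)",
               [("Q1", "P1", 1e-30), ("Q1", "P3", 0.02)])
strong = db.execute("SELECT subject FROM hit WHERE evalue < 1e-5").fetchall()
print("subset:", subset, "strong hits:", strong)
```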

  17. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Full Text Available Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
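
    In scikit-learn terms, the covariance structure the study found adequate (linear plus squared-exponential) can be written as a DotProduct kernel plus an RBF kernel. The sketch below fits such a GP to a simulated drawing trajectory; it is not the authors' code or data.

```python
# Sketch of a GP with linear-plus-squared-exponential covariance, the model
# class the study found sufficient. The DotProduct kernel is the linear part,
# RBF is the squared exponential; the trajectory data are simulated stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 120)[:, None]           # time within a drawing trial
y = 0.8 * t.ravel() + np.sin(t.ravel()) + rng.normal(0, 0.2, 120)

kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
print(f"log marginal likelihood: {gp.log_marginal_likelihood_value_:.1f}")
print(f"fitted kernel: {gp.kernel_}")
```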

  18. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger

    2013-05-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
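
    A toy sketch of the hub-handling idea described above: partition the edge list, and let every partition that holds edges of a high-degree vertex keep a local ghost copy of it, so updates can be aggregated before contacting the hub's owner. The degree threshold and partition count are arbitrary, and the round-robin assignment is a stand-in for the paper's partitioning scheme.

```python
# Toy edge-list partitioning with ghost copies of high-degree hubs.
from collections import defaultdict

HUB_DEGREE, PARTS = 4, 3
edges = [(0, v) for v in range(1, 10)] + [(3, 7), (5, 9)]  # vertex 0 is a hub

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

partitions = [[] for _ in range(PARTS)]
ghosts = [set() for _ in range(PARTS)]
for i, (u, v) in enumerate(edges):
    p = i % PARTS                         # round-robin edge partitioning
    partitions[p].append((u, v))
    for x in (u, v):
        if degree[x] >= HUB_DEGREE:       # hub: each partition holding one of
            ghosts[p].add(x)              # its edges keeps a local ghost copy

for p in range(PARTS):
    print(f"partition {p}: {len(partitions[p])} edges, "
          f"ghosts {sorted(ghosts[p])}")
```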

  19. Spiritual Well-Being Scale Ethnic Differences between Caucasians and African-Americans: Follow Up Analyses.

    Science.gov (United States)

    Miller, Geri; Gridley, Betty; Fleming, Willie

    This follow-up study is in response to Miller, Fleming, and Brown-Anderson's (1998) study of ethnic differences between Caucasians and African-Americans, in which the authors suggested that the Spiritual Well-Being (SWB) Scale may need to be interpreted differently depending on ethnicity. In this study, confirmatory factor analyses were conducted for…

  20. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Use of the modal superposition technique for piping system blowdown analyses

    International Nuclear Information System (INIS)

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect on the calculated response due to higher modes is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well defined cutoff frequency for determining structural modes to be included has not been established. This paper outlines a method for higher mode corrections, and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results
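
    A compact illustration of modal superposition with a mode cutoff, on a 3-degree-of-freedom lumped system: keep only the modes below a chosen frequency and superpose their contributions. The system, forcing, and cutoff are invented; the gap between the truncated and exact answers is exactly the higher-mode effect the paper's correction addresses.

```python
# Modal superposition sketch: solve the generalized eigenproblem
# K phi = omega^2 M phi, retain modes below a cutoff frequency, and
# superpose their static responses x = sum_i phi_i (phi_i . f) / omega_i^2.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.5, 1.0])                       # mass matrix (kg)
K = np.array([[ 400., -200.,    0.],
              [-200.,  400., -200.],
              [   0., -200.,  200.]])              # stiffness matrix (N/m)
f = np.array([0.0, 0.0, 10.0])                     # applied force (N)

w2, Phi = eigh(K, M)                               # modes, M-orthonormalized
freqs = np.sqrt(w2) / (2 * np.pi)                  # natural frequencies (Hz)
keep = freqs <= 3.0                                # illustrative cutoff (Hz)

x_modal = (Phi[:, keep] * (Phi[:, keep].T @ f / w2[keep])).sum(axis=1)
x_exact = np.linalg.solve(K, f)                    # all-modes (direct) answer
print(f"retained modes: {keep.sum()} of {len(f)}")
print(f"modal: {x_modal.round(4)}  exact: {x_exact.round(4)}")
```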

  2. Power plant economy of scale and cost trends: further analyses and review of empirical studies

    International Nuclear Information System (INIS)

    Fisher, C.F. Jr.; Paik, S.; Schriver, W.R.

    1986-07-01

    Multiple regression analyses were performed on capital cost data for nuclear and coal-fired power plants in an extension of an earlier study which indicated that nuclear units completed prior to the accident at Three-Mile Island (TMI) have no economy of scale, and that units completed after that event have a weak economy of scale (scaling exponent of about 0.81). The earlier study also indicated that the scaling exponent for coal-fired units is about 0.92, compared with conceptual models which project scaling exponents in a range from about 0.5 to 0.9. Other empirical studies have indicated poor economy of scale, but a large range of cost-size scaling exponents has been reported. In the present study, the results for nuclear units indicate a scaling exponent of about 0.94 but with no economy of scale for large units, that a first unit costs 17% more than a second unit, that a unit in the South costs 20% less than others, that a unit completed after TMI costs 33% more than one completed before TMI, and that costs are increasing at 9.3% per year. In the present study, the results for coal-fired units indicate a scaling exponent of 0.93 but with better scaling economy in the larger units, that a first unit costs 38.5% more, a unit in the South costs 10% less, flue-gas desulfurization units cost 23% more, and that costs are increasing at 4% per year
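
    The scaling exponents quoted here come from fits of the form cost = a * size^b, which a log-log least-squares regression recovers. The sketch below shows the mechanics on fabricated cost figures; it reproduces the method, not the study's data or results.

```python
# Log-log regression for the cost-size scaling exponent b in cost = a*size^b.
# An exponent b < 1 indicates economy of scale. Numbers are fabricated for
# illustration only.
import numpy as np

size_mwe = np.array([400, 600, 800, 1000, 1200])     # plant size (MWe)
cost_musd = np.array([620, 890, 1130, 1370, 1580])   # capital cost (M$)

b, log_a = np.polyfit(np.log(size_mwe), np.log(cost_musd), 1)
print(f"scaling exponent b = {b:.2f}")
print(f"doubling plant size multiplies cost by {2 ** b:.2f}")
```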

  3. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    Science.gov (United States)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques can prevent overfitting problems, even if the best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm being filtered a posteriori.
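
    A minimal sketch of the super-ensemble idea: project the ensemble members onto their leading EOFs (via an SVD) and regress the observed field onto those filtered predictors. Synthetic data stand in for the MMSE dataset, and the EOF truncation level is arbitrary.

```python
# EOF-filtered multi-linear regression (super-ensemble) sketch: fields are
# flattened to one vector per member; "truth" plays the role of observed SST.
import numpy as np

rng = np.random.default_rng(4)
n_points, n_members = 500, 8
truth = rng.normal(20.0, 2.0, n_points)                    # "observed" SST
members = (truth[None, :]
           + rng.normal(0, 1.0, (n_members, n_points))     # member errors
           + rng.normal(0, 0.5, (n_members, 1)))           # member biases

F = members.T                                  # predictors: (points, members)
Fc = F - F.mean(axis=0)
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
k = 4                                          # retained EOFs (arbitrary)
Z = U[:, :k] * s[:k]                           # EOF-filtered predictors

X = np.column_stack([np.ones(n_points), Z])    # regression with intercept
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
sse = X @ coef

rmse_mean = np.sqrt(np.mean((members.mean(axis=0) - truth) ** 2))
rmse_sse = np.sqrt(np.mean((sse - truth) ** 2))
print(f"ensemble-mean RMSE {rmse_mean:.3f} vs super-ensemble RMSE {rmse_sse:.3f}")
```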

  4. Elemental analyses of goundwater: demonstrated advantage of low-flow sampling and trace-metal clean techniques over standard techniques

    Science.gov (United States)

    Creasey, C. L.; Flegal, A. R.

    The combined use of low-flow purging and sampling with trace-metal clean techniques provides measurements of trace-element concentrations in groundwater that are more representative than those obtained with standard techniques. Low-flow purging and sampling yields essentially undisturbed water samples that are representative of in-situ conditions. Trace-metal clean techniques limit the inadvertent introduction of contaminants during sampling, storage, and analysis. When these techniques are applied, the resulting trace-element concentrations are substantially lower than results obtained with standard sampling techniques. In a comparison of data from contaminated and control wells at a site in California (USA), trace-element concentrations from this study were 2 to 1000 times lower than those determined with the conventional techniques used in sampling the same wells five months before and one month after our sampling. In particular, the cadmium and chromium concentrations obtained with conventional sampling techniques exceed the California maximum contaminant levels, whereas the concentrations of both elements obtained in this study are markedly below those limits. Consequently, the use of low-flow and trace-metal clean techniques may show that reported trace-element contamination of groundwater is, in some cases, an artefact of the sampling and analysis methods.

  5. Applicability of two mobile analysers for mercury in urine in small-scale gold mining areas.

    Science.gov (United States)

    Baeuml, Jennifer; Bose-O'Reilly, Stephan; Lettmeier, Beate; Maydl, Alexandra; Messerer, Katalin; Roider, Gabriele; Drasch, Gustav; Siebert, Uwe

    2011-12-01

    Mercury is still used in developing countries to extract gold from the ore in small-scale gold mining areas. This is a major health hazard for people living in mining areas. The concentration of mercury in urine was analysed in different mining areas in Zimbabwe, Indonesia and Tanzania. The urine samples were analysed first by CV-AAS (cold vapour atomic absorption spectrometry) during the field projects with a mobile mercury analyser (Lumex® or Seefelder®), and secondly in a laboratory with a stationary CV-AAS mercury analyser (PerkinElmer®). Because the systems use different reduction agents (SnCl2 for the mobile Lumex® and Seefelder® analysers, NaBH4 for the PerkinElmer®), only inorganic mercury was obtained with the mobile analysers, whereas the total mercury concentration was measured with the stationary system. The aims of the study were to determine whether the results obtained in the field with the mobile instruments are comparable with the stationary reference method in the laboratory, and whether these mobile analysers can be applied in screening studies of affected populations to select those who are exposed to critical mercury levels. Overall, the concentrations obtained with the two mobile systems were approximately 25% lower than those determined with the stationary system. Nevertheless, both mobile systems seem to be very useful for screening volunteers in the field. Moreover, regional staff may be trained on such analysers to perform screening tests themselves. Copyright © 2011 Elsevier GmbH. All rights reserved.

  6. Systematic comparative and sensitivity analyses of additive and outranking techniques for supporting impact significance assessments

    International Nuclear Information System (INIS)

    Cloquell-Ballester, Vicente-Agustin; Monterde-Diaz, Rafael; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria-Cristina

    2007-01-01

    Assessing the significance of environmental impacts is one of the most important and altogether difficult processes of Environmental Impact Assessment. This is largely due to the multicriteria nature of the problem. To date, decision techniques used in the process suffer from two drawbacks, namely the problem of compensation and the problem of identification of the 'exact boundary' between sub-ranges. This article discusses these issues and proposes a methodology for determining the significance of environmental impacts based on comparative and sensitivity analyses using the Electre TRI technique. An application of the methodology for the environmental assessment of a Power Plant project within the Valencian Region (Spain) is presented, and its performance evaluated. It is concluded that, contrary to other techniques, Electre TRI automatically identifies those cases where the allocation of significance categories is most difficult and, when combined with sensitivity analysis, offers the greatest robustness in the face of variation in the weights of the significance attributes. Likewise, this research demonstrates the efficacy of systematic comparison between Electre TRI and sum-based techniques in the solution of assignment problems. The proposed methodology can therefore be regarded as a successful aid to the decision-maker, who will ultimately take the final decision

  7. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts plastic-strain-induced texture evolution, yield loci and the formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured using SEM-EBSD apparatus, and a unit cell of the micro-structure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'benchmark' aluminum A6022 polycrystal sheets. The results reveal that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the evolution of anisotropic hardening, and sheet deformation. Since multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster was developed for quick calculation. In this parallelization scheme, a dynamic workload-balancing technique is introduced for fast and efficient calculations.

  8. Tools and Techniques for Basin-Scale Climate Change Assessment

    Science.gov (United States)

    Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.

    2012-12-01

    The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies that explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies benefit most from the application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest-neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov chain techniques. Resampling can also be conditioned on climate change projections, e.g., downscaled GCM projections, to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. The resulting data are imported directly into the decision model. Different model files can represent infrastructure alternatives, and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other
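
    The K-nearest-neighbor resampling step can be sketched compactly. The fragment below is a hedged illustration of the general Lall-and-Sharma-style KNN bootstrap, not the Hydrology Simulator's actual implementation; the synthetic gamma-distributed "historical" record and all parameter choices are invented for the example.

        import numpy as np

        rng = np.random.default_rng(42)

        def knn_resample(flows, length, k=None):
            """Generate one synthetic trace: each new value is the historical
            successor of one of the k nearest neighbors of the current value,
            sampled with weights proportional to 1/rank."""
            n = len(flows) - 1                  # years that have an observed successor
            k = k or int(round(np.sqrt(n)))     # common heuristic choice of k
            w = 1.0 / np.arange(1, k + 1)
            w /= w.sum()
            trace = [rng.choice(flows)]
            for _ in range(length - 1):
                d = np.abs(flows[:-1] - trace[-1])       # distance to candidate years
                nbrs = np.argsort(d)[:k]                 # k nearest, in rank order
                trace.append(flows[rng.choice(nbrs, p=w) + 1])  # that year's successor
            return np.array(trace)

        # Illustrative use with a synthetic 80-year annual-flow record:
        hist = rng.gamma(shape=4.0, scale=250.0, size=80)
        ensemble = np.stack([knn_resample(hist, length=50) for _ in range(100)])

    Because each simulated value is an actual historical successor, the ensemble preserves the marginal distribution and much of the lag-1 dependence of the record without assuming a parametric time-series model.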

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  11. A simple technique investigating baseline heterogeneity helped to eliminate potential bias in meta-analyses.

    Science.gov (United States)

    Hicks, Amy; Fairhurst, Caroline; Torgerson, David J

    2018-03-01

    To perform a worked example of an approach that can be used to identify and remove potentially biased trials from meta-analyses via the analysis of baseline variables. True randomisation produces treatment groups that differ only by chance; therefore, a meta-analysis of a baseline measurement should produce no overall difference and zero heterogeneity. A meta-analysis from the British Medical Journal, known to contain significant heterogeneity and imbalance in baseline age, was chosen. Meta-analyses of baseline variables were performed and trials systematically removed, starting with those with the largest t-statistic, until the I² measure of heterogeneity became 0%; the outcome meta-analysis was then repeated with only the remaining trials as a sensitivity check. We argue that heterogeneity in a meta-analysis of baseline variables should not exist, and that removing trials which contribute to this heterogeneity will therefore produce a more valid result. In our example, none of the overall outcomes changed when studies contributing to heterogeneity were removed. We recommend routine use of this technique, using age and a second baseline variable predictive of outcome for the particular study chosen, to help eliminate potential bias in meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
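
    The pruning rule described above is simple enough to state in a few lines of code. The following Python sketch assumes per-trial between-arm baseline differences and their variances as inputs and uses the standard fixed-effect Q statistic to compute I²; the numbers in the example are invented.

        import numpy as np

        def i_squared(effects, variances):
            """Higgins' I² (%) from Cochran's Q for a fixed-effect meta-analysis."""
            w = 1.0 / variances
            pooled = np.sum(w * effects) / np.sum(w)
            q = np.sum(w * (effects - pooled) ** 2)
            df = len(effects) - 1
            return 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

        def prune_to_zero_heterogeneity(effects, variances):
            """Drop the trial with the largest baseline t-statistic (|difference|/SE)
            until the baseline meta-analysis shows I² = 0%."""
            effects = np.asarray(effects, float)
            variances = np.asarray(variances, float)
            keep = np.ones(len(effects), dtype=bool)
            t = np.abs(effects) / np.sqrt(variances)
            while keep.sum() > 2 and i_squared(effects[keep], variances[keep]) > 0:
                keep[np.argmax(np.where(keep, t, -np.inf))] = False
            return keep

        # Baseline age differences (years) and their variances for four trials;
        # the clearly imbalanced third trial is removed first.
        print(prune_to_zero_heterogeneity([0.1, -0.2, 2.5, 0.0],
                                          [0.04, 0.05, 0.04, 0.06]))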

  12. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
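
    The essence of the sketching step can be shown on a toy linear inverse problem. This is a hedged illustration of dimension reduction with a dense Gaussian sketch, not the RGA/PCGA implementation in MADS; the random forward model and all sizes are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        m, n, k = 20_000, 50, 100            # observations, parameters, sketch size
        J = rng.standard_normal((m, n))      # stand-in for a linearized forward model
        x_true = rng.standard_normal(n)
        y = J @ x_true + 0.01 * rng.standard_normal(m)

        # Gaussian "sketching" matrix: compresses the m observations to k rows
        # while approximately preserving the least-squares geometry.
        S = rng.standard_normal((k, m)) / np.sqrt(k)

        x_sketch = np.linalg.lstsq(S @ J, S @ y, rcond=None)[0]
        x_full = np.linalg.lstsq(J, y, rcond=None)[0]
        print(np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))  # small

    The key property is that the cost of the solve now scales with the sketch size k, chosen to match the information content, rather than with the raw number of observations m.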

  13. An automatic system to search, acquire, and analyse chromosomal aberrations obtained using FISH technique

    International Nuclear Information System (INIS)

    Esposito, R.D.

    2003-01-01

    Full text: Chromosomal aberration (CA) analysis in peripheral blood lymphocytes is useful both in prenatal diagnosis and cancer cytogenetics, as well as in toxicology to determine the biologically significant dose of specific genotoxic agents, both physical and chemical, to which an individual has been exposed. A useful cytogenetic technique for CA analysis is fluorescence in situ hybridization (FISH), which simplifies the automatic identification and characterisation of aberrations, allowing the visualisation of chromosomes as bright signals on a dark background and a fast analysis of stable aberrations, which are particularly interesting for late effects. The main limitation of CA analysis is the rarity with which these events occur, and therefore the time necessary to single out a statistically significant number of aberrant cells. In order to address this problem, a prototype system capable of automatically searching, acquiring, and recognising chromosomal images of samples prepared using FISH has been developed. The system is able to score a large number of samples in a reasonable time using predefined search criteria. It is based on the appropriately implemented and characterised automatic metaphase finder Metafer4 (MetaSystems), coupled with a specific module for the acquisition of high-magnification metaphase images with any combination of fluorescence filters. These images are then analysed and classified using our software. The prototype is currently capable of separating normal metaphase images from presumed aberrant ones. It is currently in use in our laboratories, both by ourselves and by other researchers not involved in its development, to carry out analyses of CAs induced by ionising radiation. The prototype allows simple acquisition and management of large quantities of images and makes it possible to carry out methodological studies, such as the comparison of results obtained by different operators, as well as increasing the

  14. Consensuses and discrepancies of basin-scale ocean heat content changes in different ocean analyses

    Science.gov (United States)

    Wang, Gongjie; Cheng, Lijing; Abraham, John; Li, Chongyin

    2018-04-01

    Inconsistent global/basin ocean heat content (OHC) changes have been found in different ocean subsurface temperature analyses, especially in recent studies related to the slowdown in global surface temperature rise. This finding challenges the reliability of the ocean subsurface temperature analyses and motivates a more comprehensive inter-comparison between them. Here we compare the OHC changes in three ocean analyses (Ishii, EN4 and IAP) to investigate the uncertainty in OHC in four major ocean basins on decadal to multi-decadal scales. First, all products show an increase of OHC since 1970 in each ocean basin, revealing a robust warming, although the warming rates are not identical. The geographical patterns, the key modes and the vertical structure of the OHC changes are consistent among the three datasets, implying that the main OHC variabilities can be robustly represented. However, large discrepancies are found in the percentage of basin-scale ocean heating relative to the global ocean, with the largest differences in the Pacific and Southern Ocean. Meanwhile, we find a large discrepancy in ocean heat storage in different layers, especially within 300-700 m in the Pacific and Southern Oceans. Furthermore, the near-surface analyses of Ishii and IAP are consistent with sea surface temperature (SST) products, but EN4 is found to underestimate the long-term trend. Compared with ocean heat storage derived from the atmospheric budget equation, all products show consistent seasonal cycles of OHC in the upper 1500 m, especially during 2008 to 2012. Overall, our analyses further the understanding of the observed OHC variations, and we recommend a careful quantification of errors in the ocean analyses.

  15. Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell

    Science.gov (United States)

    Lengden, Michael; Cunningham, Robert; Johnstone, Walter

    2011-10-01

    A new in-line, real-time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to prove the fast response and accuracy of the technology as compared to a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed-power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon dioxide and carbon monoxide concentrations.

  16. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One good potential application of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large-scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using radiotracer techniques in a "larger than laboratory" scale plant setup that is comparable to real industrial applications. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for use in this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
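
    The quantities extracted from such detector traces follow directly from the moments of the RTD. The sketch below computes the MRT from a pulse-injection response as MRT = ∫t·E(t)dt, with E(t) the background-subtracted, area-normalized curve; the Gaussian-shaped detector signal is synthetic and purely illustrative.

        import numpy as np

        def mean_residence_time(t, counts):
            """First moment of the normalized RTD E(t) from a tracer curve."""
            c = np.asarray(counts, float)
            c = c - c.min()                    # crude background subtraction
            dt = np.gradient(np.asarray(t, float))
            e = c / np.sum(c * dt)             # normalize so that ∫E(t)dt = 1
            return np.sum(t * e * dt)

        # Synthetic detector signal for a pulse injection:
        t = np.linspace(0, 600, 601)                              # seconds
        counts = np.exp(-((t - 180) ** 2) / (2 * 40.0 ** 2)) + 0.02
        print(f"MRT ≈ {mean_residence_time(t, counts):.0f} s")    # ≈ 180 s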

  17. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    Science.gov (United States)

    Cooperman, Joshua H.

    2018-05-01

    The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. My analyses should facilitate attempts to employ the spectral
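
    For readers unfamiliar with the observable itself, the spectral dimension can be illustrated on flat space, where no scale dependence is expected. The Python sketch below computes the return probability P(σ) of a simple random walk on a 3-dimensional periodic lattice from its structure function and extracts d_s(σ) = -2 dlnP/dlnσ, which approaches the topological dimension 3. This toy calculation is only a baseline against which the scale-dependent CDT results described above stand out, and the lattice size is invented.

        import numpy as np

        # Structure function of the simple random walk on Z^3 (on an n^3 torus):
        n = 64
        k = 2 * np.pi * np.arange(n) / n
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        phi = (np.cos(kx) + np.cos(ky) + np.cos(kz)) / 3.0

        # Return probability after sigma steps (even sigma; odd walks cannot return):
        sigmas = np.arange(2, 200, 2)
        P = np.array([np.mean(phi ** s) for s in sigmas])

        # Spectral dimension as the log-derivative of the return probability:
        ds = -2 * np.gradient(np.log(P), np.log(sigmas))
        print(ds[-5:])   # tends towards 3 on a flat 3-dimensional lattice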

  18. The use of data mining techniques for analysing factors affecting cow reactivity during milking

    Directory of Open Access Journals (Sweden)

    Wojciech NEJA

    2017-06-01

    Full Text Available The motor activity of 158 Polish Holstein-Friesian cows was evaluated five times, before and during milking, in a DeLaval 2×10 milking parlour for both the morning and evening milkings, on a 5-point scale according to the method of Budzyńska et al. (2007). The statistical analysis used multiple logistic regression and classification trees (Enterprise Miner 7.1 software, supplied with the SAS package). In the evaluation of motor activity, cows that were among the first ten to enter the milking parlour were more often given a score of 3 points before milking (11.5%) and during milking (23.5%) compared to the other cows. The cows' activity tended to decrease, both before and during milking, with advancing lactation. Reduced activity was accompanied by shorter teat-cup attachment times and lower milk yields. The quality criteria calculated for the models based on the classification tree technique and on logistic regression showed that similar variables were responsible for the reactivity of cows before milking (teat-cup attachment time, day of lactation, lactation number, side of the milking parlour) and during milking (day of lactation, side of the milking parlour, morning or evening milking, milk yield, lactation number). At the same time, the applied methods showed that the determinants of the cow reactivity trait are highly complex; this complexity may be well explained using the classification tree technique.

  19. How scaling fluctuation analyses can transform our view of the climate

    Science.gov (United States)

    Lovejoy, Shaun; Schertzer, Daniel

    2013-04-01

    There exists a bewildering diversity of proxy climate data, including tree rings, ice cores, lake varves, boreholes, pollen, foraminifera, corals and speleothems. Their quantitative use raises numerous questions of interpretation and calibration. Even in classical cases - such as the isotope signal in ice cores - the usual assumption of linear dependence on ambient temperature is only a first approximation. In other cases - such as speleothems - the isotope signals arise from multiple causes (which are not always understood), and this hinders their widespread use. We argue that traditional interpretations and calibrations - based on essentially deterministic comparisons between instrumental data, model outputs and proxies (albeit with the help of uncertainty analyses) - have been overly ambitious while simultaneously underexploiting the data. Overly ambitious, since comparisons typically involve series at different temporal resolutions and from different geographical locations - one does not expect agreement in a deterministic sense, while with respect to climate models one only expects statistical correspondences. Underexploiting, since comparisons are done at unique temporal and/or spatial resolutions, whereas the fluctuations the proxies describe provide information over wide ranges of scale. A convenient method of overcoming these difficulties is fluctuation analysis applied systematically over the full range of available scales to determine the scaling properties. The new transformative element presented here is to define fluctuations ΔT in a series T(t) at scale Δt not by differences (ΔT(Δt) = T(t+Δt) - T(t)) but rather by the difference in the means over the first and second halves of the lag Δt. This seemingly minor change - technically from "poor man's" to "Haar" wavelets - turns out to make a huge difference since, for example, it is adequate for analysing temperatures from seconds to hundreds of millions of years yet
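
    The Haar fluctuation defined above takes only a few lines to compute. The sketch below estimates the RMS Haar fluctuation over a range of lags and reads off the scaling exponent from the log-log slope; the Brownian-walk test series (for which the exponent should be about 0.5) is synthetic and purely illustrative.

        import numpy as np

        def haar_fluctuation(x, lags):
            """RMS Haar fluctuation: for each lag, the mean of the second half of
            every window of that length minus the mean of the first half."""
            x = np.asarray(x, float)
            out = []
            for lag in lags:
                h = lag // 2
                starts = np.arange(0, len(x) - lag, h)      # overlapping windows
                f = [x[s + h:s + lag].mean() - x[s:s + h].mean() for s in starts]
                out.append(np.sqrt(np.mean(np.square(f))))
            return np.array(out)

        rng = np.random.default_rng(1)
        series = np.cumsum(rng.standard_normal(4096))       # Brownian walk, H = 0.5
        lags = np.unique(np.logspace(1, 3, 12).astype(int) // 2 * 2)  # even lags
        H = np.polyfit(np.log(lags), np.log(haar_fluctuation(series, lags)), 1)[0]
        print(f"estimated fluctuation exponent H ≈ {H:.2f}")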

  20. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying the potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. This paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
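
    A volcano plot of this kind pairs an effect-size axis with a significance axis for every drug-event combination. The sketch below is one plausible construction using the reporting odds ratio (ROR) and a Fisher exact test on each 2×2 contingency table; the disproportionality statistic actually used in the paper may differ, and the small report-count table is invented.

        import numpy as np
        from scipy.stats import fisher_exact

        def volcano_points(counts):
            """For each (drug, event) cell of a spontaneous-report count matrix,
            return (log2 reporting odds ratio, -log10 Fisher p-value)."""
            counts = np.asarray(counts)
            total = counts.sum()
            points = {}
            for i in range(counts.shape[0]):          # drugs
                for j in range(counts.shape[1]):      # adverse events
                    a = counts[i, j]                  # drug i with event j
                    b = counts[i, :].sum() - a        # drug i, other events
                    c = counts[:, j].sum() - a        # other drugs, event j
                    d = total - a - b - c             # neither
                    ror, pval = fisher_exact([[a, b], [c, d]])
                    points[(i, j)] = (np.log2(ror), -np.log10(pval))
            return points

        # Tiny invented table: rows = drugs, columns = adverse events
        reports = [[40, 5, 10], [8, 30, 12], [6, 7, 50]]
        for key, (x, y) in volcano_points(reports).items():
            print(key, f"log2(ROR)={x:+.2f}", f"-log10(p)={y:.1f}")

    Points far to the right and high up on such a plot are the drug-event signals worth following up.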

  1. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

    To understand the degradation of nuclear materials under irradiation, it is essential to know as much as possible about each phenomenon from multi-scale points of view: the micro scale at the atomic level, the macro scale of the structure, and the intermediate level. In this study, aimed at meso-scale materials (100 Å ∼ 2 μm), computer technology approaching the problem from the micro and macro scales was developed, including modeling and computer applications using the methods of computational science and technology. A grid-technology environment for multi-scale calculation was also prepared. The software and the MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  2. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe vibration tests and simulation analyses of a small-scale reactor building model. The model vibration tests were performed to investigate the vibrational characteristics of the combined superstructure and to verify a computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using a uniaxial shaking table (60 cm × 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model, and a combined structure-soil interaction model. The models are made of silicone rubber at a scale of 1:600. Harmonic step-by-step excitation of 40 gals was performed to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters and analyzed by a real-time spectrum analyzer, yielding the resonance and phase-lag curves of the specimens relative to the shaking table. In the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves; these values were in good agreement with the analytical transfer-function curves from the computer code. The vibration tests and simulation analyses show that the silicone-rubber model test is useful for fundamental studies of structural problems and that the computer code based on the Thin Layered Element Theory simulates the test results well. (Kobozono, M.)

  3. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    International Nuclear Information System (INIS)

    Williams, Paul T.; Yin, Shengjun; Klasky, Hilda B.; Bass, Bennett Richard

    2011-01-01

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management of non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and lifetime management, and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of the ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and exercised to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite

  4. Photogrammetric techniques for across-scale soil erosion assessment

    OpenAIRE

    Eltner, Anette

    2016-01-01

    Soil erosion is a complex geomorphological process whose different impacts vary in influence across spatio-temporal scales. To date, the measurement of soil erosion is predominantly realisable only at specific scales, each capturing separate processes, e.g. interrill erosion as opposed to rill erosion. It is difficult to survey soil surface changes at larger areal coverage, such as the field scale, with high spatial resolution. Either net changes at the system outlet or remaining traces after the ...

  5. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models, which are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as the simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (in e.g. the downcomer or lower plenum). A possible route to improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analyses of multi-dimensional flow, predominantly in non-nuclear industries and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multi-scale method (HMM).

  6. Mathematical analysis of the dimensional scaling technique for the Schroedinger equation with power-law potentials

    International Nuclear Information System (INIS)

    Ding Zhonghai; Chen, Goong; Lin, Chang-Shou

    2010-01-01

    The dimensional scaling (D-scaling) technique is an innovative asymptotic expansion approach for studying multiparticle systems in molecular quantum mechanics. It enables the calculation of ground and excited state energies of quantum systems without having to solve the Schroedinger equation. In this paper, we present a mathematical analysis of the D-scaling technique for the Schroedinger equation with power-law potentials. By casting the D-scaling technique in an appropriate variational setting and studying the corresponding minimization problem, the technique is justified rigorously. A new asymptotic dimensional expansion scheme is introduced to compute asymptotic expansions for ground state energies.
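
    The flavour of the large-D reduction can be conveyed schematically. The following is a hedged sketch of the standard argument for an s-state in the potential V(r) = λr^ν (with λ > 0 and ν > -2), not the paper's rigorous variational treatment. Writing the D-dimensional wavefunction as Ψ = r^{-(D-1)/2} u(r) gives the radial equation

        -\tfrac{1}{2}\,u''(r) + \left[\frac{(D-1)(D-3)}{8r^{2}} + \lambda r^{\nu}\right]u(r) = E\,u(r).

    Rescaling r = D^{2/(\nu+2)}\rho balances the centrifugal and potential terms and leaves the kinetic term O(D^{-2}) relative to them, so in the limit the ground-state energy follows from a purely classical minimization:

        \lim_{D\to\infty} D^{-2\nu/(\nu+2)}\,E_{0}(D) = \min_{\rho>0}\left[\frac{1}{8\rho^{2}} + \lambda\rho^{\nu}\right].

    This is the sense in which ground-state energies become computable without solving the Schroedinger equation itself.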

  7. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    Science.gov (United States)

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open-source project written in C++. DupTree for Mac OS X, Windows, and Linux, along with a sample dataset and an on-line manual, is available at http://genome.cs.iastate.edu/CBL/DupTree

  8. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (FALSIRE II)

    Energy Technology Data Exchange (ETDEWEB)

    Bass, B.R.; Pugh, C.E.; Keeney, J. [Oak Ridge National Lab., TN (United States); Schulz, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)

    1996-11-01

    A summary of Phase II of the Project for FALSIRE is presented. FALSIRE was created by the Fracture Assessment Group (FAG) of the OECD/NEA's Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 3. FALSIRE I in 1988 assessed fracture methods through interpretive analyses of 6 large-scale fracture experiments in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading. In FALSIRE II, experiments examined cleavage fracture in RPV steels for a wide range of materials, crack geometries, and constraint and loading conditions. The cracks were relatively shallow, in the transition temperature region. Included were cracks showing either unstable extension or two stages of extension under transient thermal and mechanical loads. Crack initiation was also investigated in connection with clad surfaces and with biaxial load. Within FALSIRE II, comparative assessments were performed for 7 reference fracture experiments based on 45 analyses received from 22 organizations representing 12 countries. Temperature distributions in thermally shock-loaded samples were approximated with high accuracy and small scatter bands. Structural response was predicted reasonably well; discrepancies could usually be traced to the assumed material models and approximated material properties. Almost all participants elected to use the finite element method.

  10. Falsire: CSNI project for fracture analyses of large-scale international reference experiments (Phase 1). Comparison report

    International Nuclear Information System (INIS)

    1994-01-01

    A summary of the recently completed Phase I of the Project for Fracture Analysis of Large-Scale International Reference Experiments (Project FALSIRE) is presented. Project FALSIRE was created by the Fracture Assessment Group (FAG) of Principal Working Group No. 3 (PWG/3) of the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI), formed to evaluate the fracture prediction capabilities currently used in safety assessments of nuclear vessel components. The aim of Project FALSIRE was to assess various fracture methodologies through interpretive analyses of selected large-scale fracture experiments. The six experiments used in Project FALSIRE (performed in the Federal Republic of Germany, Japan, the United Kingdom, and the U.S.A.) were designed to examine various aspects of crack growth in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading conditions. The analysis techniques employed by the participants included engineering and finite-element methods, which were combined with J-R fracture methodology and the French local approach. For each experiment, the analysis results provided estimates of variables such as crack growth, crack-mouth-opening displacement, temperature, stress, strain, and applied J and K values. A comparative assessment and discussion of the analysis results are presented, and the current status of the entire results database is summarized. Some conclusions concerning the predictive capabilities of selected ductile fracture methodologies, as applied to RPVs subjected to PTS loading, are given, and recommendations for future development of fracture methodologies are made.

  11. Multidimensional scaling technique for analysis of magnetic storms ...

    Indian Academy of Sciences (India)

    Multidimensional Scaling (MDS) comprises a set of models and associated methods for constructing a geometrical representation of proximity and dominance relationships between elements in one or more sets of entities. MDS can be applied to data that express two types of relationships: proximity relations and ...

  12. Microneedle-assisted transdermal delivery of Zolmitriptan: effect of microneedle geometry, in vitro permeation experiments, scaling analyses and numerical simulations.

    Science.gov (United States)

    Uppuluri, Chandra Teja; Devineni, Jyothirmayee; Han, Tao; Nayak, Atul; Nair, Kartik J; Whiteside, Benjamin R; Das, Diganta B; Nalluri, Buchi N

    2017-08-01

    The present study was aimed at investigating the effect of salient microneedle (MN) geometry parameters like length, density, shape and type on transdermal permeation enhancement of Zolmitriptan (ZMT). Two types of MN devices, viz. AdminPatch® arrays (ADM) (0.6, 0.9, 1.2 and 1.5 mm lengths) and laboratory-fabricated polymeric MNs (PM) of 0.6 mm length, were employed. In the case of the PMs, arrays were applied thrice at different places within a 1.77 cm² skin area (PM-3) to maintain an MN density closer to the 0.6 mm ADM. Scaling analyses were done using dimensionless parameters like concentration of ZMT (Ct/Cs), thickness (h/L) and surface area of the skin (Sa/L²). A micro-injection molding technique was employed to fabricate the PM. Histological studies revealed that the PM, owing to their geometry/design, formed wider and deeper microconduits when compared to ADM of similar length. Approximately 3.17- and 3.65-fold increases in ZMT flux values were observed with 1.5 mm ADM and PM-3 applications when compared to the passive studies. Good correlations were observed between the different dimensionless parameters in the scaling analyses. Numerical simulations, using MATLAB and COMSOL software, based on experimental data and histological images provided information regarding the ZMT skin distribution after MN application. Both the experimental studies and the simulations indicate that PM were more effective in enhancing the transdermal delivery of ZMT when compared to ADM. The study suggests that MN application enhances ZMT transdermal permeation and that the geometrical parameters of the MNs play an important role in the degree of such enhancement.

  13. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach makes the best use of the merits of the optimization technique, into which the idea of the PNET method is incorporated. The analytical process involves the minimization of the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential performance of a series of NLP (nonlinear programming) problems, where the correlation condition of the PNET method pertaining to the representative mode is taken as an additional constraint on the next analysis. Upon successive iterations, the final analysis is achieved when the collapse probability of the subsequent mode is much smaller than the value of the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. In order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  14. Static and fatigue experimental tests on a full scale fuselage panel and FEM analyses

    Directory of Open Access Journals (Sweden)

    Raffaele Sepe

    2016-02-01

    Full Text Available A fatigue test on a full-scale panel with complex loading conditions and geometry configuration has been carried out using a triaxial test machine. The demonstrator is made up of two skins linked by a transversal butt joint parallel to the stringer direction. A fatigue load was applied in the direction normal to the longitudinal joint, while a constant load was applied in the longitudinal joint direction. The test panel was instrumented with strain gauges, and quasi-static tests were previously conducted to ensure proper load transfer to the panel. In order to support the tests, geometrically nonlinear shell finite element analyses were conducted to predict strain and stress distributions. The demonstrator failed after about 177,000 cycles. Subsequently, a finite element analysis (FEA) was carried out in order to correlate the failure events; due to the biaxial nature of the fatigue loads, the Sines criterion was used. The analysis was performed taking into account the different materials of which the panel is composed. The numerical results show a good correlation with the experimental data, successfully predicting the failure locations on the panel.

  15. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that both grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  16. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  17. Optimized evaporation technique for leachate treatment: Small scale implementation.

    Science.gov (United States)

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose, and in order to study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first, a control tub, was installed at ground level to monitor natural evaporation. The second and third tubs, the models under investigation, were installed at ground level (equipped tub 1) and above ground level (equipped tub 2) respectively, and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped tubs was much accelerated with respect to the control tub: five times in the winter period, when the evaporation rate increased from 0.37 mm/day to 1.50 mm/day, and more than three times in the summer period, from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and accelerates the evaporation rate three to five times regardless of the season temperature. Copyright © 2016. Published by Elsevier Ltd.

  18. Development of engineering scale HLLW vitrification technique at PNC

    International Nuclear Information System (INIS)

    Nagaki, H.; Oguino, N.; Tsunoda, N.; Segawa, T.

    1979-01-01

    Some processes have been investigated to develop the technology of solidification of the high-level radioactive liquid waste generated from the nuclear fuel reprocessing plant operated by the Power Reactor and Nuclear Fuel Development Corporation (PNC) at Tokai-mura. This report covers the present state of development of a Joule-heated ceramic melter and a direct megahertz induction-heated melter. Engineering-scale tests have been performed with both melters. The Joule-heated melter could produce 45 kg or 16 liters of glass per hour. The direct-induction furnace was able to melt 5 kg or 1.8 liters of glass per hour. Both melters were composed of electrofused cast refractory brick. Thus it was possible to melt the glass at above 1200 °C. Glass produced at higher melting temperatures is generally superior. 3 figures, 2 tables

  19. Heterodyne interferometric technique for displacement control at the nanometric scale

    Science.gov (United States)

    Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick

    2003-11-01

    We propose a method of displacement control that addresses the measurement requirements of the nanotechnology community and provides traceability to the definition of the metre at the nanometric scale. The method is based on the use of both a heterodyne Michelson interferometer and a homemade high-frequency electronic circuit. The system thus established allows us to control the displacement of a translation stage with a known step of 4.945 nm. The intrinsic relative uncertainty on the step value is 1.6×10⁻⁹. Controlling the repetition period of these steps with a high-stability quartz oscillator makes it possible to impose a uniform speed on the translation stage with the same accuracy. This property will be used for the watt balance project of the Bureau National de Métrologie of France.

  20. Application of a 2-D approximation technique for solving stress analyses problem in FEM

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-10-01

    Full Text Available With the advent of computational techniques and methods like the finite element method, complex engineering problems are no longer difficult to solve. These methods have helped engineers and designers to simulate and solve engineering problems in much more detail than is possible with experimental techniques. However, applying these techniques is not a simple task, and it requires considerable acumen, understanding, and experience to obtain a solution that is as close to the exact solution as possible with minimum computer resources. In this work, stress analyses of the low-pressure turbine of a small turbofan engine are carried out with the finite element (FE) method using two different techniques. Initially, a complete solid model of the turbine is prepared and finite-element modelled with eight-node brick elements, and stresses are calculated. Subsequently, the same turbine is modelled with four-node shell elements for the calculation of stresses. Material properties, applied loads (inertial, aerodynamic, and thermal), and constraints were the same in both cases. The authors have developed a '2-D approximation technique' that approximates a 3-D problem as a 2-D problem in order to save valuable computational time and resources. In this technique, the 3-D domain of variable thickness is divided into many small areas of constant thickness, ensuring that the thickness value assigned to each sub-area is the correct representative thickness of that sub-area and lies within the three-sigma limit. The results revealed that the technique developed is accurate and saves time and computational effort: the stresses obtained by the 2-D technique are within five percent of the 3-D results, the solution is obtained in a CPU time six times less than that of the 3-D model, and the numbers of nodes and elements are more than ten times fewer than those of the 3-D model. ANSYS® was used in this work.

  1. Fine scale analyses of a coralline bank mapped using multi-beam backscatter data

    Digital Repository Service at National Institute of Oceanography (India)

    Menezes, A.A.A.; Naik, M.; Fernandes, W.A.; Haris, K.; Chakraborty, B.; Estiberio, S.; Lohani, R.B.

    In this work, we have developed a classification technique to characterize the seafloor of the Gaveshani (coralline) bank area using multi-beam backscatter data. Soft-computing techniques like artificial neural networks (ANNs) based...

  2. Application of proton-induced X-ray emission technique to gunshot residue analyses

    International Nuclear Information System (INIS)

    Sen, P.; Panigrahi, N.; Rao, M.S.; Varier, K.M.; Sen, S.; Mehta, G.K.

    1982-01-01

    The proton-induced X-ray emission (PIXE) technique was applied to the identification and analysis of gunshot residues. Studies were made of bullet type and bullet-hole identification, firearm discharge element profiles, the effect of various target backings, and hand swabbings. The discussion of the results reviews the sensitivity of the PIXE technique, its non-destructive nature, and its role in determining the distance from the gun to the victim, identifying the type of bullet used, and establishing whether or not a wound was made by a bullet. The high sensitivity of the PIXE technique, which is able to analyze samples as small as 0.1 to 1 ng, and its usefulness for detecting a variety of elements should make it particularly useful in firearms residue investigations.

  3. Improvements in technique for determining the surfactant penetration in hair fibres using scanning ion beam analyses

    International Nuclear Information System (INIS)

    Hollands, R.; Clough, A.S.; Meredith, P.

    1999-01-01

    The penetration abilities of surfactants need to be known by companies manufacturing hair-care products. In this work three complementary techniques were used simultaneously - PIXE, NRA and RBS - to measure the penetration of a surfactant, which had been deuterated, into permed hair fibres. Using a scanning micro-beam of 2 MeV ³He ions, 2-dimensional concentration maps were obtained which showed whether the surfactant penetrated the fibre or just stayed on the surface. This is the first report of the use of three simultaneous scattering techniques with a scanning micro-beam. (author)

  4. Multielemental analyses of isomorphous Indian garnet gemstones by XRD and external pixe techniques.

    Science.gov (United States)

    Venkateswarulu, P; Srinivasa Rao, K; Kasipathi, C; Ramakrishna, Y

    2012-12-01

    Garnet gemstones were collected from parts of the Eastern Ghats geological formations of Andhra Pradesh, India, and their gemological studies were carried out. Their chemistry cannot be studied directly, since they are isomorphous mixtures and no individual specimen exhibits an independent chemistry. Hence, the non-destructive instrumental methodology of the external PIXE technique was employed to establish their chemistry and identity. A 3 MeV proton beam was employed to excite the samples. In the present study, the geochemical characteristics of garnet gemstones were studied by proton-induced X-ray emission. The almandine variety of garnet is found to be abundant in the present study, based on the chemical contents. The crystal structure and the lattice parameters were estimated using X-ray diffraction (XRD) studies. The trace and minor elements were estimated using the PIXE technique, and the major compositional elements were confirmed by XRD studies. The technique is found to be very useful in characterizing garnet gemstones. The present work thus establishes the usefulness and versatility of the PIXE technique with an external beam for research in geo-scientific methodology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Analysing E-Services and Mobile Applications with Companied Conjoint Analysis and fMRI Technique

    OpenAIRE

    Heinonen, Jarmo

    2015-01-01

    Previous research has shown that neuromarketing and conjoint analysis have been used in many areas of consumer research to provide further understanding of consumer behaviour. Together these two methods may reveal more information about the hidden desires, expectations and restraints of consumers' brains. This paper attempts to examine these two research methods together as a companied analysis. More specifically, this study utilizes fMRI, and conjoint analysis is a tool for analysing consum...

  6. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
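
To make the comparison in this record concrete, the following sketch contrasts a simulated 1D biased random walk with its Patlak-type drift-diffusion approximation, u_t = -v u_x + D u_xx. It is only a minimal illustration: the step size, bias and time step are invented values, and the paper's actual models (central-place foraging, heterogeneous landscapes) are richer.

```python
import numpy as np

# Biased random walk: step +delta with probability p, -delta otherwise.
rng = np.random.default_rng(0)
delta, tau, p = 0.1, 0.01, 0.55       # step size, time step, rightward bias
n_steps, n_walkers = 1000, 5000

steps = rng.choice([delta, -delta], size=(n_walkers, n_steps), p=[p, 1 - p])
x = steps.sum(axis=1)                  # final positions of all walkers

t = n_steps * tau
v = (2 * p - 1) * delta / tau          # advection speed of the PDE limit
D = delta**2 / (2 * tau)               # diffusion coefficient (small-bias limit)

print(f"simulated mean {x.mean():.3f}  vs  PDE drift v*t  = {v * t:.3f}")
print(f"simulated var  {x.var():.3f}   vs  PDE 2*D*t      = {2 * D * t:.3f}")
# For strongly biased or non-smooth movement kernels the diffusion
# approximation degrades, echoing the paper's warning.
```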

  7. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    Directory of Open Access Journals (Sweden)

    Yntze van der Hoek

    BACKGROUND: Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. METHODOLOGY/PRINCIPAL FINDINGS: We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. CONCLUSIONS/SIGNIFICANCE: Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that

  8. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    Science.gov (United States)

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L

    2013-01-01

    Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that

  9. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    Science.gov (United States)

    2001-01-01

    This document presents full-scale CFD analyses of the reusable solid rocket motor (RSRM). The RSRM model was developed with a 20-second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) completion of 3-D analysis of the hot air nozzle manifold; (7) Bates motor distributed combustion test case; and (8) three-dimensional polysulfide bump analysis.

  10. Analyses of Effects of Cutting Parameters on Cutting Edge Temperature Using Inverse Heat Conduction Technique

    Directory of Open Access Journals (Sweden)

    Marcelo Ribeiro dos Santos

    2014-01-01

    During machining, energy is transformed into heat due to plastic deformation of the workpiece surface and friction between tool and workpiece. High temperatures are generated in the region of the cutting edge, which have a very important influence on the wear rate of the cutting tool and on tool life. This work proposes the estimation of heat flux at the chip-tool interface using inverse techniques. Factors which influence the temperature distribution at the AISI M32C high speed steel tool rake face during machining of an ABNT 12L14 steel workpiece were also investigated. The temperature distribution was predicted using finite volume elements. A transient 3D numerical code using an irregular and nonstaggered mesh was developed to solve the nonlinear heat diffusion equation. To validate the software, experimental tests were performed. The inverse problem was solved using the function specification method: heat fluxes at the tool-workpiece interface were estimated from experimental temperatures. Tests were performed to study the effect of cutting parameters on cutting edge temperature. The results were compared with those of the tool-work thermocouple technique, and a fair agreement was obtained.
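
The function specification method named in this record can be sketched as a sequential (Beck-type) estimator that recovers a surface heat flux from interior temperature readings via a linear forward model. Everything below - the 1D geometry, material constants, sensor position and the synthetic "measurements" - is an assumption for illustration, not the paper's tool-chip setup.

```python
import numpy as np

alpha, L, nx = 1e-5, 0.01, 21            # diffusivity (m^2/s), thickness (m), nodes
dx = L / (nx - 1)
dt = 0.2 * dx**2 / alpha                  # explicit-scheme stable time step
k = 50.0                                  # thermal conductivity (W/m/K)
sensor, r = 5, 4                          # sensor node index, future time steps

def step(T, q):
    """Advance one time step; flux q enters at x=0, x=L is insulated."""
    lam = alpha * dt / dx**2
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + lam * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T[0] + 2 * lam * (T[1] - T[0] + q * dx / k)   # ghost-node flux BC
    Tn[-1] = T[-1] + 2 * lam * (T[-2] - T[-1])            # insulated end
    return Tn

# Sensor sensitivity to a unit flux held over the next j = 1..r steps
# (valid by superposition because the forward problem is linear).
phi, Tz = [], np.zeros(nx)
for _ in range(r):
    Tz = step(Tz, 1.0)
    phi.append(Tz[sensor])
phi = np.array(phi)

# Synthetic "experiment": a triangular flux pulse plus measurement noise.
nt = 300
q_true = np.interp(np.arange(nt), [0, 100, 200, nt], [0.0, 2e5, 0.0, 0.0])
rng = np.random.default_rng(1)
T, Y = np.zeros(nx), []
for n in range(nt):
    T = step(T, q_true[n])
    Y.append(T[sensor] + rng.normal(0.0, 0.05))

# Sequential estimation: least-squares match of the next r measurements.
T, q_est = np.zeros(nx), []
for n in range(nt - r):
    T0, pred = T.copy(), []
    for _ in range(r):
        T0 = step(T0, 0.0)                # prediction with zero future flux
        pred.append(T0[sensor])
    q = phi @ (np.array(Y[n:n + r]) - np.array(pred)) / (phi @ phi)
    q_est.append(q)
    T = step(T, q)                        # advance with the estimated flux

print("peak true flux %.3g W/m^2, peak estimate %.3g W/m^2"
      % (q_true.max(), max(q_est)))
```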

  11. XRF analyses for the study of painting technique and degradation on frescoes by Beato Angelico: first results

    International Nuclear Information System (INIS)

    Mazzinghi, A.

    2014-01-01

    Beato Angelico is one of the most important Italian painters of the Renaissance period; in particular, he was a master of the so-called 'buon fresco' technique for mural paintings. A wide diagnostic campaign with X-ray fluorescence (XRF) analyses has been carried out on three masterworks painted by Beato Angelico in the San Marco monastery in Florence: the 'Crocifissione con Santi', the 'Annunciazione' and the 'Madonna delle Ombre'. The latter is painted by mixing fresco and secco techniques, which makes it of particular interest for the study of two different painting techniques of the same artist. The aim of the study was therefore the characterization of the painting palette, and thereby the painting techniques, used by Beato Angelico. Moreover, the conservators were interested in the study of degradation processes and old restoration treatments. Our analyses have been carried out by means of the XRF spectrometer developed at the LABEC laboratory of the Istituto Nazionale di Fisica Nucleare in Florence (Italy). XRF is indeed especially suited for this kind of study, allowing multi-elemental, nondestructive, non-invasive analyses in a short time with portable instruments. In this paper the first results of the XRF analysis are presented.

  12. Principal Components Analyses of the MMPI-2 PSY-5 Scales: Identification of Facet Subscales

    Science.gov (United States)

    Arnau, Randolph C.; Handel, Richard W.; Archer, Robert P.

    2005-01-01

    The Personality Psychopathology Five (PSY-5) is a five-factor personality trait model designed for assessing personality pathology using quantitative dimensions. Harkness, McNulty, and Ben-Porath developed Minnesota Multiphasic Personality Inventory-2 (MMPI-2) scales based on the PSY-5 model, and these scales were recently added to the standard…

  13. Analyses of inks and papers in historical documents through external beam PIXE techniques

    International Nuclear Information System (INIS)

    Cahill, T.A.; Kusko, B.; California Univ., Davis; Schwab, R.N.

    1981-01-01

    PIXE analyses of documents can be carried out to high sensitivity in an external beam configuration designed to protect historical materials from damage. Test runs have shown that a properly designed system with a high solid angle can operate at less than 1% of the flux necessary to cause any discoloration whatsoever on papers of the 17th and 18th centuries. The composition of these papers is surprisingly complex, yet retains a distinct association with the historical period, the paper source, and even the individual sheets of paper that are folded and cut to make groups of pages. Early studies are planned on historical forgeries. (orig.)

  14. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    Science.gov (United States)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity: to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
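
A minimal sketch of the just-in-time, server-side computation model described here, using the public Earth Engine Python API; the collection ID, filters, dates and location are illustrative choices, and authentication/project setup may be required before ee.Initialize() succeeds.

```python
import ee

ee.Initialize()  # may require prior ee.Authenticate() / project configuration

point = ee.Geometry.Point(-122.26, 37.87)          # illustrative location
collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterBounds(point)
              .filterDate('2020-01-01', '2021-01-01')
              .filter(ee.Filter.lt('CLOUD_COVER', 20)))

# A per-pixel median composite: the reduction is expressed lazily and
# executed in parallel in Google's datacenters, not on the client.
composite = collection.median()
print('scenes used:', collection.size().getInfo())
```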

  15. TA3 - Dosimetry and instrumentation supply of the M-Fish technique to the Fish-3 painting technique for analysing translocations: A radiotherapy-treated patient study

    International Nuclear Information System (INIS)

    Pouzoulet, F.; Roch-Lefevre, S.; Giraudet, A.L.; Vaurijoux, A.; Voisin, P.A.; Buard, V.; Delbos, M.; Voisin, Ph.; Roy, L.; Bourhis, J.

    2006-01-01

    Purpose: Currently, the study of chromosome translocations is the best method to estimate the dose from an old radiation exposure. The Fluorescent In Situ Hybridization (F.I.S.H.) technique allows easy detection of this kind of aberration. However, as only a small number of chromosomes is usually painted, some bias could skew the result. To evaluate the advantage of full genome staining (the M-F.I.S.H. technique) compared with three-chromosome labelling (F.I.S.H.-3 painting), we compared translocation yields in radiotherapy-treated patients. Methods: Chromosome aberration analyses were performed on peripheral blood lymphocyte cultures from two patients treated for throat cancer by radiotherapy. Blood samples were obtained before and during the treatment and six or four months later. For each sample, a dicentrics analysis was performed together with a translocation analysis using either F.I.S.H.-3 painting or M-F.I.S.H. Results: Confronting the results from the two techniques revealed significant differences. The translocation yield appeared stable with the F.I.S.H.-3 painting technique, whereas this was not the case with M-F.I.S.H. This difference was explained by the bias F.I.S.H.-3 painting induces in the visualisation of complex aberrations. Furthermore, we found a clone bearing a translocation involving a painted chromosome. Conclusions: Given the potential bias of F.I.S.H.-3 painting in translocation studies, the M-F.I.S.H. technique should provide more precise and reproducible results. Because of its more difficult implementation, however, it seems hardly applicable to retrospective dosimetry in place of the F.I.S.H.-3 painting technique. (authors)

  16. TA3 - Dosimetry and instrumentation supply of the M-Fish technique to the Fish-3 painting technique for analysing translocations: A radiotherapy-treated patient study

    Energy Technology Data Exchange (ETDEWEB)

    Pouzoulet, F.; Roch-Lefevre, S.; Giraudet, A.L.; Vaurijoux, A.; Voisin, P.A.; Buard, V.; Delbos, M.; Voisin, Ph.; Roy, L. [Institut de Radioprotection et de Surete Nucleaire, Lab. de Dosimetrie Biologique, 92 - Fontenay aux Roses (France); Bourhis, J. [Laboratoire UPRES EA 27-10, Radiosensibilite des Tumeurs et Tissus sains, PR1, 94 - Villejuif (France)

    2006-07-01

    Purpose: Currently, the study of chromosome translocations is the best method to estimate the dose from an old radiation exposure. The Fluorescent In Situ Hybridization (F.I.S.H.) technique allows easy detection of this kind of aberration. However, as only a small number of chromosomes is usually painted, some bias could skew the result. To evaluate the advantage of full genome staining (the M-F.I.S.H. technique) compared with three-chromosome labelling (F.I.S.H.-3 painting), we compared translocation yields in radiotherapy-treated patients. Methods: Chromosome aberration analyses were performed on peripheral blood lymphocyte cultures from two patients treated for throat cancer by radiotherapy. Blood samples were obtained before and during the treatment and six or four months later. For each sample, a dicentrics analysis was performed together with a translocation analysis using either F.I.S.H.-3 painting or M-F.I.S.H. Results: Confronting the results from the two techniques revealed significant differences. The translocation yield appeared stable with the F.I.S.H.-3 painting technique, whereas this was not the case with M-F.I.S.H. This difference was explained by the bias F.I.S.H.-3 painting induces in the visualisation of complex aberrations. Furthermore, we found a clone bearing a translocation involving a painted chromosome. Conclusions: Given the potential bias of F.I.S.H.-3 painting in translocation studies, the M-F.I.S.H. technique should provide more precise and reproducible results. Because of its more difficult implementation, however, it seems hardly applicable to retrospective dosimetry in place of the F.I.S.H.-3 painting technique. (authors)
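
One quantitative point of contact between partial painting and full-genome staining is the genome-equivalent correction commonly attributed to Lucas et al. (1992). The records above do not state that this correction was used, so the sketch below is only an assumed illustration of how a F.I.S.H.-3 painting yield is usually extrapolated to a whole-genome yield; all numbers are invented.

```python
# Genome-equivalent conversion (Lucas et al., 1992 form, assumed here):
#   F_G = F_p / (2.05 * f_p * (1 - f_p)),
# where f_p is the fraction of the genome covered by the painted chromosomes.
def genome_equivalent(F_p: float, f_p: float) -> float:
    """Full-genome translocation frequency from a partial-painting yield."""
    return F_p / (2.05 * f_p * (1.0 - f_p))

f_p = 0.20   # e.g. three large chromosomes covering ~20% of the genome (assumed)
F_p = 0.015  # translocations per cell scored in the painted fraction (assumed)
print(f"genome-equivalent yield: {genome_equivalent(F_p, f_p):.4f} per cell")
# M-F.I.S.H. stains the whole genome, so its yields need no such correction --
# one reason the two assays can disagree, as the study reports.
```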

  17. A novel interferometric characterization technique for 3D analyses at high pressures and temperatures

    Science.gov (United States)

    Roshanghias, Ali; Bardong, Jochen; Pulko, Jozef; Binder, Alfred

    2018-04-01

    Advanced optical measurement techniques are always of interest for the characterization of engineered surfaces. When pressure or temperature modules are also incorporated, these techniques turn into robust and versatile methodologies for various applications such as performance monitoring of devices in service conditions. However, some microelectromechanical systems (MEMS) and MOEMS devices require performance monitoring at their final stage, i.e. enclosed or packaged. This necessitates measurements through a protective liquid, plastic, or glass, whereas conventional objective lenses are not designed for such media. Correspondingly, in the current study, the development and tailoring of a 3D interferometer as a means for measuring the topography of reflective surfaces under transmissive media is pursued. For topography measurements through glass, water and oil, compensation glass plates were designed and incorporated into the Michelson-type interferometer objectives. Moreover, a customized chamber set-up featuring optical access for the observation of topographical changes at increasing pressure and temperature was constructed and integrated into the apparatus. In conclusion, in situ monitoring of the elastic deformation of sensing microstructures inside MEMS packages was achieved. These measurements were performed at defined pressures (0–100 bar) and temperatures (25 °C–180 °C).

  18. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  19. Analysing conflicts around small-scale gold mining in the Amazon : The contribution of a multi-temporal model

    NARCIS (Netherlands)

    Salman, Ton; de Theije, Marjo

    Conflict is small-scale gold mining's middle name. In only a very few situations do mining operations take place without some sort of conflict accompanying the activity, and often various conflicting stakeholders struggle for their interests simultaneously. Analyses of such conflicts are typically

  20. Prompt nuclear analytical techniques for material research in accelerator driven transmutation technologies: Prospects and quantitative analyses

    International Nuclear Information System (INIS)

    Vacik, J.; Hnatowicz, V.; Cervena, J.; Perina, V.; Mach, R.

    1998-01-01

    Accelerator driven transmutation technology (ADTT) is a promising way toward the liquidation of spent nuclear fuel, nuclear wastes and weapons-grade Pu. The ADTT facility comprises a high-current (proton) accelerator supplying a sub-critical reactor assembly with spallation neutrons. The reactor part is supposed to be cooled by molten fluorides or metals, which serve at the same time as a carrier of nuclear fuel. The assumed high working temperature (400-600 °C) and high radiation load in the subcritical reactor and spallation neutron source put forward the problem of the optimal choice of ADTT construction materials, especially from the point of view of their radiation and corrosion resistance when in contact with liquid working media. The use of prompt nuclear analytical techniques in ADTT-related material research is considered, and examples of preliminary analytical results obtained using the neutron depth profiling method are shown for illustration. (orig.)

  1. A reduced scale two loop PWR core designed with particle swarm optimization technique

    International Nuclear Information System (INIS)

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A.; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or equipment with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and at operating conditions is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations using the Particle Swarm Optimization (PSO) technique to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. The results obtained show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)
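
A minimal particle swarm optimization sketch in the spirit of this record is given below. The objective function is a placeholder: the actual design problem scores how well the reduced scale circuit reproduces the full-scale dimensionless groups, which is abstracted here as f(x) over a vector of design variables.

```python
import numpy as np

rng = np.random.default_rng(42)

def f(x):
    """Placeholder objective (sphere); swap in a scaling-distortion metric."""
    return np.sum(x**2, axis=-1)

n_particles, n_dims, n_iter = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social weights

x = rng.uniform(-5, 5, (n_particles, n_dims))  # positions (design variables)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), f(x)              # personal bests
gbest = pbest[pbest_val.argmin()].copy()       # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dims))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = f(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best design found:", gbest, "objective:", pbest_val.min())
```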

  2. Development of triple scale finite element analyses based on crystallographic homogenization methods

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2004-01-01

    Crystallographic homogenization procedures are implemented in a piezoelectric and elastic-crystalline plastic finite element (FE) code to assess the macro-continuum properties of piezoelectric ceramics and of BCC and FCC sheet metals. The triple scale hierarchical structure consists of an atom cluster, a crystal aggregation and a macro-continuum. In this paper, we focus on a triple scale numerical analysis for piezoelectric material and apply it to assess a macro-continuum material property. First, we calculate material properties of the Perovskite crystal of the piezoelectric material XYO3 (such as BaTiO3 and PbTiO3) by employing the ab-initio molecular analysis code CASTEP. Next, measured results of SEM and EBSD observations of crystal orientation distributions, shapes and boundaries of a real material (BaTiO3) are employed to define the inhomogeneity of the crystal aggregation, which corresponds to a unit cell of the micro-structure and satisfies the periodicity condition. This procedure is featured as a first scaling up, from the molecular level to the crystal aggregation. Finally, the conventional homogenization procedure is implemented in the FE code to evaluate a macro-continuum property. This final procedure is featured as a second scaling up, from the crystal aggregation (unit cell) to the macro-continuum. This triple scale analysis is applied to the design of piezoelectric ceramics and finds an optimum crystal orientation distribution, in which the macroscopic piezoelectric constant d33 has a maximum value.

  3. Stable isotope analyses of feather amino acids identify penguin migration strategies at ocean basin scales.

    Science.gov (United States)

    Polito, Michael J; Hinke, Jefferson T; Hart, Tom; Santos, Mercedes; Houghton, Leah A; Thorrold, Simon R

    2017-08-01

    Identifying the at-sea distribution of wide-ranging marine predators is critical to understanding their ecology. Advances in electronic tracking devices and intrinsic biogeochemical markers have greatly improved our ability to track animal movements on ocean-wide scales. Here, we show that, in combination with direct tracking, stable carbon isotope analysis of essential amino acids in tail feathers provides the ability to track the movement patterns of two wide-ranging penguin species over ocean basin scales. In addition, we use this isotopic approach across multiple breeding colonies in the Scotia Arc to evaluate migration trends at a regional scale that would be logistically challenging using direct tracking alone. © 2017 The Author(s).

  4. Recent Regional Climate State and Change - Derived through Downscaling Homogeneous Large-scale Components of Re-analyses

    Science.gov (United States)

    Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.

    2015-12-01

    Global re-analyses suffer from inhomogeneities, as they process data from networks under development. However, the large-scale component of such re-analyses is mostly homogeneous; additional observational data add in most cases to a better description of regional details and less so of large-scale states. Therefore, the concept of downscaling may be applied to complement the large-scale state of the re-analyses homogeneously with regional detail - wherever the condition of homogeneity of the large scales is fulfilled. Technically this can be done by using a regional climate model, or a global climate model, which is constrained on the large scale by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional risks - in particular marine risks - was identified. While the data density in Europe is considerably better than in most other regions of the world, even here insufficient spatial and temporal coverage limits risk assessments. Therefore, downscaled data sets are frequently used by offshore industries. We have run this system also in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia and the adjacent Atlantic Ocean. A global (large-scale constrained) simulation has also been performed. It turns out that a spatially detailed reconstruction of the state and change of climate in the past three to six decades is doable for any region of the world. The different data sets are archived and may freely be used for scientific purposes. Of course, before application, a careful analysis of the quality for the intended application is needed, as sometimes unexpected changes in the quality of the description of large-scale driving states prevail.
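
The spectral nudging mentioned in this record can be illustrated with a toy 1D example: only the low-wavenumber part of a model field is relaxed toward the driving re-analysis, leaving regional detail free. The field, wavenumber cutoff and nudging strength below are invented for illustration; real implementations act on 2D limited-area or spherical fields.

```python
import numpy as np

def spectral_nudge(model, driving, n_keep, eta=0.1):
    """Nudge wavenumbers below n_keep of `model` toward `driving`."""
    fm, fd = np.fft.rfft(model), np.fft.rfft(driving)
    fm[:n_keep] = (1 - eta) * fm[:n_keep] + eta * fd[:n_keep]
    return np.fft.irfft(fm, n=model.size)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
driving = np.sin(x)                             # homogeneous large-scale state
model = np.sin(x + 0.3) + 0.2 * np.sin(20 * x)  # drifted large scale + detail

nudged = spectral_nudge(model, driving, n_keep=4)
# The large scale moves toward the re-analysis; the 20-cycle detail stays free.
print("large-scale misfit before: %.3f  after: %.3f"
      % (np.abs(np.fft.rfft(model)[1] - np.fft.rfft(driving)[1]),
         np.abs(np.fft.rfft(nudged)[1] - np.fft.rfft(driving)[1])))
```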

  5. Analyses of archaeological pottery samples using X-ray fluorescence technique for provenance study

    International Nuclear Information System (INIS)

    Tamilarasu, S.; Swain, K.K.; Singhal, R.K; Reddy, A.V.R.; Acharya, R.; Velraj, G.

    2015-01-01

    Archaeological artifacts reveal information on past human activities, artifact preparation technology, art and possible trade. Ceramics are the most stable and abundant material in archaeological contexts, and pottery is among the most abundant tracers in all archaeological excavations. Compared to major elements, elements present at trace concentration levels are source specific, and they maintain the same concentration levels in the source clay as in the finished products, e.g., fired clay potteries. As it is difficult to find the exact source or origin, a provenance study is carried out first to establish whether the objects under study are from the same or different sources. Various analytical techniques such as instrumental neutron activation analysis (INAA), ion beam analysis (IBA) and X-ray fluorescence (XRF) have been used for obtaining elemental concentrations in archaeological potteries. Portable X-ray fluorescence (pXRF) spectrometry provides a non-destructive means for elemental characterization of a wide range of archaeological materials. Ten archaeological pottery samples were collected from Kottapuram, Kerala under the supervision of the Archaeological Survey of India. pXRF spectrometry using a handheld Olympus Innov-X Delta XRF device (ACD, BARC) was used for chemical characterization of the pottery samples. The instrument is equipped with a rhodium (Rh) anode X-ray tube and uses a silicon drift detector (resolution <200 eV at the 5.95 keV Mn Kα X-ray). The NIST 2781 SRM was analyzed for quality control purposes. Ten elements, namely Fe, Ti, Mn, Co, Cu, Zn, Pb, Zr, Mo and Se, were chosen for cluster analysis, and their concentration values were utilized for multivariate statistical analysis using WinSTAT 9.0
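
The multivariate provenance step described in this record (cluster analysis on elemental concentrations) can be sketched with hierarchical Ward clustering, as below. The concentration matrix is fabricated for illustration, standing in for the ten measured elements; the study itself used WinSTAT.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Fabricated concentrations: two hypothetical clay sources, 3 elements each.
rng = np.random.default_rng(7)
group_a = rng.normal([5.0, 0.6, 0.1], 0.05, size=(5, 3))
group_b = rng.normal([3.2, 0.9, 0.3], 0.05, size=(5, 3))
conc = np.vstack([group_a, group_b])            # 10 sherds x 3 elements

z = (conc - conc.mean(axis=0)) / conc.std(axis=0)  # standardize each element
tree = linkage(z, method='ward')                   # Ward hierarchical clustering
labels = fcluster(tree, t=2, criterion='maxclust')
print("cluster membership per sherd:", labels)
# Sherds falling in one cluster are consistent with a common clay source.
```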

  6. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    Science.gov (United States)

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the ability of VASs to overcome the limitations of response styles in Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as the fact that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that the VAS-RRP improved reliability and parameter recovery and reduced response-style bias. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.
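
The core idea of the VAS-RRP - that one set of continuous marks yields rating, ranking and paired-comparison data at once - can be sketched in a few lines; the marks below are invented values on an assumed 0-100 line for four items.

```python
import numpy as np

marks = np.array([72.5, 15.0, 88.0, 40.5])      # one respondent's VAS marks

ratings = marks                                  # ratings: the marks themselves
ranking = np.argsort(-marks)                     # ranking: items best-to-worst
paired = (marks[:, None] > marks[None, :]).astype(int)  # 1 if i beats j

print("ranking (item indices):", ranking)
print("paired-comparison matrix:\n", paired)
# Unlike classical ipsative rankings, the marks retain interval information,
# so standard statistical techniques can still be applied to the ratings.
```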

  7. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    International Nuclear Information System (INIS)

    Guambe, J.F.; Mars, J.A.; Day, J.

    2013-01-01

    Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals who reside nearby. Fish scales have been proven to be applicable in monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the incorporation of contaminants into the scale structure will differ as well. There is a need to monitor contaminants which can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  8. Application of IBA in the comparative analyses of fish scales used as biomonitors in the Matola River, Mozambique

    Energy Technology Data Exchange (ETDEWEB)

    Guambe, J.F. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa); Physics Department, Eduardo Mondlane Universily, PO Box 257, Maputo (Mozambique); Materials Research Department, iThemba LABS, PO Box 722, Somerset West, 7129 (South Africa); Mars, J.A. [Faculty of Health and Wellness Sciences, Cape Peninsula University of Technology, PO Box 1906, Bellville, 7535 (South Africa); Day, J. [Freshwater Research Unit, Department of Zoology, University of Cape Town, Private Bag, Rondebosch, 7701 (South Africa)

    2013-07-01

    Many natural resources are invariably contaminated by industries located on their periphery. Moreover, fish found in these resources are used as dietary supplements, especially by individuals who reside nearby. Fish scales have been proven to be applicable in monitoring contamination of natural resources. However, the morphology and chemical composition of the scales of various species differ to a significant degree; consequently, the incorporation of contaminants into the scale structure will differ as well. There is a need to monitor contaminants which can harm the biota. To quantify the degree of incorporation into the scale matrix we have analysed, using PIXE, RBS and SEM, the scales of four fish species: Pomadasys kaakan, the javelin grunter; Lutjanus gibbus, the humpback red snapper; Pinjalo pinjalo, the pinjalo; and Lithognathus mormyrus, the sand steenbras. In this work we report on the viability of using various fish scales as monitors of natural resource contamination. (author)

  9. Spectral analyses of the Forel-Ule Ocean colour comparator scale

    NARCIS (Netherlands)

    Wernand, M.; van der Woerd, H.J.

    2010-01-01

    François Alphonse Forel (1890) and Willi Ule (1892) composed a colour comparator scale, with tints varying from indigo-blue to cola brown, to quantify the colour of natural waters, like seas, lakes and rivers. For each measurement, the observer compares the colour of the water above a submersed

  10. The ENIGMA Consortium : large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Boen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Hartman, Catharina A.; Hoekstra, Pieter J.; Penninx, Brenda W.; Schmaal, Lianne; van Tol, Marie-Jose

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,

  11. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    Thompson, Paul M.; Stein, Jason L.; Medland, Sarah E.; Hibar, Derrek P.; Vasquez, Alejandro Arias; Renteria, Miguel E.; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J.; Martin, Nicholas G.; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C.; Andreassen, Ole A.; Apostolova, Liana G.; Appel, Katja; Armstrong, Nicola J.; Aribisala, Benjamin; Bastin, Mark E.; Bauer, Michael; Bearden, Carrie E.; Bergmann, Orjan; Binder, Elisabeth B.; Blangero, John; Bockholt, Henry J.; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I.; Booth, Tom; Bowman, Ian J.; Bralten, Janita; Brouwer, Rachel M.; Brunner, Han G.; Brohawn, David G.; Buckner, Randy L.; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R.; Calhoun, Vince D.; Cannon, Dara M.; Cantor, Rita M.; Carless, Melanie A.; Caseras, Xavier; Cavalleri, Gianpiero L.; Chakravarty, M. Mallar; Chang, Kiki D.; Ching, Christopher R. K.; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P.; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E.; Czisch, Michael; Deary, Ian J.; de Geus, Eco J. C.; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I.; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D.; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E.; Foroud, Tatiana; Fox, Peter T.; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C.; Godlewska, Beata; Goldstein, Rita Z.; Gollub, Randy L.; Grabe, Hans J.; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E.; Gur, Ruben C.; Göring, Harald H. H.; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B.; Hall, Jeremy; Hardy, John; Hartman, Catharina A.; Hass, Johanna; Hatton, Sean N.; Haukvik, Unn K.; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B.; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J.; Hollinshead, Marisa; Holmes, Avram J.; Homuth, Georg; Hoogman, Martine; Hong, L. Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E.; Hwang, Kristy S.; Jack, Clifford R.; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G.; Kahn, René S.; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B. J.; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A.; Lauriello, John; Lawrie, Stephen M.; Lee, Phil H.; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D.; Li, Chiang-Shan; Liberg, Benny; Liewald, David C.; Liu, Xinmin; Lopez, Lorna M.; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W. 
J.; Macqueen, Glenda M.; Malt, Ulrik F.; Mandl, René; Manoach, Dara S.; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A.; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M.; McMahon, Francis J.; McMahon, Katie L.; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W.; Morris, Derek W.; Moses, Eric K.; Mueller, Bryon A.; Muñoz Maniega, Susana; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E.; Nilsson, Lars-Göran; Nugent, Allison C.; Nyberg, Lars; Olvera, Rene L.; Oosterlaan, Jaap; Ophoff, Roel A.; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D.; Penninx, Brenda W.; Peterson, Charles P.; Pfennig, Andrea; Phillips, Mary; Pike, G. Bruce; Poline, Jean-Baptiste; Potkin, Steven G.; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L.; Roffman, Joshua L.; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J.; Royle, Natalie A.; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S.; Salami, Alireza; Satterthwaite, Theodore D.; Savitz, Jonathan; Saykin, Andrew J.; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G.; Schork, Andrew J.; Schulz, S. Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M.; Simmons, Andrew; Sisodiya, Sanjay M.; Smith, Colin; Smoller, Jordan W.; Soares, Jair C.; Sponheim, Scott R.; Sprooten, Emma; Starr, John M.; Steen, Vidar M.; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G.; Teumer, Alexander; Toga, Arthur W.; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; van den Heuvel, Martijn; van der Wee, Nic J.; van Eijk, Kristel; van Erp, Theo G. M.; van Haren, Neeltje E. M.; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C.; Veltman, Dick J.; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M.; Weale, Michael E.; Weiner, Michael W.; Wen, Wei; Westlye, Lars T.; Whalley, Heather C.; Whelan, Christopher D.; White, Tonya; Winkler, Anderson M.; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P.; Thalamuthu, Anbupalam; Schofield, Peter R.; Freimer, Nelson B.; Lawrence, Natalia S.; Drevets, Wayne

    2014-01-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience,

  12. The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data

    NARCIS (Netherlands)

    P.M. Thompson (Paul); J.L. Stein; S.E. Medland (Sarah Elizabeth); D.P. Hibar (Derrek); A.A. Vásquez (Arias); M.E. Rentería (Miguel); R. Toro (Roberto); N. Jahanshad (Neda); G. Schumann (Gunter); B. Franke (Barbara); M.J. Wright (Margaret); N.G. Martin (Nicholas); I. Agartz (Ingrid); M. Alda (Martin); S. Alhusaini (Saud); L. Almasy (Laura); K. Alpert (Kathryn); N.C. Andreasen; O.A. Andreassen (Ole); L.G. Apostolova (Liana); K. Appel (Katja); N.J. Armstrong (Nicola); B. Aribisala (Benjamin); M.E. Bastin (Mark); M. Bauer (Michael); C.E. Bearden (Carrie); Ø. Bergmann (Ørjan); E.B. Binder (Elisabeth); J. Blangero (John); H.J. Bockholt; E. Bøen (Erlend); M. Bois (Monique); D.I. Boomsma (Dorret); T. Booth (Tom); I.J. Bowman (Ian); L.B.C. Bralten (Linda); R.M. Brouwer (Rachel); H.G. Brunner; D.G. Brohawn (David); M. Buckner; J.K. Buitelaar (Jan); K. Bulayeva (Kazima); J. Bustillo; V.D. Calhoun (Vince); D.M. Cannon (Dara); R.M. Cantor; M.A. Carless (Melanie); X. Caseras (Xavier); G. Cavalleri (Gianpiero); M.M. Chakravarty (M. Mallar); K.D. Chang (Kiki); C.R.K. Ching (Christopher); A. Christoforou (Andrea); S. Cichon (Sven); V.P. Clark; P. Conrod (Patricia); D. Coppola (Domenico); B. Crespo-Facorro (Benedicto); J.E. Curran (Joanne); M. Czisch (Michael); I.J. Deary (Ian); E.J.C. de Geus (Eco); A. den Braber (Anouk); G. Delvecchio (Giuseppe); C. Depondt (Chantal); L. de Haan (Lieuwe); G.I. de Zubicaray (Greig); D. Dima (Danai); R. Dimitrova (Rali); S. Djurovic (Srdjan); H. Dong (Hongwei); D.J. Donohoe (Dennis); A. Duggirala (Aparna); M.D. Dyer (Matthew); S.M. Ehrlich (Stefan); C.J. Ekman (Carl Johan); T. Elvsåshagen (Torbjørn); L. Emsell (Louise); S. Erk; T. Espeseth (Thomas); J. Fagerness (Jesen); S. Fears (Scott); I. Fedko (Iryna); G. Fernandez (Guillén); S.E. Fisher (Simon); T. Foroud (Tatiana); P.T. Fox (Peter); C. Francks (Clyde); S. Frangou (Sophia); E.M. Frey (Eva Maria); T. Frodl (Thomas); V. Frouin (Vincent); H. Garavan (Hugh); S. Giddaluru (Sudheer); D.C. Glahn (David); B. Godlewska (Beata); R.Z. Goldstein (Rita); R.L. Gollub (Randy); H.J. Grabe (Hans Jörgen); O. Grimm (Oliver); O. Gruber (Oliver); T. Guadalupe (Tulio); R.E. Gur (Raquel); R.C. Gur (Ruben); H.H.H. Göring (Harald); S. Hagenaars (Saskia); T. Hajek (Tomas); G.B. Hall (Garry); J. Hall (Jeremy); J. Hardy (John); C.A. Hartman (Catharina); J. Hass (Johanna); W. Hatton; U.K. Haukvik (Unn); K. Hegenscheid (Katrin); J. Heinz (Judith); I.B. Hickie (Ian); B.C. Ho (Beng ); D. Hoehn (David); P.J. Hoekstra (Pieter); M. Hollinshead (Marisa); A.J. Holmes (Avram); G. Homuth (Georg); M. Hoogman (Martine); L.E. Hong (L.Elliot); N. Hosten (Norbert); J.J. Hottenga (Jouke Jan); H.E. Hulshoff Pol (Hilleke); K.S. Hwang (Kristy); C.R. Jack Jr. (Clifford); S. Jenkinson (Sarah); C. Johnston; E.G. Jönsson (Erik); R.S. Kahn (René); D. Kasperaviciute (Dalia); S. Kelly (Steve); S. Kim (Shinseog); P. Kochunov (Peter); L. Koenders (Laura); B. Krämer (Bernd); J.B.J. Kwok (John); J. Lagopoulos (Jim); G. Laje (Gonzalo); M. Landén (Mikael); B.A. Landman (Bennett); J. Lauriello; S. Lawrie (Stephen); P.H. Lee (Phil); S. Le Hellard (Stephanie); H. Lemaître (Herve); C.D. Leonardo (Cassandra); C.-S. Li (Chiang-shan); B. Liberg (Benny); D.C. Liewald (David C.); X. Liu (Xinmin); L.M. Lopez (Lorna); E. Loth (Eva); A. Lourdusamy (Anbarasu); M. Luciano (Michelle); F. MacCiardi (Fabio); M.W.J. Machielsen (Marise); G.M. MacQueen (Glenda); U.F. Malt (Ulrik); R. Mandl (René); D.S. Manoach (Dara); J.-L. Martinot (Jean-Luc); M. Matarin (Mar); R. Mather; M. 
Mattheisen (Manuel); M. Mattingsdal (Morten); A. Meyer-Lindenberg; C. McDonald (Colm); A.M. McIntosh (Andrew); F.J. Mcmahon (Francis J); K.L. Mcmahon (Katie); E. Meisenzahl (Eva); I. Melle (Ingrid); Y. Milaneschi (Yuri); S. Mohnke (Sebastian); G.W. Montgomery (Grant); D.W. Morris (Derek W); E.K. Moses (Eric); B.A. Mueller (Bryon ); S. Muñoz Maniega (Susana); T.W. Mühleisen (Thomas); B. Müller-Myhsok (Bertram); B. Mwangi (Benson); M. Nauck (Matthias); K. Nho (Kwangsik); T.E. Nichols (Thomas); L.G. Nilsson; A.C. Nugent (Allison); L. Nyberg (Lisa); R.L. Olvera (Rene); J. Oosterlaan (Jaap); R.A. Ophoff (Roel); M. Pandolfo (Massimo); M. Papalampropoulou-Tsiridou (Melina); M. Papmeyer (Martina); T. Paus (Tomas); Z. Pausova (Zdenka); G. Pearlson (Godfrey); B.W.J.H. Penninx (Brenda); C.P. Peterson (Charles); A. Pfennig (Andrea); M. Phillips (Mary); G.B. Pike (G Bruce); J.B. Poline (Jean Baptiste); S.G. Potkin (Steven); B. Pütz (Benno); A. Ramasamy (Adaikalavan); J. Rasmussen (Jerod); M. Rietschel (Marcella); M. Rijpkema (Mark); S.L. Risacher (Shannon); J.L. Roffman (Joshua); R. Roiz-Santiañez (Roberto); N. Romanczuk-Seiferth (Nina); E.J. Rose (Emma); N.A. Royle (Natalie); D. Rujescu (Dan); M. Ryten (Mina); P.S. Sachdev (Perminder); A. Salami (Alireza); T.D. Satterthwaite (Theodore); J. Savitz (Jonathan); A.J. Saykin (Andrew); C. Scanlon (Cathy); L. Schmaal (Lianne); H. Schnack (Hugo); N.J. Schork (Nicholas); S.C. Schulz (S.Charles); R. Schür (Remmelt); L.J. Seidman (Larry); L. Shen (Li); L. Shoemaker (Lawrence); A. Simmons (Andrew); S.M. Sisodiya (Sanjay); C. Smith (Colin); J.W. Smoller; J.C. Soares (Jair); S.R. Sponheim (Scott); R. Sprooten (Roy); J.M. Starr (John); V.M. Steen (Vidar); S. Strakowski (Stephen); L.T. Strike (Lachlan); J. Sussmann (Jessika); P.G. Sämann (Philipp); A. Teumer (Alexander); A.W. Toga (Arthur); D. Tordesillas-Gutierrez (Diana); D. Trabzuni (Danyah); S. Trost (Sarah); J. Turner (Jessica); M. van den Heuvel (Martijn); N.J. van der Wee (Nic); K.R. van Eijk (Kristel); T.G.M. van Erp (Theo G.); N.E.M. van Haren (Neeltje E.); D. van 't Ent (Dennis); M.J.D. van Tol (Marie-José); M.C. Valdés Hernández (Maria); D.J. Veltman (Dick); A. Versace (Amelia); H. Völzke (Henry); R. Walker (Robert); H.J. Walter (Henrik); L. Wang (Lei); J.M. Wardlaw (J.); M.E. Weale (Michael); M.W. Weiner (Michael); W. Wen (Wei); L.T. Westlye (Lars); H.C. Whalley (Heather); C.D. Whelan (Christopher); T.J.H. White (Tonya); A.M. Winkler (Anderson); K. Wittfeld (Katharina); G. Woldehawariat (Girma); A. Björnsson (Asgeir); D. Zilles (David); M.P. Zwiers (Marcel); A. Thalamuthu (Anbupalam); J.R. Almeida (Jorge); C.J. Schofield (Christopher); N.B. Freimer (Nelson); N.S. Lawrence (Natalia); D.A. Drevets (Douglas)

    2014-01-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in

  13. Bifactor and Item Response Theory Analyses of Interviewer Report Scales of Cognitive Impairment in Schizophrenia

    Science.gov (United States)

    Reise, Steven P.; Ventura, Joseph; Keefe, Richard S. E.; Baade, Lyle E.; Gold, James M.; Green, Michael F.; Kern, Robert S.; Mesholam-Gately, Raquelle; Nuechterlein, Keith H.; Seidman, Larry J.; Bilder, Robert

    2011-01-01

    A psychometric analysis of 2 interview-based measures of cognitive deficits was conducted: the 21-item Clinical Global Impression of Cognition in Schizophrenia (CGI-CogS; Ventura et al., 2008), and the 20-item Schizophrenia Cognition Rating Scale (SCoRS; Keefe et al., 2006), which were administered on 2 occasions to a sample of people with…

  14. The application of fluid structure interaction techniques within finite element analyses of water-filled transport flasks

    International Nuclear Information System (INIS)

    Smith, C.; Stojko, S.

    2004-01-01

    Historically, Finite Element (FE) analyses of water-filled transport flasks and their payloads have been carried out assuming a dry environment, mainly due to a lack of robust Fluid Structure Interaction (FSI) modelling techniques. Also it has been accepted within the RAM transport industry that the presence of water would improve the impact withstand capability of dropped payloads within containers. In recent years the FE community has seen significant progress and improvement in FSI techniques. These methods have been utilised to investigate the effects of a wet environment on payload behaviour for the regulatory drop test within a recent transport licence renewal application. Fluid flow and pressure vary significantly during a wet impact and the effects on the contents become complex when water is incorporated into the flask analyses. Modelling a fluid environment within the entire flask is considered impractical; hence a good understanding of the FSI techniques and assumptions regarding fluid boundaries is required in order to create a representative FSI model. Therefore, a Verification and Validation (V and V) exercise was undertaken to underpin the FSI techniques eventually utilised. A number of problems of varying complexity have been identified to test the FSI capabilities of the explicit code LS-DYNA, which is used in the extant dry container impact analyses. RADIOSS explicit code has been used for comparison, to provide further confidence in LS-DYNA predictions. Various methods of modelling fluid are tested, and the relative advantages and limitations of each method and FSI coupling approaches are discussed. Results from the V and V problems examined provided sufficient confidence that FSI effects within containers can be accurately modelled

  15. Cross-cultural and sex differences in the Emotional Skills and Competence Questionnaire scales: Challenges of differential item functioning analyses

    Directory of Open Access Journals (Sweden)

    Bo Molander

    2009-11-01

    University students in Croatia, Slovenia, and Sweden (N = 1129) were examined by means of the Emotional Skills and Competence Questionnaire (Takšić, 1998). Results showed a significant effect for the sex factor only on the total-score scale, women scoring higher than men, but significant effects were obtained for country, as well as for sex, on the Express and Label (EL) and Perceive and Understand (PU) subscales. Sweden showed higher scores than Croatia and Slovenia on the EL scale, and Slovenia showed higher scores than Croatia and Sweden on the PU scale. In subsequent analyses of differential item functioning (DIF), comparisons were carried out for pairs of countries. The analyses revealed that a large proportion of the items in the total-score scale were potentially biased, most so for the Croatian-Swedish comparison, less for the Slovenian-Swedish comparison, and least for the Croatian-Slovenian comparison. These findings cast doubt on the validity of mean score differences in comparisons of countries. However, DIF analyses of sex differences within each country show very few DIF items, indicating that the ESCQ instrument works well within each cultural/linguistic setting. Possible explanations of the findings are discussed, and improvements for future studies are suggested.
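
This record does not specify which DIF procedure was used, so the following is only a generic sketch of a common alternative: a logistic-regression DIF screen with a likelihood-ratio test, applied to simulated item data.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Simulated data: one dichotomized item, a matching (rest) score, and a
# group indicator (e.g. country or sex); all values are synthetic.
rng = np.random.default_rng(3)
n = 1000
rest = rng.normal(size=n)                        # matching variable
group = rng.integers(0, 2, size=n)               # 0/1 group membership
# Inject uniform DIF: the item is easier for group 1 at equal trait level.
p = 1 / (1 + np.exp(-(0.8 * rest + 0.6 * group - 0.2)))
item = rng.binomial(1, p)

X0 = sm.add_constant(np.column_stack([rest]))           # no-DIF model
X1 = sm.add_constant(np.column_stack([rest, group]))    # uniform-DIF model
ll0 = sm.Logit(item, X0).fit(disp=0).llf
ll1 = sm.Logit(item, X1).fit(disp=0).llf

lr = 2 * (ll1 - ll0)                                    # likelihood-ratio test
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.4g}")    # small p flags DIF
```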

  16. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets on the basis of the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
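
Under the Gaussian assumptions described in this record, the correction step reduces to moment matching: standardize the coarse-scale reflectances and rescale them to the baseline's mean and standard deviation. The sketch below uses synthetic arrays in place of real satellite reflectances.

```python
import numpy as np

rng = np.random.default_rng(11)
baseline = rng.normal(0.18, 0.030, 10000)   # fine-scale surface reflectance
coarse = rng.normal(0.21, 0.045, 10000)     # same area, other sensor/scale

# Moment matching: standardize, then rescale to the baseline distribution.
corrected = ((coarse - coarse.mean()) / coarse.std()
             * baseline.std() + baseline.mean())
print("mean/std before: %.3f/%.3f  after: %.3f/%.3f"
      % (coarse.mean(), coarse.std(), corrected.mean(), corrected.std()))
```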

  17. Disagreements in meta-analyses using outcomes measured on continuous or rating scales: observer agreement study

    DEFF Research Database (Denmark)

    Tendal, Britta; Higgins, Julian P T; Jüni, Peter

    2009-01-01

    difference (SMD), the protocols for the reviews and the trial reports (n=45) were retrieved. DATA EXTRACTION: Five experienced methodologists and five PhD students independently extracted data from the trial reports for calculation of the first SMD result in each review. The observers did not have access to the reviews but to the protocols, where the relevant outcome was highlighted. The agreement was analysed at both trial and meta-analysis level, pairing the observers in all possible ways (45 pairs, yielding 2025 pairs of trials and 450 pairs of meta-analyses). Agreement was defined as SMDs that differed less than 0.1 in their point estimates or confidence intervals. RESULTS: The agreement was 53% at trial level and 31% at meta-analysis level. Including all pairs, the median disagreement was SMD=0.22 (interquartile range 0.07-0.61). The experts agreed somewhat more than the PhD students at trial level (61

  18. High-Resolution Global and Basin-Scale Ocean Analyses and Forecasts

    Science.gov (United States)

    2009-09-01

    ...six weeks, here circling near the center of an anticyclonic eddy seen in both analyses. A third drifter is moving southward past Coffs Harbour

  19. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local‐scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  20. Intermittent Rivers and Biodiversity. Large scale analyses between hydrology and ecology in intermittent rivers

    OpenAIRE

    Blanchard, Q.

    2014-01-01

    Intermittent rivers are characterized by a temporary interruption of their flow which can manifest in a variety of ways, as much on a spatial scale as on a temporal one. This particular aspect of intermittent river hydrology gives rise to unique ecosystems, combining both aquatic and terrestrial habitats. Neglected for a long time by scientists and once considered biologically depauperate and ecologically unimportant, these fragile habitats are nowadays acknowledged for their rendered service...

  1. Academic Motivation Scale: adaptation and psychometric analyses for high school and college students.

    Science.gov (United States)

    Stover, Juliana Beatriz; de la Iglesia, Guadalupe; Boubeta, Antonio Rial; Liporace, Mercedes Fernández

    2012-01-01

    The Academic Motivation Scale (AMS), grounded in Self-Determination Theory, has been applied in recent decades in both high school and college education. Although several Spanish versions are available, underlying linguistic and cultural differences raise important issues when they are applied to Latin-American populations. Consequently, an adapted version of the AMS was developed and its construct validity analyzed in Argentine students. Results obtained on a sample of 723 students from Buenos Aires (393 high school and 330 college students) verified adequate psychometric properties of this new version, resolving some controversies regarding its dimensionality.

  2. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
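    The first three procedures enumerated above map directly onto standard tests available in scipy. A minimal sketch with synthetic data (variable names and the quintile binning are illustrative choices) follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)               # sampled input factor
y = 2 * x + rng.normal(0, 0.5, 200)      # model output

r, _ = stats.pearsonr(x, y)              # (1) linear relationship
rho, _ = stats.spearmanr(x, y)           # (2) monotonic relationship
# (3) trend in central tendency: compare y across quintile bins of x.
bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
h, p_kw = stats.kruskal(*[y[bins == i] for i in range(5)])
print(f"r={r:.2f}, rho={rho:.2f}, Kruskal-Wallis p={p_kw:.2g}")
```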

  3. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    Science.gov (United States)

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real-world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree ...
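    The three-step workflow amounts to a fan-out/fan-in pattern: Step #1 runs per sample in parallel, and Step #2 merges the results. The sketch below illustrates that pattern generically; the helper functions are placeholders, not QuickRNASeq's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def process_sample(sample_id: str) -> dict:
    # Step #1 (per sample, parallelizable): align, count, QC, call SNPs.
    return {"sample": sample_id, "mapped_reads": 1_000_000}  # placeholder

def merge_results(per_sample: list) -> dict:
    # Step #2: merge per-sample outputs into one project-level report.
    return {"n_samples": len(per_sample), "samples": per_sample}

samples = ["S1", "S2", "S3"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_sample, samples))
report = merge_results(results)   # Step #3 would visualize this interactively
print(report["n_samples"])
```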

  4. Analysed potential of big data and supervised machine learning techniques in effectively forecasting travel times from fused data

    Directory of Open Access Journals (Sweden)

    Ivana Šemanjski

    2015-12-01

    Full Text Available Travel time forecasting is an interesting topic for many ITS services. Increased availability of data collection sensors increases the availability of predictor variables but also highlights the processing issues related to this big data availability. In this paper we aimed to analyse the potential of big data and supervised machine learning techniques in effectively forecasting travel times. For this purpose we used fused data from three data sources (Global Positioning System vehicle tracks, road network infrastructure data, and meteorological data) and four machine learning techniques (k-nearest neighbours, support vector machines, boosting trees, and random forest). To evaluate the forecasting results we compared them between different road classes in the context of absolute values, measured in minutes, and the mean squared percentage error. For road classes with high average speeds and long road segments, the machine learning techniques forecasted travel times with small relative error, while for road classes with small average speeds and short segments this was a more demanding task. All three data sources proved to have a high impact on travel time forecast accuracy, and the best results (taking into account all road classes) were achieved for the k-nearest neighbours and random forest techniques.
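    A minimal sketch of this comparison with scikit-learn follows; the fused features and the travel-time model are simulated stand-ins for the paper's data, and MSPE denotes the mean squared percentage error used above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Illustrative fused features: segment length, speed limit, hour, rain flag.
X = rng.uniform(0, 1, (1000, 4))
y = 5 + 20 * X[:, 0] / (0.2 + X[:, 1]) + rng.normal(0, 1, 1000)  # minutes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (KNeighborsRegressor(n_neighbors=5),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mspe = 100 * np.mean(((y_te - pred) / y_te) ** 2)  # mean squared % error
    print(f"{type(model).__name__}: MSPE = {mspe:.2f}%")
```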

  5. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    International Nuclear Information System (INIS)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-01-01

    Highlights: • A 1/8th geometric-scale test facility that models the VHTR hot plenum is proposed. • Geometric scaling analysis is introduced for VHTR to analyze air-ingress accident. • Design calculations are performed to show that accident phenomenology is preserved. • Some analyses include time scale, hydraulic similarity and power scaling analysis. • Test facility has been constructed and shake-down tests are currently being carried out. - Abstract: A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time
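    To make the "orders of magnitude" contrast concrete, the sketch below compares textbook estimates of the two ingress time scales: a molecular-diffusion time L²/D against a density-driven (gravity-current) intrusion time. All values and the simple formulas are illustrative assumptions, not the facility's design equations.

```python
import math

L = 0.5          # m, assumed characteristic path length in the scaled vessel
D = 2e-5         # m^2/s, assumed air-helium binary diffusion coefficient
g = 9.81         # m/s^2
drho_rho = 0.8   # assumed relative density difference, air vs hot helium

t_diffusion = L**2 / D                      # molecular diffusion time scale
u_front = math.sqrt(g * drho_rho * L)       # gravity-current front speed
t_stratified = L / u_front                  # density-driven ingress time scale
print(f"diffusion ~ {t_diffusion:.0f} s, stratified flow ~ {t_stratified:.2f} s")
```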

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  8. Medium scale test study of chemical cleaning technique for secondary side of SG in PWR

    International Nuclear Information System (INIS)

    Zhang Mengqin; Zhang Shufeng; Yu Jinghua; Hou Shufeng

    1997-08-01

    The medium-scale test study of a chemical cleaning technique for removing corrosion product (Fe3O4) from the secondary side of steam generators (SG) in PWRs has been completed. The test was carried out in a medium-scale test loop. It evaluated the effects of the chemical cleaning conditions (temperature, flow rate, cleaning time, cleaning process) and of the deposition state of the corrosion product on magnetite (Fe3O4) solubility, as well as the safety of SG materials during the cleaning process. The inhibitor component of the chemical cleaning agent was improved by the electrochemical linear polarization method, the effect of the inhibitor on the corrosion resistance of materials was examined in the medium-scale test loop, and the main components of the chemical cleaning agent were determined, with EDTA as the principal component. An electrochemical method for monitoring the corrosion of materials during the cleaning process was completed in the laboratory. The medium-scale test study yielded an optimized chemical cleaning technique for removing corrosion product from the secondary side of PWR steam generators. (9 refs., 4 figs., 11 tabs.)

  9. Academic Motivation Scale: adaptation and psychometric analyses for high school and college students

    Directory of Open Access Journals (Sweden)

    Stover JB

    2012-07-01

    Full Text Available Juliana Beatriz Stover,1 Guadalupe de la Iglesia,1 Antonio Rial Boubeta,2 Mercedes Fernández Liporace1 1Buenos Aires University and National Research Council (CONICET), Buenos Aires, Argentina; 2Santiago de Compostela University, Santiago de Compostela, Spain. Abstract: The Academic Motivation Scale (AMS), grounded in Self-Determination Theory, has been applied in recent decades in both high school and college education. Although several Spanish versions are available, underlying linguistic and cultural differences raise important issues when they are applied to Latin-American populations. Consequently, an adapted version of the AMS was developed and its construct validity analyzed in Argentine students. Results obtained on a sample of 723 students from Buenos Aires (393 high school and 330 college students) verified adequate psychometric properties of this new version, resolving some controversies regarding its dimensionality. Keywords: academic motivation, self-determination, confirmatory factor analysis, internal consistency

  10. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration for irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
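    Lomb's periodogram, used above as a baseline, is available directly in scipy. The sketch below recovers a known frequency from a synthetic, irregularly sampled signal; the signal and frequency grid are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 300))     # irregular observation times
y = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 0.5, 500)       # trial frequencies (cycles/unit time)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
print(f"peak at f = {freqs[np.argmax(power)]:.3f} (true value 0.100)")
```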

  11. Landscape genetic analyses reveal fine-scale effects of forest fragmentation in an insular tropical bird.

    Science.gov (United States)

    Khimoun, Aurélie; Peterman, William; Eraud, Cyril; Faivre, Bruno; Navarro, Nicolas; Garnier, Stéphane

    2017-10-01

    Within the framework of landscape genetics, resistance surface modelling is particularly relevant to explicitly test competing hypotheses about landscape effects on gene flow. To investigate how fragmentation of tropical forest affects population connectivity in a forest specialist bird species, we optimized resistance surfaces without a priori specification, using least-cost (LCP) or resistance (IBR) distances. We implemented a two-step procedure in order (i) to objectively define the landscape thematic resolution (level of detail in classification scheme to describe landscape variables) and spatial extent (area within the landscape boundaries) and then (ii) to test the relative role of several landscape features (elevation, roads, land cover) in genetic differentiation in the Plumbeous Warbler (Setophaga plumbea). We detected a small-scale reduction of gene flow mainly driven by land cover, with a negative impact of the nonforest matrix on landscape functional connectivity. However, matrix components did not equally constrain gene flow, as their conductivity increased with increasing structural similarity with forest habitat: urban areas and meadows had the highest resistance values whereas agricultural areas had intermediate resistance values. Our results revealed a higher performance of IBR compared to LCP in explaining gene flow, reflecting suboptimal movements across this human-modified landscape, challenging the common use of LCP to design habitat corridors and advocating for a broader use of circuit theory modelling. Finally, our results emphasize the need for an objective definition of landscape scales (landscape extent and thematic resolution) and highlight potential pitfalls associated with parameterization of resistance surfaces. © 2017 John Wiley & Sons Ltd.
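    The LCP-versus-IBR contrast can be reproduced on a toy resistance grid: least-cost path distance versus circuit-theory resistance distance between the same two cells. The networkx setup below is an illustrative sketch, not the study's optimization pipeline.

```python
import numpy as np
import networkx as nx

resist = np.array([[1.0, 1.0, 5.0],
                   [1.0, 9.0, 1.0],
                   [5.0, 1.0, 1.0]])            # per-cell resistance values
G = nx.grid_2d_graph(3, 3)
for u, v in G.edges:
    G.edges[u, v]["weight"] = (resist[u] + resist[v]) / 2  # edge resistance

lcp = nx.shortest_path_length(G, (0, 0), (2, 2), weight="weight")   # LCP
ibr = nx.resistance_distance(G, (0, 0), (2, 2), weight="weight")    # IBR
print(f"least-cost distance = {lcp:.2f}, resistance distance = {ibr:.2f}")
```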

  12. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
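    The core idea, combining fine-scale permeabilities through the transmissibility term rather than averaging permeability directly, can be sketched in one dimension, where transmissibilities in series combine harmonically. Uniform cell geometry and single-phase flow are simplifying assumptions here.

```python
import numpy as np

k_fine = np.array([100.0, 10.0, 200.0, 50.0])   # mD, fine cells in a row
dx, area = 1.0, 1.0                             # uniform cell size (assumed)

t_fine = k_fine * area / dx                     # fine-scale transmissibilities
t_coarse = 1.0 / np.sum(1.0 / t_fine)           # series (harmonic) combination
k_equiv = t_coarse * (len(k_fine) * dx) / area  # equivalent coarse permeability
print(f"coarse transmissibility = {t_coarse:.2f}, equivalent k = {k_equiv:.1f} mD")
```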

  13. Structural analyses on piping systems of sodium reactors. 2. Eigenvalue analyses of hot-leg pipelines of large scale sodium reactors

    International Nuclear Information System (INIS)

    Furuhashi, Ichiro; Kasahara, Naoto

    2002-01-01

    Eigenvalues of the hot-leg pipelines of a large-scale sodium reactor were analyzed with two types of finite element models. One is a beam element model, which is usual for pipe analyses. The other is a shell element model to evaluate particular modes in thin pipes with large diameters. Summary of analysis results: (1) A beam element model and a shell element model give almost the same first-order natural frequency; a beam element model is adequate for obtaining the first-order vibration mode. (2) The maximum difference ratio of beam-mode natural frequencies was 14% between a beam element model with no shear deformations and a shell element model. However, the difference becomes very small when shear deformations are considered in the beam elements. (3) In the first-order horizontal mode, the Y-piece acts like a pendulum and the elbow acts like a hinge. The natural frequency is strongly affected by the bending and shear rigidities of the outer supporting pipe. (4) In the first-order vertical mode, the vertical sections of the outer and inner pipes move in an axial-direction piston mode, the horizontal section of the inner pipe behaves like a cantilever, and the elbow acts like a hinge. The natural frequency is strongly affected by the axial rigidity of the outer supporting pipe. (5) Both effective masses and participation factors were small for particular shell modes. (author)

  14. ANALYSING ORGANIZATIONAL CHANGES - THE CONNECTION BETWEEN THE SCALE OF CHANGE AND EMPLOYEES ATTITUDES

    Directory of Open Access Journals (Sweden)

    Ujhelyi Maria

    2015-07-01

    Full Text Available In the 21st century all organizations have to cope with challenges caused by trigger events in the environment. The key to organizational success is how fast and efficiently they are able to react. In 2014 we conducted a research survey on this topic with the contribution of Hungarian students on Bachelor courses in Business Administration and Management. They visited organizations which had gone through a significant programme of change within the last 5 years. The owners, managers or HR managers responsible for changes were asked to fill in the questionnaires about the features of these organisational changes. Several issues regarding change management were covered, besides general information about the companies. Respondents were asked about the trigger events and the nature of changes, and about the process of change and participation in it. One group of questions asked leaders about employees’ attitude to change, another section sought information about the methods used in the process. In this paper, after a short literature review, we will analyse the adaptation methods used by organizations and the connection between the scope of change and employees’ attitude toward change.

  15. Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses.

    Science.gov (United States)

    Montenegro-Burke, J Rafael; Phommavongsay, Thiery; Aisporna, Aries E; Huan, Tao; Rinehart, Duane; Forsberg, Erica; Poole, Farris L; Thorgersen, Michael P; Adams, Michael W W; Krantz, Gregory; Fields, Matthew W; Northen, Trent R; Robbins, Paul D; Niedernhofer, Laura J; Lairson, Luke; Benton, H Paul; Siuzdak, Gary

    2016-10-04

    Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To keep pace with the rapidly growing field of metabolomics and its heavy data-processing workload, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, the most important components of the computer-based XCMS Online platform. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on the Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile is demonstrated here through metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.

  16. Evaluating cultural competence among Japanese clinical nurses: Analyses of a translated scale.

    Science.gov (United States)

    Noji, Ariko; Mochizuki, Yuki; Nosaki, Akiko; Glaser, Dale; Gonzales, Lucia; Mizobe, Akiko; Kanda, Katsuya

    2017-06-01

    This paper describes the factor analysis testing and construct validation of the Japanese version of the Caffrey Cultural Competence Health Services scale (J-CCCHS). The inventory, composed of 28 items, was translated using language and subject-matter experts. Psychometric testing (exploratory factor, alpha reliability, and confirmatory factor analyses) was undertaken with nurses (N = 7494, 92% female, mean age 32.6 years) from 19 hospitals across Japan. Principal components extraction with varimax rotation yielded a 5-factor solution (62.31% variance explained) whose factors were labeled: knowledge, comfort-proximal, comfort-distal, awareness, and awareness of national policy. Cronbach α for the subscales ranged from 0.756 to 0.892. In confirmatory factor analysis using the robust maximum likelihood estimator, the chi-square test was χ²(340) = 14604.44, P < 0.001, and there were significant differences in J-CCCHS subscale scores between predefined groups. Taking into consideration that this is the first foray into construct validation for this instrument, that fit improved when a subsequent data-driven model was tested, and that it can distinguish between known groups expected to differ in cultural competence, the instrument can be of value to clinicians and educators alike. © 2017 John Wiley & Sons Australia, Ltd.
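    Of the statistics reported above, Cronbach's α has a short closed form that is easy to compute directly. The sketch below applies it to simulated one-factor data; the data are assumptions, not the J-CCCHS responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(4)
trait = rng.normal(size=(500, 1))                  # one latent trait
items = trait + 0.8 * rng.normal(size=(500, 6))    # six noisy subscale items
print(f"alpha = {cronbach_alpha(items):.2f}")
```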

  17. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (Project FALSIRE)

    International Nuclear Information System (INIS)

    Bass, B.R.; Pugh, C.E.; Keeney-Walker, J.; Schulz, H.; Sievers, J.

    1993-06-01

    This report summarizes the recently completed Phase I of the Project for Fracture Analysis of Large-Scale International Reference Experiments (Project FALSIRE). Project FALSIRE was created by the Fracture Assessment Group (FAG) of Principal Working Group No. 3 (PWG/3) of the Organization for Economic Cooperation and Development (OECD)/Nuclear Energy Agency's (NEA's) Committee on the Safety of Nuclear Installations (CSNI). Motivation for the project was derived from recognition by the CSNI-PWG/3 that inconsistencies were being revealed in predictive capabilities of a variety of fracture assessment methods, especially in ductile fracture applications. As a consequence, the CSNI/FAG was formed to evaluate fracture prediction capabilities currently used in safety assessments of nuclear components. Members are from laboratories and research organizations in Western Europe, Japan, and the United States of America (USA). On behalf of the CSNI/FAG, the US Nuclear Regulatory Commission's (NRC's) Heavy-Section Steel Technology (HSST) Program at the Oak Ridge National Laboratory (ORNL) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Koeln, Federal Republic of Germany (FRG) had responsibility for organization arrangements related to Project FALSIRE. The group is chaired by H. Schulz from GRS, Koeln, FRG

  18. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data.

    Science.gov (United States)

    Thompson, Paul M; Stein, Jason L; Medland, Sarah E; Hibar, Derrek P; Vasquez, Alejandro Arias; Renteria, Miguel E; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J; Martin, Nicholas G; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C; Andreassen, Ole A; Apostolova, Liana G; Appel, Katja; Armstrong, Nicola J; Aribisala, Benjamin; Bastin, Mark E; Bauer, Michael; Bearden, Carrie E; Bergmann, Orjan; Binder, Elisabeth B; Blangero, John; Bockholt, Henry J; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I; Booth, Tom; Bowman, Ian J; Bralten, Janita; Brouwer, Rachel M; Brunner, Han G; Brohawn, David G; Buckner, Randy L; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R; Calhoun, Vince D; Cannon, Dara M; Cantor, Rita M; Carless, Melanie A; Caseras, Xavier; Cavalleri, Gianpiero L; Chakravarty, M Mallar; Chang, Kiki D; Ching, Christopher R K; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E; Czisch, Michael; Deary, Ian J; de Geus, Eco J C; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E; Foroud, Tatiana; Fox, Peter T; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C; Godlewska, Beata; Goldstein, Rita Z; Gollub, Randy L; Grabe, Hans J; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E; Gur, Ruben C; Göring, Harald H H; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B; Hall, Jeremy; Hardy, John; Hartman, Catharina A; Hass, Johanna; Hatton, Sean N; Haukvik, Unn K; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J; Hollinshead, Marisa; Holmes, Avram J; Homuth, Georg; Hoogman, Martine; Hong, L Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E; Hwang, Kristy S; Jack, Clifford R; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G; Kahn, René S; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B J; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A; Lauriello, John; Lawrie, Stephen M; Lee, Phil H; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D; Li, Chiang-Shan; Liberg, Benny; Liewald, David C; Liu, Xinmin; Lopez, Lorna M; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W J; Macqueen, Glenda M; Malt, Ulrik F; Mandl, René; Manoach, Dara S; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M; McMahon, Francis J; McMahon, Katie L; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W; Morris, Derek W; Moses, Eric K; Mueller, Bryon A; Muñoz Maniega, Susana; Mühleisen, Thomas W; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E; Nilsson, Lars-Göran; Nugent, Allison C; Nyberg, Lars; Olvera, Rene L; Oosterlaan, Jaap; Ophoff, Roel A; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, 
Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D; Penninx, Brenda W; Peterson, Charles P; Pfennig, Andrea; Phillips, Mary; Pike, G Bruce; Poline, Jean-Baptiste; Potkin, Steven G; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L; Roffman, Joshua L; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J; Royle, Natalie A; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S; Salami, Alireza; Satterthwaite, Theodore D; Savitz, Jonathan; Saykin, Andrew J; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G; Schork, Andrew J; Schulz, S Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M; Simmons, Andrew; Sisodiya, Sanjay M; Smith, Colin; Smoller, Jordan W; Soares, Jair C; Sponheim, Scott R; Sprooten, Emma; Starr, John M; Steen, Vidar M; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G; Teumer, Alexander; Toga, Arthur W; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; Van den Heuvel, Martijn; van der Wee, Nic J; van Eijk, Kristel; van Erp, Theo G M; van Haren, Neeltje E M; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C; Veltman, Dick J; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M; Weale, Michael E; Weiner, Michael W; Wen, Wei; Westlye, Lars T; Whalley, Heather C; Whelan, Christopher D; White, Tonya; Winkler, Anderson M; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P; Thalamuthu, Anbupalam; Schofield, Peter R; Freimer, Nelson B; Lawrence, Natalia S; Drevets, Wayne

    2014-06-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
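    The statistical core of this kind of site-level pooling is an inverse-variance meta-analysis. The sketch below shows fixed-effect pooling with invented per-site effect sizes; ENIGMA's actual pipelines are considerably more elaborate.

```python
import numpy as np

def fixed_effect_meta(effects, ses):
    """Inverse-variance (fixed-effect) pooling of per-site estimates."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

site_betas = [0.12, 0.08, 0.15, 0.05]   # per-site SNP effect estimates (invented)
site_ses = [0.05, 0.07, 0.06, 0.04]     # per-site standard errors (invented)
beta, se = fixed_effect_meta(site_betas, site_ses)
print(f"pooled beta = {beta:.3f} (SE {se:.3f}), z = {beta / se:.2f}")
```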

  19. 454 pyrosequencing analyses of bacterial and archaeal richness in 21 full-scale biogas digesters.

    Science.gov (United States)

    Sundberg, Carina; Al-Soud, Waleed A; Larsson, Madeleine; Alm, Erik; Yekta, Sepehr S; Svensson, Bo H; Sørensen, Søren J; Karlsson, Anna

    2013-09-01

    The microbial community of 21 full-scale biogas reactors was examined using 454 pyrosequencing of 16S rRNA gene sequences. These reactors included seven (six mesophilic and one thermophilic) digesting sewage sludge (SS) and 14 (ten mesophilic and four thermophilic) codigesting (CD) various combinations of wastes from slaughterhouses, restaurants, households, etc. The pyrosequencing generated more than 160,000 sequences representing 11 phyla, 23 classes, and 95 genera of Bacteria and Archaea. The bacterial community was always both more abundant and more diverse than the archaeal community. At the phylum level, the foremost populations in the SS reactors included Actinobacteria, Proteobacteria, Chloroflexi, Spirochetes, and Euryarchaeota, while Firmicutes was the most prevalent in the CD reactors. The main bacterial class in all reactors was Clostridia. Acetoclastic methanogens were detected in the SS, but not in the CD reactors. Their absence suggests that methane formation from acetate takes place mainly via syntrophic acetate oxidation in the CD reactors. A principal component analysis of the communities at genus level revealed three clusters: SS reactors, mesophilic CD reactors (including one thermophilic CD and one SS), and thermophilic CD reactors. Thus, the microbial composition was mainly governed by the substrate differences and the process temperature. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
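    A genus-level principal component analysis of community composition, as used for the clustering above, can be sketched in a few lines; the count matrix below is simulated, not the study's pyrosequencing data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
counts = rng.poisson(20, size=(21, 95)).astype(float)  # 21 reactors x 95 genera
rel = counts / counts.sum(axis=1, keepdims=True)       # relative abundances
scores = PCA(n_components=2).fit_transform(rel)
print(scores.shape)   # (21, 2): one point per reactor, ready for cluster plots
```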

  20. Large Scale Analyses and Visualization of Adaptive Amino Acid Changes Projects.

    Science.gov (United States)

    Vázquez, Noé; Vieira, Cristina P; Amorim, Bárbara S R; Torres, André; López-Fernández, Hugo; Fdez-Riverola, Florentino; Sousa, José L R; Reboiro-Jato, Miguel; Vieira, Jorge

    2018-03-01

    When changes at a few amino acid sites are the target of selection, adaptive amino acid changes in protein sequences can be identified using maximum-likelihood methods based on models of codon substitution (such as codeml). Although such methods have been employed numerous times with a variety of different organisms, the time needed to collect the data and prepare the input files means that usually only tens or hundreds of coding regions are analyzed. Nevertheless, the recent availability of flexible and easy-to-use computer applications that collect relevant data (such as BDBM) and infer positively selected amino acid sites (such as ADOPS) means that the entire process is easier and quicker than before. However, the lack of a batch option in ADOPS, reported here, still precludes the analysis of hundreds or thousands of sequence files. Given the interest in and possibility of running such large-scale projects, we have also developed a database where ADOPS projects can be stored. Therefore, this study also presents the B+ database, which is both a data repository and a convenient interface for inspecting the information contained in ADOPS projects without the need to download and unzip the corresponding ADOPS project file. The ADOPS projects available at B+ can also be downloaded, unzipped, and opened using the ADOPS graphical interface. The availability of such a database ensures results repeatability, promotes data reuse with significant savings on the time needed for preparing datasets, and effortlessly allows further exploration of the data contained in ADOPS projects.

  1. Psychometric Properties of the Heart Disease Knowledge Scale: Evidence from Item and Confirmatory Factor Analyses.

    Science.gov (United States)

    Lim, Bee Chiu; Kueh, Yee Cheng; Arifin, Wan Nor; Ng, Kok Huan

    2016-07-01

    Heart disease knowledge is an important concept for health education, yet there is a lack of properly validated instruments for measuring levels of heart disease knowledge in the Malaysian context. A cross-sectional survey design was used to examine the psychometric properties of the adapted English version of the Heart Disease Knowledge Questionnaire (HDKQ). Using proportionate cluster sampling, 788 undergraduate students at Universiti Sains Malaysia, Malaysia, were recruited and completed the HDKQ. Item analysis and confirmatory factor analysis (CFA) were used for the psychometric evaluation, including construct validity of the measurement model. Most of the students were Malay (48%), female (71%), and from the field of science (51%). Acceptable ranges were obtained for both the difficulty and discrimination indices in the item analysis: the difficulty index ranged from 0.12 to 0.91, and discrimination indices of ≥ 0.20 were reported for the 23 retained items. The final CFA model showed an adequate fit to the data, yielding a 23-item, one-factor model [weighted least squares mean and variance adjusted scaled chi-square difference = 1.22, degrees of freedom = 2, P-value = 0.544; root mean square error of approximation = 0.03 (90% confidence interval = 0.03, 0.04); close-fit P-value > 0.950]. Adequate psychometric values were obtained for Malaysian undergraduate university students using the 23-item, one-factor model of the adapted HDKQ.
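    The two item-analysis indices used above have simple definitions: difficulty is the proportion answering correctly, and discrimination is the corrected item-total correlation. The sketch below applies them, with the paper's ≥ 0.20 discrimination cut-off, to simulated dichotomous responses.

```python
import numpy as np

rng = np.random.default_rng(6)
ability = rng.normal(size=788)
# Simulated 0/1 responses of 788 students to 23 items of varying difficulty.
responses = (ability[:, None] + rng.normal(size=(788, 23))
             > rng.normal(0, 1, 23)).astype(int)

difficulty = responses.mean(axis=0)        # proportion answering correctly
total = responses.sum(axis=1)
discrimination = np.array([                # corrected item-total correlation
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])])
keep = (difficulty > 0.10) & (difficulty < 0.95) & (discrimination >= 0.20)
print(f"{keep.sum()} of 23 items retained")
```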

  2. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (Project FALSIRE)

    Energy Technology Data Exchange (ETDEWEB)

    Bass, B.R.; Pugh, C.E.; Keeney-Walker, J. [Oak Ridge National Lab., TN (United States); Schulz, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)]

    1993-06-01

    This report summarizes the recently completed Phase I of the Project for Fracture Analysis of Large-Scale International Reference Experiments (Project FALSIRE). Project FALSIRE was created by the Fracture Assessment Group (FAG) of Principal Working Group No. 3 (PWG/3) of the Organization for Economic Cooperation and Development (OECD)/Nuclear Energy Agency's (NEA's) Committee on the Safety of Nuclear Installations (CSNI). Motivation for the project was derived from recognition by the CSNI-PWG/3 that inconsistencies were being revealed in predictive capabilities of a variety of fracture assessment methods, especially in ductile fracture applications. As a consequence, the CSNI/FAG was formed to evaluate fracture prediction capabilities currently used in safety assessments of nuclear components. Members are from laboratories and research organizations in Western Europe, Japan, and the United States of America (USA). On behalf of the CSNI/FAG, the US Nuclear Regulatory Commission's (NRC's) Heavy-Section Steel Technology (HSST) Program at the Oak Ridge National Laboratory (ORNL) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Koeln, Federal Republic of Germany (FRG) had responsibility for organization arrangements related to Project FALSIRE. The group is chaired by H. Schulz from GRS, Koeln, FRG.

  3. Time and frequency domain analyses of the Hualien Large-Scale Seismic Test

    International Nuclear Information System (INIS)

    Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup

    2015-01-01

    Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain

  4. Global Precipitation Analyses at Time Scales of Monthly to 3-Hourly

    Science.gov (United States)

    Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric; Einaudi, Franco (Technical Monitor)

    2002-01-01

    Global precipitation analyses covering the last few decades and the impact of the new TRMM precipitation observations are discussed. The 20+ year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) is used to explore global and regional variations and trends and is compared to the much shorter TRMM (Tropical Rainfall Measuring Mission) tropical data set. The GPCP data set shows no significant trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Nino and La Nina precipitation features is evident in the 20-year data set. This pattern is related to an increase with time in the number of combined months of El Nino and La Nina during the 20-year period. Monthly anomalies of precipitation are related to ENSO variations, with clear signals extending into middle and high latitudes of both hemispheres. The GPCP daily, 1 degree latitude-longitude analysis, which is available from January 1997 to the present, is described, and the evolution of precipitation patterns on this time scale related to El Nino and La Nina is described. Finally, a TRMM-based analysis is described that uses TRMM to calibrate polar-orbiting microwave observations from SSM/I and geosynchronous IR observations and merges the various calibrated observations into a final, high-resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998 to the present). A real-time version of this merged product is being produced and is available at 0.25 degree latitude-longitude resolution over the latitude range from 50 deg. N to 50 deg. S. Examples will be shown, including its use in monitoring flood conditions.
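    Monthly precipitation anomalies of the kind related to ENSO above are obtained by removing the long-term calendar-month climatology. A minimal sketch with synthetic data follows.

```python
import numpy as np

rng = np.random.default_rng(7)
# 24 years of monthly precipitation at one grid cell (mm/day), synthetic.
months = np.arange(288)
precip = 3 + np.sin(2 * np.pi * (months % 12) / 12) + 0.5 * rng.normal(size=288)

monthly = precip.reshape(24, 12)
climatology = monthly.mean(axis=0)    # long-term mean per calendar month
anomalies = monthly - climatology     # deseasonalized anomalies vs ENSO indices
print(f"anomaly std = {anomalies.std():.2f} mm/day")
```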

  5. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  7. Development of a Body Image Concern Scale using both exploratory and confirmatory factor analyses in Chinese university students

    Directory of Open Access Journals (Sweden)

    He W

    2017-05-01

    Full Text Available Wenxin He, Qiming Zheng, Yutian Ji, Chanchan Shen, Qisha Zhu, Wei Wang Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, People's Republic of China. Background: Body dysmorphic disorder is prevalent in the general population and in psychiatric, dermatological, and plastic-surgery patients, but a structure-validated, comprehensive self-report measure of body image concerns, established through both exploratory and confirmatory factor analyses, has been lacking. Methods: We composed a 34-item matrix targeting body image concerns and trialed it in 328 male and 365 female Chinese university students. Responses to the matrix were subjected to exploratory factor analyses, retention of qualified items, and confirmatory factor analyses of the latent structures. Results: Six latent factors, namely Social Avoidance, Appearance Dissatisfaction, Preoccupation with Reassurance, Perceived Distress/Discrimination, Defect Hiding, and Embarrassment in Public, were identified. The factors and their respective items compose a 24-item questionnaire named the Body Image Concern Scale. Each factor showed satisfactory internal reliability, and the intercorrelations between factors were at a moderate level. Women scored significantly higher than men on Appearance Dissatisfaction, Preoccupation with Reassurance, and Defect Hiding. Conclusion: The Body Image Concern Scale has demonstrated structural validity and gender differences in Chinese university students. Keywords: body dysmorphic disorder, body image, factor analysis, questionnaire development

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  9. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  10. Acupuncture-Related Techniques for Psoriasis: A Systematic Review with Pairwise and Network Meta-Analyses of Randomized Controlled Trials.

    Science.gov (United States)

    Yeh, Mei-Ling; Ko, Shu-Hua; Wang, Mei-Hua; Chi, Ching-Chi; Chung, Yu-Chu

    2017-12-01

    There has been a large body of evidence on pharmacological treatments for psoriasis, but whether nonpharmacological interventions are effective in managing psoriasis remains largely unclear. This systematic review conducted pairwise and network meta-analyses to determine the effects of acupuncture-related techniques of acupoint stimulation for the treatment of psoriasis and to determine the order of effectiveness of these remedies. This study searched the following databases from inception to March 15, 2016: Medline, PubMed, Cochrane Central Register of Controlled Trials, EBSCO (including Academic Search Premier, American Doctoral Dissertations, and CINAHL), Airiti Library, and China National Knowledge Infrastructure. Randomized controlled trials (RCTs) on the effects of acupuncture-related techniques of acupoint stimulation as an intervention for psoriasis were independently reviewed by two researchers. A total of 13 RCTs with 1,060 participants were included. The methodological quality of the included studies was not rigorous. Acupoint stimulation, compared with nonacupoint stimulation, had a significant treatment effect on psoriasis. However, the most common adverse events were thirst and dry mouth. A subgroup analysis further confirmed that the short-term treatment effect was superior to the long-term effect in treating psoriasis. Network meta-analysis identified that acupressure and acupoint catgut embedding, compared with medication, had a significant effect in improving psoriasis, with acupressure being the most effective treatment. Acupuncture-related techniques could be considered as an alternative or adjuvant therapy for psoriasis in the short term, especially acupressure and acupoint catgut embedding. This study recommends further well-designed, methodologically rigorous, head-to-head randomized trials to explore the effects of acupuncture-related techniques for treating psoriasis.

  11. FALSIRE Phase II. CSNI project for Fracture Analyses of Large-Scale International Reference Experiments (Phase II). Comparison report

    International Nuclear Information System (INIS)

    Sievers, J.; Schulz, H.; Bass, R.; Pugh, C.; Keeney, J.

    1996-11-01

    A summary of Phase II of the Project for Fracture Analysis of Large-Scale International Reference Experiments (FALSIRE) is presented. A FALSIRE II Workshop focused on analyses of reference fracture experiments. More than 30 participants representing 22 organizations from 12 countries took part in the workshop. Final results for 45 analyses of the reference experiments were received from the participating analysts. For each experiment, analysis results provided estimates of variables that include temperature, crack-mouth-opening displacement, stress, strain, and applied K and J values. The data were sent electronically to the Organizing Committee, who assembled the results into a comparative data base using a special-purpose computer program. A comparative assessment and discussion of the analysis results are presented in the report. Generally, structural responses of the test specimens were predicted with tolerable scatter bands. (orig./DG)

  12. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    Full Text Available The article proposes a method for identifying the scale of changes in personnel motivation techniques at mechanical-engineering enterprises, based on a structural and logical sequence of stages: identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise's business environment; SWOT analysis of current motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative for changing motivation techniques; implementing changes in motivation techniques; and controlling changes in motivation techniques. It is substantiated that the improved method enables a systematic and analytical justification for management decision-making in this field and the choice of the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes the past, present and future into account. Firstly, the approach is based on considering the past state of the motivational sphere of the mechanical-engineering enterprise; secondly, the method involves identifying the current state of personnel motivation techniques; thirdly, the method incorporates a prospective view, manifested in a strategic vision of the enterprise's development as well as in forecasting the development of its business environment. The advantage of the proposed method is that its level of specification may vary depending on the set goals, resource constraints and necessity. Among other things, the method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and the management of relevant processes. This creates preconditions for a

  13. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
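
    The following is a generic permutation-test sketch in Python for device effects on location (systematic bias) and scale (random error); it is not the boosted GAMLSS procedure of the paper, and all data are synthetic.

```python
# Generic permutation test for device effects on location (systematic bias)
# and scale (random error). A plain sketch, not the boosted-GAMLSS procedure
# proposed in the paper; all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def perm_test(a, b, stat, n_perm=10_000):
    """Two-sided permutation p-value for the statistic stat(a, b)."""
    observed = stat(a, b)
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(stat(pooled[:len(a)], pooled[len(a):])) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)

location = lambda a, b: a.mean() - b.mean()                  # bias
scale = lambda a, b: np.log(a.var(ddof=1) / b.var(ddof=1))   # spread

dev_a = rng.normal(50.0, 2.0, 60)   # synthetic readings, device A
dev_b = rng.normal(50.5, 3.0, 60)   # synthetic readings, device B
print("p(location):", perm_test(dev_a, dev_b, location))
print("p(scale):   ", perm_test(dev_a, dev_b, scale))
```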

  14. Chromatographic techniques used in the laboratory scale fractionation and purification of plasma

    International Nuclear Information System (INIS)

    Siti Najila Mohd Janib; Wan Hamirul Bahrin Wan Kamal; Shaharuddin Mohd

    2004-01-01

    Chromatography is a powerful technique used in the separation as well as purification of proteins for use as biopharmaceuticals or medicines. Scientists use many different chromatographic techniques in biotechnology as they bring a molecule from its initial identification stage to the stage of it becoming a marketed product. The most commonly used of these techniques is liquid chromatography (1,C). This technique can be used to separate the target molecule from undesired contaminants, as well as to analyse the final product for the requisite purity as established by governmental regulatory groups such as the FDA. Some examples of LC techniques include: ion exchange (IEC), hydrophobic interaction (HIC), gel filtration (GF), affinity (AC) and reverse phase (RPC) chromatography. These techniques are very versatile and can be used at any stage of the purification process i.e. capture, intermediate purification phase and polishing. The choice of a particular technique is dependent upon the nature of the target protein as well as its intended final use. This paper describes the preliminary work done on the chromatographic purification of factor VIII (FVIII), factor IX (FIX), albumin and IgG from plasma. Results, in particular, in the isolation of albumin and IgG using IEC, have been promising. Preparation and production of cryoprecipitate to yield FVIII and FIX have also been successful. (Author)

  15. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the Sizewell-B prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with 'bonded' and 'unbonded' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. 5 refs., 7 figs

  16. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the 'Sizewell-B' prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with 'bonded' and 'unbonded' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. (author)

  17. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.
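
    As a hedged illustration of the cost-benefit logic described above, the sketch below discounts the reduction in expected annual damage against the investment cost; all figures are invented, and real analyses use state-specific risk curves and climate scenarios.

```python
# Toy cost-benefit calculation for raising a flood-protection standard.
# All figures are invented for illustration.
import numpy as np

ead_now = 120.0e6     # expected annual damage at current protection (USD/yr)
ead_after = 35.0e6    # expected annual damage after raising the standard
investment = 400.0e6  # up-front adaptation cost (USD)
maintenance = 5.0e6   # yearly maintenance (USD/yr)
r, horizon = 0.05, 50 # discount rate and planning horizon (years)

years = np.arange(1, horizon + 1)
discount = (1 + r) ** -years
npv = np.sum((ead_now - ead_after - maintenance) * discount) - investment
print(f"NPV of the measure: {npv/1e6:.1f} MUSD")  # > 0: economically beneficial
```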

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States); Edwards, A.L. [Oak Ridge National Lab., TN (United States)]|[Lawrence Livermore National Lab., CA (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries.

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    International Nuclear Information System (INIS)

    Petrie, L.M.; Jordon, W.C.; Edwards, A.L.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  1. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.; Knight, J.R.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3 for the documentation of the data libraries and subroutine libraries

  2. Dynamical scaling in polymer solutions investigated by the neutron spin echo technique

    International Nuclear Information System (INIS)

    Richter, D.; Ewen, B.

    1979-01-01

    Chain dynamics in polymer solutions were investigated by means of the recently developed neutron spin echo spectroscopy. By this technique, it was possible for the first time to verify unambiguously the scaling predictions of the Zimm model in the case of single-chain behaviour and to observe the crossover to many-chain behaviour. The segmental diffusion of single chains exhibits deviations from a simple exponential law, indicating the importance of memory effects. (orig.)

  3. Contact mechanics at nanometric scale using nanoindentation technique for brittle and ductile materials.

    Science.gov (United States)

    Roa, J J; Rayon, E; Morales, M; Segarra, M

    2012-06-01

    In recent years, nanoindentation (the instrumented indentation technique) has become a powerful tool for studying mechanical properties at the micro/nanometric scale, commonly hardness, elastic modulus and the stress-strain curve. In this review, the different contact mechanisms (elastic and elasto-plastic) are discussed, the recent patents for each mechanism are summarized in detail, and the basic equations employed to determine the mechanical behaviour of brittle and ductile materials are described.
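
    The basic equations for the elasto-plastic case can be illustrated with the widely used Oliver-Pharr relations; the sketch below applies these textbook formulas for a Berkovich tip with invented load-displacement values, and makes no claim about the specific patents reviewed.

```python
# Hardness and modulus from an unloading curve via the standard Oliver-Pharr
# relations (textbook equations; a Berkovich tip and ideal area function are assumed).
import numpy as np

P_max = 10e-3        # peak load (N), invented
h_max = 500e-9       # depth at peak load (m), invented
S = 50e3             # unloading stiffness dP/dh at P_max (N/m), invented
eps, beta = 0.75, 1.034   # Berkovich geometry constants

h_c = h_max - eps * P_max / S          # contact depth
A_c = 24.56 * h_c**2                   # ideal Berkovich area function
H = P_max / A_c                        # hardness (Pa)
E_r = np.sqrt(np.pi) / (2 * beta) * S / np.sqrt(A_c)  # reduced modulus

# Extract the sample modulus from the reduced modulus (diamond indenter).
nu_s, E_i, nu_i = 0.3, 1141e9, 0.07
E_s = (1 - nu_s**2) / (1 / E_r - (1 - nu_i**2) / E_i)
print(f"H = {H/1e9:.2f} GPa, E_s = {E_s/1e9:.1f} GPa")
```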

  4. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction

  5. Regional scales of fire danger rating in the forest: improved technique

    Directory of Open Access Journals (Sweden)

    A. V. Volokitina

    2017-04-01

    Full Text Available Wildland fires are distributed unevenly in time and over area under the influence of weather and other factors. It is unfeasible to air patrol the whole forest area daily during a fire season, or to keep all fire suppression forces constantly alert. The daily work and preparedness of forest fire protection services is therefore regulated by the level of fire danger according to weather conditions: Nesterov's index, the PV-1 index, the fire hazard class (Melekhov's scale), and regional scales (earlier called local scales). Unfortunately, there is still no unified, comparable technique for constructing regional scales. As a result, it is difficult to maneuver forest fire protection resources, since the techniques currently used are not approved and not tested for performance; they give fire danger ratings that are incomparable even for neighboring regions. The paper analyzes the state of the art in Russia and abroad. The irony is that although the factors of fire danger are measured quantitatively, fire danger itself as a function has no quantitative expression. Thus, the selection of an absolute criterion is of high importance for improving daily fire danger rating. On the example of the Chunsky forest ranger station (Krasnoyarsk Krai), an improved technique is suggested for making comparable local scales of forest fire danger rating based on an absolute criterion: the probable density of active fires per million ha. A method and an algorithm are described for automated construction of local fire danger scales, which should facilitate the effective creation of similar scales for any forest ranger station or aviation regional office using a database of forest fires and weather conditions. The information system of distant monitoring of the Federal Forestry Agency of Russia is analyzed for its applicability in making local scales. To supplement the existing weather station net, it is suggested that automatic compact weather stations or, if the latter is not possible, simple
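
    A minimal sketch of how such an absolute criterion could be tabulated from a fire and weather database is given below; the synthetic data, protected area and five-class binning are assumptions, not the paper's algorithm.

```python
# Sketch of building a local fire-danger scale from an absolute criterion:
# probable density of active fires per million ha, binned by a weather index.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in for a daily fire/weather log (replace with a real database).
log = pd.DataFrame({"weather_index": rng.gamma(2.0, 1500.0, size=1500)})
log["active_fires"] = rng.poisson(log["weather_index"] / 2000.0)
area_mha = 1.8   # protected area, millions of hectares (assumed)

# Bin days by the fire-weather index (e.g. Nesterov index) into 5 classes.
log["danger_class"] = pd.qcut(log["weather_index"], q=5, labels=[1, 2, 3, 4, 5])

scale = (log.groupby("danger_class", observed=True)["active_fires"]
            .mean()          # mean number of active fires per day in each class
            .div(area_mha))  # -> probable density of fires per million ha per day
print(scale)
```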

  6. A QUANTITATIVE METHOD FOR ANALYSING 3-D BRANCHING IN EMBRYONIC KIDNEYS: DEVELOPMENT OF A TECHNIQUE AND PRELIMINARY DATA

    Directory of Open Access Journals (Sweden)

    Gabriel Fricout

    2011-05-01

    Full Text Available The normal human adult kidney contains between 300,000 and 1 million nephrons (the functional units of the kidney). Nephrons develop at the tips of the branching ureteric duct, and therefore ureteric duct branching morphogenesis is critical for normal kidney development. Current methods for analysing ureteric branching are mostly qualitative, and those quantitative methods that do exist do not account for the 3-dimensional (3D) shape of the ureteric "tree". We have developed a method for measuring the total length of the ureteric tree in 3D. This method is described and preliminary data are presented. The algorithm performs a semi-automatic segmentation of a set of grey-level confocal images and an automatic skeletonisation of the resulting binary object. Measurements of length are obtained automatically, and numbers of branch points are counted manually. The final representation can be reconstructed by means of 3D volume-rendering software, providing a fully rotatable 3D perspective of the skeletonised tree and making it possible to identify and accurately measure branch lengths. Preliminary data show the total length estimates obtained with the technique to be highly reproducible: repeat estimates of total tree length vary by just 1-2%. We will now use this technique to further define the growth of the ureteric tree in vitro, under both normal culture conditions and in the presence of various levels of specific molecules suspected of regulating ureteric growth. The data obtained will provide fundamental information on the development of renal architecture, as well as on the regulation of nephron number.
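
    A minimal sketch of the segment-skeletonise-measure pipeline is given below using scikit-image and SciPy; the synthetic stack, voxel size and voxel-count length estimate are assumptions, and the latter is only a first-order approximation of the method described.

```python
# Sketch of a segment -> skeletonise -> measure pipeline for a 3-D stack.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Synthetic stand-in for a segmented confocal stack (replace with real data).
binary = np.zeros((40, 64, 64), dtype=bool)
binary[20, 10:54, 32] = True          # a "trunk" along one axis
binary[20, 32, 32:60] = True          # a side branch
binary = ndimage.binary_dilation(binary, iterations=2)

skel = skeletonize(binary)            # recent scikit-image accepts 3-D input

voxel_um = 0.5                        # assumed isotropic voxel spacing (um)
total_length_um = skel.sum() * voxel_um   # first-order length estimate

# Branch points: skeleton voxels with more than two skeleton neighbours.
kernel = np.ones((3, 3, 3)); kernel[1, 1, 1] = 0
neighbours = ndimage.convolve(skel.astype(int), kernel, mode="constant")
n_branch = int(((neighbours > 2) & skel.astype(bool)).sum())
print(f"length ~ {total_length_um:.0f} um, branch points ~ {n_branch}")
```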

  7. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large-scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct-drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large-scale wind turbine application. Thus, size reduction techniques are needed for the viability of PMGs in large-scale wind turbines. Two size reduction techniques are presented. It is demonstrated that a 25% size reduction of a 10 MW PMG is possible with a high-remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer-rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  8. Advanced techniques for energy-efficient industrial-scale continuous chromatography

    Energy Technology Data Exchange (ETDEWEB)

    DeCarli, J.P. II (Dow Chemical Co., Midland, MI (USA)); Carta, G. (Virginia Univ., Charlottesville, VA (USA). Dept. of Chemical Engineering); Byers, C.H. (Oak Ridge National Lab., TN (USA))

    1989-11-01

    Continuous annular chromatography (CAC) is a developing technology that allows truly continuous chromatographic separations. Previous work has demonstrated the utility of this technology for the separation of various materials by isocratic elution on a bench scale. Novel applications and improved operation of the process were studied in this work, demonstrating that CAC is a versatile apparatus capable of separations at high throughput. Three specific separation systems were investigated. Pilot-scale separations at high loadings were performed using an industrial sugar mixture as an example of scale-up for isocratic separations. Bench-scale experiments on a low-concentration metal ion mixture were performed to demonstrate stepwise elution, a chromatographic technique which decreases dilution and increases sorbent capacity. Finally, the separation of mixtures of amino acids by ion exchange was investigated to demonstrate the use of displacement development on the CAC. This technique, which perhaps has the most potential, allowed simultaneous separation and concentration of multicomponent mixtures on a continuous basis when applied to the CAC. Mathematical models were developed to describe the CAC performance and optimize the operating conditions. For all the systems investigated, the continuous separation performance of the CAC was found to be very nearly the same as the batchwise performance of conventional chromatography. The technology thus appears very promising for industrial applications. 43 figs., 9 tabs.

  9. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses.

    Directory of Open Access Journals (Sweden)

    Sarah S Richtering

    Full Text Available Electronic health (eHealth) strategies are evolving, making it important to have valid scales to assess eHealth and health literacy. Item response theory methods, such as the Rasch measurement model, are increasingly used for the psychometric evaluation of scales. This paper aims to examine the internal construct validity of an eHealth and health literacy scale using Rasch analysis in a population with moderate to high cardiovascular disease risk. The first 397 participants of the CONNECT study completed the electronic health Literacy Scale (eHEALS) and the Health Literacy Questionnaire (HLQ). Overall Rasch model fit as well as five key psychometric properties were analysed: unidimensionality, response thresholds, targeting, differential item functioning and internal consistency. The eHEALS had good overall model fit (χ2 = 54.8, p = 0.06), ordered response thresholds, reasonable targeting and good internal consistency (person separation index (PSI) 0.90). It did, however, appear to measure two constructs of eHealth literacy. The HLQ subscales (except subscale 5) did not fit the Rasch model (χ2: 18.18-60.60, p: 0.00-0.58) and had suboptimal targeting for most subscales. Subscales 6 to 9 displayed disordered thresholds, indicating that participants had difficulty distinguishing between response options. All subscales did, nonetheless, demonstrate moderate to good internal consistency (PSI: 0.62-0.82). Rasch analyses demonstrated that the eHEALS has good measures of internal construct validity, although it appears to capture different aspects of eHealth literacy (e.g. using eHealth and understanding eHealth). Whilst further studies are required to confirm this finding, it may be necessary for these constructs of the eHEALS to be scored separately. The nine HLQ subscales were shown to measure a single construct of health literacy. However, participants' scores may not represent their actual level of ability, as distinction between response categories was unclear for
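
    The "disordered thresholds" finding can be made concrete with the polytomous Rasch (partial credit) model, whose category probabilities are sketched below in Python; this is illustrative only, with invented threshold values, and is not the analysis software used in the study.

```python
# Category probabilities of a polytomous Rasch (partial credit) model and a
# simple check for disordered thresholds. Illustrative values only.
import numpy as np

def pcm_probs(theta, deltas):
    """P(X = k | theta) for thresholds deltas[0..m-1]; categories k = 0..m."""
    cum = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    expo = np.exp(cum - cum.max())          # numerically stabilised
    return expo / expo.sum()

deltas = [-1.2, 0.1, -0.3, 1.5]             # invented thresholds; note -0.3 < 0.1
print("disordered thresholds:", bool(np.any(np.diff(deltas) < 0)))

for theta in (-2.0, 0.0, 2.0):              # three person ability levels
    print(theta, np.round(pcm_probs(theta, deltas), 3))
```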

  10. Industrial scale production of stable isotopes employing the technique of plasma separation

    International Nuclear Information System (INIS)

    Stevenson, N.R.; Bigelow, T.S.; Tarallo, F.J.

    2003-01-01

    Calutrons, centrifuges, diffusion and distillation processes are some of the devices and techniques that have been employed to produce substantial quantities of enriched stable isotopes. Nevertheless, the availability of enriched isotopes in sufficient quantities for industrial applications remains very restricted. Industries such as those involved with medicine, semiconductors, nuclear fuel, propulsion, and national defense have identified potential needs for various enriched isotopes in large quantities. Economically producing most enriched (non-gaseous) isotopes in sufficient quantities has so far eluded commercial producers. The plasma separation process (PSP) is a commercial technique now available for producing large quantities of a wide range of enriched isotopes. Until recently, this technique had mainly been explored with small-scale ('proof-of-principle') devices built and operated at research institutes. The new Theragenics™ facility at Oak Ridge, TN houses the only existing commercial-scale PSP system. This device, which operated successfully in the 1980s, has recently been re-commissioned and is planned to be used to produce a variety of isotopes. Progress, the capabilities of this device, and its potential for impacting the world's supply of stable isotopes in the future are summarized. This technique now holds promise of opening the door to new and exciting applications of these isotopes in the future. (author)

  11. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In recent years, structural monitoring of large infrastructures (buildings, dams, bridges or, more generally, man-made structures) has attracted increased attention due to the growing interest in safety and security issues and in risk assessment through early detection. In this framework, the aim of the paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic fiber sensor based on the Brillouin scattering phenomenon, which allows spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (of the order of a metre). The second technique is based on the use of Ground Penetrating Radar (GPR), which can provide detailed images of the inner status of the structure (with a spatial resolution of less than tens of centimetres), but does not allow temporally continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  12. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    Science.gov (United States)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for the sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional- and global-scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and the nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial distributions of groundwater recharge, nitrate storage and travel time. Intensive recharge rates are located mainly in the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale used in the PCR-GLOBWB model, leading to smoothing of the recharge estimates. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops, and the shortest travel times in the vadose zone are in the south central and east parts of the
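
    For orientation, a piston-flow approximation of the unsaturated-zone travel time, the kind of quantity both approaches estimate, is sketched below; all values are invented.

```python
# Piston-flow estimate of unsaturated-zone travel time:
# t = (depth to water table x volumetric water content) / recharge rate.
# Values are invented for illustration.
depth_m = 40.0          # water table depth (m)
theta = 0.25            # average volumetric water content (-)
recharge_m_yr = 0.08    # mean annual recharge (m/yr)

travel_time_yr = depth_m * theta / recharge_m_yr
print(f"vadose-zone travel time ~ {travel_time_yr:.0f} years")  # ~125 years
```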

  13. Viscoplastic-dynamic analyses of small-scale fracture tests to obtain crack arrest toughness values for PTS conditions

    International Nuclear Information System (INIS)

    Kanninen, M.F.; Hudak, S.J. Jr; Dexter, R.J.; Couque, H.; O'Donoghue, P.E.; Polch, E.Z.

    1988-01-01

    Reliable predictions of crack arrest at the high upper-shelf toughness conditions involved in postulated pressurized thermal shock (PTS) events require procedures beyond those utilized in conventional fracture mechanics treatments. To develop such a procedure, viscoplastic-dynamic fracture mechanics finite element analyses, viscoplastic material characterization testing, and small-scale crack propagation and arrest experimentation are being combined in this research. The approach couples SwRI's viscoplastic-dynamic fracture mechanics finite element code VISCRK with experiments using duplex 4340/A533B steel compact specimens. The experiments are simulated by VISCRK computations employing the Bodner-Partom viscoplastic constitutive relation and the nonlinear fracture mechanics parameter T. The goal is to develop temperature-dependent crack arrest toughness values for A533B steel. While only room-temperature KIa values have been obtained so far, these have been found to agree closely with those obtained from wide-plate tests. (author)

  14. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
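
    A hedged sketch of the discriminant step is shown below using scikit-learn's implementation of Fisher's linear discriminant; the data are random placeholders standing in for the nine multifractal parameters of the 49 proteins.

```python
# Fisher's linear discriminant on multifractal parameter vectors, with
# random placeholder data rather than the study's 49 proteins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(49, 9))        # 9 multifractal parameters per protein
y = rng.integers(0, 2, size=49)     # e.g. alpha proteins vs. the rest

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated discriminant accuracy: {acc:.2%}")
```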

  15. Extension and application of a scaling technique for duplication of in-flight aerodynamic heat flux in ground test facilities

    NARCIS (Netherlands)

    Veraar, R.G.

    2009-01-01

    To enable direct experimental duplication of the in-flight heat flux distribution on supersonic and hypersonic vehicles, an aerodynamic heating scaling technique has been developed. The scaling technique is based on the analytical equations for convective heat transfer for laminar and turbulent
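
    Although the report's exact equations are not reproduced here, the flavour of such a scaling technique can be sketched with standard flat-plate correlations for laminar and turbulent convective heat transfer; the conditions below are invented.

```python
# Flat-plate convective heating with standard textbook correlations
# (laminar Nu_x = 0.332 Re_x^0.5 Pr^(1/3); turbulent Nu_x = 0.0296 Re_x^0.8 Pr^(1/3)).
# Generic relations, not necessarily the exact ones used in the report.
def nusselt_local(re_x: float, pr: float, turbulent: bool) -> float:
    if turbulent:
        return 0.0296 * re_x**0.8 * pr**(1 / 3)
    return 0.332 * re_x**0.5 * pr**(1 / 3)

def heat_flux(re_x, pr, k, x, t_aw, t_wall, turbulent=True):
    """q = h (T_aw - T_wall), with h from the local Nusselt number."""
    h = nusselt_local(re_x, pr, turbulent) * k / x
    return h * (t_aw - t_wall)

# Invented conditions: Re_x = 5e6, air Pr = 0.7, k = 0.026 W/m/K at x = 0.5 m.
q = heat_flux(5e6, 0.7, 0.026, 0.5, t_aw=600.0, t_wall=350.0)
print(f"q ~ {q/1e3:.1f} kW/m^2")
```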

  16. Evaluating the factor structure, item analyses, and internal consistency of hospital anxiety and depression scale in Iranian infertile patients

    Directory of Open Access Journals (Sweden)

    Payam Amini

    2017-09-01

    Full Text Available Background: The hospital anxiety and depression scale (HADS) is a common screening tool designed to measure the level of anxiety and depression in different factor structures and has been extensively used in non-psychiatric populations and individuals experiencing fertility problems. Objective: The aims of this study were to evaluate the factor structure, item analyses, and internal consistency of the HADS in Iranian infertile patients. Materials and Methods: This cross-sectional study included 651 infertile patients (248 men and 403 women) referred to a referral infertility center in Tehran, Iran between January 2014 and January 2015. Confirmatory factor analysis was used to determine the underlying factor structure of the HADS among one-, two-, and three-factor models. Several goodness-of-fit indices were utilized, such as the comparative, normed and goodness-of-fit indices, the Akaike information criterion, and the root mean squared error of approximation. In addition to the HADS, the Satisfaction with Life Scale (SWLS) questionnaire, as well as demographic and clinical information, was administered to all patients. Results: The goodness-of-fit indices from the CFAs showed that the three- and one-factor models provided the best and worst fit, respectively, to the total, male and female datasets compared with the other factor structure models. The Cronbach's alpha for the anxiety and depression subscales was 0.866 and 0.753, respectively. The HADS subscales significantly correlated with the SWLS, indicating an acceptable convergent validity. Conclusion: The HADS was found to be a three-factor structure screening instrument in the field of infertility.

  17. Contribution of analytical nuclear techniques in the reconstruction of the Brazilian pre-history analysing archaeological ceramics of Tupiguarani tradition

    International Nuclear Information System (INIS)

    Faria, Gleikam Lopes de Oliveira; Menezes, Maria Angela de B.C.; Silva, Maria Aparecida

    2011-01-01

    Due to the high importance of material vestiges for the culture of a nation, the Brazilian Council for the Environment determined that licenses to establish new enterprises are subject to a technical report concerning environmental impact, including archaeological sites affected by the enterprise. Therefore, in response to the report related to the Program for Prospection and Rescue of the Archaeological Patrimony of the Areas impacted by the installation of the Second Line of the Samarco Mining Pipeline, archaeological interventions were carried out along the coast of Espirito Santo. Tupi-Guarani Tradition vestiges were found there, the main evidence being an interesting ceramic assemblage. Archaeology can fill the gap between ancient populations and modern society by elucidating the evidence found in archaeological sites. In this context, several ceramic fragments found in the archaeological sites Hiuton and Bota-Fora were analyzed by the neutron activation technique, k0-standardization method, at CDTN using the TRIGA MARK I IPR-R1 reactor, in order to characterize their elemental composition. The elements As, Ba, Br, Ce, Co, Cr, Cs, Eu, Fe, Ga, Hf, K, La, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, U, Yb, Zn and Zr were determined. Applying robust multivariate statistical analysis in the R software, the results pointed out that the pottery from the sites was made with clay from different sources. X-ray powder diffraction analyses were carried out to determine the mineral composition, and Moessbauer spectroscopy was applied to provide information on both the firing temperature and atmosphere, in order to reconstruct the indigenous firing strategies used in pottery production. (author)

  18. Complementary techniques for solid oxide cell characterisation on micro- and nano-scale

    International Nuclear Information System (INIS)

    Wiedenmann, D.; Hauch, A.; Grobety, B.; Mogensen, M.; Vogt, U.

    2009-01-01

    High temperature steam electrolysis in solid oxide electrolysis cells (SOEC) is a highly promising way to transform clean and renewable energy from non-fossil sources into synthetic fuels such as hydrogen, methane or dimethyl ether, which have been identified as promising alternative energy carriers. Since SOECs can also operate in the reverse mode as solid oxide fuel cells (SOFC), hydrogen can be used very efficiently during peak hours to reconvert chemically stored energy into electrical energy. As solid oxide cells (SOC) work at high temperatures (700-900 °C), material degradation and evaporation can occur, e.g. from the cell sealing material, leading to poisoning effects and aging mechanisms which decrease cell efficiency and long-term durability. In order to investigate such cell degradation processes, thorough examination of SOCs often requires chemical and structural characterisation at the microscopic and nanoscopic level. The combination of different microscopy techniques, such as conventional scanning electron microscopy (SEM), electron-probe microanalysis (EPMA) and the focused ion-beam (FIB) preparation technique for transmission electron microscopy (TEM), allows post mortem analysis of cells after testing on a multi-scale level. These complementary techniques can be used to characterise structural and chemical changes over a large and representative sample area (micro-scale) on the one hand, and on the nano-scale level for selected sample details on the other. This article presents a methodical approach for the structural and chemical characterisation of changes in aged cathode-supported electrolysis cells produced at Risø DTU, Denmark. Results from the characterisation of impurities at the electrolyte/hydrogen interface caused by evaporation from sealing material are also discussed. (author)

  19. Preionization Techniques in a kJ-Scale Dense Plasma Focus

    Science.gov (United States)

    Povilus, Alexander; Shaw, Brian; Chapman, Steve; Podpaly, Yuri; Cooper, Christopher; Falabella, Steve; Prasad, Rahul; Schmidt, Andrea

    2016-10-01

    A dense plasma focus (DPF) is a type of z-pinch device that uses a high-current, coaxial plasma gun with an implosion phase to generate dense plasmas. These devices can accelerate a beam of ions to MeV-scale energies through strong electric fields generated by instabilities during the implosion of the plasma sheath. The formation of these instabilities, however, relies strongly on the history of the plasma sheath, including the evolution of the gas breakdown in the device. In an effort to reduce variability in device performance, we attempt to control the initial gas breakdown by seeding the system with free charges before the main power pulse arrives. We report on the effectiveness of two techniques developed for a kJ-scale DPF at LLNL: a miniature primer spark gap and pulsed 255 nm LED illumination. Prepared by LLNL under Contract DE-AC52-07NA27344.

  20. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization, and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature mainly covers analytical achievements during the last decade, although several earlier papers that became more valuable over time are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    Full Text Available The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
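
    A minimal sketch of the nudging (Newtonian relaxation) idea is given below; the grid, coefficients and fields are invented and have nothing to do with BOLCHEM's actual configuration.

```python
# Minimal sketch of nudging: a coarse-model field is relaxed toward a
# high-resolution reference inside a forcing window. All values invented.
import numpy as np

nx, dt, k_nudge = 200, 60.0, 1.0 / 3600.0   # grid points, step (s), 1/tau (1/s)
coarse = np.ones(nx)                        # coarse-model tracer field
high_res = np.ones(nx); high_res[80:120] = 2.0  # high-resolution reference

mask = np.zeros(nx); mask[80:120] = 1.0     # nudge only over the hot-spot area
for _ in range(240):                        # 4 model hours
    tendency = np.zeros(nx)                 # placeholder for advection/chemistry
    coarse += dt * (tendency + k_nudge * mask * (high_res - coarse))

print(coarse[60:140:10].round(2))           # field pulled toward 2.0 in the window
```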

  2. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.
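
    The core idea, diagonalising only a low-dimensional active subspace instead of the full matrix, can be illustrated with the toy Python sketch below; this mimics the source of the cost saving only and is not the actual ELG/C algorithm.

```python
# Toy illustration of the cutoff idea: diagonalise only a small "active"
# block of a large banded symmetric matrix instead of the whole matrix.
import numpy as np

n, band, n_active = 2000, 8, 120
rng = np.random.default_rng(2)
H = np.zeros((n, n))
for d in range(band + 1):                   # banded symmetric "Fock-like" matrix
    v = rng.normal(size=n - d)
    H[np.arange(n - d), np.arange(d, n)] = v
    H[np.arange(d, n), np.arange(n - d)] = v

# Conventional route: full O(n^3) diagonalisation of H.
# Cutoff route: diagonalise only the active subspace at the chain end,
# whose cost is O(n_active^3), independent of the total system size n.
active = H[-n_active:, -n_active:]
eigvals_active = np.linalg.eigvalsh(active)
print(eigvals_active[:5].round(3))
```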

  3. Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots

    Science.gov (United States)

    2010-04-01

    Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots, by Elizabeth S. Redden and Rodger A. Pettitt. [Only fragments of this record were recovered: residue of a task-rating table (driving the robot while performing the low crawl, negotiating a hill, climbing stairs, and walking) and participant comments on the head-mounted display (HMD).]

  4. Gallium Nitride: A Nano scale Study using Electron Microscopy and Associated Techniques

    International Nuclear Information System (INIS)

    Mohammed Benaissa; Vennegues, Philippe

    2008-01-01

    A complete nanoscale study of GaN thin films doped with Mg is presented. This study was carried out using TEM and associated techniques such as HREM, CBED, EDX and EELS. It was found that the presence of triangular defects (a few nanometers in size) within the GaN:Mg films was at the origin of unexpected electrical and optical behaviors, such as a decrease in the free hole density at high Mg doping. It is shown that these defects are inversion domains bounded by inversion domain boundaries. (author)

  5. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, the E-vector-field distribution down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
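
    The two simplest down-scaling rules named above can be sketched in a few lines of Python on a 2-D slice; the coarsening factor, tissue labels and dielectric values below are invented.

```python
# The two simplest down-scaling rules on a 2-D slice of a labelled
# dielectric geometry; factor, labels and values are invented.
import numpy as np
from scipy import stats

f = 5                                        # e.g. 2 mm -> 1 cm coarsening
labels = np.random.default_rng(3).integers(0, 4, size=(100, 100))  # tissue ids
eps = np.take(np.array([1.0, 50.0, 60.0, 70.0]), labels)  # permittivity per tissue

blocks = (labels.reshape(100 // f, f, 100 // f, f)
                .swapaxes(1, 2).reshape(-1, f * f))

# Winner-takes-all: each coarse cell gets the most frequent tissue label.
wta = stats.mode(blocks, axis=1).mode.reshape(100 // f, 100 // f)

# Volumetric averaging: each coarse cell gets the mean dielectric value.
vol_avg = eps.reshape(100 // f, f, 100 // f, f).mean(axis=(1, 3))
print(wta.shape, vol_avg.shape)
```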

  6. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test

  7. Evaluation of a modified 16-item Readiness for Interprofessional Learning Scale (RIPLS): Exploratory and confirmatory factor analyses.

    Science.gov (United States)

    Yu, Tzu-Chieh; Jowsey, Tanisha; Henning, Marcus

    2018-04-18

    The Readiness for Interprofessional Learning Scale (RIPLS) was developed to assess undergraduate readiness for engaging in interprofessional education (IPE). It has become an accepted and commonly used instrument. To determine the utility of a modified 16-item RIPLS instrument, exploratory and confirmatory factor analyses were performed. The data used were collected from a pre- and post-intervention study involving 360 New Zealand undergraduate students from one university. Just over half of the participants were enrolled in medicine (51%), while the remainder were in pharmacy (27%) and nursing (22%). The intervention was a two-day simulation-based IPE course focused on managing unplanned acute medical problems in hospital wards ("ward calls"). Immediately prior to the course, 288 RIPLS responses were collected, and immediately afterwards, 322 (response rates 80% and 89%, respectively). Exploratory factor analysis involving principal axis factoring with an oblique rotation method was conducted using the pre-course data. The scree plot suggested a three-factor solution over two- and four-factor solutions. Subsequent confirmatory factor analysis performed using the post-course data demonstrated partial goodness of fit for this suggested three-factor model. Based on these findings, further robust psychometric testing of the RIPLS, or modified versions of it, is recommended before embarking on its use in evaluative research in various healthcare education settings.

  8. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    International Nuclear Information System (INIS)

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding improvements available with Version 4.0 of the SCALE system and discuss the future of SCALE within the current computing and regulatory environment. The emphasis is on the improvements in SCALE-4 over those available in SCALE-3. 10 refs., 1 fig., 1 tab

  9. Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique.

    Science.gov (United States)

    Chechetkin, V R; Lobzin, V V

    2017-08-07

    The use of state-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools has led to significant progress in detailing the chromosome architecture of various organisms. However, a gap still remains between the rapidly growing structural data on chromosome folding and the large-scale genome organization. Could part of the information on chromosome folding be obtained directly from the underlying genomic DNA sequences abundantly stored in databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT) technique. DDFT serves to detect large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied to both genomic DNA sequences and corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to replication and transcription and can also be used for the assessment of temperature or supercoiling effects on chromosome folding. We tested the method on the genome of E. coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of the further capabilities of DDFT, studies of the large-scale genome organization of the bacteriophage PHIX174 and the bacterium Caulobacter crescentus are also included. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge of chromosome architecture and genome organization. Copyright © 2017 Elsevier Ltd. All rights reserved.
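
    A greatly simplified relative of this approach, a single discrete Fourier transform of a base-composition indicator rather than the paper's double transform, can be sketched as follows; the sequence and the GC indicator signal are placeholders.

    ```python
    import numpy as np

    seq = "ATGCGCGATATGCGC" * 1000             # placeholder genomic fragment
    x = np.array([1.0 if b in "GC" else 0.0 for b in seq])
    x -= x.mean()                              # remove the DC component

    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))            # cycles per base

    # In the paper's DDFT, a second transform over such a spectrum exposes evenly
    # spaced harmonics reflecting domain periodicities; here we just report the
    # dominant period of the single spectrum.
    k = np.argmax(power[1:]) + 1
    print(f"dominant period ~ {1.0 / freqs[k]:.1f} bases")
    ```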

  10. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice, without placing a significant burden on limited departmental resources. One of the most fundamental aspects to any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned as it is considered a 'high dose' technique, and hence close monitoring of patient dose is particularly important. (authors)
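
    The core of such an audit reduces to aggregating records that already exist. The sketch below, a hypothetical illustration rather than the authors' implementation, derives typical dose quantities per scanner and protocol from an exported radiology information system table; the file name, column names and reference levels are all assumptions.

    ```python
    import pandas as pd

    ris = pd.read_csv("ris_export.csv")       # hypothetical export: one row per exam

    typical = (ris.groupby(["scanner", "protocol"])["DLP_mGy_cm"]
                  .agg(n="count", median="median",
                       q3=lambda s: s.quantile(0.75)))

    drl = {"CT head": 970, "CT abdomen": 745} # illustrative reference levels (DLP)
    typical["above_DRL"] = [m > drl.get(protocol, float("inf"))
                            for (scanner, protocol), m in typical["median"].items()]
    print(typical)
    ```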

  11. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Y., E-mail: maekawa.yasunari@jaea.go.jp [Japan Atomic Energy Agency (JAEA), Quantum Beam Science Directorate, High Performance Polymer Group, 1233 Watanuki-Machi, Takasaki, Gunma-ken 370-1292 (Japan)

    2010-07-01

    The introduction of functional regions on the nanometre scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to building nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the resulting polymers with nano-structures can be applied to high-performance polymer membranes. 2) Fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer films. Hydrophilic grafting polymers are introduced into hydrophobic fluorinated polymers, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which is known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analysed by small-angle neutron scattering (SANS) experiments. These analyses revealed different structures and different formation of graft domains in the fluorinated and hydrocarbon polymer substrates: the grafted domains in the cPTFE film, which work as ion channels, grew to cover the crystallites, and the domain size appears to be similar to that of the crystallites. The PEEK-based PEM, on the other hand, has a smaller domain size, which seems to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the main concern is the energy distribution in the radial direction, perpendicular to the ion trajectory. We re-estimated the effective radius of the penumbra, within which radiation-induced grafting takes place, for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers; the channel sizes agreed well with the effective penumbra radius corresponding to absorbed doses of more than 1 kGy. (author)

  12. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    International Nuclear Information System (INIS)

    Maekawa, Y.

    2010-01-01

    The introduction of functional regions on the nanometre scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to building nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the resulting polymers with nano-structures can be applied to high-performance polymer membranes. 2) Fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer films. Hydrophilic grafting polymers are introduced into hydrophobic fluorinated polymers, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which is known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analysed by small-angle neutron scattering (SANS) experiments. These analyses revealed different structures and different formation of graft domains in the fluorinated and hydrocarbon polymer substrates: the grafted domains in the cPTFE film, which work as ion channels, grew to cover the crystallites, and the domain size appears to be similar to that of the crystallites. The PEEK-based PEM, on the other hand, has a smaller domain size, which seems to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the main concern is the energy distribution in the radial direction, perpendicular to the ion trajectory. We re-estimated the effective radius of the penumbra, within which radiation-induced grafting takes place, for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers; the channel sizes agreed well with the effective penumbra radius corresponding to absorbed doses of more than 1 kGy. (author)

  13. Analysis of Grassland Ecosystem Physiology at Multiple Scales Using Eddy Covariance, Stable Isotope and Remote Sensing Techniques

    Science.gov (United States)

    Flanagan, L. B.; Geske, N.; Emrick, C.; Johnson, B. G.

    2006-12-01

    Grassland ecosystems typically exhibit very large annual fluctuations in above-ground biomass production and net ecosystem productivity (NEP). Eddy covariance flux measurements, plant stable isotope analyses, and canopy spectral reflectance techniques have been applied to study environmental constraints on grassland ecosystem productivity and the acclimation responses of the ecosystem at a site near Lethbridge, Alberta, Canada. We observed substantial interannual variation in grassland productivity during 1999-2005. In addition, there was a strong correlation between peak above-ground biomass production and NEP calculated from eddy covariance measurements. Interannual variation in NEP was strongly controlled by the total amount of precipitation received during the growing season (April-August). We also observed significant positive correlations between a multivariate ENSO index and total growing season precipitation, and between the ENSO index and annual NEP values. This suggests that a significant fraction of the annual variability in grassland productivity was associated with ENSO during 1999-2005. Grassland productivity varies asymmetrically in response to changes in precipitation, with increases in productivity during wet years being much more pronounced than reductions during dry years. Strong increases in plant water-use efficiency, based on carbon and oxygen stable isotope analyses, contribute to the resilience of productivity during times of drought. Within a growing season, increased stomatal limitation of photosynthesis, associated with improved water-use efficiency, resulted in apparent shifts in leaf xanthophyll cycle pigments and changes to the Photochemical Reflectance Index (PRI) calculated from hyper-spectral reflectance measurements conducted at the canopy scale. These shifts in PRI were apparent before seasonal drought caused significant reductions in leaf area index (LAI) and changes to canopy-scale "greenness" based on NDVI values.

  14. Residence time distribution measurements in a pilot-scale poison tank using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Goswami, Sunil; Samantray, J S; Sharma, V K; Maheshwari, N K

    2015-09-01

    Various types of systems are used to control the reactivity and shutdown of a nuclear reactor during emergency and routine shutdown operations. Injection of boron solution (borated water) into the core of a reactor is one of the commonly used methods during emergency operation. A pilot-scale poison tank was designed and fabricated to simulate injection of boron poison into the core of a reactor along with coolant water. In order to design a full-scale poison tank, it was desired to characterize the flow of liquid from the tank. Residence time distribution (RTD) measurement and analysis was adopted to characterize the flow dynamics. A radiotracer technique was applied to measure the RTD of the aqueous phase in the tank using Bromine-82 as a radiotracer. RTD measurements were carried out with two different modes of operation of the tank and at different flow rates. In Mode-1, the radiotracer was instantaneously injected at the inlet and monitored at the outlet, whereas in Mode-2, the tank was filled with radiotracer and its concentration was measured at the outlet. From the measured RTD curves, mean residence times (MRTs), dead volume and the fraction of liquid pumped in with time were determined. The treated RTD curves were modeled using suitable mathematical models. An axial dispersion model with a high degree of backmixing was found suitable to describe the flow when operated in Mode-1, whereas a tanks-in-series model with backmixing was found suitable to describe the flow of the poison in the tank when operated in Mode-2. The results were utilized to scale up and design a full-scale poison tank for a nuclear reactor. Copyright © 2015 Elsevier Ltd. All rights reserved.
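
    The quantities reported here follow from standard RTD moment analysis. The sketch below computes the mean residence time from a synthetic tracer curve and fits a tanks-in-series model; the sampling times, the placeholder response and the use of scipy are assumptions, not the authors' data or code.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    t = np.linspace(1.0, 600.0, 600)          # s, assumed sampling times
    c = (t / 120.0) * np.exp(-t / 120.0)      # placeholder tracer curve at the outlet

    E = c / trapezoid(c, t)                   # normalised RTD, E(t)
    mrt = trapezoid(t * E, t)                 # mean residence time (first moment)

    def tanks_in_series(t, tau, N):
        """E(t) of N ideal stirred tanks in series, total residence time tau."""
        return (N / tau) ** N * t ** (N - 1) * np.exp(-N * t / tau) / gamma(N)

    (tau, N), _ = curve_fit(tanks_in_series, t, E, p0=[mrt, 2.0])
    print(f"MRT = {mrt:.0f} s, fitted tau = {tau:.0f} s, N = {N:.1f} tanks")
    ```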

  15. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    Science.gov (United States)

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the big data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing Amazon Web Services (AWS). Our system won first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance (GCTA) conference committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluated the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it took only 18.4 min to finish the analysis and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of a 5× whole-genome dataset with 500 samples; on average GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  16. Comparison of residual NAPL source removal techniques in 3D metric scale experiments

    Science.gov (United States)

    Atteia, O.; Jousse, F.; Cohen, G.; Höhener, P.

    2017-07-01

    …the contaminant fluxes, which were different for each technique. This paper presents the first comparison of four remediation techniques at the scale of 1 m3 tanks including heterogeneities. Sparging, persulfate and surfactant remove only about 50% of the mass, while thermal treatment removes more than 99%. In terms of flux removal, oxidant addition performs better when density effects are used.

  17. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which were acquired over a large area of Southern California (US) that extends for about 90,000 km2. This input dataset was processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for possible regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the extension of DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  18. Comparative analyses of population-scale phenomic data in electronic medical records reveal race-specific disease networks

    Science.gov (United States)

    Glicksberg, Benjamin S.; Li, Li; Badgeley, Marcus A.; Shameer, Khader; Kosoy, Roman; Beckmann, Noam D.; Pho, Nam; Hakenberg, Jörg; Ma, Meng; Ayers, Kristin L.; Hoffman, Gabriel E.; Dan Li, Shuyu; Schadt, Eric E.; Patel, Chirag J.; Chen, Rong; Dudley, Joel T.

    2016-01-01

    Motivation: Underrepresentation of racial groups represents an important challenge and major gap in phenomics research. Most of the current human phenomics research is based primarily on European populations; hence it is an important challenge to expand it to consider other population groups. One approach is to utilize data from EMR databases that contain patient data from diverse demographics and ancestries. The implications of this racial underrepresentation of data can be profound regarding effects on the healthcare delivery and actionability. To the best of our knowledge, our work is the first attempt to perform comparative, population-scale analyses of disease networks across three different populations, namely Caucasian (EA), African American (AA) and Hispanic/Latino (HL). Results: We compared susceptibility profiles and temporal connectivity patterns for 1988 diseases and 37 282 disease pairs represented in a clinical population of 1 025 573 patients. Accordingly, we revealed appreciable differences in disease susceptibility, temporal patterns, network structure and underlying disease connections between EA, AA and HL populations. We found 2158 significantly comorbid diseases for the EA cohort, 3265 for AA and 672 for HL. We further outlined key disease pair associations unique to each population as well as categorical enrichments of these pairs. Finally, we identified 51 key ‘hub’ diseases that are the focal points in the race-centric networks and of particular clinical importance. Incorporating race-specific disease comorbidity patterns will produce a more accurate and complete picture of the disease landscape overall and could support more precise understanding of disease relationships and patient management towards improved clinical outcomes. Contacts: rong.chen@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307606

  19. Comparative analyses of population-scale phenomic data in electronic medical records reveal race-specific disease networks.

    Science.gov (United States)

    Glicksberg, Benjamin S; Li, Li; Badgeley, Marcus A; Shameer, Khader; Kosoy, Roman; Beckmann, Noam D; Pho, Nam; Hakenberg, Jörg; Ma, Meng; Ayers, Kristin L; Hoffman, Gabriel E; Dan Li, Shuyu; Schadt, Eric E; Patel, Chirag J; Chen, Rong; Dudley, Joel T

    2016-06-15

    Underrepresentation of racial groups represents an important challenge and major gap in phenomics research. Most of the current human phenomics research is based primarily on European populations; hence it is an important challenge to expand it to consider other population groups. One approach is to utilize data from EMR databases that contain patient data from diverse demographics and ancestries. The implications of this racial underrepresentation of data can be profound regarding effects on the healthcare delivery and actionability. To the best of our knowledge, our work is the first attempt to perform comparative, population-scale analyses of disease networks across three different populations, namely Caucasian (EA), African American (AA) and Hispanic/Latino (HL). We compared susceptibility profiles and temporal connectivity patterns for 1988 diseases and 37 282 disease pairs represented in a clinical population of 1 025 573 patients. Accordingly, we revealed appreciable differences in disease susceptibility, temporal patterns, network structure and underlying disease connections between EA, AA and HL populations. We found 2158 significantly comorbid diseases for the EA cohort, 3265 for AA and 672 for HL. We further outlined key disease pair associations unique to each population as well as categorical enrichments of these pairs. Finally, we identified 51 key 'hub' diseases that are the focal points in the race-centric networks and of particular clinical importance. Incorporating race-specific disease comorbidity patterns will produce a more accurate and complete picture of the disease landscape overall and could support more precise understanding of disease relationships and patient management towards improved clinical outcomes. Contact: rong.chen@mssm.edu or joel.dudley@mssm.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
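
    One elementary building block of such comorbidity networks is a pairwise co-occurrence test. The sketch below scores a single disease pair with a one-sided Fisher's exact test on fabricated counts; it is not the authors' pipeline, and in practice the p-values for all 37 282 pairs would require multiple-testing correction.

    ```python
    from scipy.stats import fisher_exact

    def comorbidity_test(n_both, n_a_only, n_b_only, n_neither):
        """One-sided Fisher's exact test on the 2x2 table for a disease pair."""
        table = [[n_both, n_a_only], [n_b_only, n_neither]]
        return fisher_exact(table, alternative="greater")

    # Fabricated counts for one pair in a hypothetical cohort:
    odds, p = comorbidity_test(n_both=120, n_a_only=880,
                               n_b_only=400, n_neither=98600)
    print(f"odds ratio = {odds:.1f}, p = {p:.2e}")
    ```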

  20. Insights into SCP/TAPS proteins of liver flukes based on large-scale bioinformatic analyses of sequence datasets.

    Directory of Open Access Journals (Sweden)

    Cinzia Cantacessi

    Full Text Available BACKGROUND: SCP/TAPS proteins of parasitic helminths have been proposed to play key roles in fundamental biological processes linked to the invasion of and establishment in their mammalian host animals, such as the transition from free-living to parasitic stages and the modulation of host immune responses. Despite the evidence that SCP/TAPS proteins of parasitic nematodes are involved in host-parasite interactions, there is a paucity of information on this protein family for parasitic trematodes of socio-economic importance. METHODOLOGY/PRINCIPAL FINDINGS: We conducted the first large-scale study of SCP/TAPS proteins of a range of parasitic trematodes of both human and veterinary importance (including the liver flukes Clonorchis sinensis, Opisthorchis viverrini, Fasciola hepatica and F. gigantica, as well as the blood flukes Schistosoma mansoni, S. japonicum and S. haematobium). We mined all current transcriptomic and/or genomic sequence datasets from public databases, predicted secondary structures of full-length protein sequences, undertook systematic phylogenetic analyses and investigated the differential transcription of SCP/TAPS genes in O. viverrini and F. hepatica, with an emphasis on those that are up-regulated in the developmental stages infecting the mammalian host. CONCLUSIONS: This work, which sheds new light on SCP/TAPS proteins, guides future structural and functional explorations of key SCP/TAPS molecules associated with diseases caused by flatworms. Future fundamental investigations of these molecules in parasites and the integration of structural and functional data could lead to new approaches for the control of parasitic diseases.

  1. Multi-scale ancient DNA analyses confirm the western origin of Michelsberg farmers and document probable practices of human sacrifice.

    Directory of Open Access Journals (Sweden)

    Alice Beau

    Full Text Available In Europe, the Middle Neolithic is characterized by an important diversification of cultures. In northeastern France, the appearance of the Michelsberg culture has been correlated with major cultural changes and interpreted as the result of the settlement of new groups originating from the Paris Basin. This cultural transition has been accompanied by the expansion of particular funerary practices involving inhumations within circular pits and individuals in "non-conventional" positions (deposited in the pits without any particular treatment). While the status of such individuals has been highly debated, the sacrifice hypothesis has been retained for the site of Gougenheim (Alsace). At the regional level, the analysis of the Gougenheim mitochondrial gene pool (SNPs and HVR-I sequence analyses) permitted us to highlight a major genetic break associated with the emergence of the Michelsberg in the region. This genetic discontinuity appeared to be linked to new affinities with farmers from the Paris Basin, correlated with a noticeable hunter-gatherer legacy. All of the evidence gathered supports (i) the western origin of the Michelsberg groups and (ii) the potential implication of this migration in the progression of the hunter-gatherer legacy from the Paris Basin to Alsace / Western Germany at the beginning of the Late Neolithic. At the local level, we noted some differences in the maternal gene pool of individuals in "conventional" vs. "non-conventional" positions. The relative genetic isolation of these sub-groups nicely echoes both their social distinction and the hypothesis of sacrifices retained for the site. Our investigation demonstrates that a multi-scale aDNA study of ancient communities offers a unique opportunity to disentangle the complex relationships between cultural and biological evolution.

  2. Three-dimensional nanometer scale analyses of precipitate structures and local compositions in titanium aluminide engineering alloys

    Science.gov (United States)

    Gerstl, Stephan S. A.

    Titanium aluminide (TiAl) alloys are among the fastest developing class of materials for use in high temperature structural applications. Their low density and high strength make them excellent candidates for both engine and airframe applications. Creep properties of TiAl alloys, however, have been a limiting factor in applying the material to a larger commercial market. In this research, nanometer scale compositional and structural analyses of several TiAl alloys, ranging from model Ti-Al-C ternary alloys to putative commercial alloys with 10 components, are investigated utilizing three dimensional atom probe (3DAP) and transmission electron microscopies. Nanometer sized boride, silicide, and carbide precipitates are involved in strengthening TiAl alloys; however, chemical partitioning measurements reveal oxygen concentrations up to 14 at. % within the precipitate phases, resulting in the realization of oxycarbide formation contributing to the precipitation strengthening of TiAl alloys. The local compositions of lamellar microstructures and a variety of precipitates in the TiAl system, including boride, silicide, binary carbides, and intermetallic carbides, are investigated. Chemical partitioning of the microalloying elements between the alpha2/gamma lamellar phases, and the precipitate/gamma-matrix phases, is determined. Both W and Hf have been shown to exhibit a near interfacial excess of 0.26 and 0.35 atoms nm⁻², respectively, within ca. 7 nm of lamellar interfaces in a complex TiAl alloy. In the case of needle-shaped perovskite Ti3AlC carbide precipitates, periodic domain boundaries are observed 5.3 ± 0.8 nm apart along their growth axis parallel to the TiAl[001] crystallographic direction, with concomitant composition variations, after 24 hrs. at 800°C.

  3. Large-scale genome-wide association studies and meta-analyses of longitudinal change in adult lung function.

    Directory of Open Access Journals (Sweden)

    Wenbo Tang

    Full Text Available Genome-wide association studies (GWAS) have identified numerous loci influencing cross-sectional lung function, but less is known about genes influencing longitudinal change in lung function. We performed GWAS of the rate of change in forced expiratory volume in the first second (FEV1) in 14 longitudinal, population-based cohort studies comprising 27,249 adults of European ancestry, using linear mixed-effects models, and combined cohort-specific results using fixed-effect meta-analysis to identify novel genetic loci associated with longitudinal change in lung function. Gene expression analyses were subsequently performed for identified genetic loci. As a secondary aim, we estimated the mean rate of decline in FEV1 by smoking pattern, irrespective of genotypes, across these 14 studies using meta-analysis. The overall meta-analysis produced suggestive evidence for association at the novel IL16/STARD5/TMC3 locus on chromosome 15 (P = 5.71 × 10^-7). In addition, meta-analysis using the five cohorts with ≥3 FEV1 measurements per participant identified the novel ME3 locus on chromosome 11 (P = 2.18 × 10^-8) at genome-wide significance. Neither locus was associated with FEV1 decline in two additional cohort studies. We confirmed gene expression of IL16, STARD5, and ME3 in multiple lung tissues. Publicly available microarray data confirmed differential expression of all three genes in lung samples from COPD patients compared with controls. Irrespective of genotypes, the combined estimate for FEV1 decline was 26.9, 29.2 and 35.7 mL/year in never, former, and persistent smokers, respectively. In this large-scale GWAS, we identified two novel genetic loci in association with the rate of change in FEV1 that harbor candidate genes with biologically plausible functional links to lung function.
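
    The fixed-effect combination step used in such meta-analyses is the classical inverse-variance weighting. The sketch below applies it to made-up per-cohort effect estimates for a single variant; the numbers are purely illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Made-up per-cohort estimates of FEV1 decline per effect allele (mL/year):
    beta = np.array([-1.8, -2.4, -1.1, -2.0])
    se = np.array([0.9, 1.1, 0.7, 1.0])

    w = 1.0 / se**2                            # inverse-variance weights
    beta_meta = np.sum(w * beta) / np.sum(w)   # combined effect estimate
    se_meta = np.sqrt(1.0 / np.sum(w))
    p = 2.0 * norm.sf(abs(beta_meta / se_meta))
    print(f"combined beta = {beta_meta:.2f} +/- {se_meta:.2f} mL/year, p = {p:.1e}")
    ```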

  4. A scaled underwater launch system accomplished by stress wave propagation technique

    International Nuclear Information System (INIS)

    Wei Yanpeng; Wang Yiwei; Huang Chenguang; Fang Xin; Duan Zhuping

    2011-01-01

    A scaled underwater launch system based on the stress wave theory and the split Hopkinson pressure bar (SHPB) technique is developed to study the phenomenon of cavitation and other hydrodynamic features of high-speed submerged bodies. The present system can achieve a transient acceleration in the water instead of long-time acceleration outside the water. The projectile can reach a maximum speed of 30 m/s in about 200 μs using the SHPB launcher. The cavitation characteristics in the stages of acceleration and deceleration are captured by a high-speed camera. The processes of cavitation inception, development and collapse are also simulated with the commercial software FLUENT, and the results are in good agreement with experiment. There is about 20-30% energy loss during the launching process; the mechanism of energy loss was also preliminarily investigated by measuring the energies of the incident bar and the projectile. (authors)

  5. Very large scale characterization of graphene mechanical devices using a colorimetry technique.

    Science.gov (United States)

    Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer

    2017-06-08

    We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automatized image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D⁴/g³. Moreover, we extract a median adhesion energy of Γ = 0.9 J m⁻² between the membrane and the native SiO₂ at the bottom of the cavities.

  6. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic type state estimators for energy management in electric power systems. Various dynamic type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper precisely focuses on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of the high voltage electric transmission systems

  7. Clinical and molecular analyses of Beckwith-Wiedemann syndrome: Comparison between spontaneous conception and assisted reproduction techniques.

    Science.gov (United States)

    Tenorio, Jair; Romanelli, Valeria; Martin-Trujillo, Alex; Fernández, García-Moya; Segovia, Mabel; Perandones, Claudia; Pérez Jurado, Luis A; Esteller, Manel; Fraga, Mario; Arias, Pedro; Gordo, Gema; Dapía, Irene; Mena, Rocío; Palomares, María; Pérez de Nanclares, Guiomar; Nevado, Julián; García-Miñaur, Sixto; Santos-Simarro, Fernando; Martinez-Glez, Víctor; Vallespín, Elena; Monk, David; Lapunzina, Pablo

    2016-10-01

    Beckwith-Wiedemann syndrome (BWS) is an overgrowth syndrome characterized by excessive prenatal and postnatal growth, macrosomia, macroglossia, and hemihyperplasia. The molecular basis of this syndrome is complex and heterogeneous, involving genes located at 11p15.5. BWS has been correlated with assisted reproductive techniques: BWS in individuals born following assisted reproductive techniques has been found to occur four to nine times more frequently than in children with BWS born after spontaneous conception. Here, we report a series of 187 patients with BWS born either after assisted reproductive techniques or conceived naturally. Eighty-eight percent of BWS patients born via assisted reproductive techniques had hypomethylation of KCNQ1OT1:TSS-DMR, in comparison with 49% of patients with BWS conceived naturally. None of the patients with BWS born via assisted reproductive techniques had hypermethylation of H19/IGF2:IG-DMR, CDKN1C mutations, or patUPD11. We did not find differences in the frequency of multi-locus imprinting disturbances between groups. Patients with BWS born via assisted reproductive techniques had an increased frequency of advanced bone age and congenital heart disease, and a decreased frequency of earlobe anomalies, but these differences may be explained by the different molecular background compared with those with BWS and spontaneous fertilization. We conclude that there is a correlation between the molecular etiology of BWS and the type of conception. © 2016 Wiley Periodicals, Inc.

  8. Preparation for data analysis in Virgo: aspects of computing techniques and analysis techniques for the search for binary coalescences

    OpenAIRE

    Buskulic , D.

    2006-01-01

    The Virgo interferometric gravitational-wave detector is in its commissioning phase; it should reach a sensitivity allowing it to take scientific data in the second half of 2006. Preparation for the analysis of these data is under way, and this thesis deals with several aspects: - An analysis environment, VEGA, has been developed. It allows a physicist user to access and manage the data coming from Virgo, and to develop analysis code...

  9. Towards improved hydrologic predictions using data assimilation techniques for water resource management at the continental scale

    Science.gov (United States)

    Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan

    2017-04-01

    More accurate and reliable hydrologic simulations are important for many applications, such as water resource management, future water availability projections and predictions of extreme events. However, simulation of spatial and temporal variations in critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes, combining in-situ measurements with remotely sensed information to improve hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM), integrated with the Parallel Data Assimilation Framework (PDAF), was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis system COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and with ERA-Interim datasets for the period 1994-2014. A series of data assimilation experiments was conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
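
    Frameworks of this kind typically update a model ensemble with a Kalman-type analysis step. The sketch below is a generic perturbed-observation ensemble Kalman filter update, not the PDAF implementation; the ensemble size, state dimension, observed cells and error statistics are all fabricated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ens = rng.normal(0.30, 0.05, size=(50, 1000))  # 50 members x 1000 model states

    H = np.zeros((3, 1000))                        # observation operator: 3 cells
    H[[0, 1, 2], [10, 500, 900]] = 1.0
    y = np.array([0.35, 0.28, 0.31])               # observations (e.g. soil moisture)
    obs_err = 0.01
    R = np.eye(3) * obs_err**2                     # observation error covariance

    Xp = ens - ens.mean(axis=0)                    # ensemble perturbations
    PHt = Xp.T @ (Xp @ H.T) / (len(ens) - 1)       # P H^T estimated from the ensemble
    K = PHt @ np.linalg.inv(H @ PHt + R)           # Kalman gain
    y_pert = y + rng.normal(0.0, obs_err, size=(50, 3))
    ens += (y_pert - ens @ H.T) @ K.T              # analysis update of every member
    print(ens.mean(axis=0)[[10, 500, 900]])        # posterior mean at observed cells
    ```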

  10. Distribution-analytical techniques in the study of AD/HD: Delta plot analyses reveal deficits in response inhibition that are eliminated by methylphenidate treatment

    NARCIS (Netherlands)

    Ridderinkhof, K.R.; Scheres, A.; Oosterlaan, J.; Sergeant, J.A.

    2005-01-01

    The authors highlight the utility of distribution-analytical techniques in the study of individual differences and clinical disorders. Cognitive deficits associated with attention-deficit/hyperactivity disorder (AD/HD) were examined by using delta-plot analyses of performance data (reaction time and

  11. Comparisons of Particle Tracking Techniques and Galerkin Finite Element Methods in Flow Simulations on Watershed Scales

    Science.gov (United States)

    Shih, D.; Yeh, G.

    2009-12-01

    This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most common approaches in numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thus save significant computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to examine this work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model in Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of our preliminary cooperation is also sketched in this poster.

  12. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Summary Large, chronically-implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (order of 100) neurons. While the desire for cortically-based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation, and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior. PMID:18093826
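
    As a minimal stand-in for the dimensionality-reduction step discussed here (the authors consider richer methods, such as Gaussian-process factor analysis), the sketch below projects one trial of simulated population activity onto a low-dimensional state space with PCA; the data shape and parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    spikes = rng.poisson(2.0, size=(100, 50))    # one trial: 100 time bins x 50 units

    pca = PCA(n_components=3)
    trajectory = pca.fit_transform(spikes.astype(float))
    print(trajectory.shape)                      # (100, 3): a single-trial trajectory
    print(pca.explained_variance_ratio_.round(3))
    ```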

  13. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  14. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    International Nuclear Information System (INIS)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Rémi; Fourquet, Alain

    2012-01-01

    We wanted to evaluate a simplified “field-in-field” technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with a simplified field-in-field technique after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to the total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves. An analysis of early skin toxicity of a population of 15 patients was performed. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347–1472) for SFF vs. 779 mL (range, 349–1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0–63) for SFF and 1.9 mL (range, 0–57) for ESC. Monitor units were on average 25% higher in ESC plans compared with SFF. No patient treated with SFF had acute side effects above grade 1 on the NCI scale. SFF created homogeneous 3D dose distributions equivalent to electronic surface compensation with dynamic leaves. It allowed the integration of a forward planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, shorter treatment times allowed better radiation protection to the patient. Low-grade acute toxicity evaluated weekly during treatment and 2 months after treatment completion justified the pursuit of this technique for all breast patients in our department.

  15. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence disease outbreaks. Thus, understanding the spatial pattern of the outbreaks and the factors possibly interrelated with them is crucial and warrants in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistic and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from an infected place or person to others, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher case numbers. GIS proves to be a vital spatial epidemiological technique for determining the distribution pattern of the disease and for generating hypotheses about it. Future research will apply advanced geo-analysis methods and additional disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.
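
    A back-of-envelope version of the clustering check is the average nearest-neighbour ratio. The sketch below computes it for fabricated case coordinates in an assumed study window; values well below 1 indicate spatial clustering.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    # Fabricated case locations, tightly grouped to mimic a clustered outbreak:
    cases = rng.normal(loc=[5.9, 116.0], scale=0.02, size=(80, 2))

    dist, _ = cKDTree(cases).query(cases, k=2)   # k=2: nearest neighbour besides self
    observed = dist[:, 1].mean()

    area = 0.5 * 0.5                             # assumed study window, degrees^2
    expected = 0.5 / np.sqrt(len(cases) / area)  # mean NN distance under randomness
    print(f"ANN ratio = {observed / expected:.2f} (values << 1 indicate clustering)")
    ```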

  16. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    International Nuclear Information System (INIS)

    Rasam, A R A; Ghazali, R; Noor, A M M; Mohd, W M N W; Hamid, J R A; Bazlan, M J; Ahmad, N

    2014-01-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence disease outbreaks. Thus, understanding the spatial pattern of the outbreaks and the factors possibly interrelated with them is crucial and warrants in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistic and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from an infected place or person to others, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher case numbers. GIS proves to be a vital spatial epidemiological technique for determining the distribution pattern of the disease and for generating hypotheses about it. Future research will apply advanced geo-analysis methods and additional disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.

  17. Instruments and techniques for analysing the time-resolved transverse phase space distribution of high-brightness electron beams

    International Nuclear Information System (INIS)

    Rudolph, Jeniffa

    2012-01-01

    This thesis deals with the instruments and techniques used to characterise the transverse phase space distribution of high-brightness electron beams. In particular, methods are considered that allow the emittance to be measured as a function of the longitudinal coordinate within the bunch (slice emittance) with a resolution in the ps to sub-ps range. The main objective of this work is the analysis of techniques applicable to time-resolved phase space characterisation for future high-brightness electron beam sources and single-pass accelerators based on these. The competence built up by understanding and comparing different techniques is to be used for the design and operation of slice diagnostic systems for the Berlin Energy Recovery Linac Project (BERLinPro). In the framework of the thesis, two methods applicable to slice emittance measurements are considered, namely the zero-phasing technique and the use of a transverse deflector. These methods combine the conventional quadrupole scan technique with a transfer of the longitudinal distribution into a transverse distribution. Measurements were performed within different collaborative projects. The experimental setup, the measurement itself and the data analysis are discussed, as well as measurement results and simulations. In addition, the phase space tomography technique is introduced. In contrast to quadrupole scan-based techniques, tomography is model-independent and can reconstruct the phase space distribution from simple projected measurements. The developed image reconstruction routine, based on the Maximum Entropy algorithm, is introduced. The quality of the reconstruction is tested using different model distributions, simulated data and measurement data. The results of these tests are presented. The adequacy of the investigated techniques, the experimental procedures and the developed data analysis tools could be verified. The experimental and practical experience gathered during this work, the
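
    The quadrupole scan underlying both slice-emittance methods reduces to a small linear least-squares problem. The sketch below is a generic thin-lens version with synthetic beam moments, not the analysis code developed in the thesis; the drift length, scan range and moment values are assumptions.

    ```python
    import numpy as np

    L = 1.5                                      # m, quad-to-screen drift (assumed)
    k = np.linspace(-4.0, 4.0, 9)                # scanned integrated quad strengths, 1/m
    R11, R12 = 1.0 - L * k, np.full_like(k, L)   # thin-lens transfer matrix elements
    A = np.column_stack([R11**2, 2.0 * R11 * R12, R12**2])

    # Synthetic "measured" squared beam sizes from made-up second moments:
    s11, s12, s22 = 1.0e-7, -5.0e-8, 5.0e-8      # m^2, m rad, rad^2
    sigma_sq = A @ np.array([s11, s12, s22])

    # sigma_x^2(k) = R11^2 s11 + 2 R11 R12 s12 + R12^2 s22 -> linear least squares
    fit = np.linalg.lstsq(A, sigma_sq, rcond=None)[0]
    emittance = np.sqrt(fit[0] * fit[2] - fit[1]**2)   # geometric rms emittance
    print(f"emittance = {emittance:.2e} m rad")
    ```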

  18. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    International Nuclear Information System (INIS)

    Hola, Marketa; Kalvoda, Jiri; Novakova, Hana; Skoda, Radek; Kanicky, Viktor

    2011-01-01

    LA-ICP-MS and solution-based ICP-MS, in combination with electron microprobe analysis, are presented as methods for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis, which was tested on fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at a selected position on the sample. The combination of all the mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with the EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  19. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    Energy Technology Data Exchange (ETDEWEB)

    Hola, Marketa [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Kalvoda, Jiri, E-mail: jkalvoda@centrum.cz [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Novakova, Hana [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Skoda, Radek [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Kanicky, Viktor [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic)

    2011-01-01

    LA-ICP-MS and solution-based ICP-MS, in combination with electron microprobe analysis, are presented as methods for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis, which was tested on fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at a selected position on the sample. The combination of all the mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with the EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  20. CO{sub 2} Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Laes, Denise; Eisinger, Chris; Morgan, Craig; Rauzi, Steve; Scholle, Dana; Scott, Phyllis; Lee, Si-Yong; Zaluski, Wade; Esser, Richard; Matthews, Vince; McPherson, Brian

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the “Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region” (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km2) site analyses are usually possible, and such can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  1. Item Response Theory Analyses of the Parent and Teacher Ratings of the DSM-IV ADHD Rating Scale

    Science.gov (United States)

    Gomez, Rapson

    2008-01-01

    The graded response model (GRM), which is based on item response theory (IRT), was used to evaluate the psychometric properties of the inattention and hyperactivity/impulsivity symptoms in an ADHD rating scale. To accomplish this, parents and teachers completed the DSM-IV ADHD Rating Scale (DARS; Gomez et al., "Journal of Child Psychology and…
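
    As a rough, self-contained illustration of the graded response model referred to above (not the authors' analysis code), the following Python sketch computes GRM category probabilities for one polytomous rating-scale item; the discrimination and threshold parameters are invented for the example.

      import numpy as np

      def grm_category_probs(theta, a, b):
          # Samejima's graded response model: boundary curves
          # P*(k) = P(X >= k | theta) = 1 / (1 + exp(-a * (theta - b_k)))
          b = np.asarray(b, dtype=float)
          p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          # Category probabilities are differences of adjacent boundary curves
          bounds = np.concatenate(([1.0], p_star, [0.0]))
          return bounds[:-1] - bounds[1:]

      # Illustrative 4-point symptom rating item (parameters are made up)
      probs = grm_category_probs(theta=0.5, a=1.8, b=[-1.0, 0.2, 1.4])
      print(probs, probs.sum())  # four category probabilities summing to 1.0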

  2. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques.

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-03-13

    The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). On the basis of title words and citation relations, publications in the period 2000-2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.
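
    A minimal sketch of the kind of citation-based publication clustering the review describes, assuming the publication identifiers and citation links below are hypothetical stand-ins for records parsed from a bibliographic export; modularity clustering is used here as a stand-in for the clustering technique of the original tool.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Toy citation data (publication id -> ids it cites); real input would
      # be parsed from a bibliographic database export such as Scopus.
      citations = {
          "p1": ["p2", "p3"], "p2": ["p3"], "p3": [],
          "p4": ["p2"], "p5": ["p6"], "p6": [],
      }

      # Build an undirected graph of direct citation relations; indirect
      # relations (co-citation, bibliographic coupling) could be added as
      # extra weighted edges in the same structure.
      G = nx.Graph()
      G.add_nodes_from(citations)
      for paper, refs in citations.items():
          G.add_edges_from((paper, r) for r in refs)

      # Group publications into clusters by modularity, analogous in spirit
      # to the publication clustering described in the abstract.
      for i, community in enumerate(greedy_modularity_communities(G), start=1):
          print(f"cluster {i}:", sorted(community))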

  3. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-01-01

    Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). Methods On the basis of title words and citation relations, publications in the period 2000–2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. Results A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. Conclusions A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.

  4. Experimental investigations of micro-scale flow and heat transfer phenomena by using molecular tagging techniques

    International Nuclear Information System (INIS)

    Hu, Hui; Jin, Zheyan; Lum, Chee; Nocera, Daniel; Koochesfahani, Manoochehr

    2010-01-01

    Recent progress made in the development of novel molecule-based flow diagnostic techniques, including molecular tagging velocimetry (MTV) and lifetime-based molecular tagging thermometry (MTT), to achieve simultaneous measurements of multiple important flow variables for micro-flows and micro-scale heat transfer studies is reported in this study. The focus of the work described here is the particular class of molecular tagging tracers that relies on phosphorescence. Instead of using tiny particles, especially designed phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, are used as tracers for both flow velocity and temperature measurements. A pulsed laser is used to 'tag' the tracer molecules in the regions of interest, and the tagged molecules are imaged at two successive times within the photoluminescence lifetime of the tracer molecules. The measured Lagrangian displacement of the tagged molecules provides the estimate of the fluid velocity. The simultaneous temperature measurement is achieved by taking advantage of the temperature dependence of phosphorescence lifetime, which is estimated from the intensity ratio of the tagged molecules in the acquired two phosphorescence images. The implementation and application of the molecular tagging approach for micro-scale thermal flow studies are demonstrated by two examples. The first example is to conduct simultaneous flow velocity and temperature measurements inside a microchannel to quantify the transient behavior of electroosmotic flow (EOF) to elucidate underlying physics associated with the effects of Joule heating on electrokinematically driven flows. The second example is to examine the time evolution of the unsteady heat transfer and phase changing process inside micro-sized, icing water droplets, which is pertinent to the ice formation and accretion processes as water droplets impinge onto cold wind turbine blades
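
    As a hedged sketch of the lifetime-based thermometry principle described above: for a single-exponential phosphorescence decay, two gated images acquired a known delay apart yield a per-pixel lifetime, which a calibration curve converts to temperature. All image values and calibration constants below are invented for illustration.

      import numpy as np

      def lifetime_map(img1, img2, dt):
          # Single-exponential phosphorescence decay implies
          # I2 / I1 = exp(-dt / tau)  =>  tau = dt / ln(I1 / I2)
          img1 = np.asarray(img1, dtype=float)
          img2 = np.asarray(img2, dtype=float)
          with np.errstate(divide="ignore", invalid="ignore"):
              return dt / np.log(img1 / img2)

      def temperature_map(tau, tau_ref, t_ref, dtau_dT):
          # Invert a locally linearised calibration tau(T); a real
          # calibration curve would be measured for the tracer used.
          return t_ref + (tau - tau_ref) / dtau_dT

      # Toy 2x2 image pair acquired 2 ms apart (arbitrary intensity units)
      I1 = np.array([[100.0, 120.0], [90.0, 110.0]])
      I2 = np.array([[60.0, 80.0], [50.0, 70.0]])
      tau = lifetime_map(I1, I2, dt=2e-3)
      T = temperature_map(tau, tau_ref=4e-3, t_ref=20.0, dtau_dT=-1e-4)  # s per deg C
      print(tau, T)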

  5. Spraying Techniques for Large Scale Manufacturing of PEM-FC Electrodes

    Science.gov (United States)

    Hoffman, Casey J.

    Fuel cells are highly efficient energy conversion devices that represent one part of the solution to the world's current energy crisis in the midst of global climate change. When supplied with the necessary reactant gasses, fuel cells produce only electricity, heat, and water. The fuel used, namely hydrogen, is available from many sources including natural gas and the electrolysis of water. If the electricity for electrolysis is generated by renewable energy (e.g., solar and wind power), fuel cells represent a completely 'green' method of producing electricity. The thought of being able to produce electricity to power homes, vehicles, and other portable or stationary equipment with essentially zero environmentally harmful emissions has been driving academic and industrial fuel cell research and development with the goal of successfully commercializing this technology. Unfortunately, fuel cells cannot achieve any appreciable market penetration at their current costs. The author's hypothesis is that the development of automated, non-contact deposition methods for electrode manufacturing will improve performance and process flexibility, thereby helping to accelerate the commercialization of PEMFC technology. The overarching motivation for this research was to lower the cost of manufacturing fuel cell electrodes and bring the technology one step closer to commercial viability. The author has proven this hypothesis through a detailed study of two non-contact spraying methods. These scalable deposition systems were incorporated into an automated electrode manufacturing system that was designed and built by the author for this research. The electrode manufacturing techniques developed by the author have been shown to produce electrodes that outperform a common lab-scale contact method that was studied as a baseline, as well as several commercially available electrodes. In addition, these scalable, large scale electrode manufacturing processes developed by the author are

  6. Nuclear analytical techniques applied to the large scale measurements of atmospheric aerosols in the Amazon region

    International Nuclear Information System (INIS)

    Gerab, Fabio

    1996-03-01

    This work presents the characterization of the atmospheric aerosol collected in different places of the Amazon Basin. We studied both the biogenic emission from the forest and the particulate material emitted to the atmosphere by large scale man-made burning during the dry season. The samples were collected during a three year period at two different locations in the Amazon, namely the Alta Floresta (MT) and Serra do Navio (AP) regions, using stacked filter units. These regions represent two different atmospheric compositions: the aerosol is dominated by the forest's natural biogenic emission at Serra do Navio, while at Alta Floresta it shows an important contribution from man-made burning during the dry season. At Alta Floresta we also took samples in gold shops in order to characterize mercury emission to the atmosphere related to gold prospecting activity in the Amazon. Airplanes were used for aerosol sampling during the 1992 and 1993 dry seasons to characterize the atmospheric aerosol produced by man-made burning over large Amazonian areas. The samples were analyzed using several nuclear analytical techniques: Particle Induced X-ray Emission for the quantitative analysis of trace elements with atomic number above 11; Particle Induced Gamma-ray Emission for the quantitative analysis of Na; and Proton Microprobe for the characterization of individual aerosol particles. The reflectance technique was used for black carbon quantification, gravimetric analysis to determine the total atmospheric aerosol concentration, and Cold Vapor Atomic Absorption Spectroscopy for quantitative analysis of mercury in the particulate matter from the Alta Floresta gold shops. Ion chromatography was used to quantify the ionic contents of the fine mode particulate samples from Serra do Navio. Multivariate statistical analysis was used to identify and characterize the sources of the atmospheric aerosol present in the sampled regions. (author)
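
    The multivariate source-identification step mentioned at the end of the abstract can be sketched with a simple principal component analysis of a sample-by-element concentration matrix; the two synthetic source profiles below (soil-dust-like and biomass-burning-like) are invented stand-ins for real PIXE data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # Hypothetical element concentrations (rows: filter samples, columns:
      # five elements/species); each sample mixes two source profiles.
      rng = np.random.default_rng(0)
      soil = rng.lognormal(size=(40, 1)) * np.array([1.0, 0.9, 0.1, 0.2, 0.1])
      burn = rng.lognormal(size=(40, 1)) * np.array([0.05, 0.1, 0.6, 1.0, 0.9])
      X = soil + burn

      # Standardize, then look at the leading components: elements loading
      # together on one component hint at a common emission source.
      Z = StandardScaler().fit_transform(X)
      pca = PCA(n_components=2).fit(Z)
      print("explained variance:", pca.explained_variance_ratio_)
      print("loadings:\n", pca.components_)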

  7. Japanese standard method for safety evaluation using best estimate code based on uncertainty and scaling analyses with statistical approach

    International Nuclear Information System (INIS)

    Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi

    2009-01-01

    Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, the use of conservative analytical codes is not considered essential. The standards committee of the Atomic Energy Society of Japan (AESJ) drew up a standard for using best estimate codes in safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)

  8. Systematic study of the effects of scaling techniques in numerical simulations with application to enhanced geothermal systems

    Science.gov (United States)

    Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Numerical modeling is a well established tool in rock mechanics studies investigating a wide range of problems. A realistic rock mechanical model is especially needed for estimating the seismic risk of geothermal energy plants. To simulate a time evolving system, two different approaches must be distinguished: implicit methods for solving linear equations are unconditionally stable, while explicit methods are limited by the time step. However, explicit methods are often preferred because of their limited memory demand, their scalability in parallel computing, and the simple implementation of complex boundary conditions. In explicit modeling of elastoplastic dynamics the time step is limited by the rock density. Mass scaling techniques, which artificially increase the rock density by several orders of magnitude, can be used to overcome this limit and significantly reduce computation time. In the context of geothermal energy this is of great interest because, in a coupled hydro-mechanical model, the time step of the mechanical part is significantly smaller than that of the fluid flow. Mass scaling can also be combined with time scaling, which increases the rate of physical processes, assuming that those processes are rate independent. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically done by trial and error. Scaling techniques are also often built into commercial software packages, hidden from the untrained user. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this work, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models
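
    The stability limit that motivates mass scaling can be made concrete in a few lines of Python: the explicit time step is bounded by the grid spacing divided by the elastic wave speed, so inflating the density by a factor f raises the admissible step by sqrt(f). The material values below are generic, not those of the study.

      import numpy as np

      def stable_time_step(dx, E, rho):
          # CFL-type stability limit for explicit 1-D elastodynamics:
          # dt <= dx / c with elastic wave speed c = sqrt(E / rho)
          return dx / np.sqrt(E / rho)

      dx, E, rho = 0.01, 50e9, 2700.0  # 1 cm grid, granite-like stiffness/density
      print(f"physical time step: {stable_time_step(dx, E, rho):.2e} s")

      # Mass scaling: multiplying the density by f raises the stable time
      # step by sqrt(f), at the price of altered inertial (dynamic) response.
      for f in (1e2, 1e4, 1e6):
          print(f"mass scaling x{f:g}: dt = {stable_time_step(dx, E, f * rho):.2e} s")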

  9. Analyses of the Short Periodical Part of the Spectrum of Pole Coordinate Variations Determined by the Astrometric and Laser Technique

    Science.gov (United States)

    Kołaczek, B.; Kosek, W.; Galas, R.

    Series of BIH astrometric (BIH-ASTR) pole coordinates and of CSR LAGEOS laser ranging (CSR-LALAR) pole coordinates, determined in the MERIT Campaign in the years 1972-1986 and 1983-1986, respectively, have been filtered by different band pass filters consisting of a low pass Gauss filter and a high pass Butterworth filter. The filtered residuals were analysed by MESA (Maximum Entropy Spectral Analysis) and by Ormsby narrow band pass filters in order to find numerically modeled signals that best approximate these residuals.
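
    A minimal sketch of the kind of band pass filtering described above, assuming a synthetic daily pole-coordinate-like series and generic cutoffs: a zero-phase Butterworth high pass removes the slow drift and a Gaussian low pass suppresses short-period noise, leaving the band that contains the Chandler-like and annual terms.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from scipy.signal import butter, filtfilt

      # Synthetic daily series: Chandler-like (~435 d) and annual (~365 d)
      # oscillations, a slow drift, and noise (amplitudes are arbitrary).
      rng = np.random.default_rng(1)
      t = np.arange(5000.0)
      x = (0.10 * np.sin(2 * np.pi * t / 435.0)
           + 0.08 * np.sin(2 * np.pi * t / 365.0)
           + 1e-4 * t
           + 0.01 * rng.standard_normal(t.size))

      # High-pass Butterworth (zero-phase via filtfilt) removes the drift ...
      b, a = butter(4, 1.0 / 1000.0, btype="highpass", fs=1.0)
      highpassed = filtfilt(b, a, x)
      # ... and a Gaussian low-pass suppresses short-period noise.
      band = gaussian_filter1d(highpassed, sigma=30.0)
      print(band[:5])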

  10. Recent hydrological variability and extreme precipitation events in Moroccan Middle-Atlas mountains: micro-scale analyses of lacustrine sediments

    Science.gov (United States)

    Jouve, Guillaume; Vidal, Laurence; Adallal, Rachid; Bard, Edouard; Benkaddour, Abdel; Chapron, Emmanuel; Courp, Thierry; Dezileau, Laurent; Hébert, Bertil; Rhoujjati, Ali; Simonneau, Anaelle; Sonzogni, Corinne; Sylvestre, Florence; Tachikawa, Kazuyo; Viry, Elisabeth

    2016-04-01

    Since the 1990s, the Mediterranean basin has undergone an increase in extreme precipitation events and droughts that is likely to intensify in the XXI century, and whose origin is attributable to human activities since 1850 (IPCC, 2013). Regional climate models indicate a strengthening of flood episodes at the end of the XXI century in Morocco (Tramblay et al, 2012). To understand recent hydrological and paleohydrological variability in North Africa, our study focuses on the macro- and micro-scale analysis of sedimentary sequences from Lake Azigza (Moroccan Middle Atlas Mountains) covering the last few centuries. This lake is relevant because local site monitoring revealed that lake water table levels were correlated with the precipitation regime (Adallal R., PhD Thesis in progress). The aim of our study is to distinguish sedimentary facies characteristic of low and high lake levels, in order to reconstruct past dry and wet periods during the last two hundred years. Here, we present results from sedimentological (lithology, grain size, microstructures under thin sections), geochemical (XRF) and physical (radiography) analyses of short sedimentary cores (64 cm long) taken from the deep basin of Lake Azigza (30 meters water depth). The cores have been dated (210Pb and 137Cs radionuclides, and 14C dating). Two main facies were distinguished: one organic-rich facies composed of wood fragments and several reworked layers, characterized by Mn peaks; and a second facies composed of terrigenous clastic sediments, without wood or reworked layers, characterized by Fe, Ti, Si and K peaks. The first facies is interpreted as a high lake level stand. Indeed, the highest paleoshoreline is close to the vegetation, and steeper banks can increase the current velocity, allowing the transport of wood fragments during extreme precipitation events. The Mn peaks are interpreted as Mn oxide precipitation under well-oxygenated deep waters after runoff events. The second facies is linked to periods of

  11. Scaling model for prediction of radionuclide activity in cooling water using a regression triplet technique

    International Nuclear Information System (INIS)

    Silvia Dulanska; Lubomir Matel; Milan Meloun

    2010-01-01

    The decommissioning of the nuclear power plant (NPP) A1 Jaslovske Bohunice (Slovakia) is a complicated set of problems that is highly demanding both technically and financially. The basic goal of the decommissioning process is the total elimination of radioactive materials from the nuclear power plant area, and radwaste treatment to a form suitable for safe disposal. The initial conditions of decommissioning also include elimination of operational events, preparation and transport of the fuel from the plant territory, and radiochemical and physical-chemical characterization of the radioactive wastes. One of the problems was, and still is, the processing of liquid radioactive wastes. One such medium is the cooling water of the long-term spent fuel storage. A suitable scaling model for predicting the activity of the hard-to-detect radionuclides 239,240Pu and 90Sr and of gross beta activity in cooling water has been built using regression triplet analysis and regression diagnostics. (author)
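
    The core of such a scaling model is a calibration of a hard-to-detect nuclide against an easily measured key nuclide, commonly as a log-log regression. The sketch below shows only that fit (the full regression triplet method adds systematic criticism of the data, the model and the estimation method); all activities are invented for illustration.

      import numpy as np
      from scipy import stats

      # Hypothetical paired activities (Bq/L) in cooling-water samples
      cs137 = np.array([12.0, 30.0, 55.0, 110.0, 240.0, 500.0])  # key nuclide
      pu239 = np.array([0.04, 0.09, 0.16, 0.35, 0.70, 1.60])     # hard-to-detect

      # Classical scaling-factor approach: fit log A_Pu = a + b * log A_Cs
      res = stats.linregress(np.log10(cs137), np.log10(pu239))
      print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r={res.rvalue:.3f}")

      # Predict the hard-to-detect activity from a new key-nuclide measurement
      predict = lambda a_cs: 10 ** (res.intercept + res.slope * np.log10(a_cs))
      print(f"predicted 239,240Pu at 80 Bq/L 137Cs: {predict(80):.3f} Bq/L")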

  12. Large scale distribution monitoring of FRP-OF based on BOTDR technique for infrastructures

    Science.gov (United States)

    Zhou, Zhi; He, Jianping; Yan, Kai; Ou, Jinping

    2007-04-01

    The BOTDA(R) sensing technique is considered one of the most practical instrument solutions for monitoring large-sized structures. However, a major obstacle still prevents applying BOTDA(R) over large-scale areas: the high cost and limited reliability of the sensing head, which are associated with sensor installation and survival. In this paper, we report a novel low-cost and highly reliable BOTDA(R) sensing head using FRP (Fiber Reinforced Polymer)-bare optical fiber rebar, named BOTDA(R)-FRP-OF. We investigated the surface bonding and its mechanical strength by SEM and strength experiments. Since the strain difference between the OF and the host matrix may result in measurement error, the strain transfer from host to OF has been theoretically studied. Furthermore, the strain and temperature sensing properties of GFRP-OFs at different gauge lengths were tested under different spatial and readout resolutions using a commercial BOTDA instrument. A dual FRP-OF temperature compensation method has also been proposed and analyzed. Finally, BOTDA(R)-OFs have been applied to the Tiyu West Road civil structure in Guangzhou and to the Daqing Highway. This novel FRP-OF rebar shows both high strength and good sensing properties, and can be used for long-term SHM of civil infrastructure.

  13. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection and post-test analysis and results are presented in this paper.
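
    The core of digital image correlation can be sketched as template matching between a reference and a deformed image: the sketch below finds the integer-pixel displacement of one subset by maximising zero-normalised cross-correlation, leaving out the subpixel refinement and strain computation a real DIC system performs. The toy speckle images are generated in the example.

      import numpy as np

      def ncc_displacement(ref, cur, y, x, half=10, search=5):
          # Track one subset centred at (y, x): scan a +/- `search` pixel
          # window and keep the shift with the highest ZNCC score.
          patch = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
          patch = (patch - patch.mean()) / (patch.std() + 1e-12)
          best_score, best_uv = -np.inf, (0, 0)
          for dv in range(-search, search + 1):
              for du in range(-search, search + 1):
                  cand = cur[y + dv - half:y + dv + half + 1,
                             x + du - half:x + du + half + 1].astype(float)
                  cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                  score = float((patch * cand).mean())
                  if score > best_score:
                      best_score, best_uv = score, (du, dv)
          return best_uv

      # Toy speckle pattern shifted by a known amount (3 down, 2 right)
      rng = np.random.default_rng(4)
      ref = rng.random((80, 80))
      cur = np.roll(np.roll(ref, 3, axis=0), 2, axis=1)
      print(ncc_displacement(ref, cur, y=40, x=40))  # expect (2, 3) as (u, v)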

  14. Recognition of Activities of Daily Living Based on Environmental Analyses Using Audio Fingerprinting Techniques: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Ivan Miguel Pires

    2018-01-01

    An increase in the accuracy of identification of Activities of Daily Living (ADL) is very important for different goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although this is usually used to identify the location, ADL recognition can be improved with the identification of the sound in that particular environment. This paper reviews audio fingerprinting techniques that can be used with the acoustic data acquired from mobile devices. A comprehensive literature search was conducted in order to identify relevant English language works aimed at the identification of the environment of ADLs using data acquired with mobile devices, published between 2002 and 2017. In total, 40 studies were analyzed and selected from 115 citations. The results highlight several audio fingerprinting techniques, including modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), Principal Component Analysis (PCA), Fast Fourier Transform (FFT), Gaussian mixture models (GMM), likelihood estimation, logarithmic moduled complex lapped transform (LMCLT), support vector machine (SVM), constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and discrete cosine transform (DCT).
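
    As a toy illustration of audio fingerprinting (not one of the reviewed algorithms), the sketch below reduces an audio clip to the dominant spectral peaks of each spectrogram frame; production fingerprints such as the Philips robust hash instead derive binary hashes from band-energy differences.

      import numpy as np
      from scipy.signal import spectrogram

      def peak_fingerprint(audio, fs, n_peaks=5):
          # Keep the n strongest spectral peak frequencies per time slice;
          # matching clips then reduces to comparing peak sequences.
          f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024)
          peaks = f[np.argsort(Sxx, axis=0)[-n_peaks:, :]]
          return np.sort(peaks, axis=0).T  # one row of peak freqs per frame

      fs = 8000
      tt = np.arange(fs) / fs  # one second of toy "ambient" audio
      audio = np.sin(2 * np.pi * 440 * tt) + 0.5 * np.sin(2 * np.pi * 900 * tt)
      print(peak_fingerprint(audio, fs)[:3])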

  15. Analysing data from observer studies in medical imaging research: An introductory guide to free-response techniques

    International Nuclear Information System (INIS)

    Thompson, J.D.; Manning, D.J.; Hogg, P.

    2014-01-01

    Observer performance methods maintain their place in radiology research, particularly in the assessment of the diagnostic accuracy of new and existing techniques, despite not being fully embraced by the wider audience in medical imaging. The receiver operating characteristic (ROC) paradigm has been widely used in research and the latest location sensitive methods allow an analysis that is closer to the clinical scenario. This paper discusses the underpinning theories behind observer performance assessment, exploring the potential sources of error and the development of the ROC method. The paper progresses by explaining the clinical relevance and statistical suitability of the free-response ROC (FROC) paradigm, and the methodological considerations for those wishing to perform an observer performance study
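
    A simplified figure of merit in the free-response spirit can be computed directly from reader marks. The sketch below estimates the probability that a lesion rating exceeds the highest-rated false mark on a normal case, which is a simplification of the JAFROC figure of merit; the ratings are invented.

      import numpy as np

      def jafroc_like_fom(lesion_ratings, normal_case_max_ratings):
          # Probability that a randomly chosen lesion is rated higher than
          # the highest-rated false mark on a randomly chosen normal case
          # (ties count one half).
          les = np.asarray(lesion_ratings, dtype=float)
          fps = np.asarray(normal_case_max_ratings, dtype=float)
          wins = (les[:, None] > fps[None, :]).sum()
          ties = (les[:, None] == fps[None, :]).sum()
          return (wins + 0.5 * ties) / (les.size * fps.size)

      # Hypothetical reader data: confidence ratings on a 1-4 scale
      print(jafroc_like_fom([4, 3, 3, 2, 4], [1, 2, 3, 1]))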

  16. Observation and quantitative analyses of the skeletal and central nervous systems of human embryos and fetuses using microimaging techniques

    International Nuclear Information System (INIS)

    Shiota, Kohei; Yamada, Shigehito; Tsuchiya, Maiko; Nakajima, Takashi; Takakuwa, Tetsuya; Morimoto, Naoki; Ogihara, Naomichi; Katayama, Kazumichi; Kose, Katsumi

    2011-01-01

    High resolution images obtained with the microimaging techniques of CT, novel MR microscopy and episcopic fluorescence image capture (EFIC) have made it possible to trace the organogenesis of the central nervous system (CNS) and cranium in human embryos and fetuses. Helical CT was conducted on Kyoto University's stock specimens of 31 fetuses at gestational stages 8-24 weeks to observe the skeletal development of the neuro- and viscerocranium in 2D and 3D views. Sixty-seven landmarks were defined on the images, at the outer surface and lumen of the skull, to analyze the morphological development. The increase of cranial length was found to be significant relative to width and height in the fetus, confirming the faster development of the neurocranium compared with the viscero-region. Next, 1.5/2.34 T MR microscopic imaging was conducted on fixed specimens of more than 1000 embryos at 4-8 weeks after fertilization. For this, a newly developed contrast optimization, mapping the specimens by relaxation time, was performed to acquire a resolution of 80-120 micrometers, the highest in the world, which enabled imaging of the primordia of inner embryonic structures such as the brain, spinal cord, choroid plexus, and the skeletons of the skull and spinal column. These findings were considered helpful for the analysis and diagnosis of early development. EFIC of embryos was conducted for the first time in the world: the spontaneous fluorescence of cross sections was captured with a fluorescence microscope at a resolution as high as <10 micrometers to reconstruct 2D/3D images. EFIC was found to give images of the embryonic CNS, ventricular system, and layering structures of the brain and spinal cord without staining, and to quantify sequential changes of their volumes during development. The reported microimaging techniques were concluded to be useful for the analysis of normal and abnormal early development of the CNS and skull in humans. (T.T.)

  17. Anterior stromal puncture with staining: A modified technique for preoperative reference corneal marking for toric lenses and its retrospective analyses

    Directory of Open Access Journals (Sweden)

    Sahil Bhandari

    2016-01-01

    Introduction: Toric intraocular lenses (IOLs) are an effective way of compensating preexisting corneal astigmatism during cataract surgery. To achieve success, it is imperative to align the toric IOLs in the desired position, and preoperative reference marking is one of the three important steps for accurate alignment. To make the marking procedure simpler and effective, we have modified the conventional three-step slit lamp-based technique. Materials and Methods: The patient is seated in front of the slit lamp and asked to keep the chin over the chin rest. A 26-gauge bent needle with its tip stained by a sterile blue ink marker is used to make anterior stromal punctures (ASP) at the edges of the horizontal 180° axis near the limbus. Results: A total of 58 eyes were retrospectively evaluated. Mean (±SD) IOL deviation on day 1 and day 30 was 5.7 ± 6.5° and 4.7 ± 5.6°, respectively. Median IOL misalignment on day 1 and day 30 was 3°. Redialing of the IOL was required in 2 (3.4%) eyes only, all of which were performed within 1 week of surgery. In total, 2 (3.7%) eyes had a residual astigmatism of −0.5 Dcyl and −1.0 Dcyl, respectively. Conclusion: ASP is an effective technique for reference marking, technically simpler, and can be practiced by most surgeons. It avoids the necessity of high-end sophisticated machinery and gives a better platform for reference corneal marking, along with the benefit of reproducibility and simplicity.

  18. Challenges in analysing and visualizing large-scale molecular dynamics simulations: domain and defect formation in lung surfactant monolayers

    International Nuclear Information System (INIS)

    Mendez-Villuendas, E; Baoukina, S; Tieleman, D P

    2012-01-01

    Molecular dynamics simulations have rapidly grown in size and complexity as computers have become more powerful and molecular dynamics software more efficient. Using coarse-grained models like MARTINI, system sizes of the order of 50 nm × 50 nm × 50 nm can be simulated on commodity clusters on microsecond time scales. For simulations of biological membranes and monolayers mimicking lung surfactant, this enables large-scale transformations and complex mixtures of lipids and proteins. Here we use a simulation of a monolayer with three phospholipid components, cholesterol, lung surfactant proteins, water, and ions on a ten microsecond time scale to illustrate some current challenges in analysis. In the simulation, phase separation occurs, followed by the formation of a bilayer fold in which lipids and lung surfactant protein form a highly curved structure in the aqueous phase. We use Voronoi analysis to obtain detailed physical properties of the different components and phases, and calculate local mean and Gaussian curvatures of the bilayer fold.
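
    The Voronoi analysis mentioned above can be sketched in a few lines: given 2-D positions (for example lipid headgroups projected onto the monolayer plane), each molecule's local area is the area of its Voronoi cell. The random points below stand in for real trajectory coordinates.

      import numpy as np
      from scipy.spatial import ConvexHull, Voronoi

      def voronoi_areas(points):
          # Area of each bounded Voronoi cell; cells touching the open
          # boundary are unbounded and reported as NaN.
          vor = Voronoi(points)
          areas = np.full(len(points), np.nan)
          for i, region_idx in enumerate(vor.point_region):
              region = vor.regions[region_idx]
              if -1 in region or len(region) == 0:
                  continue  # unbounded cell at the edge of the patch
              # ConvexHull "volume" of 2-D points is the polygon area
              areas[i] = ConvexHull(vor.vertices[region]).volume
          return areas

      pts = np.random.default_rng(2).random((200, 2)) * 10.0  # toy 10x10 patch
      a = voronoi_areas(pts)
      print(np.nanmean(a))  # mean area per molecule in the interior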

  19. Innovative Techniques for Large-Scale Collection, Processing, and Storage of Eelgrass (Zostera marina) Seeds

    National Research Council Canada - National Science Library

    Orth, Robert J; Marion, Scott R

    2007-01-01

    .... Although methods for hand-collecting, processing and storing eelgrass seeds have advanced to match the scale of collections, the number of seeds collected has limited the scale of restoration efforts...

  20. Large-scale association analyses identify new loci influencing glycemic traits and provide insight into the underlying biological pathways

    DEFF Research Database (Denmark)

    Scott, Robert A; Lagou, Vasiliki; Welch, Ryan P

    2012-01-01

    Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have increased the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes risk

  1. Large-scale association analyses identify new loci influencing glycemic traits and provide insight into the underlying biological pathways

    NARCIS (Netherlands)

    Scott, Robert A.; Lagou, Vasiliki; Welch, Ryan P.; Wheeler, Eleanor; Montasser, May E.; Luan, Jian'an; Mägi, Reedik; Strawbridge, Rona J.; Rehnberg, Emil; Gustafsson, Stefan; Kanoni, Stavroula; Rasmussen-Torvik, Laura J.; Yengo, Loïc; Lecoeur, Cecile; Shungin, Dmitry; Sanna, Serena; Sidore, Carlo; Johnson, Paul C. D.; Jukema, J. Wouter; Johnson, Toby; Mahajan, Anubha; Verweij, Niek; Thorleifsson, Gudmar; Hottenga, Jouke-Jan; Shah, Sonia; Smith, Albert V.; Sennblad, Bengt; Gieger, Christian; Salo, Perttu; Perola, Markus; Timpson, Nicholas J.; Evans, David M.; Pourcain, Beate St; Wu, Ying; Andrews, Jeanette S.; Hui, Jennie; Bielak, Lawrence F.; Zhao, Wei; Horikoshi, Momoko; Navarro, Pau; Isaacs, Aaron; O'Connell, Jeffrey R.; Stirrups, Kathleen; Vitart, Veronique; Hayward, Caroline; Esko, Tõnu; Mihailov, Evelin; Fraser, Ross M.; Fall, Tove; Voight, Benjamin F.; Raychaudhuri, Soumya; Chen, Han; Lindgren, Cecilia M.; Morris, Andrew P.; Rayner, Nigel W.; Robertson, Neil; Rybin, Denis; Liu, Ching-Ti; Beckmann, Jacques S.; Willems, Sara M.; Chines, Peter S.; Jackson, Anne U.; Kang, Hyun Min; Stringham, Heather M.; Song, Kijoung; Tanaka, Toshiko; Peden, John F.; Goel, Anuj; Hicks, Andrew A.; An, Ping; Müller-Nurasyid, Martina; Franco-Cereceda, Anders; Folkersen, Lasse; Marullo, Letizia; Jansen, Hanneke; Oldehinkel, Albertine J.; Bruinenberg, Marcel; Pankow, James S.; North, Kari E.; Forouhi, Nita G.; Loos, Ruth J. F.; Edkins, Sarah; Varga, Tibor V.; Hallmans, Göran; Oksa, Heikki; Antonella, Mulas; Nagaraja, Ramaiah; Trompet, Stella; Ford, Ian; Bakker, Stephan J. L.; Kong, Augustine; Kumari, Meena; Gigante, Bruna; Herder, Christian; Munroe, Patricia B.; Caulfield, Mark; Antti, Jula; Mangino, Massimo; Small, Kerrin; Miljkovic, Iva; Liu, Yongmei; Atalay, Mustafa; Kiess, Wieland; James, Alan L.; Rivadeneira, Fernando; Uitterlinden, Andre G.; Palmer, Colin N. A.; Doney, Alex S. F.; Willemsen, Gonneke; Smit, Johannes H.; Campbell, Susan; Polasek, Ozren; Bonnycastle, Lori L.; Hercberg, Serge; Dimitriou, Maria; Bolton, Jennifer L.; Fowkes, Gerard R.; Kovacs, Peter; Lindström, Jaana; Zemunik, Tatijana; Bandinelli, Stefania; Wild, Sarah H.; Basart, Hanneke V.; Rathmann, Wolfgang; Grallert, Harald; Maerz, Winfried; Kleber, Marcus E.; Boehm, Bernhard O.; Peters, Annette; Pramstaller, Peter P.; Province, Michael A.; Borecki, Ingrid B.; Hastie, Nicholas D.; Rudan, Igor; Campbell, Harry; Watkins, Hugh; Farrall, Martin; Stumvoll, Michael; Ferrucci, Luigi; Waterworth, Dawn M.; Bergman, Richard N.; Collins, Francis S.; Tuomilehto, Jaakko; Watanabe, Richard M.; de Geus, Eco J. C.; Penninx, Brenda W.; Hofman, Albert; Oostra, Ben A.; Psaty, Bruce M.; Vollenweider, Peter; Wilson, James F.; Wright, Alan F.; Hovingh, G. Kees; Metspalu, Andres; Uusitupa, Matti; Magnusson, Patrik K. E.; Kyvik, Kirsten O.; Kaprio, Jaakko; Price, Jackie F.; Dedoussis, George V.; Deloukas, Panos; Meneton, Pierre; Lind, Lars; Boehnke, Michael; Shuldiner, Alan R.; van Duijn, Cornelia M.; Morris, Andrew D.; Toenjes, Anke; Peyser, Patricia A.; Beilby, John P.; Körner, Antje; Kuusisto, Johanna; Laakso, Markku; Bornstein, Stefan R.; Schwarz, Peter E. 
H.; Lakka, Timo A.; Rauramaa, Rainer; Adair, Linda S.; Smith, George Davey; Spector, Tim D.; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Gudnason, Vilmundur; Kivimaki, Mika; Hingorani, Aroon; Keinanen-Kiukaanniemi, Sirkka M.; Saaristo, Timo E.; Boomsma, Dorret I.; Stefansson, Kari; van der Harst, Pim; Dupuis, Josée; Pedersen, Nancy L.; Sattar, Naveed; Harris, Tamara B.; Cucca, Francesco; Ripatti, Samuli; Salomaa, Veikko; Mohlke, Karen L.; Balkau, Beverley; Froguel, Philippe; Pouta, Anneli; Jarvelin, Marjo-Riitta; Wareham, Nicholas J.; Bouatia-Naji, Nabila; McCarthy, Mark I.; Franks, Paul W.; Meigs, James B.; Teslovich, Tanya M.; Florez, Jose C.; Langenberg, Claudia; Ingelsson, Erik; Prokopenko, Inga; Barroso, Inês

    2012-01-01

    Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have increased the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes risk

  2. A robust University-NGO partnership: Analysing school efficiencies in Bolivia with community-based management techniques

    Directory of Open Access Journals (Sweden)

    Joao Neiva de Figueiredo

    2013-09-01

    Community-based management research is a collaborative effort between management, academics and communities in need, with the specific goal of achieving social change to foster social justice. Because it is designed to promote and validate joint methods of discovery and community-based sources of knowledge, community-based management research has several unique characteristics, which may affect its execution. This article describes the process of a community-based management research project which is descriptive in nature and uses quantitative techniques to examine school efficiencies in low-income communities in a developing country – Bolivia. The article describes the partnership between a US-based university and a Bolivian not-for-profit organisation, the research context and the history of the research project, including its various phases. It focuses on the (yet unpublished) process of the community-based research as opposed to its content (which has been published elsewhere). The article also makes the case that the robust partnership between the US-based university and the Bolivian NGO has been a determining factor in achieving positive results. Strengths and limitations are examined in the hope that the experience may be helpful to others conducting descriptive quantitative management research using community-engaged frameworks in cross-cultural settings. Keywords: international partnership, community-engaged scholarship, education efficiency, multicultural low-income education.

  3. The integrated analyses of digital field mapping techniques and traditional field methods: implications from the Burdur-Fethiye Shear Zone, SW Turkey as a case-study

    Science.gov (United States)

    Elitez, İrem; Yaltırak, Cenk; Zabcı, Cengiz; Şahin, Murat

    2015-04-01

    Precise geological mapping is one of the most important issues in geological studies. Documenting the spatial distribution of geological bodies and their contacts plays a crucial role in interpreting the tectonic evolution of any region. Although traditional field techniques are still accepted as the most fundamental tools in the construction of geological maps, we suggest that integrating digital technologies with the classical methods significantly increases the resolution and the quality of such products. We follow these steps in integrating digital data with traditional field observations. First, we create the digital elevation model (DEM) of the region of interest by interpolating the digital contours of 1:25000 scale topographic maps to a 10 m ground pixel resolution. The non-commercial Google Earth satellite imagery and the geological maps of previous studies are draped over the interpolated DEMs in the second stage. The integration of all spatial data is done using the market leading GIS software, ESRI ArcGIS. We make a preliminary interpretation of the major structures as tectonic lineaments and stratigraphic contacts. These preliminary maps are checked and precisely coordinated during the field studies using mobile tablets and/or phablets with GPS receivers. The same devices are also used to measure and record the geologic structures of the study region. Finally, all digitally collected measurements and observations are added to the GIS database and we finalise our geological map with all available information. We applied this integrated method to map the Burdur-Fethiye Shear Zone (BFSZ) in southwest Turkey. The BFSZ is an active sinistral 60-to-90 km-wide shear zone that extends on land for about 300 km between Suhut-Cay in the northeast and Köyceğiz Lake-Kalkan in the southwest. Numerous studies suggest contradictory models not only for the evolution but also for the fault geometry of this

  4. Mean and Covariance Structures Analyses: An Examination of the Rosenberg Self-Esteem Scale among Adolescents and Adults.

    Science.gov (United States)

    Whiteside-Mansell, Leanne; Corwyn, Robert Flynn

    2003-01-01

    Examined the cross-age comparability of the widely used Rosenberg Self-Esteem Scale (RSES) in 414 adolescents and 900 adults in families receiving Aid to Families with Dependent Children. Found similarities of means in the RSES across groups. (SLD)

  5. The socio-spatial dysfunction of large housing estates in Algeria: wayfinding analysis using the “movement traces” method and morphological analysis (space syntax) with the “DepthMap” software

    Directory of Open Access Journals (Sweden)

    Amara Hima

    2018-03-01

    The syntactic analysis of visibility (Visibility Graph Analysis – VGA) and accessibility (All Line Analysis – ALA) with the “DepthMap©” software (UCL, London), together with the analysis of wayfinding dysfunction using the “movement traces” method, are used in this paper to develop a model for analysing and investigating the impact of spatial changes on the socio-spatial dysfunction of wayfinding, and thereby on urban reproduction, in particular the transformation of façades and the appropriation of outdoor spaces in large housing estates in Algeria. We present the case studies of the 1000-dwelling estate in Biskra and the 500-dwelling estate in M'sila. To test this hypothesis, a hybrid analysis model was developed by crossing the analysis results of the two techniques. The resulting interference diagram shows that the majority of pedestrians prefer to travel along short, straight axes, characterised by strong syntactic properties of visibility and accessibility (integration, connectivity and intelligibility), towards the adjacent facilities and the centres of the two estates. These routes have an impact on the transformation of façades and the appropriation of outdoor spaces. The developed model opens the way for future research on the quantification, modelling and simulation of the process of urban reproduction, notably using cellular automata.

  6. Large-scale association analyses identify new loci influencing glycemic traits and provide insight into the underlying biological pathways

    Science.gov (United States)

    Scott, Robert A; Lagou, Vasiliki; Welch, Ryan P; Wheeler, Eleanor; Montasser, May E; Luan, Jian’an; Mägi, Reedik; Strawbridge, Rona J; Rehnberg, Emil; Gustafsson, Stefan; Kanoni, Stavroula; Rasmussen-Torvik, Laura J; Yengo, Loïc; Lecoeur, Cecile; Shungin, Dmitry; Sanna, Serena; Sidore, Carlo; Johnson, Paul C D; Jukema, J Wouter; Johnson, Toby; Mahajan, Anubha; Verweij, Niek; Thorleifsson, Gudmar; Hottenga, Jouke-Jan; Shah, Sonia; Smith, Albert V; Sennblad, Bengt; Gieger, Christian; Salo, Perttu; Perola, Markus; Timpson, Nicholas J; Evans, David M; Pourcain, Beate St; Wu, Ying; Andrews, Jeanette S; Hui, Jennie; Bielak, Lawrence F; Zhao, Wei; Horikoshi, Momoko; Navarro, Pau; Isaacs, Aaron; O’Connell, Jeffrey R; Stirrups, Kathleen; Vitart, Veronique; Hayward, Caroline; Esko, Tönu; Mihailov, Evelin; Fraser, Ross M; Fall, Tove; Voight, Benjamin F; Raychaudhuri, Soumya; Chen, Han; Lindgren, Cecilia M; Morris, Andrew P; Rayner, Nigel W; Robertson, Neil; Rybin, Denis; Liu, Ching-Ti; Beckmann, Jacques S; Willems, Sara M; Chines, Peter S; Jackson, Anne U; Kang, Hyun Min; Stringham, Heather M; Song, Kijoung; Tanaka, Toshiko; Peden, John F; Goel, Anuj; Hicks, Andrew A; An, Ping; Müller-Nurasyid, Martina; Franco-Cereceda, Anders; Folkersen, Lasse; Marullo, Letizia; Jansen, Hanneke; Oldehinkel, Albertine J; Bruinenberg, Marcel; Pankow, James S; North, Kari E; Forouhi, Nita G; Loos, Ruth J F; Edkins, Sarah; Varga, Tibor V; Hallmans, Göran; Oksa, Heikki; Antonella, Mulas; Nagaraja, Ramaiah; Trompet, Stella; Ford, Ian; Bakker, Stephan J L; Kong, Augustine; Kumari, Meena; Gigante, Bruna; Herder, Christian; Munroe, Patricia B; Caulfield, Mark; Antti, Jula; Mangino, Massimo; Small, Kerrin; Miljkovic, Iva; Liu, Yongmei; Atalay, Mustafa; Kiess, Wieland; James, Alan L; Rivadeneira, Fernando; Uitterlinden, Andre G; Palmer, Colin N A; Doney, Alex S F; Willemsen, Gonneke; Smit, Johannes H; Campbell, Susan; Polasek, Ozren; Bonnycastle, Lori L; Hercberg, Serge; Dimitriou, Maria; Bolton, Jennifer L; Fowkes, Gerard R; Kovacs, Peter; Lindström, Jaana; Zemunik, Tatijana; Bandinelli, Stefania; Wild, Sarah H; Basart, Hanneke V; Rathmann, Wolfgang; Grallert, Harald; Maerz, Winfried; Kleber, Marcus E; Boehm, Bernhard O; Peters, Annette; Pramstaller, Peter P; Province, Michael A; Borecki, Ingrid B; Hastie, Nicholas D; Rudan, Igor; Campbell, Harry; Watkins, Hugh; Farrall, Martin; Stumvoll, Michael; Ferrucci, Luigi; Waterworth, Dawn M; Bergman, Richard N; Collins, Francis S; Tuomilehto, Jaakko; Watanabe, Richard M; de Geus, Eco J C; Penninx, Brenda W; Hofman, Albert; Oostra, Ben A; Psaty, Bruce M; Vollenweider, Peter; Wilson, James F; Wright, Alan F; Hovingh, G Kees; Metspalu, Andres; Uusitupa, Matti; Magnusson, Patrik K E; Kyvik, Kirsten O; Kaprio, Jaakko; Price, Jackie F; Dedoussis, George V; Deloukas, Panos; Meneton, Pierre; Lind, Lars; Boehnke, Michael; Shuldiner, Alan R; van Duijn, Cornelia M; Morris, Andrew D; Toenjes, Anke; Peyser, Patricia A; Beilby, John P; Körner, Antje; Kuusisto, Johanna; Laakso, Markku; Bornstein, Stefan R; Schwarz, Peter E H; Lakka, Timo A; Rauramaa, Rainer; Adair, Linda S; Smith, George Davey; Spector, Tim D; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Gudnason, Vilmundur; Kivimaki, Mika; Hingorani, Aroon; Keinanen-Kiukaanniemi, Sirkka M; Saaristo, Timo E; Boomsma, Dorret I; Stefansson, Kari; van der Harst, Pim; Dupuis, Josée; Pedersen, Nancy L; Sattar, Naveed; Harris, Tamara B; Cucca, Francesco; Ripatti, Samuli; Salomaa, Veikko; Mohlke, Karen L; Balkau, Beverley; Froguel, Philippe; Pouta, 
Anneli; Jarvelin, Marjo-Riitta; Wareham, Nicholas J; Bouatia-Naji, Nabila; McCarthy, Mark I; Franks, Paul W; Meigs, James B; Teslovich, Tanya M; Florez, Jose C; Langenberg, Claudia; Ingelsson, Erik; Prokopenko, Inga; Barroso, Inês

    2012-01-01

    Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have raised the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes risk (q < 0.05). Loci influencing fasting insulin showed association with lipid levels and fat distribution, suggesting impact on insulin resistance. Gene-based analyses identified further biologically plausible loci, suggesting that additional loci beyond those reaching genome-wide significance are likely to represent real associations. This conclusion is supported by an excess of directionally consistent and nominally significant signals between discovery and follow-up studies. Functional follow-up of these newly discovered loci will further improve our understanding of glycemic control. PMID:22885924

  7. Using Remote Sensing and Spatial Analyses Techniques For Optimum Land Use Planning, West of Suez Canal, Egypt

    International Nuclear Information System (INIS)

    Elnahry, A.H.; Mohamed, E.S.; Nasar, N.

    2008-01-01

    The current study aims at using remote sensing (RS) and Geographic Information System (GIS) techniques for optimum land use planning of the area located north of Ismaillia and south of Port Said Governorates on the western side of the Suez Canal. It is bounded by longitudes 32° 10' and 32° 20' E and latitudes 30° 4' and 31° 00' N. A great part of this area is under reclamation and suffers from improper land use. Ten geomorphologic units were recognized, i.e. clay flats, decantation basins, overflow basins, sand sheets, gypsiferous flats, old river terraces, sand flats, turtle backs, lake beds, and recent river terraces. Using US Soil Taxonomy, two soil orders could be identified, Entisols and Aridisols, represented by ten great groups: Typic Haplosalids, Typic Haplogypsids, Typic Torriorthents, Vertic Argigypsids, Vertic Torrifluvents, Vertic Natrargids, Typic Torripsamments, Typic Torrifluvents, Aquic Torriorthents and Typic Psammaquents. Surface and ground water were investigated with respect to salinity and alkalinity hazards; the surface water of the main canals was classified as C2-S1, C3-S1, C4-S2 and C4-S4, while the ground water was classified as C3-S1, C4-S2, C4-S1 and C4-S4. Optimum land use planning of the studied area includes three approaches, i.e. physical planning, optimum cropping pattern and other uses. Physical planning includes designing three geospatial models: 1) a treatment plant site selection model, 2) a central village site selection model and 3) a shortest path for new canal model. The optimum cropping pattern was obtained by matching the crop requirements with soil characteristics, where soils of high sand flats and low gypsiferous flats are currently highly suitable (S2) for sugar beet, alfalfa and cotton; soils of low sand flats are currently highly suitable (S2) for olive, citrus and melon; soils of low recent river terraces are currently highly suitable (S2) for sugar beet, cotton, corn and rice; soils of moderately
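
    The matching of crop requirements against soil characteristics can be sketched as a simple rule-based classification; the crops, soil units, property names and thresholds below are illustrative assumptions, not the study's actual evaluation tables.

      # Minimal land suitability matching sketch (FAO-style classes S1..N);
      # all values here are invented for illustration.
      requirements = {  # crop -> max tolerated salinity (dS/m), min depth (cm)
          "sugar beet": {"ec_max": 8.0, "depth_min": 60},
          "citrus":     {"ec_max": 2.5, "depth_min": 100},
      }

      soil_units = {  # mapping unit -> measured soil properties
          "high sand flats":   {"ec": 4.0, "depth": 120},
          "gypsiferous flats": {"ec": 9.5, "depth": 80},
      }

      def suitability(crop, soil):
          req, s = requirements[crop], soil_units[soil]
          if s["ec"] <= req["ec_max"] and s["depth"] >= req["depth_min"]:
              return "S1/S2 (suitable)"
          if s["ec"] <= 1.5 * req["ec_max"]:
              return "S3 (marginally suitable)"
          return "N (not suitable)"

      for crop in requirements:
          for soil in soil_units:
              print(crop, "on", soil, "->", suitability(crop, soil))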

  8. Multi-scale analyses of nest site selection and fledging success by marbled murrelets (Brachyramphus marmoratus) in British Columbia

    OpenAIRE

    Silvergieter, Michael Paul

    2009-01-01

    I studied nesting habitat selection and fledging success by marbled murrelets, a seabird that nests in old-growth forests of high economic value, at two regions of southwestern British Columbia. At Clayoquot Sound, habitat occurs in larger stands, and murrelets selected steeper slopes and patches with more platform trees, and shorter trees, than at random sites. At Desolation Sound, where smaller forest stands predominate, patch scale variables were less important; increased canopy complexity...

  9. Comparative CO2 flux measurements by eddy covariance technique using open- and closed-path gas analysers over the equatorial Pacific Ocean

    Energy Technology Data Exchange (ETDEWEB)

    Kondo, Fumiyoshi (Graduate School of Natural Science and Technology, Okayama Univ., Okayama (Japan); Atmosphere and Ocean Research Inst., Univ. of Tokyo, Tokyo (Japan)), Email: fkondo@aori.u-tokyo.ac.jp; Tsukamoto, Osamu (Graduate School of Natural Science and Technology, Okayama Univ., Okayama (Japan))

    2012-04-15

    Direct comparison of air–sea CO2 fluxes by open-path eddy covariance (OPEC) and closed-path eddy covariance (CPEC) techniques was carried out over the equatorial Pacific Ocean. Previous studies over oceans have shown that the CO2 flux by OPEC was larger than the bulk CO2 flux using the gas transfer velocity estimated by the mass balance technique, while the CO2 flux by CPEC agreed with the bulk CO2 flux. We investigated a traditional conflict between the CO2 flux by the eddy covariance technique and the bulk CO2 flux, and whether the CO2 fluctuation attenuated using the closed-path analyser can be measured with sufficient time response to resolve the small CO2 flux over oceans. Our results showed that the closed-path analyser using a short sampling tube and a high volume air pump can be used to measure the small CO2 fluctuation over the ocean. Further, the underestimated CO2 flux by CPEC due to the attenuated fluctuation can be corrected by the bandpass covariance method; its contribution was almost identical to that of the H2O flux. The CO2 flux by CPEC agreed with the total CO2 flux by OPEC with density correction; however, both of them are one order of magnitude larger than the bulk CO2 flux.

  10. Comparative CO2 flux measurements by eddy covariance technique using open- and closed-path gas analysers over the equatorial Pacific Ocean

    Directory of Open Access Journals (Sweden)

    Fumiyoshi Kondo

    2012-04-01

    Direct comparison of air–sea CO2 fluxes by open-path eddy covariance (OPEC) and closed-path eddy covariance (CPEC) techniques was carried out over the equatorial Pacific Ocean. Previous studies over oceans have shown that the CO2 flux by OPEC was larger than the bulk CO2 flux using the gas transfer velocity estimated by the mass balance technique, while the CO2 flux by CPEC agreed with the bulk CO2 flux. We investigated a traditional conflict between the CO2 flux by the eddy covariance technique and the bulk CO2 flux, and whether the CO2 fluctuation attenuated using the closed-path analyser can be measured with sufficient time response to resolve the small CO2 flux over oceans. Our results showed that the closed-path analyser using a short sampling tube and a high volume air pump can be used to measure the small CO2 fluctuation over the ocean. Further, the underestimated CO2 flux by CPEC due to the attenuated fluctuation can be corrected by the bandpass covariance method; its contribution was almost identical to that of the H2O flux. The CO2 flux by CPEC agreed with the total CO2 flux by OPEC with density correction; however, both of them are one order of magnitude larger than the bulk CO2 flux.
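
    The core eddy covariance computation behind both records is the mean product of the fluctuating vertical wind and scalar density; the sketch below shows only that covariance on synthetic data, omitting the coordinate rotation, density (WPL) correction and bandpass covariance correction discussed in the abstract.

      import numpy as np

      def eddy_flux(w, c):
          # Eddy-covariance flux F = <w'c'> over one averaging block,
          # from high-frequency vertical wind w (m/s) and CO2 density c.
          wp = w - w.mean()
          cp = c - c.mean()
          return np.mean(wp * cp)

      # Toy 30-minute series at 10 Hz with a weak correlated component
      rng = np.random.default_rng(3)
      n = 30 * 60 * 10
      w = 0.3 * rng.standard_normal(n)
      c = 650.0 + 0.02 * w + 0.5 * rng.standard_normal(n)  # mg/m^3
      print(f"flux = {eddy_flux(w, c):.2e} mg m-2 s-1")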

  11. Are Moral Disengagement, Neutralization Techniques, and Self-Serving Cognitive Distortions the Same? Developing a Unified Scale of Moral Neutralization of Aggression

    Directory of Open Access Journals (Sweden)

    Denis Ribeaud

    2010-12-01

    Can the three concepts of Neutralization Techniques, Moral Disengagement, and Secondary Self-Serving Cognitive Distortions be conceived theoretically and empirically as capturing the same cognitive processes and thus be measured with one single scale of Moral Neutralization? First, we show how the different approaches overlap conceptually. Second, in Study 1, we verify that four scales derived from the three conceptions of Moral Neutralization are correlated in such a way that they can be conceived as measuring the same phenomenon. Third, building on the results of Study 1, we derive a unified scale of Moral Neutralization which specifically focuses on the neutralization of aggression and test it in a large general population sample of preadolescents (Study 2). Confirmatory factor analyses suggest a good internal consistency and acceptable cross-gender factorial invariance. Correlation analyses with related behavioral and cognitive constructs corroborate the scale's criterion and convergent validity. In the final section we present a possible integration of Moral Neutralization in a broader framework of crime causation.

  12. Thermal infrared imagery as a tool for analysing the variability of surface saturated areas at various temporal and spatial scales

    Science.gov (United States)

    Glaser, Barbara; Antonelli, Marta; Pfister, Laurent; Klaus, Julian

    2017-04-01

    Surface saturated areas are important for the on- and offset of hydrological connectivity within the hillslope-riparian-stream continuum. This is reflected in concepts such as variable contributing areas or critical source areas. However, we still lack a standardized method for areal mapping of surface saturation and for observing its spatiotemporal variability. Proof-of-concept studies in recent years have shown the potential of thermal infrared (TIR) imagery to record surface saturation dynamics at various temporal and spatial scales. Thermal infrared imagery is thus a promising alternative to conventional approaches, such as the squishy boot method or the mapping of vegetation. In this study we use TIR images to investigate the variability of surface saturated areas at different temporal and spatial scales in the forested Weierbach catchment (0.45 km2) in western Luxembourg. We took TIR images of the riparian zone with a hand-held FLIR infrared camera at fortnightly intervals over 18 months at nine different locations distributed over the catchment. Not all of the acquired images were suitable for a derivation of the surface saturated areas, as various factors influence the usability of the TIR images (e.g. temperature contrasts, shadows, fog). Nonetheless, we obtained a large number of usable images that provided a good insight into the dynamic behaviour of surface saturated areas at different scales. The images revealed how diverse the evolution of surface saturated areas can be throughout the hydrologic year. For some locations with similar morphology or topography we identified diverging saturation dynamics, while other locations with different morphology / topography showed more similar behaviour. Moreover, we were able to assess the variability of the dynamics of expansion / contraction of saturated areas within the single locations, which can help to better understand the mechanisms behind surface saturation development.
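
    A minimal sketch of how surface saturation can be mapped from a TIR image, assuming saturated pixels stay close to the temperature of groundwater-fed surface water while dry ground diverges from it; the scene, temperatures and tolerance below are invented.

      import numpy as np

      def saturated_mask(tir_image, water_temp, tol=0.5):
          # Classify pixels as surface-saturated where the radiometric
          # temperature is within `tol` K of the reference water temperature,
          # exploiting the thermal contrast between wet and dry ground.
          return np.abs(tir_image - water_temp) <= tol

      # Toy 5x5 "TIR image" in deg C: a cool seepage patch in a warm scene
      scene = np.full((5, 5), 14.0)
      scene[2:4, 1:3] = 8.2  # saturated pixels near the 8 deg C groundwater
      mask = saturated_mask(scene, water_temp=8.0, tol=0.5)
      print(mask.sum(), "saturated pixels")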

  13. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis of Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms and thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.
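
    The pair-selection idea behind any SBAS-family chain can be sketched briefly. The fragment below is a generic illustration of the small-baseline principle, not code from the P-SBAS system: acquisition pairs are retained only when both their temporal and perpendicular baselines stay below chosen thresholds (all numbers here are invented).

        from itertools import combinations

        # Hypothetical acquisitions: (id, days since start, perpendicular baseline in m)
        acqs = [(0, 0, 0.0), (1, 12, 40.0), (2, 24, -35.0), (3, 36, 120.0)]
        MAX_DAYS, MAX_BPERP = 48, 100.0

        pairs = [(a[0], b[0]) for a, b in combinations(acqs, 2)
                 if abs(b[1] - a[1]) <= MAX_DAYS and abs(b[2] - a[2]) <= MAX_BPERP]
        print(pairs)  # pairs kept for interferogram generation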

  14. Separate effects testing and analyses to investigate liner tearing of the 1:6-scale reinforced concrete containment building

    International Nuclear Information System (INIS)

    Spletzer, B.L.; Lambert, L.D.; Bergman, V.L.

    1995-06-01

    The overpressurization of a 1:6-scale reinforced concrete containment building demonstrated that liner tearing is a plausible failure mode in such structures under severe accident conditions. A combined experimental and analytical program was developed to determine the important parameters which affect liner tearing and to develop reasonably simple analytical methods for predicting when tearing will occur. Three sets of test specimens were designed to allow individual control over and investigation of the mechanisms believed to be important in causing failure of the liner plate. The series of tests investigated the effect on liner tearing produced by the anchorage system, the loading conditions, and the transition in thickness from the liner to the insert plate. Before testing, the specimens were analyzed using two- and three-dimensional finite element models. Based on the analysis, the failure mode and corresponding load conditions were predicted for each specimen. Test data and post-test examination of test specimens show mixed agreement with the analytical predictions with regard to failure mode and specimen response for most tests. Many similarities were also observed between the response of the liner in the 1:6-scale reinforced concrete containment model and the response of the test specimens. This work illustrates the fact that the failure mechanism of a reinforced concrete containment building can be greatly influenced by details of liner and anchorage system design. Further, it significantly increases the understanding of containment building response under severe conditions

  15. Numerical simulations of a full-scale polymer electrolyte fuel cell with analysing systematic performance in an automotive application

    International Nuclear Information System (INIS)

    Park, Heesung

    2015-01-01

    Highlights: • A 3-D full-scale fuel cell performance is numerically simulated. • Generated and consumed power in the system is affected by operating conditions. • Systematic analysis predicts the net power of a conceptual PEFC stack. - Abstract: In fuel cell powered electric vehicles, the net power efficiency is a critical factor in terms of fuel economy and commercialization. Although the fuel cell stack produces enough power to drive the vehicle, the power transferred to the power train can be significantly reduced by the consumption needed to operate system components such as the air blower and cooling module. Systematic analysis of the operating conditions of the fuel cell stack is therefore essential to predict the net power generation. In this paper a numerical simulation is conducted to characterize fuel cell performance under various operating conditions. A three-dimensional, full-scale fuel cell with an active area of 355 cm² is numerically modelled with 47.3 million grid cells to capture the complexities of the fluid dynamics, heat transfer and electrochemical reactions. The proposed numerical model requires large computational time and cost; however, it can reasonably predict fuel cell system performance at the early stage of conceptual design without requiring prototypes. Based on the model, it has been shown that the net power is reduced to 90% of the gross power due to the power consumption of the air blower and cooling module
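
    The systematic balance the abstract describes reduces to simple arithmetic: net power is gross stack power minus the parasitic loads. The figures below are placeholders chosen only to reproduce the reported ~90% ratio, not values from the paper.

        def net_power(gross_w: float, blower_w: float, cooling_w: float) -> float:
            """Net electric power after subtracting parasitic system loads."""
            return gross_w - blower_w - cooling_w

        gross = 80_000.0                          # W, hypothetical stack output
        net = net_power(gross, 5_000.0, 3_000.0)  # assumed blower and cooling loads
        print(f"net/gross = {net / gross:.0%}")   # -> 90%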

  16. Developing an attitude towards bullying scale for prisoners: structural analyses across adult men, young adults and women prisoners.

    Science.gov (United States)

    Ireland, Jane L; Power, Christina L; Bramhall, Sarah; Flowers, Catherine

    2009-01-01

    Few studies have attempted to explore attitudes towards bullying among prisoners, despite acknowledgement that attitudes may play an important role. The aim was to evaluate the structure of a new attitudinal scale, the Prison Bullying Scale (PBS), with adult men and women in prison and with young male prisoners. The hypotheses were that attitudes would be represented as a multidimensional construct and that the PBS structure would be replicated across confirmatory samples. The PBS was developed and confirmed across four independent studies using item parceling and confirmatory factor analysis: Study I comprised 412 adult male prisoners; Study II, 306 adult male prisoners; Study III, 171 male young offenders; and Study IV, 148 adult women prisoners. Attitudes were represented as a multidimensional construct comprising seven core factors. The exploratory analysis was confirmed in adult male samples, with some confirmation among young offenders and adult women. The fit for young offenders was adequate and improved by factor covariance. The fit for women was the poorest overall. The study notes the importance of developing ecologically valid measures and statistically testing these measures prior to their clinical or research use. The PBS holds value both as an assessment and as a research measure and remains the only ecologically validated measure in existence to assess prisoner attitudes towards bullying.
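
    The item-parceling step mentioned above is easy to sketch: items assigned to a factor are averaged into a small number of parcels, and the parcels (rather than individual items) serve as indicators in the confirmatory factor analysis. This fragment is a generic illustration with fabricated data, not the actual PBS item assignment.

        import numpy as np

        # Placeholder responses: 412 prisoners x 9 Likert items (1-5) for one factor.
        items = np.random.default_rng(2).integers(1, 6, size=(412, 9))

        # Distribute the nine items round-robin into three parcels and average.
        parcels = np.stack([items[:, i::3].mean(axis=1) for i in range(3)], axis=1)
        print(parcels.shape)  # (412, 3): three parcel indicators per factor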

  17. The safety regulation of small-scale coal mines in China: Analysing the interests and influences of stakeholders

    International Nuclear Information System (INIS)

    Song, Xiaoqian; Mu, Xiaoyi

    2013-01-01

    Small scale coal mines (SCMs) have played an important role in China's energy supply. At the same time, they also suffer from many social, economic, environmental, and safety problems. The Chinese government has made considerable efforts to strengthen the safety regulation of the coal mining industry, yet few of these efforts have proven to be very effective. This paper analyzes the interests and influences of key stakeholders in the safety regulation of SCMs, which include the safety regulator, the local government, the mine owner, and mineworkers. We argue that effective regulation of coal mine safety must both engage and empower mineworkers. - Highlights: ► Small scale coal mines have played an important role in China's energy supply. ► We analyze the interests and influences of key stakeholders in the safety regulation of small coal mines. ► The mineworkers have the strongest interest but the least influence. ► Effective regulation must engage, organize, and empower the mineworkers.

  18. On Rigorous Drought Assessment Using Daily Time Scale: Non-Stationary Frequency Analyses, Revisited Concepts, and a New Method to Yield Non-Parametric Indices

    Directory of Open Access Journals (Sweden)

    Charles Onyutha

    2017-10-01

    Full Text Available Some of the problems in drought assessments are that: analyses tend to focus on coarse temporal scales, many of the methods yield skewed indices, a few terminologies are used ambiguously, and analyses comprise an implicit assumption that the observations come from a stationary process. To solve these problems, this paper introduces non-stationary frequency analyses of quantiles. How to use non-parametric rescaling to obtain robust indices that are not (or only minimally) skewed is also introduced. To avoid ambiguity, some concepts on, e.g., incidence, extremity, etc., were revisited through a shift from monthly to daily time scale. Demonstrations of the introduced methods were made using daily flow and precipitation insufficiency (precipitation minus potential evapotranspiration) from the Blue Nile basin in Africa. Results show that, when a significant trend exists in extreme events, stationarity-based quantiles can be far different from those when non-stationarity is considered. The introduced non-parametric indices were found to closely agree with the well-known standardized precipitation evapotranspiration indices in many aspects except skewness. Apart from revisiting some concepts, the advantages of using fine instead of coarse time scales in drought assessment are given. Links for obtaining freely downloadable tools on how to implement the introduced methods are provided.
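
    The non-parametric rescaling described above can be sketched in a few lines: empirical exceedance probabilities (here Weibull plotting positions) are mapped through the inverse standard normal, so the resulting index inherits no skew from an assumed parametric distribution. The daily deficit series below is synthetic, and the plotting-position formula is one common choice, not necessarily the paper's.

        import numpy as np
        from scipy.stats import norm, rankdata

        # Placeholder daily water-deficit series (a proxy for P minus PET).
        deficit = np.random.default_rng(3).gamma(2.0, 1.5, size=3650)

        p = rankdata(deficit) / (len(deficit) + 1.0)  # Weibull plotting positions
        index = norm.ppf(p)                           # map to standard-normal scores
        print(f"index range: {index.min():.2f} to {index.max():.2f}")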

  19. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    International Nuclear Information System (INIS)

    Mattie, Patrick D.; McNeish, Jerry A.; Sevougian, S. David; Andrews, Robert W.

    2001-01-01

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository of high level radioactive waste at Yucca Mountain, Nevada USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility

  20. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample.

    Science.gov (United States)

    Heeren, Alexandre; Ceschi, Grazia; Valentiner, David P; Dethier, Vincent; Philippot, Pierre

    2013-01-01

    The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as Speaker (PRCS), one of the most promising measurements of public speaking fear. A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach's alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample.
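
    The reliability figure reported above (Cronbach's alpha = 0.86) comes from the standard alpha formula, which is short enough to sketch. The data below are simulated around a single common trait purely for illustration; only the formula, not the numbers, reflects the study.

        import numpy as np

        rng = np.random.default_rng(4)
        common = rng.normal(size=(611, 1))           # shared latent trait
        items = common + rng.normal(size=(611, 12))  # 12 noisy indicators

        k = items.shape[1]
        alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                               / items.sum(axis=1).var(ddof=1))
        print(f"Cronbach's alpha = {alpha:.2f}")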

  1. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system

  2. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system

  3. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  4. Sensitivity Analysis of Electromagnetic Induction Technique to Determine Soil Salinity in Large –Scale

    Directory of Open Access Journals (Sweden)

    Yousef Hasheminejhad

    2017-02-01

    Full Text Available Introduction: Monitoring and management of saline soils depend on accurate and updatable measurements of soil electrical conductivity. Large scale direct measurements are not only expensive but also time consuming. Application of near-ground-surface sensors can therefore be considered an acceptable time- and cost-saving method with high accuracy in soil salinity detection. One of these relatively innovative methods is the electromagnetic induction technique. Apparent soil electrical conductivity measurement by electromagnetic induction is affected by several key soil properties, including soil moisture and clay content. Materials and Methods: Soil salinity and apparent soil electrical conductivity data from two years over a 50000 ha area in the Sabzevar-Davarzan plain were used to evaluate the sensitivity of electromagnetic induction to soil moisture and clay content. Locations of the sampling points were determined by the Latin Hypercube Sampling strategy; 100 sampling points were selected for the first year and 25 sampling points for the second year. Owing to difficulties in finding and sampling the points, 97 sampling points were located in the area for the first year, out of which 82 points were sampled down to 90 cm depth in 30 cm intervals, and all of them were measured with the electromagnetic induction device in horizontal orientation. The first year data were used for training the model and included 82 points with measurements of bulk conductivity and laboratory determination of electrical conductivity of the saturated extract, soil texture and moisture content in soil samples. The second year data, which were used for testing the model, comprised 25 sampling points with 9 bulk conductivity measurements around each point. For the second year samples, electrical conductivity of the saturated extract was the only parameter measured in the laboratory. Results and Discussion: Results of the first year showed a
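
    The calibration step implied by this design — training on year one, testing on year two — usually reduces to regressing laboratory saturated-extract conductivity (ECe) on the sensor's apparent conductivity (ECa). The sketch below is a generic linear calibration with invented readings, not the study's data or model.

        import numpy as np

        eca = np.array([0.4, 0.9, 1.3, 2.1, 2.8])  # dS/m, field ECa readings (assumed)
        ece = np.array([1.1, 2.6, 3.9, 6.0, 8.2])  # dS/m, lab ECe values (assumed)

        slope, intercept = np.polyfit(eca, ece, 1)  # least-squares calibration line
        print(f"ECe ~= {slope:.2f} * ECa + {intercept:.2f}")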

  5. Medieval land use management and geochemistry - spatial analyses on scales from households properties to whole fields systems

    Science.gov (United States)

    Horák, Jan; Janovský, Martin; Klír, Tomáš; Šmejda, Ladislav; Legut-Pintal, Maria

    2017-04-01

    We present the final or preliminary results of our research on five villages: Spindelbach (Ore Mountains, North-Western Bohemia), Hol (near Prague, Central Bohemia), Lovětín and Regenholz (near Třešť, Czech-Moravian Upland) and Goschwitz (near Wroclaw, Poland). Our research is methodologically based on broad spatial sampling of soil samples and mapping of basic soil conditions. We use XRF spectrometry as the main tool for multi-elemental analyses and as a first-step screening tool for large areas. A crucial factor in our methods is a sampling design that respects historical land use features such as the parts of the village field system or the possessions of individual households. Macroscopic visual inspection of the site is also a crucial source of data and knowledge. It was revealed that a generally used and acknowledged indicator of human activity, phosphorus, can be present at only very low concentrations, or be undetectable, even in the vicinity of households. Natural conditions cannot be the causal factor in all such cases; the situation is also caused by the intensity of the most recent human activity and by its spatial manifestation. In such cases, multi-elemental analysis is very useful. Zinc is usually correlated with phosphorus, which is also connected to lead. The indicators of past human activity are usually spatially connected to modern pollution indicators. These two inputs can sometimes be distinguished by statistical analyses and by spatial visualisation of data. Working with concentrations alone can be misleading. Past land use management and its strategies were important for the spatial distribution of soil geochemical indicators. Therefore, we can use them not only as quantifiers of human impact on nature, but also to detect differences in management, knowledge and experience, as was revealed, e.g., by analyses of differences between households' possessions. For example, the generally presumed decreasing gradient of management intensity (e.g. manuring) along the distance from

  6. Trace contaminant determination in fish scale by laser-ablation technique

    International Nuclear Information System (INIS)

    Lee, I.; Coutant, C.C.; Arakawa, E.T.

    1993-01-01

    Laser ablation of rings of fish scales has been used to analyze the historical accumulation of polychlorinated biphenyls (PCB) in striped bass in the Watts Bar Reservoir. Rings on a fish scale grow in a pattern that forms a record of the fish's chemical intake. In conjunction with the migration patterns of fish monitored by ecologists, relative PCB concentrations in the seasonal rings of fish scales can be used to study the PCB distribution in the reservoir. In this study, a tightly-focused laser beam from a XeCl excimer laser was used to ablate and ionize a small portion of a fish scale placed in a vacuum chamber. The ions were identified and quantified by a time-of-flight mass spectrometer. Studies of this type can provide valuable information for the Department of Energy (DOE) off-site clean-up efforts as well as for identifying the impacts of other sources on local aquatic populations

  7. Cross-section library and processing techniques within the SCALE system

    International Nuclear Information System (INIS)

    Westfall, R.M.

    1986-01-01

    A summary of each of the SCALE system features involved in problem-dependent cross-section processing is presented. These features include the criticality libraries, the shielding libraries, the Standard Composition Library, the SCALE functional modules (BONAMI-S, NITAWL-S, XSDRNPM-S, and ICE-S), and the Material Information Processor. The automated procedure for cross-section processing is described with examples. 15 refs

  8. X-ray fluorescence in Member States (Italy): Portable EDXRF in a multi-technique approach for the analyses of large paintings

    International Nuclear Information System (INIS)

    Ridolfi, Stefano

    2014-01-01

    Energy-dispersive X-ray fluorescence (EDXRF) in its portable form, generally characterized by a small X-ray tube and a Si-PIN or Si-drift detector, is particularly useful for analyzing works of art. The main aspect that characterizes the EDXRF technique is its non-invasive character. The characteristic that makes the technique so powerful and appealing is, on the other hand, the main source of uncertainty in XRF measurements on Cultural Heritage. This problem is even more evident when analyzing paintings because of their intrinsically stratigraphic structure. A painting is made of several layers: the support, which can be of wood, canvas or paper; the preparation layer, mainly gypsum, white lead or ochre; the pigment layers; and finally the protective varnish layer. The penetrating power of X-rays means that, most of the time, information from all the layers reaches the detector, so much of the information in the spectrum arrives from deep layers about which we have no prior knowledge. To better understand this concept, consider the equation of A. Markowicz, in which the various uncertainty terms that influence analyses with portable EDXRF are reported, adjusted here for non-invasive portable EDXRF analysis. The second, the third and the fourth terms do not exist, for obvious reasons; only the first and the last terms influence the total uncertainty of an EDXRF analysis. The ways to reduce the influence of the fifth term are known to any scientist: good stability of the system, long measuring time, correct standard samples, good energy resolution, etc. But what about the first term when we are executing a non-invasive analysis? An example that shows the influence of sample representativeness in increasing the uncertainty of an XRF analysis is the case in which we are asked to determine the original pigments used in a painting. If we have no indication of where restored areas are located on the painting, the probability of

  9. Volume changes at macro- and nano-scale in epoxy resins studied by PALS and PVT experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Somoza, A. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina) and CICPBA, Pinto 399, B7000GHG Tandil (Argentina)]. E-mail: asomoza@exa.unicen.edu.ar; Salgueiro, W. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina); Goyanes, S. [LPMPyMC, Depto. de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Ramos, J. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain); Mondragon, I. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain)

    2007-02-15

    A systematic study of changes in volume at the macro- and nano-scale in epoxy systems cured with selected aminic hardeners at different pre-cure temperatures is presented. Free volumes and macroscopic specific volumes were measured by PALS and the pressure-volume-temperature technique, respectively. An analysis of the relation between the macro- and nano-scale behaviour of the thermosetting networks developed by the different chemical structures is presented. The results obtained indicate that the structure of the hardeners governs the packing of the molecular chains of the epoxy network.

  10. Micron-scale intra-ring analyses of δ13C in early Eocene Arctic wood from Ellesmere Island

    Science.gov (United States)

    Schubert, B.; Jahren, H.; Eberle, J.; Sternberg, L.

    2009-12-01

    Early Eocene (ca. 53 Ma) fossil assemblages on Ellesmere Island (75°N paleolatitude) provide rich information about the plant and animal life of the lush polar ecosystems of the time. Fossil wood recovered from Ellesmere Island is abundant and not permineralized; however, morphological features such as growth rings and resin canals have been obliterated by compression. We report on exceptionally high-resolution intra-ring analyses of δ13C within fossil wood, sampled at ~30 micron intervals across several centimeters of wood sample. Clear patterns of systematic seasonal increases and decreases in wood δ13C allowed us to identify at least 5 annual cycles in the wood. The patterns of increase and decrease in δ13C were consistent with patterns observed for evergreen wood, and distinct from the deciduous patterns we have observed for Metasequoia fossil wood from the middle Eocene (ca. 45 Ma) Arctic site on Axel Heiberg Island. We believe that the high point in the δ13C value of wood seen in each cycle corresponds to the highest environmental temperatures during the annual cycle, as has been seen for modern evergreens (e.g., Barbour et al., 2002). Modern studies have also noted that high-temperature periods are correlated with the highest vapor-pressure and soil-water deficits of the annual cycle; these environmental factors would cause the plant to change its discrimination during photosynthesis. We will discuss the relatively low amplitude of δ13C fluctuations (0.5-1.0 ‰) clearly defined by the Ellesmere fossil wood, in comparison to observations on modern common evergreens (2.0-4.0 ‰), and speculate that this difference implies greatly dampened seasonal temperature fluctuations in Eocene polar environments relative to today. Barbour M.M., Walcroft A.S., Farquhar G.D., 2002, Seasonal variation in δ13C and δ18O of cellulose from growth rings of Pinus radiata. Plant, Cell and Environment: v. 25, p. 1483-1499.

  11. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  12. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    Energy Technology Data Exchange (ETDEWEB)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-15

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment, and developing methods for scale inhibition is a priority. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase; here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method with which the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios as low as SR = 1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg CaCO3 in a 1 cm section of the tube packed with silica sand (SiO2). A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)
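
    Under a simple Beer-Lambert assumption, the gamma transmission measurement converts directly into deposited mass per unit area, which is presumably the principle exploited here. The sketch below uses invented counts and an assumed mass attenuation coefficient, not the authors' calibration.

        import math

        I0, I = 10_000.0, 9_700.0  # counts before / after scale build-up (invented)
        mu_mass = 0.08             # cm^2/g, assumed mass attenuation coefficient

        areal_mass = math.log(I0 / I) / mu_mass  # g/cm^2 of deposited CaCO3
        print(f"deposited scale: {areal_mass * 1000:.0f} mg/cm^2")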

  13. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    International Nuclear Information System (INIS)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-01

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment, and developing methods for scale inhibition is a priority. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase; here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method with which the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios as low as SR = 1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg CaCO3 in a 1 cm section of the tube packed with silica sand (SiO2). A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)

  14. Comparison of automated ribosomal intergenic spacer analysis (ARISA) and denaturing gradient gel electrophoresis (DGGE) techniques for analysing the influence of diet on ruminal bacterial diversity.

    Science.gov (United States)

    Saro, Cristina; Molina-Alcaide, Eduarda; Abecia, Leticia; Ranilla, María José; Carro, María Dolores

    2018-04-01

    The objective of this study was to compare the automated ribosomal intergenic spacer analysis (ARISA) and denaturing gradient gel electrophoresis (DGGE) techniques for analysing the effects of diet on diversity in bacterial pellets isolated from the liquid (liquid-associated bacteria; LAB) and solid (solid-associated bacteria; SAB) phases of the rumen. The four experimental diets contained forage to concentrate ratios of 70:30 or 30:70 and had either alfalfa hay or grass hay as forage. Four rumen-fistulated animals (two sheep and two goats) received the diets in a Latin square design. Bacterial pellets (LAB and SAB) were isolated at 2 h post-feeding for DNA extraction and analysed by ARISA and DGGE. The number of peaks in individual samples ranged from 48 to 99 for LAB and from 41 to 95 for SAB with ARISA, and values of DGGE bands ranged from 27 to 50 for LAB and from 18 to 45 for SAB. With ARISA, the LAB samples from high concentrate-fed animals tended to differ in diversity from those of forage-fed animals, but no differences were identified with DGGE. The SAB samples from high concentrate-fed animals had lower diversity with ARISA than those from animals fed forage diets, but only a trend was noticed for these parameters with DGGE. No effect of forage type on LAB diversity was detected by either technique. In this study, ARISA detected some changes in ruminal bacterial communities that were not detected by DGGE, and therefore ARISA was considered more appropriate for assessing bacterial diversity of ruminal bacterial pellets. The results highlight the impact of the fingerprinting technique used to draw conclusions on dietary factors affecting bacterial diversity in ruminal bacterial pellets.

  15. Deep Sequencing of Three Loci Implicated in Large-Scale Genome-Wide Association Study Smoking Meta-Analyses.

    Science.gov (United States)

    Clark, Shaunna L; McClay, Joseph L; Adkins, Daniel E; Aberg, Karolina A; Kumar, Gaurav; Nerella, Sri; Xie, Linying; Collins, Ann L; Crowley, James J; Quakenbush, Corey R; Hillard, Christopher E; Gao, Guimin; Shabalin, Andrey A; Peterson, Roseann E; Copeland, William E; Silberg, Judy L; Maes, Hermine; Sullivan, Patrick F; Costello, Elizabeth J; van den Oord, Edwin J

    2016-05-01

    Genome-wide association study meta-analyses have robustly implicated three loci that affect susceptibility for smoking: CHRNA5\CHRNA3\CHRNB4, CHRNB3\CHRNA6 and EGLN2\CYP2A6. Functional follow-up studies of these loci are needed to provide insight into biological mechanisms. However, these efforts have been hampered by a lack of knowledge about the specific causal variant(s) involved. In this study, we prioritized variants in terms of the likelihood they account for the reported associations. We employed targeted capture of the CHRNA5\CHRNA3\CHRNB4, CHRNB3\CHRNA6, and EGLN2\CYP2A6 loci and flanking regions followed by next-generation deep sequencing (mean coverage 78×) to capture genomic variation in 363 individuals. We performed single locus tests to determine if any single variant accounts for the association, and examined if sets of (rare) variants that overlapped with biologically meaningful annotations account for the associations. In total, we investigated 963 variants, of which 71.1% were rare (minor allele frequency < 0.01), 6.02% were insertion/deletions, and 51.7% were catalogued in dbSNP141. The single variant results showed that no variant fully accounts for the association in any region. In the variant set results, CHRNB4 accounts for most of the signal with significant sets consisting of directly damaging variants. CHRNA6 explains most of the signal in the CHRNB3\CHRNA6 locus with significant sets indicating a regulatory role for CHRNA6. Significant sets in CYP2A6 involved directly damaging variants while the significant variant sets suggested a regulatory role for EGLN2. We found that multiple variants implicating multiple processes explain the signal. Some variants can be prioritized for functional follow-up.

  16. Development of a novel once-through flow visualization technique for kinetic study of bulk and surface scaling

    Science.gov (United States)

    Sanni, O.; Bukuaghangin, O.; Huggan, M.; Kapur, N.; Charpentier, T.; Neville, A.

    2017-10-01

    There is considerable interest in investigating surface crystallization in order to gain a full mechanistic understanding of how layers of sparingly soluble salts (scale) build up on component surfaces. Despite much recent attention, a suitable methodology is still needed to improve understanding of precipitation/deposition systems and to enable the construction of an accurate surface deposition kinetic model. In this work, an experimental flow rig and associated methodology for studying mineral scale deposition are developed. The once-through flow rig allows us to follow mineral scale precipitation and surface deposition in situ and in real time. The rig enables us to assess the effects of various parameters such as brine chemistry and scaling indices, temperature, flow rates, and scale inhibitor concentrations on scaling kinetics. Calcium carbonate (CaCO3) scaling at different values of the saturation ratio (SR) is evaluated using image analysis procedures that enable the assessment of surface coverage, nucleation, and growth of the particles with time. The turbidity measured in the flow cell is zero for all the SR values considered: the residence time from the mixing point to the sample is shorter than the induction time for bulk precipitation, so there are no crystals in the bulk solution as the flow passes through the sample. The study shows that surface scaling is not always a result of pre-precipitated crystals in the bulk solution. The technique enables precipitation and surface deposition of scale to be decoupled and the surface deposition process to be studied in real time and assessed under constant conditions.
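
    The driving force referred to above, the saturation ratio, is the ion activity product over the solubility product. A minimal sketch for CaCO3 follows; the activities are placeholders, and the Ksp is the order-of-magnitude calcite value, not a figure from the study.

        ca, co3 = 4.0e-4, 1.5e-5  # assumed ion activities (mol/L)
        ksp = 3.3e-9              # approximate calcite solubility product

        sr = (ca * co3) / ksp
        print(f"SR = {sr:.1f}")   # SR > 1: supersaturated, scaling thermodynamically possible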

  17. Review of ultimate pressure capacity test of containment structure and scale model design techniques

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong Moon; Choi, In Kil [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    This study was performed to obtain basic knowledge of scaled model testing through a review of experimental studies conducted in foreign countries. The results of this study will be used for the wall segment test planned for next year. It was concluded from the previous studies that the larger the model, the greater the trust of the community in the obtained results. A scale model of 1/4 to 1/6 is recommended as suitable considering the characteristics of concrete, reinforcement, liner and tendon. Such a large scale model test requires large amounts of time and budget. For these reasons, it is concluded that the containment wall segment test combined with analytical studies is efficient for the verification of the ultimate pressure capacity of containment structures. 57 refs., 46 figs., 11 tabs. (Author)

  18. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination

  19. Photographic and video techniques used in the 1/5-scale Mark I boiling water reactor pressure suppression experiment

    International Nuclear Information System (INIS)

    Dixon, D.; Lord, D.

    1978-01-01

    The report provides a description of the techniques and equipment used for the photographic and video recordings of the air test series conducted on the 1/5 scale Mark I boiling water reactor (BWR) pressure suppression experimental facility at Lawrence Livermore Laboratory (LLL) between March 4, 1977, and May 12, 1977. Lighting and water filtering are discussed in the photographic system section and are also applicable to the video system. The appendices contain information from the photographic and video camera logs

  20. The Use of System Codes in Scaling Studies: Relevant Techniques for Qualifying NPP Nodalizations for Particular Scenarios

    Directory of Open Access Journals (Sweden)

    V. Martinez-Quiroga

    2014-01-01

    Full Text Available System codes along with necessary nodalizations are valuable tools for thermal hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed on the small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of Nuclear Power Plant nodalizations by means of scale disquisitions. The techniques that are presented include the so-called Kv-scaled calculation approach as well as the use of “hybrid nodalizations” and “scaled-up nodalizations.” These methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology as well as show the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed within the PKL-III and ROSA-2 OECD/NEA Projects. Both articles together provide the complete description of the methodology that has been developed in the framework of the use of NPP nodalizations in support of plant operation and control.

  1. Fractal scaling behavior of heart rate variability in response to meditation techniques

    International Nuclear Information System (INIS)

    Alvarez-Ramirez, J.; Rodríguez, E.; Echeverría, J.C.

    2017-01-01

    Highlights: • The scaling properties of heart rate variability in pre-meditation and meditation states were studied. • Mindfulness meditation induces a decrement of the HRV long-range scaling correlations. • Mindfulness meditation can be regarded as a type of induced deep sleep-like dynamics. - Abstract: The rescaled range (R/S) analysis was used for analyzing the fractal scaling properties of heart rate variability (HRV) of subjects in pre-meditation and meditation states. Eight novice subjects and four advanced practitioners were considered. The corresponding pre-meditation and meditation HRV data were obtained from the Physionet database. The results showed that mindfulness meditation induces a decrement of the HRV long-range scaling correlations as quantified with the time-variant Hurst exponent. The Hurst exponent for advanced meditation practitioners decreases to values near 0.5, reflecting uncorrelated (e.g., white noise-like) HRV dynamics. Some parallelisms between mindfulness meditation and deep sleep (Stage 4) are discussed, suggesting that the former can be regarded as a type of induced deep sleep-like dynamics.
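
    The R/S estimate of the Hurst exponent named in the abstract can be sketched compactly: the rescaled range is averaged over windows of several sizes, and H is the slope of log(R/S) against log(window size). The series below is white noise standing in for RR intervals, so the estimate should land near 0.5, matching the value the authors report for advanced practitioners.

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.normal(size=4096)  # placeholder series standing in for RR intervals

        def rs(series):
            z = np.cumsum(series - series.mean())  # cumulative deviations from the mean
            return (z.max() - z.min()) / series.std(ddof=1)

        sizes = [64, 128, 256, 512, 1024]
        avg_rs = [np.mean([rs(c) for c in np.split(x, len(x) // n)]) for n in sizes]
        H = np.polyfit(np.log(sizes), np.log(avg_rs), 1)[0]
        print(f"Hurst exponent ~ {H:.2f}")  # ~0.5 for uncorrelated noise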

  2. Mapping patient safety : A large-scale literature review using bibliometric visualisation techniques

    NARCIS (Netherlands)

    Rodrigues, S.P.; Van Eck, N.J.; Waltman, L.; Jansen, F.W.

    2014-01-01

    Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview

  3. Vis-A-Plan /visualize a plan/ management technique provides performance-time scale

    Science.gov (United States)

    Ranck, N. H.

    1967-01-01

    Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used as a supplement to PERT or independently.

  4. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample

    Directory of Open Access Journals (Sweden)

    Heeren A

    2013-05-01

    Full Text Available Alexandre Heeren,1,2 Grazia Ceschi,3 David P Valentiner,4 Vincent Dethier,1 Pierre Philippot1 1Université Catholique de Louvain, Louvain-la-Neuve, Belgium; 2National Fund for Scientific Research, Brussels, Belgium; 3Department of Psychology, University of Geneva, Geneva, Switzerland; 4Department of Psychology, Northern Illinois University, DeKalb, IL, USA. Background: The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as Speaker (PRCS), one of the most promising measurements of public speaking fear. Methods: A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Results: Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach’s alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). Conclusion: The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample. Keywords: social phobia, public speaking, confirmatory

  5. Static analysis: from theory to practice; Static analysis of large-scale embedded code, generation of abstract domains; Analyse statique: de la theorie a la pratique; analyse statique de code embarque de grande taille, generation de domaines abstraits

    Energy Technology Data Exchange (ETDEWEB)

    Monniaux, D.

    2009-06-15

    Software operating critical systems (aircraft, nuclear power plants) should not fail, whereas most computerised systems of daily life (personal computers, ticket vending machines, cell phones) fail from time to time. This is not a simple engineering problem: it has been known since the works of Turing and Cook that proving that programs work correctly is intrinsically hard. In order to solve this problem, one needs methods that are, at the same time, efficient (moderate costs in time and memory), safe (all possible failures should be found), and precise (few warnings about nonexistent failures). In order to reach a satisfactory compromise between these goals, one draws on research fields as diverse as formal logic, numerical analysis and 'classical' algorithmics. From 2002 to 2007 I participated in the development of the Astree static analyser. This suggested to me a number of side projects, both theoretical and practical (use of formal proof techniques, analysis of numerical filters...). More recently, I became interested in modular analysis of numerical properties and in the application to program analysis of constraint solving techniques (semi-definite programming, SAT and SAT modulo theory). (author)

  6. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible because of limited computer resources, so different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial though of high interest. In fact, uncertainties in large scale simulations are expected to receive large contributions from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse and fine scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as boundary condition for the fine one. Then another coarse resolution run (run C) is performed, in which the high resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
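
    The nudging step at the heart of the method is Newtonian relaxation: the coarse-grid field is pulled toward the remapped fine-grid field with a chosen relaxation time. The sketch below illustrates one such step with invented numbers; the actual BOLCHEM implementation and time constants are not given in this abstract.

        import numpy as np

        c_coarse = np.full((10, 10), 40.0)      # ppb, coarse-run O3 over the nudged area
        c_fine_remap = np.full((10, 10), 52.0)  # ppb, fine run remapped to the coarse grid
        dt, tau = 600.0, 3600.0                 # time step and relaxation time (s), assumed

        c_coarse += (dt / tau) * (c_fine_remap - c_coarse)  # one nudging step
        print(c_coarse[0, 0])  # 42.0: pulled one-sixth of the way toward the fine field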

  7. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique for the water concentration in a two-component oil/water or three-component (i.e. multiphase) oil/water/gas flow. The technique is based on non-intrusive coil detectors, and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils, and specially designed electronics were used. The medium was composed of air, machine oil, and water of different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements made with the different mixtures were further used to mathematically model the physical principle underlying the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. Using numerous coils, it was found experimentally that both the coil impedance and the self-resonance frequency of the coil generally decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to eddy currents induced within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil, and hence of the magnetic field penetrating the media, should be relatively high (within the MHz range and higher). The technique was therefore called, and is referred to throughout the entire work as, the high frequency magnetic field technique (HFMFT). To use the HFMFT in practice, it was necessary to give it an analytical frame. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the
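
    The abstract is cut off before the model is stated, but a common idealized starting point for a coil's self-resonance — an assumption here, not necessarily the thesis' model — treats the coil as an inductance L in parallel with its parasitic capacitance C_p:

        f_{\mathrm{sr}} = \frac{1}{2\pi\sqrt{L\,C_p}}

    Eddy-current losses in a conductive medium add an effective series resistance, damping the resonance and shifting f_sr, which is consistent with the reported decrease of both impedance and self-resonance frequency with increasing conductivity.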

  8. Fabrication Of Atomic-scale Gold Junctions By Electrochemical Plating Technique Using A Common Medical Disinfectant

    Science.gov (United States)

    Umeno, Akinori; Hirakawa, Kazuhiko

    2005-06-01

    Iodine tincture, a medical liquid familiar as a disinfectant, was introduced as an etching/deposition electrolyte for the fabrication of nanometer-separated gold electrodes. In the iodine tincture with dissolved gold, the gold electrodes were grown or eroded slowly, on the atomic scale, enough to form quantum point contacts. The resistance evolution during electrochemical deposition showed plateaus at integer multiples of the resistance quantum, (2e²/h)⁻¹, at room temperature. Iodine tincture is a commercially available common material, which makes the fabrication process simple and cost-effective. Moreover, in contrast to conventional electrochemical approaches, this method is free from highly toxic cyanide compounds and extraordinarily strong acids. We expect this method to be a useful interface between single-molecular-scale structures and macroscopic opto-electronic devices.
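
    The quantization reported above can be written compactly. The relation below is the standard Landauer expression for n fully open spin-degenerate channels (textbook physics, not a formula quoted from the paper); the n = 1 plateau corresponds to roughly 12.9 kΩ:

        R_n = \frac{1}{n}\,\frac{h}{2e^2} \approx \frac{12.9\ \mathrm{k\Omega}}{n}, \qquad n = 1, 2, 3, \ldots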

  9. Synthesis of fish scales gelatin-chitosan crosslinked films by gamma irradiation techniques

    International Nuclear Information System (INIS)

    Erizal; Perkasa, D.P.; Abbas, B.; Sulistioso, G.S.

    2013-01-01

    Gelatin is an important component of fish scales, and attention to applications of gelatin has increased in recent years. The aim of this research was to improve the mechanical properties of gelatin produced from fish scales, which could concurrently increase the usefulness of fish scales. Gelatin (G) is prone to degrade or dissolve in water at room temperature; therefore, to enhance its lifetime, it has to be modified with another compound such as chitosan. Chitosan (Cs) is a biodegradable polymer with biocompatibility and antibacterial properties. In this study, gelatin solution was mixed with chitosan solution in various ratios (G/Cs: 100/0, 75/25, 50/50, 25/75, 0/100), cast at room temperature to make composite films, and then tested for the effectiveness of various gamma irradiation doses (10-40 kGy) for crosslinking the two polymers. Chemical changes of the films were measured by FT-IR, gel fractions were determined by gravimetry, and mechanical properties were determined from tensile strength and elongation at break using a universal testing machine. At optimum conditions (30 kGy and 75% Cs), the gel fraction, tensile strength, and elongation at break were higher, yielding stronger composite films compared to the gelatin film. FTIR spectral analysis showed that gelatin and chitosan formed a crosslinked network. It was concluded that G-Cs films prepared by gamma irradiation have better mechanical properties than gelatin itself. (author)

  10. Mechanisms of mineral scaling in oil and geothermal wells studied in laboratory experiments by nuclear techniques

    International Nuclear Information System (INIS)

    Bjoernstad, T.; Stamatakis, E.

    2006-01-01

    Two independent nuclear methods have been developed and tested for studies of mineral scaling mechanisms and kinetics related to the oil and geothermal industry. The first is a gamma transmission method to measure mass increase with a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. CaCO3 precipitation has been used as an example here, where the main tracer has been 47Ca2+. While the transmission method is an indirect method, the latter is a direct method with which the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios approaching the solubility limit. A lower limit for detection of CaCO3 with the transmission method in sand-packed columns with otherwise reasonable experimental parameters is estimated to be < 1 mg in a 1 cm section of the tube packed with silica sand, while the lower limit of detection for the tracer method with reasonable experimental parameters is estimated to be < 1 μg in the same tube section. (author)

  11. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
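    The partitioning idea is simple to express in code. A minimal pandas sketch (hypothetical file names and a hypothetical "category" annotation column; the variance threshold is illustrative):

        import pandas as pd

        # Hypothetical inputs: an expression matrix (genes x chips) and a
        # chip annotation table assigning each chip a tissue/process category.
        expr = pd.read_csv("ath1_expression.csv", index_col=0)
        meta = pd.read_csv("chip_annotations.csv", index_col=0)

        # Partition chips by biological category, then filter genes within
        # each partition rather than on the pooled, heterogeneous collection,
        # so fewer informative genes are discarded before network inference.
        inputs_per_network = {}
        for category, chips in meta.groupby("category").groups.items():
            part = expr[list(chips)]
            informative = part[part.std(axis=1) > 0.25]  # illustrative cutoff
            inputs_per_network[category] = informative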

  12. Discovery and characterisation of dietary patterns in two Nordic countries. Using non-supervised and supervised multivariate statistical techniques to analyse dietary survey data

    DEFF Research Database (Denmark)

    Edberg, Anna; Freyhult, Eva; Sand, Salomon

    - and inter-national data excerpts. For example, major PCA loadings helped deciphering both shared and disparate features, relating to food groups, across Danish and Swedish preschool consumers. Data interrogation, reliant on the above-mentioned composite techniques, disclosed one outlier dietary prototype...... prototype with the latter property was identified also in the Danish data material, but without low consumption of Vegetables or Fruit & berries. The second MDA-type of data interrogation involved Supervised Learning, also known as Predictive Modelling. These exercises involved the Random Forest (RF...... not elaborated on in-depth, output from several analyses suggests a preference for energy-based consumption data for Cluster Analysis and Predictive Modelling, over those appearing as weight....

  13. Kinematics and strain analyses of the eastern segment of the Pernicana Fault (Mt. Etna, Italy) derived from geodetic techniques (1997-2005)

    Directory of Open Access Journals (Sweden)

    M. Mattia

    2006-06-01

    This paper analyses the ground deformations that occurred on the eastern part of the Pernicana Fault from 1997 to 2005. This segment of the fault was monitored with three local networks based on GPS and EDM techniques. More than seventy GPS and EDM surveys were carried out during the considered period in order to achieve higher temporal detail of the ground deformation affecting the structure. We report comparisons between GPS and EDM surveys in terms of absolute horizontal displacements of each GPS benchmark and in terms of strain parameters for each GPS and EDM network. Ground deformation measurements detected a continuous left-lateral movement of the Pernicana Fault. We conclude that, on the easternmost part of the Pernicana Fault, where it branches into two segments, the deformation is transferred entirely SE-wards by a splay fault.

  14. Bench Scale Treatability Studies of Contaminated Soil Using Soil Washing Technique

    OpenAIRE

    Gupta, M. K.; Srivastava, R. K.; Singh, A. K.

    2010-01-01

    Soil contamination is one of the most widespread and serious environmental problems confronting both the industrialized as well as developing nations like India. Different contaminants have different physicochemical properties, which influence the geochemical reactions induced in the soils and may bring about changes in their engineering and environmental behaviour. Several technologies exist for the remediation of contaminated soil and water. In the present study soil washing technique using...

  15. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  16. Structures and Techniques For Implementing and Packaging Complex, Large Scale Microelectromechanical Systems Using Foundry Fabrication Processes.

    Science.gov (United States)

    1996-06-01

    [No abstract in this record: the excerpt contains only front-matter residue from the report, i.e. list-of-figures entries (Figure 5-27, mechanical interference between 'Pull Spring' devices; Figure 5-28, array of LIGA mechanical relay switches) and glossary entries (DM: direct metal interconnect technique; DMD™: Digital Micromirror Device; EDP: ethylene, diamine, pyrocatechol and water, a silicon anisotropic etchant; MOSIS: MOS Implementation Service; PGA: pin grid array, an electronic die package; PZT: lead-zirconate-titanate; LIGA: Lithographie...).]

  17. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  18. Ex vivo activity quantification in micrometastases at the cellular scale using the α-camera technique

    DEFF Research Database (Denmark)

    Chouin, Nicolas; Lindegren, Sture; Frost, Sofia H L

    2013-01-01

    Targeted α-therapy (TAT) appears to be an ideal therapeutic technique for eliminating malignant circulating, minimal residual, or micrometastatic cells. These types of malignancies are typically infraclinical, complicating the evaluation of potential treatments. This study presents a method of ex vivo activity quantification with an α-camera device, allowing measurement of the activity taken up by tumor cells in biologic structures of a few tens of microns.

  19. Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan

    2017-01-01

    Highlights: • Study includes energy, exergy and economic analyses of an absorption heat transformer. • It addresses a multi-objective optimization study using the NSGA-II technique. • Total annual cost and total exergy destruction are optimized simultaneously. • Results with the multi-objective optimized design are more acceptable than the alternatives. - Abstract: The present paper addresses the energy, exergy and economic (3E) analyses of an absorption heat transformer (AHT) working with the LiBr-H₂O fluid pair. The heat exchangers, namely the absorber, condenser, evaporator, generator and solution heat exchanger, are designed for the size and cost estimation of the AHT. The effect of operating variables on system performance, size and cost is then examined. Simulation studies showed a conflict between the thermodynamic and economic performance of the system: heat exchangers with lower investment cost showed high irreversible losses, and vice versa. The operating variables of the system are therefore determined both economically and thermodynamically by implementing the non-dominated sorting genetic algorithm-II (NSGA-II) technique of multi-objective optimization. In the present work, if the cost-based optimized design is chosen, total exergy destruction is 2.4% higher than its minimum possible value, whereas if the exergy-based optimized design is chosen, total annual cost is 6.1% higher than its minimum possible value. With the multi-objective optimized design, by contrast, total annual cost and total exergy destruction are only 1.0% and 0.8% above their minimum possible values, respectively. The multi-objective optimized design of the AHT is thus a better outcome than any single-objective optimized design.

  20. POC-scale testing of an advanced fine coal dewatering equipment/technique

    Energy Technology Data Exchange (ETDEWEB)

    Groppo, J.G.; Parekh, B.K. [Univ. of Kentucky, Lexington, KY (United States); Rawls, P. [Department of Energy, Pittsburgh, PA (United States)

    1995-11-01

    Froth flotation is an effective and efficient process for the recovery of ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a 20 percent moisture level will be an important step in the successful implementation of advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use advanced fine coal recovery technology because an economical dewatering process is not available; in fact, in a recent survey conducted by the US DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economical fine clean coal slurry dewatering process.

  1. Evaluation of different downscaling techniques for hydrological climate-change impact studies at the catchment scale

    Energy Technology Data Exchange (ETDEWEB)

    Teutschbein, Claudia [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Wetterhall, Fredrik [King's College London, Department of Geography, Strand, London (United Kingdom); Swedish Meteorological and Hydrological Institute, Norrkoeping (Sweden); Seibert, Jan [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Uppsala University, Department of Earth Sciences, Uppsala (Sweden); University of Zurich, Department of Geography, Zurich (Switzerland)

    2011-11-15

    Hydrological modeling for climate-change impact assessment implies using meteorological variables simulated by global climate models (GCMs). Due to mismatching scales, coarse-resolution GCM output cannot be used directly for hydrological impact studies but rather needs to be downscaled. In this study, we investigated the variability of seasonal streamflow and flood-peak projections caused by the use of three statistical approaches to downscale precipitation from two GCMs for a meso-scale catchment in southeastern Sweden: (1) an analog method (AM), (2) a multi-objective fuzzy-rule-based classification (MOFRBC) and (3) the Statistical DownScaling Model (SDSM). The obtained higher-resolution precipitation values were then used to simulate daily streamflow for a control period (1961-1990) and for two future emission scenarios (2071-2100) with the precipitation-streamflow model HBV. The choice of downscaled precipitation time series had a major impact on the streamflow simulations, which was directly related to the ability of the downscaling approaches to reproduce observed precipitation. Although SDSM was considered to be most suitable for downscaling precipitation in the studied river basin, we highlighted the importance of an ensemble approach. The climate and streamflow change signals indicated that the current flow regime with a snowmelt-driven spring flood in April will likely change to a flow regime that is rather dominated by large winter streamflows. Spring flood events are expected to decrease considerably and occur earlier, whereas autumn flood peaks are projected to increase slightly. The simulations demonstrated that projections of future streamflow regimes are highly variable and can even partly point towards different directions. (orig.)

  2. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for the large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials science projects.

  3. Scaling up quality care for mothers and newborns around the time of birth: an overview of methods and analyses of intervention-specific bottlenecks and solutions.

    Science.gov (United States)

    Dickson, Kim E; Kinney, Mary V; Moxon, Sarah G; Ashton, Joanne; Zaka, Nabila; Simen-Kapeu, Aline; Sharma, Gaurav; Kerber, Kate J; Daelmans, Bernadette; Gülmezoglu, A; Mathai, Matthews; Nyange, Christabel; Baye, Martina; Lawn, Joy E

    2015-01-01

    The Every Newborn Action Plan (ENAP) and Ending Preventable Maternal Mortality targets cannot be achieved without high quality, equitable coverage of interventions at and around the time of birth. This paper provides an overview of the methodology and findings of a nine paper series of in-depth analyses which focus on the specific challenges to scaling up high-impact interventions and improving quality of care for mothers and newborns around the time of birth, including babies born small and sick. The bottleneck analysis tool was applied in 12 countries in Africa and Asia as part of the ENAP process. Country workshops engaged technical experts to complete a tool designed to synthesise "bottlenecks" hindering the scale up of maternal-newborn intervention packages across seven health system building blocks. We used quantitative and qualitative methods and literature review to analyse the data and present priority actions relevant to different health system building blocks for skilled birth attendance, emergency obstetric care, antenatal corticosteroids (ACS), basic newborn care, kangaroo mother care (KMC), treatment of neonatal infections and inpatient care of small and sick newborns. The 12 countries included in our analysis account for the majority of global maternal (48%) and newborn (58%) deaths and stillbirths (57%). Our findings confirm previously published results that the interventions with the most perceived bottlenecks are facility-based where rapid emergency care is needed, notably inpatient care of small and sick newborns, ACS, treatment of neonatal infections and KMC. Health systems building blocks with the highest rated bottlenecks varied for different interventions. Attention needs to be paid to the context specific bottlenecks for each intervention to scale up quality care. Crosscutting findings on health information gaps inform two final papers on a roadmap for improvement of coverage data for newborns and indicate the need for leadership for

  4. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    Science.gov (United States)

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas, retaining only those profiles resulting from standard growth conditions. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have given good species identification results for the three genera: for Bacillus, Paenibacillus and Pseudomonas, random forests achieved sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forests models outperform those of the other machine learning techniques, and our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species
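    A minimal sketch of the supervised strategy described above (scikit-learn assumed; the FAME feature matrix and species-label files are hypothetical placeholders):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical data: one row per strain, one column per fatty acid
        # (relative peak areas of the FAME profile); labels are species names.
        X = np.load("fame_profiles.npy")    # shape (n_strains, n_fatty_acids)
        y = np.load("species_labels.npy")   # shape (n_strains,)

        # Random forests were the best performer in the study; class imbalance
        # across species makes a per-class sensitivity-style score more
        # informative than raw accuracy.
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_val_score(clf, X, y, cv=10, scoring="recall_macro")
        print(f"mean per-species sensitivity: {scores.mean():.3f}")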

  5. Ranking provinces based on development scale in agriculture sector using taxonomy technique

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-08-01

    The purpose of this paper is to determine a comparative ranking of agricultural development in different provinces of Iran using the taxonomy technique. The independent variables are the amount of annual rainfall, the number of permanent rivers, the extent of pastures and forest, the cultivated area of agricultural and garden crops, the number of beehives, the number of fish farming ranches, the number of tractors and combines, the number of cooperative production societies, and the number of industrial cattle breeding and aviculture units. The results indicate that the maximum development coefficient value is associated with Razavi Khorasan province, followed by Mazandaran and East Azarbayjan, while the minimum ranking value belongs to Bushehr province.
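    A minimal numpy sketch of a common formulation of the numerical taxonomy ranking method (the indicator matrix below is a random placeholder, not the paper's data): indicators are standardized, each province's Euclidean distance to an ideal "pattern of development" is computed, and that distance is converted into a development coefficient in [0, 1], higher meaning more developed:

        import numpy as np

        # Hypothetical matrix: rows = provinces, columns = indicators
        # (rainfall, rivers, pastures, cultivated area, beehives, ...).
        X = np.random.default_rng(0).gamma(2.0, 1.0, size=(31, 10))

        Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize indicators
        ideal = Z.max(axis=0)                      # pattern of development
        c = np.linalg.norm(Z - ideal, axis=1)      # distance to the pattern

        c0 = c.mean() + 2 * c.std()                # normalizing constant
        f = 1 - c / c0                             # development coefficient
        ranking = np.argsort(-f)                   # most developed first
        print(ranking[:3], f[ranking[:3]])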

  6. Using artificial soil sediment mixtures for calibrating fingerprinting techniques at catchment scale

    International Nuclear Information System (INIS)

    Torres Astorga, Romina; Martin, Osvaldo A.; Velasco, Ricardo Hugo; Santos-Villalobos, Sergio de los; Mabit, Lionel; Dercon, Gerd

    2016-01-01

    Soil erosion and related sediment transport and deposition are key environmental problems in Central Argentina. Certain land use practices, such as intensive grazing, are considered particularly harmful in causing erosion and sediment mobilization. In the studied catchment, the Estancia Grande sub-catchment (630 hectares, 23 km northeast of San Luis), characterized by erodible loess soils, we tested sediment source fingerprinting techniques to identify critical hot spots of land degradation, based on the concentrations of 43 elements determined by energy-dispersive X-ray fluorescence (EDXRF).

  7. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    Science.gov (United States)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

    Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  8. Novel GIMS technique for deposition of colored Ti/TiO₂ coatings on industrial scale

    Directory of Open Access Journals (Sweden)

    Zdunek Krzysztof

    2016-03-01

    The aim of the present paper is to verify the effectiveness and usefulness of a novel deposition process named GIMS (Gas Injection Magnetron Sputtering), used for the first time for the deposition of Ti/TiO₂ coatings on large-area glass substrates under industrial-scale production conditions. The Ti/TiO₂ coatings were deposited in an industrial system utilizing a set of linear magnetrons, each 2400 mm long, covering 2000 × 3000 mm glass panes. Given the specific course of the GIMS process (multipoint gas injection along the magnetron length) and the scale of the industrial facility, optical coating uniformity was the most important property to check. The experiments on Ti/TiO₂ coatings deposited by GIMS were conducted on substrates in the form of glass plates located at key points along the magnetrons, intentionally not heated at any stage of the process. Measurements of the coating properties showed that the thickness and optical uniformity of the 150 nm thick coatings deposited by GIMS in the industrial facility are fully acceptable from the point of view of the expected applications, e.g. architectural glazing (the thickness differences across the large plates of 2000 mm width did not exceed 20 nm).

  9. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    Science.gov (United States)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized-JTC-based swimmer tracking system is proposed and evaluated on a real video database recorded at national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we calibrate the swimming pool using the DLT (Direct Linear Transformation) algorithm. DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of swim. Once the swimming pool is calibrated, we extract the lane. We then apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized scaled composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
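    A minimal numpy sketch of the DLT calibration step (standard homography estimation; the four pool-corner correspondences below are hypothetical):

        import numpy as np

        def dlt_homography(pix, metric):
            """Estimate H mapping pixel coords to metric pool coords (>= 4 points)."""
            A = []
            for (x, y), (X, Y) in zip(pix, metric):
                A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
                A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
            _, _, Vt = np.linalg.svd(np.asarray(A))
            return Vt[-1].reshape(3, 3)  # null-space vector = H, up to scale

        # Hypothetical correspondences: image corners of a 50 m x 25 m pool.
        pix = [(102, 541), (1817, 530), (1619, 77), (298, 83)]
        metric = [(0, 0), (50, 0), (50, 25), (0, 25)]
        H = dlt_homography(pix, metric)

        p = H @ np.array([960.0, 300.0, 1.0])  # map a pixel to pool coordinates
        print(p[:2] / p[2])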

  10. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. However, the number of punches that can normally be obtained from a single specimen card is often insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique for performing large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that, using the improved technique, it is possible to obtain up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique a promising method for performing large-scale SNP genotyping, because FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  11. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, a major hurdle for its further development is finding economic ways to fabricate such nanostructures at large scale. Here, we demonstrate low-cost fabrication using nanosphere-related techniques, such as nanosphere lithography (NSL) and nanospherical-lens lithography (NLL). NSL is a low-cost nanofabrication technique able to produce nano-triangle arrays that cover a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus incoming ultraviolet light and expose the underlying photoresist (PR) layer; PR hole arrays form after development, and metal nanodisk arrays can be fabricated by subsequent metal evaporation and lift-off. Nanodisk and nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. By combining several other key technologies, such as angled exposure and deposition, we can also fabricate more complicated nanostructures, such as nanodisk oligomers, and modify these methods to obtain various metallic nanostructures. The metallic structures are of high fidelity and large scale; they can be transformed into semiconductor nanostructures and used in several green technology applications.

  12. AGARD Flight Test Techniques Series. Volume 9. Aircraft Exterior Noise Measurement and Analysis Techniques. (Le Bruit a l’Exterieur des Aeronefs: Techniques de Mesure et d’Analyse)

    Science.gov (United States)

    1991-04-01

    [Garbled scan excerpt; the legible fragments, partially reconstructed, read:] '...suggested to me to write an AGARDograph on "Aircraft Noise Measurement and Analysis Techniques". Being overjoyed, and quite honoured, I readily agreed...'; '...(a) ... (Delta 1 and Delta 2 terms); (b) Source Noise Correction - Jet Engine Noise (Delta 3 term); (c) Source Noise Correction - Propeller Noise (Delta 3...'; '...printed out, since it is impractical to write these down by hand during the test). One track on each tape-recorder must be used to record a time code.'

  13. Patterns and sources of adult personality development: growth curve analyses of the NEO PI-R scales in a longitudinal twin study.

    Science.gov (United States)

    Bleidorn, Wiebke; Kandler, Christian; Riemann, Rainer; Spinath, Frank M; Angleitner, Alois

    2009-07-01

    The present study examined the patterns and sources of 10-year stability and change of adult personality assessed by the 5 domains and 30 facets of the Revised NEO Personality Inventory. Phenotypic and biometric analyses were performed on data from 126 identical and 61 fraternal twins from the Bielefeld Longitudinal Study of Adult Twins (BiLSAT). Consistent with previous research, LGM analyses revealed significant mean-level changes in domains and facets suggesting maturation of personality. There were also substantial individual differences in the change trajectories of both domain and facet scales. Correlations between age and trait changes were modest and there were no significant associations between change and gender. Biometric extensions of growth curve models showed that 10-year stability and change of personality were influenced by both genetic as well as environmental factors. Regarding the etiology of change, the analyses uncovered a more complex picture than originally stated, as findings suggest noticeable differences between traits with respect to the magnitude of genetic and environmental effects.

  14. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman-Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry, and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.
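    For reference, the Wilshire rupture-life equation mentioned above is commonly written in the general literature (a sketch of the standard form, not taken from this record) as

        \frac{\sigma}{\sigma_{TS}} = \exp\left\{ -k_1 \left[ t_f \exp\left( -\frac{Q_c^*}{RT} \right) \right]^{u} \right\}

    where t_f is the rupture life, \sigma_{TS} the tensile strength at temperature, Q_c^* an apparent activation energy, and k_1, u fitted constants; normalizing the applied stress by \sigma_{TS} is what allows short-term data to be extrapolated. The k_SP factor enters through the companion small punch correlation, usually quoted in the CEN code-of-practice form F/\sigma = 3.33\,k_{SP}\,R^{-0.2}\,r^{1.2}\,h_0, which converts the punch load F to an equivalent uniaxial stress \sigma given the receiving-hole radius R, punch radius r and initial specimen thickness h_0.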

  15. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    International Nuclear Information System (INIS)

    Jeffs, S.P.; Lancaster, R.J.; Garcia, T.E.

    2015-01-01

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman-Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry, and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results

  16. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    International Nuclear Information System (INIS)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    International Nuclear Information System (INIS)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR

  20. Analytical-scale separations of lanthanides : a review of techniques and fundamentals

    International Nuclear Information System (INIS)

    Nash, K. L.; Jensen, M. P.

    1999-01-01

    Separations chemistry is at the heart of most analytical procedures to determine the rare earth content of both man-made and naturally occurring materials. Such procedures are widely used in mineral exploration, fundamental geology and geochemistry, materials science, and in the nuclear industry. Chromatographic methods that rely on aqueous solutions containing complexing agents sensitive to the lanthanide cationic radius, and on cation-exchange phase transfer reactions (using a variety of different solid media), have enjoyed the greatest success for these procedures. In this report, the authors briefly summarize the most important methods for completing such analyses and consider in some detail the basic aqueous (and two-phase) solution chemistry that accounts for the separations that work well, offering explanations for why others are less successful

  1. A Procedure to Map Subsidence at the Regional Scale Using the Persistent Scatterer Interferometry (PSI Technique

    Directory of Open Access Journals (Sweden)

    Ascanio Rosi

    2014-10-01

    In this paper, we present a procedure to map subsidence at the regional scale by means of persistent scatterer interferometry (PSI). Subsidence analysis is usually restricted to plain areas and to places where the presence of this phenomenon is already known. The proposed procedure allows fast identification of subsidence in large, hilly and mountainous areas. The test area is the Tuscany region, in Central Italy, where several areas are affected by natural and anthropogenic subsidence and where PSI data acquired by the Envisat satellite are available in both ascending and descending orbits. The procedure consists first of deriving the vertical and horizontal components of the deformation measured by the satellite, and then of calculating the "real" displacement direction, so that mainly vertical deformations can be identified and mapped.
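    A minimal numpy sketch of the component-definition step (the standard two-geometry line-of-sight inversion, assuming negligible sensitivity to north-south motion; the incidence angles and LOS rates below are hypothetical):

        import numpy as np

        def decompose(v_asc, v_desc, inc_asc, inc_desc):
            """Solve for (v_up, v_east) from ascending/descending LOS rates.

            Simplified model: LOS = up*cos(inc) -/+ east*sin(inc), the sign of
            the east term flipping between the two viewing geometries.
            """
            A = np.array([
                [np.cos(inc_asc), -np.sin(inc_asc)],   # ascending geometry
                [np.cos(inc_desc), np.sin(inc_desc)],  # descending geometry
            ])
            return np.linalg.solve(A, np.array([v_asc, v_desc]))

        v_up, v_east = decompose(v_asc=-6.2, v_desc=-4.8,        # mm/yr
                                 inc_asc=np.radians(23),         # Envisat-like
                                 inc_desc=np.radians(23))
        print(f"vertical: {v_up:.1f} mm/yr, east-west: {v_east:.1f} mm/yr")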

  2. A case study of life cycle impacts of small-scale fishing techniques in Thailand

    DEFF Research Database (Denmark)

    Verones, Francesca; Bolowich, Alya F.; Ebata, Keigo

    2017-01-01

    Fish provides an important source of protein, especially in developing countries, and the amounts of fish consumed are increasing worldwide (mostly from aquaculture). More than half of all marine fish are caught by small-scale fishery operations. However, no life cycle assessment (LCA) of small...... inventories for three different seasons (northeast monsoon, southwest monsoon and pre-monsoon), since the time spent on the water and catch varied significantly between the seasons. Our results showed the largest impacts from artisanal fishing operations affect climate change, human toxicity, and fossil...... and metal depletion. Our results are, in terms of global warming potential, comparable with other artisanal fisheries. Between different fishing operations, impacts vary between a factor of 2 (for land transformation impacts) and up to a factor of more than 20 (fossil fuel depletion and marine...

  3. A Robust Decision-Making Technique for Water Management under Decadal Scale Climate Variability

    Science.gov (United States)

    Callihan, L.; Zagona, E. A.; Rajagopalan, B.

    2013-12-01

    Robust decision making, a flexible and dynamic approach to managing water resources in light of deep uncertainties associated with climate variability at inter-annual to decadal time scales, is an analytical framework that detects when a system is in or approaching a vulnerable state. It provides decision makers the opportunity to implement strategies that both address the vulnerabilities and perform well over a wide range of plausible future scenarios. A strategy that performs acceptably over a wide range of possible future states is not likely to be optimal with respect to the actual future state. The degree of success--the ability to avoid vulnerable states and operate efficiently--thus depends on the skill in projecting future states and the ability to select the most efficient strategies to address vulnerabilities. This research develops a robust decision making framework that incorporates new methods of decadal scale projections with selection of efficient strategies. Previous approaches to water resources planning under inter-annual climate variability combining skillful seasonal flow forecasts with climatology for subsequent years are not skillful for medium term (i.e. decadal scale) projections as decision makers are not able to plan adequately to avoid vulnerabilities. We address this need by integrating skillful decadal scale streamflow projections into the robust decision making framework and making the probability distribution of this projection available to the decision making logic. The range of possible future hydrologic scenarios can be defined using a variety of nonparametric methods. Once defined, an ensemble projection of decadal flow scenarios are generated from a wavelet-based spectral K-nearest-neighbor resampling approach using historical and paleo-reconstructed data. This method has been shown to generate skillful medium term projections with a rich variety of natural variability. The current state of the system in combination with the

  4. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    Energy Technology Data Exchange (ETDEWEB)

    Sabharwall, Piyush [Idaho National Laboratory (INL), Idaho Falls, ID (United States); O' Brien, James E. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); McKellar, Michael G. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Housley, Gregory K. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial

  5. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  6. Different scale land subsidence and ground fissure monitoring with multiple InSAR techniques over Fenwei basin, China

    Directory of Open Access Journals (Sweden)

    C. Zhao

    2015-11-01

    The Fenwei basin, China, composed of several sub-basins, has suffered severe geohazards over the last 60 years, including large scale land subsidence and small scale ground fissures, which have caused serious infrastructure damage and property losses. In this paper, we apply different InSAR techniques to different SAR data to monitor these hazards. Firstly, a combination of the small baseline subset (SBAS) InSAR method and the persistent scatterer (PS) InSAR method is applied to multi-track Envisat ASAR data to retrieve the large scale land subsidence over the entire Fenwei basin, from which the different subsidence magnitudes of the individual sub-basins are analyzed. Secondly, the PS-InSAR method is used to monitor the small scale ground fissure deformation in the Yuncheng basin, where different spatial deformation gradients can be clearly discerned. Lastly, SAR data from different tracks are combined to retrieve the two-dimensional deformation of a region affected by both land subsidence and ground fissures, Xi'an, China, which helps explain the occurrence of the ground fissures and the correlation between land subsidence and ground fissures.

  7. Technique for large-scale structural mapping at uranium deposits i in non-metamorphosed sedimentary cover rocks

    International Nuclear Information System (INIS)

    Kochkin, B.T.

    1985-01-01

    A technique for plotting large-scale (1:1000 - 1:10000) structural maps reflecting low-amplitude fracture and fold structures is given for uranium deposits in non-metamorphosed sedimentary cover rocks. Structural drill-log sections, as well as a set of maps with the results of areal analysis of hidden disturbances, structural analysis of isopach lines, and facies of platform mantle horizons, serve as source materials for structural map plotting. The steps of structural map construction are considered: 1) construction of the structural framework; 2) reconstruction of the structure contours; 3) determination of the time of structure initiation; 4) plotting of the additional geologic content.

  8. Application od scaling technique for estimation of radionuclide inventory in radioactive waste

    International Nuclear Information System (INIS)

    Hertelendi, E.; Szuecs, Z.; Gulyas, J.; Svingor, E.; Csongor, J.; Ormai, P.; Fritz, A.; Solymosi, J.; Gresits, I.; Vajda, N.; Molnar, Zs.

    1996-01-01

    Safety studies related to the disposal of low- and intermediate-level waste indicate that the long term risk is determined by the presence of long-lived nuclides such as ¹⁴C, ⁵⁹Ni, ⁶³Ni, ⁹⁹Tc, ¹²⁹I and the transuranium elements. As most of these nuclides are difficult to measure, the correlation between these critical nuclides and some easily measurable key nuclides such as ⁶⁰Co and ¹³⁷Cs has been investigated for typical waste streams of the Paks Nuclear Power Plant (Hungary), and scaling factors have been proposed. An automated gamma-scanning monitor has been purchased and calibrated to determine the gamma-emitting radionuclides, and radiochemical methods have been developed to determine the significant difficult-to-measure radionuclides. The radionuclides of interest have been ³H, ¹⁴C, ⁹⁰Sr, ⁵⁵Fe, ⁵⁹Ni, ⁹⁹Tc, ¹²⁹I and TRUs. The measurements taken so far have yielded new information and data on the radiological composition of waste from WWER-type reactors. The reliability of the radioanalytical methods was checked by an international intercomparison test; for all radionuclides, the Hungarian results were in the average range of the total data set. (author)
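    A minimal sketch of how a scaling factor is typically derived from paired measurements (a common log-mean-ratio convention, not necessarily the plant's exact procedure; the activity arrays are hypothetical):

        import numpy as np

        # Paired activities (Bq/g) from waste samples: an easy-to-measure key
        # nuclide (e.g. Co-60 from gamma scanning) and a difficult-to-measure
        # nuclide (e.g. Ni-63 from radiochemistry) on the same samples.
        key = np.array([3.1e4, 8.7e3, 5.2e4, 1.9e4])
        dtm = np.array([6.5e2, 1.6e2, 1.2e3, 3.5e2])

        # Activity ratios are roughly log-normal, so the scaling factor is
        # usually taken as the geometric mean of the sample ratios.
        sf = np.exp(np.mean(np.log(dtm / key)))

        # Inventory estimate for an unmeasured drum from its gamma scan alone:
        print(f"SF = {sf:.3g}; Ni-63 estimate = {sf * 4.0e4:.3g} Bq/g")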

  9. An industry-scale mass marking technique for tracing farmed fish escapees.

    Directory of Open Access Journals (Sweden)

    Fletcher Warren-Myers

    Farmed fish escape and enter the environment, with subsequent effects on wild populations. Reducing escapes requires the ability to trace individuals back to the point of escape, so that escape causes can be identified and technical standards improved. Here, we tested whether stable isotope otolith fingerprint marks delivered during routine vaccination could be an accurate, feasible and cost effective marking method suitable for industrial-scale application. We tested seven stable isotopes, ¹³⁴Ba, ¹³⁵Ba, ¹³⁶Ba, ¹³⁷Ba, ⁸⁶Sr, ⁸⁷Sr and ²⁶Mg, on farmed Atlantic salmon reared in freshwater, in experimental conditions designed to reflect commercial practice. Marking was 100% successful with individual Ba isotopes at concentrations as low as 0.001 μg g⁻¹ fish and with Sr isotopes at 1 μg g⁻¹ fish. Our results suggest that 63 unique fingerprint marks can be made at low cost using Ba (0.0002-0.02 $US per mark) and Sr (0.46-0.82 $US per mark) isotopes. Stable isotope fingerprinting during vaccination is feasible for commercial application if applied at a company level within the world's largest salmon producing nations. Introducing a mass marking scheme would enable tracing of escapees back to their point of origin, which could drive greater compliance, better farm design and improved management practices to reduce escapes.
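    The figure of 63 marks is consistent with treating each fingerprint as a non-empty subset of the six enriched Ba and Sr isotopes (2^6 - 1 = 63); a small sketch under that assumption (the subset interpretation is ours, not stated in the record):

        from itertools import combinations

        isotopes = ["134Ba", "135Ba", "136Ba", "137Ba", "86Sr", "87Sr"]

        # Every non-empty subset of the six isotopes would be a distinct
        # otolith fingerprint: 2**6 - 1 = 63 company-level codes.
        marks = [combo for r in range(1, len(isotopes) + 1)
                 for combo in combinations(isotopes, r)]
        print(len(marks))  # 63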

  10. Subchains: A Technique to Scale Bitcoin and Improve the User Experience

    Directory of Open Access Journals (Sweden)

    Peter R. Rizun

    2016-12-01

    Orphan risk for large blocks limits Bitcoin's transactional capacity while the lack of secure instant transactions restricts its usability. Progress on either front would help spur adoption. This paper considers a technique for using fractional-difficulty blocks (weak blocks) to build subchains bridging adjacent pairs of real blocks. Subchains reduce orphan risk by propagating blocks layer-by-layer over the entire block interval, rather than all at once when the proof-of-work is solved. Each new layer of transactions helps to secure the transactions included in lower layers, even though none of the transactions have been confirmed in a real block. Miners are incentivized to cooperate building subchains in order to process more transactions per second (thereby claiming more fee revenue) without incurring additional orphan risk. The use of subchains also diverts fee revenue towards network hash power rather than dripping it out of the system to pay for orphaned blocks. By nesting subchains, weak block verification times approaching the theoretical limits imposed by speed-of-light constraints would become possible with future technology improvements. As subchains are built on top of the existing Bitcoin protocol, their implementation does not require any changes to Bitcoin's consensus rules.
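    To make the orphan-risk intuition concrete, a small sketch using the usual exponential block-arrival model (the propagation times below are illustrative, not figures from the paper):

        import math

        def orphan_prob(propagation_s, block_interval_s=600.0):
            """P(another block is found while ours propagates), Poisson arrivals."""
            return 1.0 - math.exp(-propagation_s / block_interval_s)

        # A large block that takes 30 s to reach the network vs. subchain-style
        # pre-propagation that leaves only ~1 s of novel data at solve time.
        print(f"30 s propagation: {orphan_prob(30):.1%} orphan risk")  # ~4.9%
        print(f" 1 s propagation: {orphan_prob(1):.2%} orphan risk")   # ~0.17%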

  11. Bench Scale Treatability Studies of Contaminated Soil Using Soil Washing Technique

    Directory of Open Access Journals (Sweden)

    M. K. Gupta

    2010-01-01

    Soil contamination is one of the most widespread and serious environmental problems confronting both industrialized and developing nations such as India. Different contaminants have different physicochemical properties, which influence the geochemical reactions induced in soils and may bring about changes in their engineering and environmental behaviour. Several technologies exist for the remediation of contaminated soil and water. In the present study, a soil washing technique using plain water with surfactants as enhancers was used to study the remediation of soil contaminated with (i) an organic contaminant (engine lubricant oil) and (ii) an inorganic contaminant (a heavy metal). The lubricant engine oil was used at different percentages (by dry weight of the soil) to artificially contaminate the soil; it was found that the geotechnical properties of the soil underwent large modifications on account of mixing with the lubricant oil. Sorption experiments were conducted with cadmium in aqueous medium at different initial metal concentrations and at varying pH values of the sorbing medium. For the remediation of the contaminated soil matrices, a nonionic surfactant was used to restore the geotechnical properties of the lubricant-oil-contaminated soil samples, whereas an anionic surfactant was employed to desorb cadmium from the contaminated soil matrix. The surfactant was able to restore the properties of the oil-contaminated soil to 98% of those of the virgin soil, while up to 54% of the cadmium was desorbed from the contaminated soil matrix in the surfactant-aided desorption experiments.

  12. Development of small scale mechanical testing techniques on ion beam irradiated 304 SS

    International Nuclear Information System (INIS)

Reichardt, A.; Abad, M.D.; Hosemann, P.; Lupinacci, A.; Kacher, J.; Minor, A.; Jiao, Z.; Chou, P.

    2015-01-01

Austenitic stainless steels are widely used for structural components in light water reactors; however, uncertainty in their susceptibility to irradiation assisted stress corrosion cracking (IASCC) has made long term performance predictions difficult. In addition, the testing of reactor irradiated materials has proven challenging due to the long irradiation times required, limited sample availability, and unwanted activation. To address these problems, we apply recently developed techniques in nano-indentation and micro-compression testing to small volume samples of 10 dpa proton-beam irradiated 304 stainless steel. Cross sectional nano-indentation was performed on both proton beam irradiated and non-irradiated samples at temperatures ranging from 22 to 300 °C to determine the effects of irradiation and operating temperature on hardening. Micro-compression tests using 2 μm x 2 μm x 5 μm focused-ion beam milled pillars were then performed in situ in an electron microscope to allow for a more accurate look at stress-strain behavior along with real-time observations of localized mechanical deformation. Large sudden slip events and a significant increase in yield strength are observed in irradiated micro-compression samples at room temperature. Elevated temperature nano-indentation results reveal the possibility of thermally-activated changes in deformation mechanism for irradiated specimens. Since the deformation mechanism information provided by micro-compression testing can provide valuable information about IASCC susceptibility, future work will involve ex situ micro-compression tests at reactor operating temperature.

  13. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids Lagrangian trajectories, and many other spouted bed related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  14. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

Most forest health assessments are limited to monitoring sampling sites. In Britain, forest health monitoring was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with a database constructed using the Oracle database program. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area based on the evaluation of forest health by tree species, diameter at breast height, water pipe and density in the summer and fall of 200. In the case of Korea, the first evaluation report on forest health vitality placed 1,000 sample points in forests using a systematic method arranging points on a regular 4 km × 4 km grid, and surveyed 29 items in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has been conducted through the monitoring of survey sample points, and it is difficult to collect information that supports customized policies for regional sites. In the case of special forests, such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed; it is therefore necessary to expand the survey points for diagnosis and evaluation of customized forest health. For this reason, we construct a method of spatial scaling through spatial interpolation according to the characteristics of each of the 29 indices at the sample points of the first forest health vitality report. PCA and correlation analyses are conducted to select significant indicators, weights are then assigned to each index, and forest health is evaluated through statistical grading.
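The record does not specify which interpolator is used for the spatial scaling step; as a minimal sketch, inverse-distance weighting over hypothetical grid sample points (coordinates, index values and the power parameter are all invented for illustration):

```python
import numpy as np

def idw(xy_samples, values, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_samples[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Hypothetical 4 km grid sample points with a health index at each.
samples = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
index = np.array([0.8, 0.6, 0.7, 0.5])
query = np.array([[2.0, 2.0], [1.0, 3.0]])
print(idw(samples, index, query))
```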

  15. Investigation of the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV): exploratory and higher order factor analyses.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W

    2010-12-01

    The present study examined the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV; D. Wechsler, 2008a) standardization sample using exploratory factor analysis, multiple factor extraction criteria, and higher order exploratory factor analysis (J. Schmid & J. M. Leiman, 1957) not included in the WAIS-IV Technical and Interpretation Manual (D. Wechsler, 2008b). Results indicated that the WAIS-IV subtests were properly associated with the theoretically proposed first-order factors, but all but one factor-extraction criterion recommended extraction of one or two factors. Hierarchical exploratory analyses with the Schmid and Leiman procedure found that the second-order g factor accounted for large portions of total and common variance, whereas the four first-order factors accounted for small portions of total and common variance. It was concluded that the WAIS-IV provides strong measurement of general intelligence, and clinical interpretation should be primarily at that level.
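The Schmid and Leiman (1957) procedure named above orthogonalizes a higher-order solution: given first-order loadings Λ₁ and second-order loadings λ₂, the g loadings are Λ₁λ₂ and the residualized group-factor loadings are Λ₁·diag(√(1 − λ₂²)). A minimal sketch with invented loadings (not the WAIS-IV estimates):

```python
import numpy as np

# Hypothetical first-order pattern: 8 subtests on 4 factors.
L1 = np.array([
    [0.8, 0, 0, 0], [0.7, 0, 0, 0],
    [0, 0.8, 0, 0], [0, 0.6, 0, 0],
    [0, 0, 0.7, 0], [0, 0, 0.7, 0],
    [0, 0, 0, 0.6], [0, 0, 0, 0.5],
], dtype=float)
# Hypothetical second-order loadings of the 4 factors on g.
l2 = np.array([0.85, 0.80, 0.75, 0.60])

g_loadings = L1 @ l2                      # direct loadings on the g factor
group_loadings = L1 * np.sqrt(1 - l2**2)  # residualized group-factor loadings

common = (g_loadings**2).sum() + (group_loadings**2).sum()
print("share of common variance due to g:", (g_loadings**2).sum() / common)
```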

  16. Large-scale User Facility Imaging and Scattering Techniques to Facilitate Basic Medical Research

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Bilheux, Jean-Christophe; Gleason, Shaun Scott; Nichols, Trent L.; Bingham, Philip R.; Green, Mark L.

    2011-01-01

Conceptually, modern medical imaging can be traced back to the late 1960s and early 1970s with the advent of computed tomography. This pioneering work was done by 1979 Nobel Prize winners Godfrey Hounsfield and Allan McLeod Cormack and evolved into the first prototype Computed Tomography (CT) scanner in 1971, which became commercially available in 1972. Unique to the CT scanner was the ability to utilize X-ray projections taken at regular angular increments from which reconstructed three-dimensional (3D) images could be produced. It is interesting to note that the mathematics to realize tomographic images was developed in 1917 by the Austrian mathematician Johann Radon, who produced the mathematical relationships to derive 3D images from projections - known today as the Radon Transform. The confluence of newly advancing technologies, particularly in the areas of detectors, X-ray tubes, and computers, combined with the earlier derived mathematical concepts ushered in a new era in diagnostic medicine via medical imaging (Beckmann, 2006). Occurring separately but at a similar time as the development of the CT scanner were efforts at the national level within the United States to produce user facilities to support scientific discovery based upon experimentation. Basic Energy Sciences within the United States Department of Energy currently supports 9 major user facilities along with 5 nanoscale science research centers dedicated to measurement sciences and experimental techniques supporting a very broad range of scientific disciplines. Tracing back the active user facilities, the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC National Accelerator Laboratory was built in 1974, and it was realized that its intense x-ray beam could be used to study protein molecular structure. The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory was commissioned in 1982 and currently has 60 x-ray beamlines optimized for a number of different

  17. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
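A minimal sketch of the GA idea, assuming a toy full-order impulse response and a second-order reduced model; the tournament selection and blend crossover below are generic GA choices, not necessarily those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(50)

# Hypothetical full-order model: impulse response as a sum of four modes.
poles = np.array([0.9, 0.7, 0.3, -0.2])
gains = np.array([1.0, 0.5, 0.3, 0.1])
y_full = (gains[:, None] * poles[:, None]**k).sum(axis=0)

def response(theta):
    c1, p1, c2, p2 = theta
    return c1 * p1**k + c2 * p2**k

def fitness(theta):
    return -np.sum((response(theta) - y_full)**2)  # maximize => minimize deviation

# Plain GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([-2, -0.99, -2, -0.99], [2, 0.99, 2, 0.99], size=(60, 4))
for _ in range(200):
    fit = np.array([fitness(t) for t in pop])
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(scale=0.02, size=children.shape)
    children[:, [1, 3]] = np.clip(children[:, [1, 3]], -0.99, 0.99)  # keep poles stable
    pop = children

best = pop[np.argmax([fitness(t) for t in pop])]
print("reduced-order modes (c1, p1, c2, p2):", best)
```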

  18. Decomposing the trade-environment nexus for Malaysia: what do the technique, scale, composition, and comparative advantage effect indicate?

    Science.gov (United States)

    Ling, Chong Hui; Ahmed, Khalid; Binti Muhamad, Rusnah; Shahbaz, Muhammad

    2015-12-01

This paper investigates the impact of trade openness on CO2 emissions using time series data over the period 1970Q1-2011Q4 for Malaysia. We disintegrate the trade effect into scale, technique, composition, and comparative advantage effects to check the environmental consequence of trade at four different transition points. To achieve this purpose, we employ augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests to examine the stationarity properties of the variables. The long-run association among the variables is then examined by applying the autoregressive distributed lag (ARDL) bounds testing approach to cointegration. Our results confirm the presence of cointegration. Further, we find that the scale effect has a positive and the technique effect a negative impact on CO2 emissions after a threshold income level, forming an inverted U-shaped relationship and hence validating the environmental Kuznets curve hypothesis. Energy consumption adds to CO2 emissions. Trade openness and the composition effect improve environmental quality by lowering CO2 emissions. The comparative advantage effect increases CO2 emissions and impairs environmental quality. The results provide an innovative approach to viewing the impact of trade openness in four sub-dimensions of trade liberalization. Hence, this study offers a more comprehensive policy tool for trade economists to better design environmentally sustainable trade rules and agreements.
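The first econometric step, unit root testing, can be sketched with statsmodels (a library assumption; the series below is synthetic, standing in for the study's quarterly CO2, trade and income series):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=168))  # synthetic quarterly series, 1970Q1-2011Q4

for name, series in [("level", y), ("first difference", np.diff(y))]:
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF on {name}: stat={stat:.2f}, p={pvalue:.3f}")

# A unit root in levels but not in differences (an I(1) series) is the usual
# precondition before applying the ARDL bounds test for cointegration.
```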

  19. Development of proton-induced x-ray emission techniques with application to multielement analyses of human autopsy tissues and obsidian artifacts

    International Nuclear Information System (INIS)

    Nielson, K.K.

    1975-01-01

    A method of trace element analysis using proton-induced x-ray emission (PIXE) techniques with energy dispersive x-ray detection methods is described. Data were processed using the computer program ANALEX. PIXE analysis methods were applied to the analysis of liver, spleen, aorta, kidney medulla, kidney cortex, abdominal fat, pancreas, and hair from autopsies of Pima Indians. Tissues were freeze dried and low temperature ashed before analysis. Concentrations were tabulated for K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Pb, Se, Br, Rb, Sr, Cd, and Cs and examined for significant differences related to diabetes. Concentrations of Ca and Sr in aorta, Fe and Rb in spleen and Mn in liver had different patterns in diabetics than in nondiabetics. High Cs concentrations were also observed in the kidneys of two subjects who died of renal disorders. Analyses by atomic absorption and PIXE methods were compared. PIXE methods were also applied to elemental analysis of obsidian artifacts from Campeche, Mexico. Based on K, Ba, Mn, Fe, Rb, Sr and Zr concentrations, the artifacts were related to several Guatemalan sources. (Diss. Abstr. Int., B)

  20. Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2017-04-01

The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015).

  1. Effective combination of DIC, AE, and UPV nondestructive techniques on a scaled model of the Belgian nuclear waste container

    Science.gov (United States)

    Iliopoulos, Sokratis N.; Areias, Lou; Pyl, Lincy; Vantomme, John; Van Marcke, Philippe; Coppens, Erik; Aggelis, Dimitrios G.

    2015-03-01

Protecting the environment and future generations against the potential hazards arising from high-level and heat emitting radioactive waste is a worldwide concern. Following this direction, the Belgian Agency for Radioactive Waste and Enriched Fissile Materials has developed a reference design which considers the geological disposal of the waste in purely indurated clay. In this design the wastes are first post-conditioned in massive concrete structures called Supercontainers before being transported to the underground repositories. The Supercontainers are cylindrical structures which consist of four engineering barriers that, from the inner to the outer surface, are namely: the overpack, the filler, the concrete buffer and possibly the envelope. The overpack, which is made of carbon steel, is where the vitrified wastes and spent fuel are stored. The buffer, which is made of concrete, creates a highly alkaline environment ensuring slow and uniform overpack corrosion as well as radiological shielding. In order to evaluate the feasibility of constructing such Supercontainers, two scaled models have so far been designed and tested. The first scaled model indicated crack formation on the surface of the concrete buffer, but the absence of a crack detection and monitoring system precluded defining the exact time of crack initiation, as well as the origin, the penetration depth, the crack path and the propagation history. For this reason, the second scaled model test was performed to obtain further insight by answering the aforementioned questions using the Digital Image Correlation, Acoustic Emission and Ultrasonic Pulse Velocity nondestructive testing techniques.

  2. Inter-subject FDG PET Brain Networks Exhibit Multi-scale Community Structure with Different Normalization Techniques.

    Science.gov (United States)

    Sperry, Megan M; Kartha, Sonia; Granquist, Eric J; Winkelstein, Beth A

    2018-07-01

Inter-subject networks are used to model correlations between brain regions and are particularly useful for metabolic imaging techniques, like 18F-2-deoxy-2-(18F)fluoro-D-glucose (FDG) positron emission tomography (PET). Since FDG PET typically produces a single image, correlations cannot be calculated over time. Little focus has been placed on the basic properties of inter-subject networks and whether they are affected by group size and image normalization. FDG PET images were acquired from rats (n = 18), normalized by whole brain, visual cortex, or cerebellar FDG uptake, and used to construct correlation matrices. Group size effects on network stability were investigated by systematically adding rats and evaluating local network connectivity (node strength and clustering coefficient). Modularity and community structure were also evaluated in the differently normalized networks to assess meso-scale network relationships. Local network properties are stable regardless of normalization region for groups of at least 10. Whole brain-normalized networks are more modular than visual cortex- or cerebellum-normalized networks, and multi-scale analysis identifies network resolutions where modularity differs most between brain and randomized networks. Hierarchical analysis reveals consistent modules at different scales and clustering of spatially-proximate brain regions. Findings suggest inter-subject FDG PET networks are stable for reasonable group sizes and exhibit multi-scale modularity.
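A minimal sketch of building an inter-subject network and probing its community structure, assuming numpy/networkx and synthetic uptake data; the correlation threshold and the greedy modularity algorithm are illustrative choices, not those of the paper:

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(2)
uptake = rng.gamma(5.0, size=(18, 40))        # 18 subjects x 40 brain regions
uptake /= uptake.sum(axis=1, keepdims=True)   # crude whole-brain normalization

R = np.corrcoef(uptake.T)                     # region-by-region inter-subject correlations
A = (np.abs(R) > 0.5) & ~np.eye(len(R), dtype=bool)  # threshold to an adjacency matrix
G = nx.from_numpy_array(A.astype(int))

parts = community.greedy_modularity_communities(G)
print("modules:", [sorted(p) for p in parts])
print("modularity:", community.modularity(G, parts))
```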

  3. Controlling for Response Bias in Self-Ratings of Personality: A Comparison of Impression Management Scales and the Overclaiming Technique.

    Science.gov (United States)

    Müller, Sascha; Moshagen, Morten

    2018-04-12

Self-serving response distortions pose a threat to the validity of personality scales. A common approach to deal with this issue is to rely on impression management (IM) scales. More recently, the overclaiming technique (OCT) has been proposed as an alternative and arguably superior measure of such biases. In this study (N = 162), we tested these approaches in the context of self- and other-ratings using the HEXACO personality inventory. To the extent that the OCT and IM scales can be considered valid measures of response distortions, they are expected to account for inflated self-ratings, in particular for those personality dimensions that are prone to socially desirable responding. However, the results show that neither the OCT nor IM accounts for overly favorable self-ratings. The validity of IM as a measure of response biases was further called into question by a substantial correlation with other-rated honesty-humility. As such, this study questions the use of both the OCT and IM to assess self-serving response distortions.

  4. Determination of formation heterogeneity at a range of scales using novel multi-electrode resistivity scanning techniques

    International Nuclear Information System (INIS)

    Williams, G.M.; Jackson, P.D.; Ward, R.S.; Sen, M.A.; Meldrum, P.; Lovell, M.

    1991-01-01

    The traditional method of measuring ground resistivity involves passing a current through two outer electrodes, measuring the potential developed across two electrodes in between, and applying Ohm's Law. In the RESCAN system developed by the British Geological Survey, each electrode can be electronically selected and controlled by software to either pass current or measure potential. Thousands of electrodes can be attached to the system either in 2-D surface arrays or along special plastic covered probes driven vertically into the ground or emplaced in boreholes. Under computer control, the resistivity distribution within the emplaced array can be determined automatically with unprecedented detail and speed, and may be displayed as an image. So far, the RESCAN system has been applied at the meso-scale in monitoring the radial migration of an electrolyte introduced into a recharge well in an unconsolidated aquifer; and CORSCAN at the micro-scale on drill cores to evaluate spatial variability in physical properties. The RESCAN technique has considerable potential for determining formation heterogeneity at different scales and provides a basis for developing stochastic models of groundwater and solute flow in heterogeneous systems. 13 figs.; 1 tab.; 12 refs
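The underlying four-electrode measurement reduces to Ohm's law plus a geometric factor; for a Wenner array with electrode spacing a, the apparent resistivity is ρa = 2πa·V/I. A minimal sketch (the readings are invented for illustration):

```python
import math

def wenner_resistivity(spacing_m: float, voltage_v: float, current_a: float) -> float:
    """Apparent resistivity (ohm.m) for a Wenner four-electrode array."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Illustrative reading: 0.5 m spacing, 120 mV across the potential pair, 10 mA injected.
print(wenner_resistivity(0.5, 0.120, 0.010), "ohm.m")
```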

  5. Analyses of PWR spent fuel composition using SCALE and SWAT code systems to find correction factors for criticality safety applications adopting burnup credit

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Suyama, Kenya; Mochizuki, Hiroki; Okuno, Hiroshi; Nomura, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

The isotopic composition calculations were performed for 26 spent fuel samples from the Obrigheim PWR reactor and 55 spent fuel samples from 7 PWR reactors using the SAS2H module of the SCALE4.4 code system with 27, 44 and 238 group cross-section libraries and the SWAT code system with the 107 group cross-section library. For the analyses of samples from the Obrigheim PWR reactor, geometrical models were constructed for each of SCALE4.4/SAS2H and SWAT. For the analyses of samples from the 7 PWR reactors, the geometrical model already adopted in SCALE/SAS2H was directly converted to the model of SWAT. The four kinds of calculation results were compared with the measured data. For convenience, the ratio of the measured to calculated values was used as a parameter: when the ratio is less than unity, the calculation overestimates the measurement, and the closer the ratio is to unity, the better the agreement. For many nuclides important to burnup credit criticality safety evaluation, the four methods applied in this study generally showed good agreement with measurements. More precise observations showed, however: (1) ratios less than unity were found for Pu-239 and -241 for 16 samples selected out of the 26 samples from the Obrigheim reactor (10 samples were deselected because their burnups were measured with the Cs-137 non-destructive method, which is less reliable than the Nd-148 method used for the remaining 16 samples); (2) ratios larger than unity were found for Am-241 and Cm-242 for both the 16 and the 55 samples; (3) ratios larger than unity were found for Sm-149 for the 55 samples; (4) SWAT was generally accompanied by larger ratios than those of SAS2H, with some exceptions. Based on the measured-to-calculated ratios for the 71 samples of a combined set comprising the 16 selected samples and the 55 samples, correction factors that should be multiplied into the calculated isotopic compositions were generated for a conservative estimate of the neutron multiplication factor.
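The correction-factor construction described above is, in essence, a statistic of the measured-to-calculated (M/C) ratios per nuclide. A minimal sketch with invented concentrations; the report's actual factors are nuclide-specific and chosen for conservatism in the multiplication-factor estimate:

```python
import numpy as np

# Hypothetical measured and calculated Pu-239 concentrations for a few samples.
measured = np.array([0.95, 0.97, 0.93, 0.96])
calculated = np.array([1.00, 1.01, 0.99, 1.02])

ratios = measured / calculated   # M/C; a value < 1 means the code overestimates
cf = ratios.mean()               # simple mean M/C used here as the correction factor
print("per-sample M/C:", np.round(ratios, 3))
print("correction factor (mean M/C):", round(cf, 3))
print("corrected composition = calculated *", round(cf, 3))
```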

  6. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone.

  7. How to analyse a Big Bang of data: the mammoth project at the Cern physics laboratory in Geneva to recreate the conditions immediately after the universe began requires computing power on an unprecedented scale

    CERN Multimedia

    Thomas, Kim

    2005-01-01

    How to analyse a Big Bang of data: the mammoth project at the Cern physics laboratory in Geneva to recreate the conditions immediately after the universe began requires computing power on an unprecedented scale

  8. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power such complicated multi-physics problems are becoming progressively tractable, hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random sampling based approaches. At TU Delft such PC methods have been studied for a number of years and this paper presents a large scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account amounting to an unusually high number of stochastic input parameters (42) and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC
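FANISP itself is grid- and basis-adaptive, but the core non-intrusive projection step can be sketched in a few lines: sample the stochastic input, regress the code output onto an orthogonal polynomial basis, and read moments off the coefficients. The sketch below assumes a single Gaussian input and a stand-in model, not the 42-parameter GFR transient:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)

def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2   # stand-in for the coupled code

xi = rng.normal(size=200)                   # samples of the stochastic input
y = model(xi)

deg = 4
Phi = np.column_stack([hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# For probabilists' Hermite polynomials He_k, E[He_k^2] = k!, so the PC mean
# is coef[0] and the variance is sum_k k! * coef_k^2 for k >= 1.
mean = coef[0]
var = sum(factorial(k) * coef[k]**2 for k in range(1, deg + 1))
print(f"PC mean={mean:.4f} std={np.sqrt(var):.4f} (MC check: {y.mean():.4f} {y.std():.4f})")
```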

  9. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
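A minimal sketch of the virtual-input idea on a toy one-step problem (dynamics, bounds and weights are invented): relax the discrete input to a continuous variable, optimize, then quantize it back to its discrete set:

```python
from scipy.optimize import minimize

# Toy one-step problem: x_next = x + u + v, with u continuous and v in {0, 1, 2}.
x_init, x_ref = 5.0, 0.0

def cost(z):
    u, v = z                       # v is the relaxed "virtual" control input
    x_next = x_init + u + v
    return (x_next - x_ref)**2 + 0.1 * u**2 + 0.1 * v**2

res = minimize(cost, x0=[0.0, 1.0], bounds=[(-3.0, 3.0), (0.0, 2.0)])
u_star, v_virtual = res.x
v_star = round(v_virtual)          # quantize the virtual input to its discrete set
print(f"u={u_star:.3f}, virtual v={v_virtual:.3f}, applied v={v_star}")
```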

  10. Application of the Particle Swarm Optimization (PSO) technique to the thermal-hydraulics project of a PWR reactor core in reduced scale

    International Nuclear Information System (INIS)

    Lima Junior, Carlos Alberto de Souza

    2008-09-01

Reduced scale model designs have been employed by engineers from several different industries, such as the offshore, space, oil extraction and nuclear industries. Reduced scale models are used in experiments because they are more economical than their real-scale prototypes and are usually easier to build, providing a way to guide the real-scale design and allowing indirect investigation and analysis of the real-scale system (prototype). A reduced scale model (or experiment) must be able to represent all physical phenomena that occur in the real-scale system under operational conditions, in which case the reduced scale model is called similar. There are several methods to design a reduced scale model, of which two are basic: the empirical method, based on the expert's skill to determine which physical measures are relevant to the desired model; and the differential equation method, which is based on a mathematical description of the prototype (real-scale system) to be modeled. By applying a mathematical technique to the differential equations that describe the prototype and highlighting the relevant physical measures, the reduced scale model design problem may be treated as an optimization problem. Many optimization techniques, such as the Genetic Algorithm (GA), have been developed to solve this class of problems and have also been applied to the reduced scale model design problem. In this work, the Particle Swarm Optimization (PSO) technique is investigated as an alternative optimization tool for such problems. In this investigation a computational approach, based on the particle swarm optimization (PSO) technique, is used to design a reduced-scale two-loop Pressurized Water Reactor (PWR) core, considering 100% nominal power operation with forced cooling circulation and non-accidental operating conditions. A performance comparison
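The PSO kernel itself is compact. A minimal sketch with a stand-in objective (the Rosenbrock function here; in the thesis the objective would measure the deviation of the similarity criteria between prototype and reduced-scale core):

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    """Stand-in design objective (Rosenbrock); x has shape (n_particles, dim)."""
    return (1 - x[:, 0])**2 + 100 * (x[:, 1] - x[:, 0]**2)**2

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(-2, 2, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x = x + v
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("optimum found:", gbest)
```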

  11. Investigation of flow behaviour of coal particles in a pilot-scale fluidized bed gasifier (FBG) using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K; Kamudu, M Vidya; Prakash, S G; Krishanamoorthy, S; Anandam, G; Rao, P Seshubabu; Ramani, N V S; Singh, Gursharan; Sonde, R R

    2009-09-01

Knowledge of the residence time distribution (RTD), mean residence time (MRT) and degree of axial mixing of the solid phase is required for efficient operation of a coal gasification process. A radiotracer technique was used to measure the RTD of coal particles in a pilot-scale fluidized bed gasifier (FBG). Two different radiotracers, lanthanum-140 and gold-198 labeled coal particles (100 g), were independently used. The radiotracer was instantaneously injected into the coal feed line and monitored at the ash extraction line at the bottom and the gas outlet at the top of the gasifier using collimated scintillation detectors. The measured RTD data were treated and the MRTs of the coal/ash particles were determined. The treated data were simulated using a tanks-in-series model. The simulation of the RTD data indicated a good degree of mixing, with a small fraction of the feed material bypassing/short-circuiting from the bottom of the gasifier. The results of the investigation were found useful for optimizing the design and operation of the FBG, and for scale-up of the gasification process.
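The tanks-in-series model used to simulate the measured RTD has the closed form E(t) = (n/τ)ⁿ t^(n−1) e^(−nt/τ)/Γ(n), with n the tank count and τ the mean residence time. A minimal fitting sketch with synthetic detector data (scipy assumed; the numbers are illustrative, not the FBG measurements):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def tanks_in_series(t, n, tau):
    """RTD of n equal stirred tanks with total mean residence time tau."""
    return (n / tau)**n * t**(n - 1) * np.exp(-n * t / tau) / gamma(n)

# Hypothetical normalized detector response versus time from a tracer run.
t = np.linspace(1, 60, 30)
e_meas = tanks_in_series(t, 3.2, 18.0) + np.random.default_rng(5).normal(0, 0.002, t.size)

(n_fit, tau_fit), _ = curve_fit(tanks_in_series, t, e_meas, p0=[2.0, 15.0])
print(f"tanks n={n_fit:.2f}, mean residence time={tau_fit:.1f} s")
```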

  12. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    Science.gov (United States)

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  13. Research and realization of ten-print data quality control techniques for imperial scale automated fingerprint identification system

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2017-01-01

Full Text Available As the first individualization-information processing equipment put into practical service worldwide, the Automated Fingerprint Identification System (AFIS) has always been regarded as the first choice for individualization of criminal suspects or of those who died in mass disasters. By integrating data from existing regional large-scale AFIS databases, many countries are constructing ultra-large state-of-the-art AFIS (or imperial scale AFIS) systems. It is therefore very important to develop a series of ten-print data quality control processes for a system of this type, to ensure substantial matching efficiency as data pour into such an imperial scale system. As the image quality of ten-print data is closely related to AFIS matching proficiency, many police departments have for years allocated huge amounts of human and financial resources to this issue by carrying out manual verification work. Unfortunately, the quality control method above has proved inadequate: the task involved is astronomical, it has always been problematic and prone to undetected errors, and supplementary acquisition is hampered by the delay of feedback instructions sent from the human verification teams. In this article, a series of fingerprint image quality supervising techniques is put forward, which makes it possible for computer programs to supervise ten-print image quality in a real-time and more accurate manner as a substitute for traditional manual verification. Besides its prominent advantages in human and financial expenditure, this approach has also been shown to clearly improve the image quality of the AFIS ten-print database, which leads to a dramatic improvement in AFIS matching accuracy.

  14. Experimental study of the large-scale axially heterogeneous liquid-metal fast breeder reactor at the fast critical assembly: Power distribution measurements and their analyses

    International Nuclear Information System (INIS)

    Iijima, S.; Obu, M.; Hayase, T.; Ohno, A.; Nemoto, T.; Okajima, S.

    1988-01-01

Power distributions of the large-scale axially heterogeneous liquid-metal fast breeder reactor were studied by using the experimental results of fast critical assemblies XI, XII, and XIII and the results of their analyses. The power distributions were examined by the gamma-scanning method and fission rate measurements using 239Pu and 238U fission counters and the foil irradiation method. In addition to the measurements in the reference core, the power distributions were measured in the core with a control rod inserted and in a modified core where the shape of the internal blanket was determined by the radial boundary. The calculation was made by using JENDL-2 and the Japan Atomic Energy Research Institute's standard calculation system for fast reactor neutronics. The power flattening trend, caused by the decrease of the fast neutron flux, was observed in the axial and radial power distributions. The effect of the radial boundary shape of the internal blanket on the power distribution was determined in the core in which the thickness of the internal blanket was reduced at its radial boundary. The influence of the internal blanket was observed in the power distributions in the core with a control rod inserted. The calculation predicted a harder neutron spectrum in the internal blanket. In the radial distributions of 239Pu fission rates, a space dependency of the calculated-to-experiment values was found in the active core close to the internal blanket.

  15. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

    Science.gov (United States)

    Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

    2017-03-28

This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥ 12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥ 40 years, mean BMI ≥ 30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤ 6 months) and long term (≥ 12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour, predicted the effect at short and long term. The total number of BCTs in both intervention arms and use of the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy supportive and person-centred methods, as in Motivational Interviewing, the BCTs goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between study variation in effects at long term. There are similarities, but also differences in effective BCTs promoting change in healthy eating and physical activity and
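A minimal sketch of the pooling arithmetic behind figures such as ES = 0.24 and I² = 59.4%, using the DerSimonian-Laird random-effects estimator (a common choice, assumed here; the per-study effect sizes and standard errors are invented):

```python
import numpy as np

# Hypothetical per-study effect sizes and standard errors from outcome reports.
es = np.array([0.10, 0.25, 0.40, 0.15, 0.35])
se = np.array([0.08, 0.10, 0.12, 0.09, 0.11])

w = 1 / se**2                                    # fixed-effect weights
es_fe = np.sum(w * es) / np.sum(w)
Q = np.sum(w * (es - es_fe)**2)                  # Cochran's Q
df = len(es) - 1
I2 = max(0.0, (Q - df) / Q) * 100                # heterogeneity index, %
tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))  # DerSimonian-Laird

w_re = 1 / (se**2 + tau2)                        # random-effects weights
es_re = np.sum(w_re * es) / np.sum(w_re)
ci = es_re + np.array([-1.96, 1.96]) / np.sqrt(w_re.sum())
print(f"pooled ES={es_re:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), I2={I2:.1f}%")
```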

  16. Assessing microbial degradation of o-xylene at field-scale from the reduction in mass flow rate combined with compound-specific isotope analyses

    Science.gov (United States)

    Peter, A.; Steinbach, A.; Liedl, R.; Ptak, T.; Michaelis, W.; Teutsch, G.

    2004-07-01

    In recent years, natural attenuation (NA) has evolved into a possible remediation alternative, especially in the case of BTEX spills. In order to be approved by the regulators, biodegradation needs to be demonstrated which requires efficient site investigation and monitoring tools. Three methods—the Integral Groundwater Investigation method, the compound-specific isotope analysis (CSIA) and a newly developed combination of both—were used in this work to quantify at field scale the biodegradation of o-xylene at a former gasworks site which is heavily contaminated with BTEX and PAHs. First, the Integral Groundwater Investigation method [Schwarz, R., Ptak, T., Holder, T., Teutsch, G., 1998. Groundwater risk assessment at contaminated sites: a new investigation approach. In: Herbert, M. and Kovar, K. (Editors), GQ'98 Groundwater Quality: Remediation and Protection. IAHS Publication 250, pp. 68-71; COH 4 (2000) 170] was applied, which allows the determination of mass flow rates of o-xylene by integral pumping tests. Concentration time series obtained during pumping at two wells were used to calculate inversely contaminant mass flow rates at the two control planes that are defined by the diameter of the maximum isochrone. A reactive transport model was used within a Monte Carlo approach to identify biodegradation as the dominant process for reduction in the contaminant mass flow rate between the two consecutive control planes. Secondly, compound-specific carbon isotope analyses of o-xylene were performed on the basis of point-scale samples from the same two wells. The Rayleigh equation was used to quantify the degree of biodegradation that occurred between the wells. Thirdly, a combination of the Integral Groundwater Investigation method and the compound-specific isotope analysis was developed and applied. It comprises isotope measurements during the integral pumping tests and the evaluation of δ13C time series by an inversion algorithm to obtain spatially
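The Rayleigh-equation step can be sketched directly: with a carbon isotope enrichment factor ε (in per mil), the remaining fraction of the compound is f = ((δ¹³C_downstream + 1000)/(δ¹³C_source + 1000))^(1000/ε), and the biodegraded share is 1 − f. All numbers below are illustrative, not the site values:

```python
# Rayleigh-model estimate of biodegradation from compound-specific d13C values.
delta_source = -27.0   # d13C (per mil) of o-xylene at the upstream control plane
delta_down = -24.5     # d13C (per mil) at the downstream control plane
epsilon = -1.5         # enrichment factor (per mil); compound- and site-specific

f = ((delta_down + 1000) / (delta_source + 1000)) ** (1000 / epsilon)
print(f"remaining fraction f = {f:.2f}, biodegraded = {(1 - f):.1%}")
```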

  17. Measurements of liquid phase residence time distributions in a pilot-scale continuous leaching reactor using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Sharma, V.K.; Shenoy, K.T.; Sreenivas, T.

    2015-01-01

An alkaline-based continuous leaching process is commonly used for extraction of uranium from uranium ore. The reactor in which the leaching process is carried out is called a continuous leaching reactor (CLR) and is expected to behave as a continuously stirred tank reactor (CSTR) for the liquid phase. A pilot-scale CLR used in a Technology Demonstration Pilot Plant (TDPP) was designed, installed and operated, and thus needed to be tested for its hydrodynamic behavior. A radiotracer investigation was carried out in the CLR for measurement of the residence time distribution (RTD) of the liquid phase, with the specific objectives of characterizing the flow behavior of the reactor and validating its design. Bromine-82 as ammonium bromide was used as a radiotracer and about 40–60 MBq activity was used in each run. The measured RTD curves were treated and mean residence times were determined and simulated using a tanks-in-series model. The result of simulation indicated no flow abnormality, and the reactor behaved as an ideal CSTR for the range of operating conditions used in the investigation. - Highlights: • Radiotracer technique was applied for evaluation of design of a pilot-scale continuous leaching reactor. • Mean residence time and dead volume were estimated. Dead volume was found to be ranging from 4% to 15% at different operating conditions. • Tanks-in-series model was used to simulate the measured RTD data and was found suitable to describe the flow in the reactor. • No flow abnormality was found and the reactor behaved as a well-mixed system. The design of the reactor was validated.

  18. 3-D optical profilometry at micron scale with multi-frequency fringe projection using modified fibre optic Lloyd's mirror technique

    Science.gov (United States)

    Inanç, Arda; Kösoğlu, Gülşen; Yüksel, Heba; Naci Inci, Mehmet

    2018-06-01

A new fibre optic Lloyd's mirror method is developed for extracting the 3-D height distribution of various objects at the micron scale with a resolution of 4 μm. The fibre optic assembly is elegantly integrated with an optical microscope and a CCD camera. It is demonstrated that the proposed technique is quite suitable and practical for producing an interference pattern with an adjustable frequency. By increasing the distance between the fibre and the mirror with a micrometre stage in the Lloyd's mirror assembly, the separation between two bright fringes is lowered down to the micron scale without using any additional elements as part of the optical projection unit. A fibre optic cable, whose polymer jacket is partially stripped, and a microfluidic channel are used as test objects to extract their surface topographies. The point-by-point sensitivity of the method is found to be around 8 μm, varying by a couple of microns depending on the fringe frequency and the measured height. A straightforward calibration procedure for the phase-to-height conversion is also introduced by making use of the vertical moving stage of the optical microscope. The phase analysis of the acquired image is carried out by a one-dimensional continuous wavelet transform, for which the chosen wavelet is the Morlet wavelet, and carrier removal of the projected fringe patterns is achieved by reference subtraction. Furthermore, the flexible multi-frequency property of the proposed method allows measuring discontinuous heights, where there are phase ambiguities like 2π, by lowering the fringe frequency and eliminating the phase ambiguity.
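A minimal sketch of the phase-demodulation chain named above, a complex-Morlet continuous wavelet transform with reference subtraction for carrier removal, assuming the PyWavelets library and a synthetic fringe signal (carrier frequency, scales and test phase are invented):

```python
import numpy as np
import pywt

# Synthetic fringe profile: a carrier modulated by a surface-induced phase.
x = np.arange(1024)
phase = 0.8 * np.sin(2 * np.pi * x / 1024)        # stand-in height-induced phase
fringe = np.cos(2 * np.pi * 0.05 * x + phase)
reference = np.cos(2 * np.pi * 0.05 * x)          # reference fringe for carrier removal

scales = np.arange(10, 40)
coef, _ = pywt.cwt(fringe, scales, 'cmor1.5-1.0')      # complex Morlet CWT
coef_ref, _ = pywt.cwt(reference, scales, 'cmor1.5-1.0')

ridge = np.abs(coef).argmax(axis=0)               # strongest response per pixel
cols = np.arange(len(x))
wrapped = np.angle(coef[ridge, cols]) - np.angle(coef_ref[ridge, cols])
recovered = np.unwrap(wrapped)                    # phase map, ready for height conversion
print("recovered phase range:", recovered.min(), recovered.max())
```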

  19. Experimental Assessment on the Hysteretic Behavior of a Full-Scale Traditional Chinese Timber Structure Using a Synchronous Loading Technique

    Directory of Open Access Journals (Sweden)

    XiWang Shi

    2018-01-01

    Full Text Available In traditional Chinese timber structures, few tie beams were used between columns, and the column base was placed directly on a stone base. In order to study the hysteretic behavior of such structures, a full-scale model was established. The model size was determined according to the requirements of an eighth grade material system specified in the architectural treatise Ying-zao-fa-shi written during the Song Dynasty. In light of the vertical lift and drop of the test model during horizontal reciprocating motions, the horizontal low-cycle reciprocating loading experiments were conducted using a synchronous loading technique. By analyzing the load-displacement hysteresis curves, envelope curves, deformation capacity, energy dissipation, and change in stiffness under different vertical loads, it is found that the timber frame exhibits obvious signs of self-restoring and favorable plastic deformation capacity. As the horizontal displacement increases, the equivalent viscous damping coefficient generally declines first and then increases. At the same time, the stiffness degrades rapidly first and then decreases slowly. Increasing vertical loading will improve the deformation, energy-dissipation capacity, and stiffness of the timber frame.

  20. Microscopy and Chemical Inversing Techniques to Determine the Photonic Crystal Structure of Iridescent Beetle Scales in the Cerambycidae Family

    Science.gov (United States)

    Richey, Lauren; Gardner, John; Standing, Michael; Jorgensen, Matthew; Bartl, Michael

    2010-10-01

    Photonic crystals (PCs) are periodic structures that manipulate electromagnetic waves by defining allowed and forbidden frequency bands known as photonic band gaps. Despite production of PC structures operating at infrared wavelengths, visible counterparts are difficult to fabricate because periodicities must satisfy the diffraction criteria. As part of an ongoing search for naturally occurring PCs [1], a three-dimensional array of nanoscopic spheres in the iridescent scales of the Cerambycidae insects A. elegans and G. celestis has been found. Such arrays are similar to opal gemstones and self-assembled colloidal spheres which can be chemically inverted to create a lattice-like PC. Through a chemical replication process [2], scanning electron microscopy analysis, sequential focused ion beam slicing and three-dimensional modeling, we analyzed the structural arrangement of the nanoscopic spheres. The study of naturally occurring structures and their inversing techniques into PCs allows for diversity in optical PC fabrication. [1] J.W. Galusha et al., Phys. Rev. E 77 (2008) 050904. [2] J.W. Galusha et al., J. Mater. Chem. 20 (2010) 1277.

  1. Improvement of spectrographic analyses by the use of a mechanical packer in the arc distillation technique; Amelioration de l'analyse spectrograpique par l'utilisation d'un tasseur mecanique dans la methode de distillation dans l'arc

    Energy Technology Data Exchange (ETDEWEB)

    Buffereau, M; Deniaud, S; Pichotin, B; Violet, R [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

Improvement of spectrographic analysis by the 'carrier distillation' method through the use of a mechanical packing device is studied. The experiments performed and the advantages of such an apparatus are described (improved precision and reproducibility, elimination of the operator factor). A routine apparatus (French patent no. 976.493 of 29 May 1964) is described. (authors)

  2. New Techniques Used in Modeling the 2017 Total Solar Eclipse: Energizing and Heating the Large-Scale Corona

    Science.gov (United States)

    Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa

    2017-08-01

Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that will emerge shear along the desired portions of polarity inversion lines (PILs) and cancel flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field, a key property for modeling and predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model. Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  3. Tests of heat techniques in households. Analysis of the results of the field tests; Praktijkprestaties van warmtetechnieken bij huishoudens. Analyse resultaten veldtesten

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, A.; Friedel, P.; Overman, P. [Energy Matters, Driebergen (Netherlands)

    2012-11-15

The development of conventional and new techniques for the recovery and generation of heat in the house construction industry has created a need, among several parties in the chain, for knowledge of the efficiencies of those techniques in practice. AgentschapNL therefore set up a project to gain more insight into these efficiencies. In the field tests, five heat techniques are examined: high-efficiency (HR) boilers, micro-CHP (HRe) boilers, solar water heaters, balanced ventilation systems with heat recovery (WTW systems) and heat pump water heaters. These are metered at one-minute resolution.

  4. Ship vibration analysis by finite element technique. Pt. II: Vibration analysis / Analyse van scheepstrillingen door middel van de elementenmethode. Dl. II: Trillingsanalyse

    NARCIS (Netherlands)

    Hylarides, S.

    1971-01-01

    In the calculation of the natural frequencies of ships more accurate values are expected when the shell-like structure of ships is taken into account by the finite element technique, especially in the higher-node vibration modes. To avoid large matrix systems an elimination process has been

  5. Numerical analysis of scaling laws for capillary rise in soils; Lois d'echelle pour l'ascension capillaire dans les sols: analyse numerique

    Energy Technology Data Exchange (ETDEWEB)

    Rezzoug, A.; Konig, D.; Triantafyllidis, Th. [Ruhr Bochum Univ. (Germany); Coumoulos, H.; Soga, K. [Cambridge Univ. (United Kingdom)

    2000-07-01

The capillary movement of water through soils is of interest in many practical environmental engineering problems, especially problems concerning pollutant transport in soils. The potential use of the geotechnical centrifuge to study capillary phenomena in soils has been proposed and some results have been reported. The main issue in this respect is the verification of the scaling laws for capillary phenomena in soils. However, the theoretical aspect of capillary rise in relation to the accelerated gravity effect is still poorly understood; further investigation is required of the gravity effect on the capillary pressure, the meniscus form, the scaling of the capillary height and the scaling of the time. A theoretical analysis of the movement in a capillary tube, representing soil, is presented. Scaling laws for the capillary height and the time are proposed. The effect of contact angle changes on the scaling laws is also considered. (authors)
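The scaling law at issue follows from Jurin's law, h = 2T·cosθ/(ρ·g·r): at an acceleration of N·g the equilibrium rise scales as 1/N. A minimal numeric check (water properties assumed, tube radius invented):

```python
import math

def capillary_rise(radius_m, g, surface_tension=0.0728, contact_angle=0.0,
                   density=1000.0):
    """Jurin's law: equilibrium capillary rise in a tube of given radius."""
    return (2 * surface_tension * math.cos(contact_angle)
            / (density * g * radius_m))

g0 = 9.81
for n in (1, 10, 50):                 # centrifuge acceleration N*g
    h = capillary_rise(1e-4, n * g0)  # 0.1 mm tube radius, standing in for a pore
    print(f"N={n:3d}: rise = {h * 1000:.2f} mm (scales as 1/N)")
```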

  6. Comparing effects of land reclamation techniques on water pollution and fishery loss for a large-scale offshore airport island in Jinzhou Bay, Bohai Sea, China.

    Science.gov (United States)

    Yan, Hua-Kun; Wang, Nuo; Yu, Tiao-Lan; Fu, Qiang; Liang, Chen

    2013-06-15

    Plans are being made to construct Dalian Offshore Airport in Jinzhou Bay with a reclamation area of 21 km². The large-scale reclamation can be expected to have negative effects on the marine environment, and these effects vary depending on the reclamation techniques used. Water quality mathematical models were developed and biology resource investigations were conducted to compare the effects of an underwater explosion sediment removal and rock dumping technique and a silt dredging and rock dumping technique on water pollution and fishery loss. The findings show that creation of the artificial island with the underwater explosion sediment removal technique would greatly impact the marine environment; the impact of the silt dredging technique would be less. The conclusions from this study provide an important foundation for the planning of Dalian Offshore Airport and can be used as a reference for similar coastal reclamation and marine environment protection. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  7. Identification and Prioritization of Important Attributes of Disease-Modifying Drugs in Decision Making among Patients with Multiple Sclerosis: A Nominal Group Technique and Best-Worst Scaling.

    Science.gov (United States)

    Kremer, Ingrid E H; Evers, Silvia M A A; Jongen, Peter J; van der Weijden, Trudy; van de Kolk, Ilona; Hiligsmann, Mickaël

    2016-01-01

    Understanding the preferences of patients with multiple sclerosis (MS) for disease-modifying drugs and involving these patients in clinical decision making can improve the concordance between medical decisions and patient values and may, subsequently, improve adherence to disease-modifying drugs. This study aims first to identify which characteristics, or attributes, of disease-modifying drugs influence patients' decisions about these treatments and second to quantify the attributes' relative importance among patients. First, three focus groups of relapsing-remitting MS patients were formed to compile a preliminary list of attributes using a nominal group technique. Based on this qualitative research, a survey with several choice tasks (best-worst scaling) was developed to prioritize attributes, asking a larger patient group to choose the most and least important attributes. The attributes' mean relative importance scores (RIS) were calculated. Nineteen patients reported 34 attributes during the focus groups and 185 patients evaluated the importance of the attributes in the survey. The effect on disease progression received the highest RIS (RIS = 9.64, 95% confidence interval: [9.48-9.81]), followed by quality of life (RIS = 9.21 [9.00-9.42]), relapse rate (RIS = 7.76 [7.39-8.13]), severity of side effects (RIS = 7.63 [7.33-7.94]) and relapse severity (RIS = 7.39 [7.06-7.73]). Subgroup analyses showed heterogeneity in patients' preferences; for example, side effect-related attributes were statistically more important to patients who had no experience in using disease-modifying drugs than to experienced patients. Such heterogeneity suggests that individualized decision making is needed, which requires eliciting each patient's preferences.
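
    As a concrete illustration of how best-worst scaling responses can be scored, the sketch below uses best-minus-worst counting, one common approach and not necessarily the authors' exact procedure; the attribute names and choice tasks are made up.

```python
# A minimal sketch of best-worst count scoring, one common approach and
# not necessarily the authors' exact procedure. Attribute names and
# choice tasks are hypothetical.
from collections import Counter

# Each task: which attribute was chosen best and which worst.
tasks = [("disease progression", "injection site reactions"),
         ("quality of life", "dosing frequency"),
         ("disease progression", "dosing frequency")]
# Which attributes were shown in each task.
shown_sets = [("disease progression", "quality of life", "injection site reactions"),
              ("quality of life", "relapse rate", "dosing frequency"),
              ("disease progression", "relapse rate", "dosing frequency")]

appearances = Counter()
for shown in shown_sets:
    appearances.update(shown)
best = Counter(b for b, _ in tasks)
worst = Counter(w for _, w in tasks)

for attr in appearances:
    bw = (best[attr] - worst[attr]) / appearances[attr]  # in [-1, 1]
    ris = 5.0 * (bw + 1.0)                               # rescale to [0, 10]
    print(f"{attr:<25} RIS = {ris:.2f}")
```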

  8. Applying Data Mining Techniques to Chemical Analyses of Pre-drill Groundwater Samples within the Marcellus Formation Shale Play in Bradford County, Pennsylvania

    Science.gov (United States)

    Wen, T.; Niu, X.; Gonzales, M. S.; Li, Z.; Brantley, S.

    2017-12-01

    Groundwater samples are collected for chemical analyses by shale gas industry consultants in the vicinity of proposed gas wells in Pennsylvania. These data sets are archived so that the chemistry of water from homeowner wells can be compared to chemistry after gas-well drilling. Improved public awareness of groundwater quality issues will contribute to designing strategies for both water resource management and hydrocarbon exploration. We have received water analyses for 11,000 groundwater samples from the PA Department of Environmental Protection (PA DEP) in the Marcellus Shale footprint in Bradford County, PA for the years 2010 to 2016. The PA DEP has investigated these analyses to determine whether gas well drilling or other activities affected water quality. We are currently investigating these analyses to look for patterns in chemistry throughout the study area (related or unrelated to gas drilling activities) and to look for evidence of analytes that may be present at concentrations higher than the advised standards for drinking water. Our preliminary results reveal that dissolved methane concentrations tend to be higher along fault lines in Bradford County [1]. Lead (Pb), arsenic (As), and barium (Ba) are sometimes present at levels above the EPA maximum contaminant level (MCL); iron (Fe) and manganese (Mn) violate the EPA standard more frequently. We find that concentrations of some chemical analytes (e.g., Ba and Mn) depend on bedrock formation (i.e., Catskill vs. Lock Haven), while concentrations of other analytes (e.g., Pb) are not statistically significantly distinct between formations. Our investigations also look for correlations that might explain water quality patterns with respect to human activities such as gas drilling. However, the percentages of water samples failing the EPA MCL with respect to Pb, As, and Ba have decreased from previous USGS and PSU studies in the 1990s and 2000s. Public access to
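
    Two of the checks described above, screening analytes against an EPA limit and testing whether a formation effect is statistically significant, are straightforward to script. The sketch below is hypothetical: the file name, column names and the choice of a Mann-Whitney test are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch with hypothetical file and column names; the use of
# a Mann-Whitney test is an assumption for illustration, not a detail
# taken from the study.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("bradford_groundwater.csv")  # hypothetical export

mcl = {"Pb": 0.015, "As": 0.010, "Ba": 2.0}   # EPA limits, mg/L
for analyte, limit in mcl.items():
    frac = (df[analyte] > limit).mean()
    print(f"{analyte}: {100 * frac:.1f}% of samples exceed {limit} mg/L")

# Does Ba differ between the Catskill and Lock Haven formations?
catskill = df.loc[df["formation"] == "Catskill", "Ba"].dropna()
lockhaven = df.loc[df["formation"] == "Lock Haven", "Ba"].dropna()
stat, p = mannwhitneyu(catskill, lockhaven)
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3g}")
```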

  9. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    Science.gov (United States)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established to perform seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, an interannual component with periods of less than 8 years, an interdecadal component with periods of 8 to 30 years, and a component with periods of more than 30 years. Then, predictors are selected for each of the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR separately. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. A hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
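
    The decompose-regress-recombine idea can be sketched in a few lines. The example below uses synthetic data; the band edges follow the periods quoted above, but the Butterworth filters and the random stand-in predictors are illustrative assumptions, not the paper's exact method.

```python
# A minimal sketch of the decompose-regress-recombine idea on synthetic
# annual data. The band edges follow the periods quoted above; the
# Butterworth filters and random stand-in predictors are illustrative
# assumptions, not the paper's exact method.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years = np.arange(1951, 2015)
rain = rng.normal(500.0, 80.0, years.size)     # synthetic FPR series, mm

fs = 1.0  # one sample per year
bands = [butter(4, 1.0 / 8.0, btype="highpass", fs=fs, output="sos"),
         butter(4, [1.0 / 30.0, 1.0 / 8.0], btype="bandpass", fs=fs, output="sos"),
         butter(4, 1.0 / 30.0, btype="lowpass", fs=fs, output="sos")]

forecast = np.zeros(years.size)
for sos in bands:
    component = sosfiltfilt(sos, rain)      # one time-scale component
    X = rng.normal(size=(years.size, 3))    # stand-in predictors (e.g. SST indices)
    model = LinearRegression().fit(X, component)
    forecast += model.predict(X)            # recombine the components
```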

  10. Thermo-economic and environmental analyses based multi-objective optimization of vapor compression–absorption cascaded refrigeration system using NSGA-II technique

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan; Kachhwaha, Surendra Singh; Patel, Bhavesh

    2016-01-01

    Highlights: • It addresses a multi-objective optimization study of a cascaded refrigeration system. • The cascaded system is a promising decarbonizing and energy-efficient technology. • The NSGA-II technique is used for multi-objective optimization. • Total annual product cost and irreversibility rate are simultaneously optimized. Abstract: The present work optimizes the performance of a 170 kW vapor compression-absorption cascaded refrigeration system (VCACRS) based on combined thermodynamic, economic and environmental parameters using the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) technique. Two objective functions, the total irreversibility rate (as a thermodynamic criterion) and the total product cost (as an economic criterion) of the system, are considered simultaneously for multi-objective optimization of the VCACRS. The capital and maintenance costs of the system components, the operational cost, and the penalty cost due to CO2 emission are included in the total product cost of the system. Three optimized systems, a single-objective thermodynamic optimum, a single-objective economic optimum and a multi-objective optimum, are analyzed and compared. The results show that the multi-objective design balances the combined thermodynamic and total product cost criteria better than either of the two single-objective optimized designs.
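
    At the heart of NSGA-II is the Pareto dominance test used to rank candidate designs. The sketch below shows that test for the two objectives named above, both minimized; the design points are hypothetical, and a full NSGA-II additionally sorts solutions into successive fronts, computes crowding distances and applies genetic operators.

```python
# A minimal sketch of the Pareto dominance test at the core of NSGA-II,
# with hypothetical design points. A full NSGA-II adds sorting into
# successive fronts, crowding distance and genetic operators.
def dominates(a, b):
    """True if a is no worse than b in every objective (both minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

designs = {  # name: (irreversibility rate [kW], total product cost [$/h])
    "A": (52.1, 11.9),
    "B": (49.8, 12.6),
    "C": (55.0, 11.5),
    "D": (51.0, 12.8),  # dominated by B
}

pareto = [n for n, f in designs.items()
          if not any(dominates(g, f) for m, g in designs.items() if m != n)]
print("Pareto-optimal designs:", pareto)  # -> ['A', 'B', 'C']
```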

  11. Observations and Analyses of Heliospheric Faraday Rotation of a Coronal Mass Ejection (CME) Using the LOw Frequency ARray (LOFAR) and Space-Based Imaging Techniques

    Science.gov (United States)

    Bisi, Mario Mark; Jensen, Elizabeth; Sobey, Charlotte; Fallows, Richard; Jackson, Bernard; Barnes, David; Giunta, Alessandra; Hick, Paul; Eftekhari, Tarraneh; Yu, Hsiu-Shan; Odstrcil, Dusan; Tokumaru, Munetoshi; Wood, Brian

    2017-04-01

    Geomagnetic storms of the highest intensity are generally driven by coronal mass ejections (CMEs) impacting the Earth's space environment. Their intensity is governed by the speed and density of the incoming solar plasma and, most importantly, by its magnetic-field orientation and magnitude. The most significant magnetic-field factor is the North-South component (Bz in Geocentric Solar Magnetic (GSM) coordinates). At present, there are no reliable methods for predicting this magnetic-field component ahead of the in-situ monitors around the Sun-Earth L1 point. Observations of Faraday rotation (FR) can be used to attempt to determine average magnetic-field orientations in the inner heliosphere. Such a technique has already been well demonstrated through the corona, the ionosphere, and also the interstellar medium. Measurements of the polarisation of astronomical (or spacecraft in superior conjunction) radio sources (beacons/radio frequency carriers) through the inner corona of the Sun to obtain the FR have been demonstrated, but mostly at relatively high radio frequencies. Here we show some initial results of true heliospheric FR using the Low Frequency Array (LOFAR) below 200 MHz to investigate the passage of a coronal mass ejection (CME) across the line of sight. LOFAR is a next-generation low-frequency radio interferometer and a pathfinder to the Square Kilometre Array's SKA-Low telescope. We demonstrate preliminary heliospheric FR results through the analysis of observations of pulsar J1022+1001, which commenced on 13 August 2014 at 13:00 UT and spanned more than 150 minutes. We also show initial comparisons to the FR results via various modelling techniques and additional context information to understand the structure of the inner heliosphere being detected. This observation could indeed pave the way to an experiment that might be implemented for space-weather purposes and that will eventually lead to a near-global method for determining the magnetic field in the inner heliosphere.

  12. An HIV/AIDS Knowledge Scale for Adolescents: Item Response Theory Analyses Based on Data from a Study in South Africa and Tanzania

    Science.gov (United States)

    Aaro, Leif E.; Breivik, Kyrre; Klepp, Knut-Inge; Kaaya, Sylvia; Onya, Hans E.; Wubs, Annegreet; Helleve, Arnfinn; Flisher, Alan J.

    2011-01-01

    A 14-item human immunodeficiency virus/acquired immunodeficiency syndrome knowledge scale was used among school students in 80 schools at 3 sites in Sub-Saharan Africa (Cape Town and Mankweng, South Africa, and Dar es Salaam, Tanzania). For each item, an incorrect or don't-know response was coded as 0 and a correct response as 1. Exploratory factor…

  13. A technique for recording polycrystalline structure and orientation during in situ deformation cycles of rock analogues using an automated fabric analyser.

    Science.gov (United States)

    Peternell, M; Russell-Head, D S; Wilson, C J L

    2011-05-01

    Two in situ plane-strain deformation experiments on norcamphor and natural ice using synchronous recording of crystal c-axis orientations have been performed with an automated fabric analyser and a newly developed sample press and deformation stage. Without interrupting the deformation experiment, c-axis orientations are determined for each pixel in a 5 × 5 mm sample area at a spatial resolution of 5 μm/pixel. In the case of norcamphor, changes in microstructures and associated crystallographic information, at a strain rate of ∼2 × 10⁻⁵ s⁻¹, were recorded for the first time during a complete in situ deformation-cycle experiment that consisted of an annealing, deformation and post-deformation annealing path. In the case of natural ice, slower external strain rates (∼1 × 10⁻⁶ s⁻¹) enabled the investigation of small changes in the polycrystal aggregate's crystallography and microstructure for small amounts of strain. The technical setup and first results from the experiments are presented. © 2010 The Authors. Journal of Microscopy © 2010 Royal Microscopical Society.

  14. Analyses of desorbed H2O with temperature programmed desorption technique in sol-gel derived HfO2 thin films

    International Nuclear Information System (INIS)

    Shimizu, H.; Nemoto, D.; Ikeda, M.; Nishide, T.

    2009-01-01

    Hafnium oxide (HfO2) is a promising material for the gate insulator in highly miniaturized silicon (Si) ultra-large-scale-integration (ULSI) devices (32 nm and beyond). In the field of chemistry, sol-gel processing has been used to fabricate HfO2 thin films, with the advantages of low cost, relative simplicity, and easy control of the composition of the layers formed. In this work, the H2O desorbed from sol-gel-derived HfO2 thin films fired in air at 350, 450, 550 and 700 °C is investigated using temperature-programmed desorption (TPD), and the material characterization of the HfO2 thin films is evaluated by the X-ray diffraction (XRD) method. The dielectric constant of the films was also estimated using the capacitance-voltage (C-V) method. TPD is essentially a method of analyzing gases desorbed from samples heated by infrared light, as a function of temperature, under vacuum conditions, using a quadrupole mass spectrometer (QMS) as the detector. Sol-gel-derived HfO2 films were fabricated on 76-mm-diameter Si(100) wafers as follows: hafnia sol solutions were prepared by dissolving HfCl4 in NH4OH solution, followed by the addition of HCOOH. (author)

  15. Using in-field and remote sensing techniques for the monitoring of small-scale permafrost decline in Northern Quebec

    Science.gov (United States)

    May, Inga; Kim, Jun Su; Spannraft, Kati; Ludwig, Ralf; Hajnsek, Irena; Bernier, Monique; Allard, Michel

    2010-05-01

    Permafrost-affected soils underlie about 45% of the Canadian Arctic and subarctic regions. Under the changed climate conditions recorded recently, areas located in the discontinuous permafrost zones are likely to be among the most impacted environments. Degradation of palsas and lithalsas, the most distinct permafrost landforms, as well as an expansion of wetlands, has been observed over the past decades by several research teams across the northern Arctic. These alterations, caused by longer and warmer thawing periods, are expected to become more and more frequent in the future. The effects on human beings and on the surrounding sensitive ecosystems are presumed to be considerable and of high relevance. Hence, there is a high demand for new techniques that are able to detect, and possibly even predict, the behavior of permafrost within a changing environment. The presented study is part of an international research collaboration between LMU, INRS and UL within the framework of ArcticNet. The project intends to develop a monitoring system strongly based on remote sensing imagery and GIS-based data analysis, using a test site located in northern Quebec (Umiujaq, 56°33' N, 76°33' W). It will be investigated to what extent the interpretation of satellite imagery is feasible to partially substitute costly and difficult geophysical point measurements, and to provide spatial knowledge about the major factors that control permafrost dynamics and ecosystem change. In a first step, these factors, mainly expected to be determined from changes in topography, vegetation cover and snow cover, are identified and validated by means of several consecutive ground truthing initiatives supporting the analysis of multi-sensoral time series of remotely sensed information. Both sources are used to generate and feed different concepts for modeling permafrost dynamics by way of parameter retrieval and data assimilation. On this poster, the outcomes of the first project phase are presented.

  16. Use of acoustic emission technique to study the spalling behaviour of oxide scales on Ni-10Cr-8Al containing sulphur and/or yttrium impurity

    International Nuclear Information System (INIS)

    Khanna, A.S.; Quadakkers, W.J.; Jonas, H.

    1989-01-01

    It is now well established that the presence of small amounts of sulphur impurity in NiCrAl-based alloys has a deleterious effect on their high-temperature oxidation behaviour. It is, however, not clear whether the adverse effect is due to a decrease in the spalling resistance of the oxide scale or due to enhanced scale growth. In order to confirm which of the factors is dominant, two independent experimental techniques were used in the investigation of the oxidation behaviour of Ni-10Cr-8Al containing sulphur and/or yttrium additions: conventional thermogravimetry, to study the scale growth rates, and acoustic emission analysis, to study the scale adherence. The results indicated that the dominant factor responsible for the deleterious effect of sulphur impurity on the oxidation of a Ni-10Cr-8Al alloy was a significant change in the growth rate and the composition of the scale. Addition of yttrium improved the oxidation behaviour, not only by increasing the scale adherence, but also by reducing the scale growth due to gettering of sulphur. (orig.)

  17. Application of various laboratory assay techniques to the verification of the comprehensive nuclear test ban treaty. Analyses of samples from Kuwait and from AFTAC

    International Nuclear Information System (INIS)

    Toivonen, H.; Ikaeheimonen, T.K.; Leppaenen, A.; Poellaenen, R.; Rantavaara, A.; Saxen, R.; Likonen, J.; Zilliacus, R.

    1997-11-01

    Various laboratory assay techniques were applied to two particulate air filters from Kuwait and to one filter salted artificially. The monitoring system, run by the PIDC in Arlington, identified 137Cs but no 134Cs in the air samples. Long-term counting using a 100% HPGe detector in the laboratory did not reveal 134Cs either. The upper limit of the activity ratio 134Cs/137Cs was estimated to be 0.015, which is below the expected average value for Chernobyl fallout (0.025). This finding may indicate that the Cs in the sample has an origin other than Chernobyl fallout. Radiochemical methods to purify Cs from the bulk material were investigated; however, because of low yield, these preliminary efforts failed to improve the detection limits. High-resolution gamma spectrometry of the artificial sample (AFTAC) identified the following man-made radionuclides: 95Zr/95Nb, 103Ru, 137Cs, 140Ba/140La, 141Ce, 147Nd. 241Am was found by alpha spectrometry. The isotope ratios indicate that the sample was produced early in November 1996. The presence of Am shows that the material is most likely irradiated high-burnup uranium, or plutonium containing transuranium elements before irradiation. The advantages of mass spectrometry were studied and the preliminary results are very promising; however, a separate programme for sample preparation should be launched. (orig.)

  18. Environmental consequence analyses of fish farm emissions related to different scales and exemplified by data from the Baltic--a review.

    Science.gov (United States)

    Gyllenhammar, Andreas; Håkanson, Lars

    2005-08-01

    The aim of this work is to review studies evaluating how emissions from fish cage farms cause eutrophication effects in marine environments. The focus is on four different scales: (i) the conditions at the site of the farm, (ii) the local scale related to the coastal area where the farm is situated, (iii) the regional scale encompassing many coastal areas and (iv) the international scale including several regional coastal areas. The aim is to evaluate the role of nutrient emissions from fish farms in a general way, but all selected examples come from the Baltic Sea. An important part of this evaluation concerns the method used to define the boundaries of a given coastal area: if this is done arbitrarily, one obtains arbitrary results in the environmental consequence analysis. In this work, the boundary lines between the coast and the sea are drawn using GIS methods (geographical information systems) according to the topographical bottleneck method, which opens a way to determine many fundamental characteristics in the context of mass balance calculations. In mass balance modelling, the fluxes from the fish farm should be compared to other fluxes to, within and from coastal areas. Results collected in this study show that: (1) at the smallest scale (the site of the farm), the impact area of a fish cage farm often corresponds to the size of a "football field" (50-100 m) if the annual fish production is about 50 tons; (2) at the local scale (1 ha to 100 km²), there exists a simple load diagram (effect-load-sensitivity) relating the environmental response and effects to a specific load from a fish cage farm, which makes it possible to obtain a first estimate of the maximum allowable fish production in a specific coastal area; (3) at the regional scale (100-10,000 km²), it is possible to create negative nutrient fluxes, i.e., to use fish farming as a method to reduce the nutrient loading to the sea. The breaking point is to use more than about 1.1 g wet weight regionally caught wild fish per gram

  19. Construction of a Scale-Questionnaire on the Attitude of the Teaching Staff towards Educational Innovation by Means of Cooperative Work Techniques (CAPIC)

    Directory of Open Access Journals (Sweden)

    Joan Andrés Traver Martí

    2007-05-01

    In the present work, the construction process of a scale-questionnaire to measure the attitude of teaching staff towards educational innovation by means of cooperative work techniques (CAPIC) is described. To carry out its design and elaboration we needed, on the one hand, a model for analysing attitudes and, on the other, an instrument for measuring them capable of guiding its practical dynamics. The Theory of Reasoned Action of Fishbein and Ajzen (1975, 1980) and summative (Likert) scales have fulfilled these two roles.

  20. Analysing the Air: Experiences and Results of Long Term Air Pollution Monitoring in the Asia-Pacific Region Using Nuclear Analysis Techniques

    International Nuclear Information System (INIS)

    Atanacio, Armand J.

    2015-01-01

    Particles present in the air we breathe are now recognized as a major cause of disease and premature death globally. In fact, a World Health Organization (WHO) report recently ranked ambient air pollution among the top 10 causes of death in the world, directly contributing annually to around 3.7 million premature deaths worldwide, 65% of which occurred in the Asian region alone. Airborne particulate matter (PM) can be generated from natural sources such as windblown soil or coastal sea-spray, as well as anthropogenic sources such as power stations, industry, vehicles and domestic biomass burning. At low concentrations these fine pollution particles are too small to be seen by eye, but they penetrate deep into our lungs and even our blood stream, as our nose and throat are inefficient at filtering them out. At large concentrations, they can also have wider regional effects including reduced visibility, acid rain and even climate variability. The International Atomic Energy Agency (IAEA), recognizing air pollution as a significant local, national and global challenge, initiated in 2000 a collaborative air pollution study involving 14 countries across the greater Asia-Pacific region, running from 2000 to 2015. This has amassed a database containing more than 14,000 data lines of PM mass concentrations and the concentrations of up to 40 elements determined using nuclear analytical techniques. It represents the most comprehensive long-term airborne PM data set compiled to date for the Asia-Pacific region and, as will be discussed, can be used to statistically resolve individual source fingerprints and their contributions to total air pollution using Positive Matrix Factorization (PMF). This sort of data is necessary for implementing or reviewing the effectiveness of policy-level changes aimed at targeted air pollution reduction. (author)
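
    To make the PMF step concrete: true PMF weights every data point by its measurement uncertainty, but plain non-negative matrix factorization captures the same structural idea of splitting a sample-by-element matrix into source fingerprints and source contributions. The sketch below is an unweighted stand-in using scikit-learn with synthetic data, not the IAEA study's actual pipeline.

```python
# A minimal sketch in the spirit of PMF. True PMF weights every data
# point by its measurement uncertainty; plain non-negative matrix
# factorization (scikit-learn) is an unweighted stand-in that still
# factors the sample-by-element matrix into source profiles and
# contributions. Data here are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(500, 40)))   # samples x elemental concentrations

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # source contributions per sample
F = model.components_        # elemental fingerprint of each source
print(G.shape, F.shape)      # (500, 5) (5, 40)
```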

  1. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analyses focusing on a single process. From the different mathematically equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine a large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI) satellite images using data interpolating empirical orthogonal functions (DINEOF) with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.
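
    The second step of the combination, a local optimal interpolation with a Gaussian covariance applied to the residuals left by the large-scale analysis, can be sketched in one dimension. All numbers below are illustrative assumptions, not values from the paper.

```python
# A minimal 1-D toy sketch of the second step: a local optimal
# interpolation with a Gaussian covariance applied to the observation
# residuals left by a prior large-scale (e.g. DINEOF) analysis. All
# numbers are illustrative assumptions.
import numpy as np

L, sig2, noise2 = 50.0, 1.0, 0.1       # correlation length and variances
xg = np.linspace(0.0, 500.0, 101)      # analysis grid
xo = np.array([120.0, 260.0, 400.0])   # observation locations
resid = np.array([0.4, -0.2, 0.3])     # obs minus large-scale analysis

def gauss_cov(a, b):
    return sig2 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * L ** 2))

B_oo = gauss_cov(xo, xo) + noise2 * np.eye(xo.size)  # obs-obs covariance
B_go = gauss_cov(xg, xo)                             # grid-obs covariance
correction = B_go @ np.linalg.solve(B_oo, resid)     # OI update on the grid
# final field = large-scale analysis + correction
```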

  2. Long-time leaching and corrosion tests on full-scale cemented waste forms in the Asse salt mine. Sampling and analyses 2003

    International Nuclear Information System (INIS)

    Kienzler, B.; Schlieker, M.; Bauer, A.; Metz, V.; Meyer, H.

    2004-10-01

    The paper presents the follow-up of experimental findings from full-scale leach tests performed on simulated cemented waste forms for more than 20 years in salt brines and water. Measurements cover pH, density, and the composition of leachates as well as the release of radionuclides such as Cs, U and Np. The indicators for waste form corrosion and radionuclide release are Cs and NO3. Corrosion of cemented waste forms depends on the pore volume of the hardened cement, which is correlated with the water/cement ratio. The release of radionuclides is evaluated and compared to small-scale laboratory tests. Excellent interpretation of the observed concentrations is obtained for uranium and neptunium by comparison with model calculations. (orig.)

  3. Willamette oxygen supplementation studies -- Scale analyses, Dexter water quality parameters, and adult recoveries: Annual progress report, September 30, 1998--September 29, 1999

    International Nuclear Information System (INIS)

    Ewing, R.D.

    1999-01-01

    This report examines the scale characteristics of returning adults to determine the fork length at which they entered the ocean. These lengths are then related to the length frequencies of fish in the various experimental groups at the time they left the hatchery. The report also summarizes the water quality parameters at Dexter Rearing Ponds and presents the complete returns for all experimental groups.

  4. Investigation on impact resistance of steel plate reinforced concrete barriers against aircraft impact. Pt.3: Analyses of full-scale aircraft impact

    International Nuclear Information System (INIS)

    Jun Mizuno; Norihide Koshika; Eiichi Tanaka; Atsushi Suzuki; Yoshinori Mihara; Isao Nishimura

    2005-01-01

    Steel plate reinforced concrete (SC) walls and slabs are structural members in which the rebars of reinforced concrete are replaced by steel plates. Steel plate reinforced concrete structures are an attractive structural design alternative to reinforced concrete structures, especially for thick, heavily reinforced walls and slabs such as those in nuclear structures, because they enable a much shorter construction period, greater earthquake resistance and greater cost effectiveness. Experimental and analytical studies performed by the authors have also shown that SC structures are much more effective in mitigating damage from scaled aircraft models, as described in Parts 1 and 2 of this study. The objective of Part 3 was to determine the protective capability of SC walls and roofs against a full-scale aircraft impact by conducting numerical experiments to investigate the fracture behaviors and limit thicknesses of SC panels and to examine the effectiveness of SC panels in detail under design conditions. Furthermore, a simplified method is proposed for evaluating the localized damage induced by a full-scale engine impact. (authors)

  5. Extraction of bioactives from Orthosiphon stamineus using microwave and ultrasound-assisted techniques: Process optimization and scale up.

    Science.gov (United States)

    Chan, Chung-Hung; See, Tiam-You; Yusoff, Rozita; Ngoh, Gek-Cheng; Kow, Kien-Woh

    2017-04-15

    This work demonstrated the optimization and scale-up of microwave-assisted extraction (MAE) and ultrasound-assisted extraction (UAE) of bioactive compounds from Orthosiphon stamineus using energy-based parameters, namely absorbed power density and absorbed energy density (APD-AED), and response surface methodology (RSM). The intensive optimum conditions of MAE, obtained at 80% EtOH, 50 mL/g, APD of 0.35 W/mL and AED of 250 J/mL, can be used to determine the optimum settings of the scale-dependent parameters, i.e. microwave power and treatment time, at various extraction scales (100-300 mL solvent loading). The yields under the scaled-up conditions were consistent, with less than 8% discrepancy, and were about 91-98% of the Soxhlet extraction yield. By adapting the APD-AED method to the case of UAE, the intensive optimum conditions of the extraction, i.e. 70% EtOH, 30 mL/g, APD of 0.22 W/mL and AED of 450 J/mL, achieve similar scale-up results. Copyright © 2016 Elsevier Ltd. All rights reserved.
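
    The appeal of intensive parameters is that converting them to equipment settings at a new scale is pure arithmetic. The sketch below illustrates this using the MAE optima quoted above; treating the applied power as fully absorbed is a simplifying assumption.

```python
# A minimal sketch of how the intensive optima above translate into
# scale-dependent settings: absorbed power = APD x volume, and
# treatment time = AED / APD, which is the same at every scale.
# Treating applied power as fully absorbed is a simplification.
APD = 0.35   # W/mL, reported MAE optimum
AED = 250.0  # J/mL, reported MAE optimum

for volume_ml in (100, 200, 300):
    power_w = APD * volume_ml
    time_s = AED / APD               # about 714 s regardless of scale
    print(f"{volume_ml} mL -> {power_w:.0f} W absorbed for {time_s:.0f} s")
```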

  6. Large-scale phylogenetic analyses provide insights into unrecognized diversity and historical biogeography of Asian leaf-litter frogs, genus Leptolalax (Anura: Megophryidae).

    Science.gov (United States)

    Chen, Jin-Min; Poyarkov, Nikolay A; Suwannapoom, Chatmongkon; Lathrop, Amy; Wu, Yun-He; Zhou, Wei-Wei; Yuan, Zhi-Yong; Jin, Jie-Qiong; Chen, Hong-Man; Liu, He-Qun; Nguyen, Truong Quang; Nguyen, Sang Ngoc; Duong, Tang Van; Eto, Koshiro; Nishikawa, Kanto; Matsui, Masafumi; Orlov, Nikolai L; Stuart, Bryan L; Brown, Rafe M; Rowley, Jodi J L; Murphy, Robert W; Wang, Ying-Yong; Che, Jing

    2018-07-01

    Southeast Asia and southern China (SEA-SC) harbor a highly diverse and endemic flora and fauna that is under increasing threat. An understanding of the biogeographical history and drivers of this diversity is lacking, especially in some of the most diverse and threatened groups. The Asian leaf-litter frog genus Leptolalax Dubois 1980 is a forest-dependent genus distributed throughout SEA-SC, making it an ideal study group for examining specific biogeographic hypotheses. In addition, the diversity of this genus remains poorly understood, and the phylogenetic relationships among species of Leptolalax and the closely related Leptobrachella Smith 1928 remain unclear. Herein, we evaluate species-level diversity based on 48 of the 53 described species from throughout the distribution of Leptolalax. Molecular analyses reveal many undescribed species, mostly in southern China and Indochina. Our well-resolved phylogeny based on multiple nuclear DNA markers shows that Leptolalax is not monophyletic with respect to Leptobrachella and, thus, we consider the former a junior synonym of the latter. Similarly, analyses reject monophyly of the two subgenera of Leptolalax. The diversification pattern of the group is complex, involving a high degree of sympatry and a prevalence of microendemic species. Northern Sundaland (Borneo) and eastern Indochina (Vietnam) appear to have played pivotal roles as geographical centers of diversification, and paleoclimatic changes and tectonic movements seem to have driven the major divergence of clades. Analyses fail to reject an "upstream" colonization hypothesis and, thus, the genus appears to have originated in Sundaland and then colonized mainland Asia. Our results reveal that both vicariance and dispersal are responsible for current distribution patterns in the genus. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system is discussed in detail, covering the optimum composition mode for this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2. One-, two- and three-dimensional spectra measured by this system are demonstrated.

  8. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system is discussed in detail, covering the optimum composition mode for this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2. One-, two- and three-dimensional spectra measured by this system are demonstrated.

  9. Changes in lead and zinc lability during weathering-induced acidification of desert mine tailings: Coupling chemical and micro-scale analyses

    International Nuclear Information System (INIS)

    Hayes, Sarah M.; White, Scott A.; Thompson, Thomas L.; Maier, Raina M.; Chorover, Jon

    2009-01-01

    Desert mine tailings may accumulate toxic metals in the near-surface centimeters because of low water through-flux rates. Along with other constraints, metal toxicity precludes natural plant colonization even over decadal time scales. Since unconsolidated particles can be subjected to transport by wind and water erosion, potentially resulting in direct human and ecosystem exposure, there is a need to know how the lability and form of metals change in the tailings weathering environment. A combination of chemical extractions, X-ray diffraction, micro-X-ray fluorescence spectroscopy, and micro-Raman spectroscopy was employed to study Pb and Zn contamination in surficial arid mine tailings from the Arizona Klondyke State Superfund Site. Initial site characterization indicated a wide range in pH (2.5-8.0) in the surficial tailings pile. Ligand-promoted (DTPA) extractions, used to assess plant-available metal pools, showed decreasing available Zn and Mn with progressive tailings acidification. Aluminum shows the inverse trend, and Pb and Fe show more complex pH dependence. Since the tailings derive from a common source and parent mineralogy, it is presumed that variations in pH and "bio-available" metal concentrations result from associated variation in particle-scale geochemistry. Four sub-samples, ranging in pH from 2.6 to 5.4, were subjected to further characterization to elucidate micro-scale controls on metal mobility. With acidification, total Pb (ranging from 5 to 13 g kg⁻¹) was increasingly associated with Fe and S in plumbojarosite aggregates. For Zn, both total (0.4-6 g kg⁻¹) and labile fractions decreased with decreasing pH. Zinc was found to be primarily associated with the secondary Mn phases manjiroite and chalcophanite. The results suggest that progressive tailings acidification diminishes the overall lability of the total Pb and Zn pools.

  10. Similarity, Clustering, and Scaling Analyses for the Foreign Exchange Market ---Comprehensive Analysis on States of Market Participants with High-Frequency Financial Data---

    Science.gov (United States)

    Sato, A.; Sakai, H.; Nishimura, M.; Holyst, J. A.

    This article proposes mathematical methods to quantify the states of market participants in the foreign exchange market (FX market) and to conduct a comprehensive analysis of the behavior of market participants by means of high-frequency financial data. Based on econophysics tools and perspectives, we study similarity measures for both rate movements and quotation activities among various currency pairs. We also perform clustering analysis on market states for observation days, and find a scaling relationship between the mean values of quotation activities and their standard deviations. Using these mathematical methods we can visualize the states of the FX market comprehensively. Finally, we conclude that the states of market participants vary in time due to both external and internal factors.
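
    One standard econophysics recipe for this kind of similarity-and-clustering analysis is to convert correlations between return series into a distance and cluster on it. The sketch below uses synthetic data and the common distance d = sqrt(2(1 - ρ)); it is an illustration of the genre, not the authors' exact similarity measure.

```python
# A minimal sketch (synthetic data) of a standard econophysics recipe:
# turn correlations between currency-pair movements into the distance
# d = sqrt(2 * (1 - rho)) and cluster on it. This is illustrative, not
# the authors' exact similarity measure.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
returns = rng.normal(size=(1000, 6))     # stand-in for 6 currency pairs
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))        # correlation-based distance
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # two-cluster labels
```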

  11. Scale-up from microtiter plate to laboratory fermenter: evaluation by online monitoring techniques of growth and protein expression in Escherichia coli and Hansenula polymorpha fermentations

    Directory of Open Access Journals (Sweden)

    Engelbrecht Christoph

    2009-12-01

    Background: In the past decade, an enormous number of new bioprocesses have evolved in the biotechnology industry. These bioprocesses have to be developed fast and at maximum productivity. Up to now, only few microbioreactors have been developed to fulfill these demands and to facilitate sample processing. One predominant reaction platform is the shaken microtiter plate (MTP), which provides high throughput at minimal expense in time, money and work effort. By taking advantage of this simple and efficient microbioreactor array, a new online monitoring technique for biomass and fluorescence, called the BioLector, has recently been developed. The combination of high throughput and high information content makes the BioLector a very powerful tool in bioprocess development. Nevertheless, the scalability of results from the micro-scale to laboratory or even larger scales is very important for short development times. Therefore, engineering parameters regarding the reactor design and its operating conditions play an important role even on a micro-scale. In order to evaluate the scale-up from a microtiter plate scale (200 μL) to a stirred tank fermenter scale (1.4 L), two standard microbial expression systems, Escherichia coli and Hansenula polymorpha, were fermented in parallel at both scales and compared with regard to biomass and protein formation. Results: Volumetric mass transfer coefficients (kLa) ranging from 100 to 350 1/h were obtained in 96-well microtiter plates. Even with a suboptimal mass transfer condition in the microtiter plate compared to the stirred tank fermenter (kLa = 370-600 1/h), identical growth and protein expression kinetics were attained in the bacteria and yeast fermentations. The bioprocess kinetics were evaluated by optical online measurements of biomass and protein concentrations, exhibiting the same fermentation times and maximum signal deviations below 10% between the scales. In the experiments, the widely applied green fluorescent protein (GFP) was used as the reporter
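
    The reported insensitivity of growth to the kLa gap has a simple explanation in standard bioprocess terms: both scales supply oxygen faster than the culture consumes it. The sketch below illustrates this with the textbook relation OTR = kLa·(C_sat − C_L); the uptake rate and saturation values are illustrative assumptions, not data from the study.

```python
# A minimal sketch (illustrative numbers) of why growth can match
# across scales despite different kLa values: what matters is whether
# the oxygen transfer rate OTR = kLa * (C_sat - C_L) can cover the
# culture's oxygen uptake rate (OUR).
C_SAT = 0.21   # mmol O2 / L, approximate saturation in medium
OUR = 40.0     # mmol O2 / (L*h), assumed peak uptake of the culture

for scale, kla in [("microtiter plate", 300.0), ("stirred tank", 500.0)]:
    c_min = C_SAT - OUR / kla   # dissolved O2 level at which OTR = OUR
    status = ("O2-limited" if c_min <= 0
              else f"sustains OUR at C_L = {c_min:.3f} mmol/L")
    print(f"{scale}: kLa = {kla:.0f} 1/h -> {status}")
```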

  12. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
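
    The core of the method is replacing f(mean wetness) with the pdf-weighted mean of f. The sketch below shows the difference with a hypothetical non-linear stress function and a uniform 10-bin pdf; both choices are illustrative assumptions, not the paper's functions.

```python
# A minimal sketch of the binning idea: evaluate an illustrative
# non-linear stress function in each soil wetness bin and weight by the
# bin probability, instead of applying the function to the grid-mean
# wetness. The stress function and pdf are hypothetical.
import numpy as np

def stress(w, w_wilt=0.1, w_crit=0.6):
    """Illustrative non-linear evapotranspiration stress factor."""
    return np.clip((w - w_wilt) / (w_crit - w_wilt), 0.0, 1.0) ** 2

bins = np.linspace(0.05, 0.95, 10)   # bin-centre wetness values
prob = np.full(10, 0.1)              # pdf weights, summing to 1

area_flux_factor = np.sum(prob * stress(bins))  # integrate over the pdf
naive = stress(np.sum(prob * bins))             # stress of mean wetness
print(f"binned: {area_flux_factor:.3f}  vs  grid-mean: {naive:.3f}")
```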

  13. Scaled experiments using the helium technique to study the vehicular blockage effect on longitudinal ventilation control in tunnels

    DEFF Research Database (Denmark)

    Alva, Wilson Ulises Rojas; Jomaas, Grunde; Dederichs, Anne

    2015-01-01

    A model tunnel (1:30 compared to a standard tunnel section) with a helium-air smoke mixture was used to study the vehicular blockage effect on longitudinal ventilation smoke control. The experimental results showed excellent agreement with full-scale data and confirmed that the critical velocity...

  14. Defining a minimal clinically important difference for endometriosis-associated pelvic pain measured on a visual analog scale: analyses of two placebo-controlled, randomized trials

    Directory of Open Access Journals (Sweden)

    Schmitz Heinz

    2010-11-01

    Background: When comparing active treatments, a non-inferiority (or one-sided equivalence) study design is often used. This design requires the definition of a non-inferiority margin, the threshold value of clinical relevance. In recent studies, a non-inferiority margin of 15 mm has been used for the change in endometriosis-associated pelvic pain (EAPP) on a visual analog scale (VAS). However, this value was derived from other chronic painful conditions and its validation in EAPP was lacking. Methods: Data were analyzed from two placebo-controlled studies of active treatments in endometriosis, including 281 patients with laparoscopically confirmed endometriosis and moderate-to-severe EAPP. Patients recorded EAPP on a VAS at baseline and the end of treatment. Patients also assessed their satisfaction with treatment on a modified Clinical Global Impression scale. Changes in VAS score were compared with patients' self-assessments to derive an empirically validated non-inferiority margin. This anchor-based value was compared to a non-inferiority margin derived using the conventional half-standard-deviation rule for the minimal clinically important difference (MCID) in patient-reported outcomes. Results: Anchor-based and distribution-based MCIDs were -7.8 mm and -8.6 mm, respectively. Conclusions: An empirically validated non-inferiority margin of 10 mm for EAPP measured on a VAS is appropriate to compare treatments in endometriosis.
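
    The two MCID estimates contrasted above can be computed in a few lines. The sketch below uses hypothetical column names and anchor category; the half-standard-deviation rule is the conventional distribution-based approach named in the abstract, while the anchor definition shown is one common choice rather than the study's exact one.

```python
# A minimal sketch with hypothetical column names of the two margin
# estimates compared above: the half-standard-deviation rule and an
# anchor-based value (mean VAS change among patients whose global
# impression was 'minimally improved').
import pandas as pd

df = pd.read_csv("eapp_trial.csv")            # hypothetical export
change = df["vas_end"] - df["vas_baseline"]   # negative = less pain

dist_mcid = -0.5 * df["vas_baseline"].std()   # distribution-based
anchor_mcid = change[df["cgi"] == "minimally improved"].mean()
print(f"distribution-based: {dist_mcid:.1f} mm, "
      f"anchor-based: {anchor_mcid:.1f} mm")
```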

  15. Repeatability of riparian vegetation sampling methods: how useful are these techniques for broad-scale, long-term monitoring?

    Science.gov (United States)

    Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner

    2004-01-01

    Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a large-scale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...

  16. The use of a resource-based relative value scale (RBRVS) to determine practice expense costs: a novel technique of practice management for the vascular surgeon.

    Science.gov (United States)

    Mabry, C D

    2001-03-01

    Vascular surgeons have had to contend with rising costs while their reimbursements have undergone steady reductions. The use of newer accounting techniques can help vascular surgeons better manage their practices, plan for future expansion, and control costs. This article reviews traditional accounting methods, together with the activity-based costing (ABC) principles that have been used in the past for practice expense analysis. The main focus is on a new technique, resource-based costing (RBC), which uses the widely available Resource-Based Relative Value Scale (RBRVS) as its basis. The RBC technique promises easier implementation as well as more flexibility in determining the true costs of performing various procedures than more traditional accounting methods. It is hoped that RBC will assist vascular surgeons in coping with decreasing reimbursement. Copyright 2001 by W.B. Saunders Company.

  17. Earthquake induced rock shear through a deposition hole. Modelling of three model tests scaled 1:10. Verification of the bentonite material model and the calculation technique

    Energy Technology Data Exchange (ETDEWEB)

    Boergesson, Lennart (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T Engineering AB, Vaesteraas (Sweden))

    2010-11-15

    the scale tests, has been used for the copper. Two element models were used. In one of them (model A) the bentonite was divided into three parts with different densities according to the measurements made during dismantling and sampling. In the other one (model B) the same density, corresponding to the weighted mean value, was used for all bentonite in the test. The reason for using both models was to investigate whether the simplification made in SR-Site, where only one density was modelled and thus no consideration was given to the incomplete homogenisation that remains after water saturation and swelling, would affect the results significantly. The results show a remarkable agreement between modelled and measured results, in spite of the complexity of the models and the difficulty of measuring stresses and strains during the very fast tests. In addition, there was less than two per cent difference between the results of the simplified model with one density and the model with three densities. Figure 1 shows an example of results from Test 3 with a shear rate of 160 mm/sec, i.e. the entire test took only 13/100 of a second. The modelling results of both models were thus found to agree well with the measurements, which validates the SR-Site modelling of the rock shear scenario. It should be emphasized that the calculations were done without any changes or adaptations of material models or parameter values to the test results. The overall conclusion is that the modelling technique, the element mesh and the material models used in these analyses are well suited to and useful for this type of modelling. [Figure 1: Measured total force as a function of the shear deformation for Test 3 with the shear rate 160 mm/sec; results from the calculations with the two models and the results of the measurements are shown.]

  18. Techniques for Large-Scale Bacterial Genome Manipulation and Characterization of the Mutants with Respect to In Silico Metabolic Reconstructions.

    Science.gov (United States)

    diCenzo, George C; Finan, Turlough M

    2018-01-01

    The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.
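
    The final integration step, confronting each deletion mutant's observed phenotype with the prediction of a draft metabolic reconstruction, can be sketched with the COBRApy library rather than the Matlab pipeline referenced above. File and gene names below are hypothetical.

```python
# A minimal sketch using the COBRApy library (not the Matlab pipeline
# referenced above; file and gene names are hypothetical): knock out
# the deleted genes in silico and compare predicted growth with the
# observed Biolog phenotype to flag parts of the draft model to refine.
from cobra.io import read_sbml_model

model = read_sbml_model("draft_reconstruction.xml")
deleted_genes = ["geneA", "geneB"]      # genes removed in the mutant

with model:                             # changes revert on exiting
    for gid in deleted_genes:
        model.genes.get_by_id(gid).knock_out()
    predicted = model.optimize().objective_value > 1e-6

observed = True                         # e.g. growth on a PM plate well
if predicted != observed:
    print("Disagreement: review the reactions/GPRs tied to these genes")
```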

  19. Pyrolysis as a technique for separating heavy metals from hyperaccumulators. Part II: Lab-scale pyrolysis of synthetic hyperaccumulator biomass

    International Nuclear Information System (INIS)

    Koppolu, Lakshmi; Agblevor, F.A.; Clements, L.D.

    2003-01-01

    Synthetic hyperaccumulator biomass (SHB) impregnated with Ni, Zn, Cu, Co or Cr was used to conduct 11 experiments in a lab-scale fluidized bed reactor. Two runs with blank corn stover, with no metal added, were also conducted. The reactor was operated in an entrained mode in an oxygen-free (N2) environment at 873 K and 1 atm. The apparent gas residence time through the lab-scale reactor was 0.6 s at 873 K. The material balance for the lab-scale experiments on an N2-free basis varied between 81% and 98%. The presence of a heavy metal in the SHB decreased the char yield and increased the tar yield, compared to the blank. The char and gas yields appeared to depend on the form of the metal salt used to prepare the SHB. However, the metal distribution in the product streams did not seem to be influenced by the chemical form of the metal salt used to prepare the SHB. Greater than 98.5% of the metal in the product stream was concentrated in the char formed by pyrolyzing and gasifying the SHB in the reactor. The metal concentration in the char varied between 0.7 and 15.3% depending on the type of metal in the SHB, and was 4 to 6 times higher in the char than in the feed.

  20. A review of the processes and lab-scale techniques for the treatment of spent rechargeable NiMH batteries

    Science.gov (United States)

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Prisciandaro, Marina; Medici, Franco; Vegliò, Francesco

    2017-09-01

    The purpose of this work is to describe and review the current status of recycling technologies for spent NiMH batteries. In the first part of the work, the structure and characterization of NiMH accumulators are introduced, followed by a description of the main scientific studies and industrial processes. Various recycling routes, including physical, pyrometallurgical and hydrometallurgical ones, are discussed. The hydrometallurgical methods for the recovery of base metals and rare earths have mainly been developed on the laboratory and pilot scale. The operating industrial methods are pyrometallurgical and are efficient only for the recovery of certain components of spent batteries: fractions rich in nickel and other materials are recovered, whereas the rare earths are lost in the slag and must be further refined by a hydrometallurgical process to recover them. Considering the current legislation regarding the disposal of spent batteries and the preservation of raw materials, laboratory-scale implementations and plant optimization studies should be conducted in order to overcome the industrial problems of scaling up hydrometallurgical processes.

  1. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    Science.gov (United States)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of the large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 and 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley-cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low-potential-vorticity (PV), high-θe tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper-tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific

  2. Possible Factors Promoting Car Evacuation in the 2011 Tohoku Tsunami Revealed by Analysing a Large-Scale Questionnaire Survey in Kesennuma City

    Directory of Open Access Journals (Sweden)

    Fumiyasu Makinoshima

    2017-11-01

    Excessive car evacuation can cause severe traffic jams that may lead to large numbers of casualties during tsunami disasters. Investigating the factors that lead to unnecessary car evacuation can help ensure smoother tsunami evacuations and mitigate casualties in future tsunami events. In this study, we quantitatively investigated the factors that promote car evacuation, including both necessary and unnecessary usage, by statistically analysing a large amount of data on actual tsunami evacuation behaviour surveyed in Kesennuma, where devastating damage occurred during the 2011 Tohoku Tsunami. A straightforward statistical analysis revealed a high percentage of car evacuations (approx. 50%); this fraction, however, includes many unnecessary usage events, which were distinguished on the basis of the stated reasons for mode choice. In addition, a binary logistic regression was conducted to quantitatively evaluate the effects of several factors and to identify the dominant factor affecting evacuation mode choice. The regression results suggested that evacuation distance was the dominant factor in choosing car evacuation, relative to other factors such as age and sex. A cross-validation test of the regression model demonstrated that the considered factors are useful for decision making and for predicting evacuation mode choice in the target area.
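
    A binary logistic regression of the kind described, with a cross-validation check, is a few lines in scikit-learn. The sketch below uses hypothetical file and column names; it illustrates the class of model rather than the authors' exact specification.

```python
# A minimal sketch with hypothetical column names of the regression
# described above: car vs. non-car evacuation as a function of
# distance, age and sex, checked by cross-validation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("kesennuma_survey.csv")           # hypothetical file
X = df[["evacuation_distance_m", "age", "sex"]]    # sex coded 0/1
y = df["evacuated_by_car"]                         # 1 = car, 0 = other

model = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: odds ratio per unit = {np.exp(coef):.3f}")
```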

  3. In-depth analyses of organic matters in a full-scale seawater desalination plant and an autopsy of reverse osmosis membrane

    KAUST Repository

    Jeong, Sanghyun; Naidu, Gayathri; Vollprecht, Robert; Leiknes, TorOve; Vigneswaran, Saravanamuthu

    2016-01-01

    In order to improve the overall performance of seawater reverse osmosis (SWRO) systems, it is important to improve the feed water quality before it enters the RO. Currently, many desalination plants experience production losses due to incidents of organic and biofouling. Consequently, monitoring or characterizing the pretreatment step using more advanced organic and biological parameters is required for better operation to lessen fouling issues. In this study, the performance of the pretreatment processes (including coagulation, dual media filtration (DMF), and polishing with a cartridge filter (CF) coupled with anti-scalant) used at the Perth Seawater Desalination Plant (PSDP) located in Western Australia was characterized in terms of organic and biological fouling parameters. These analyses were carried out using liquid chromatography with organic carbon detection (LC-OCD), three-dimensional fluorescence excitation emission matrix (3D-FEEM) and assimilable organic carbon (AOC). Furthermore, the used (exhausted) RO membrane and CF were autopsied so that the fates and behaviors of organic foulants in these treatment systems could be better understood.

  4. Discovery of Novel Antimicrobial Peptides from Varanus komodoensis (Komodo Dragon) by Large-Scale Analyses and De-Novo-Assisted Sequencing Using Electron-Transfer Dissociation Mass Spectrometry.

    Science.gov (United States)

    Bishop, Barney M; Juba, Melanie L; Russo, Paul S; Devine, Megan; Barksdale, Stephanie M; Scott, Shaylyn; Settlage, Robert; Michalak, Pawel; Gupta, Kajal; Vliet, Kent; Schnur, Joel M; van Hoek, Monique L

    2017-04-07

    Komodo dragons are the largest living lizards and are the apex predators in their environs. They endure numerous strains of pathogenic bacteria in their saliva and recover from wounds inflicted by other dragons, reflecting the inherent robustness of their innate immune defense. We have employed a custom bioprospecting approach combining partial de novo peptide sequencing with transcriptome assembly to identify cationic antimicrobial peptides from Komodo dragon plasma. Through these analyses, we identified 48 novel potential cationic antimicrobial peptides. All but one of the identified peptides were derived from histone proteins. The antimicrobial effectiveness of eight of these peptides was evaluated against Pseudomonas aeruginosa (ATCC 9027) and Staphylococcus aureus (ATCC 25923), with seven peptides exhibiting antimicrobial activity against both microbes and one only showing significant potency against P. aeruginosa. This study demonstrates the power and promise of our bioprospecting approach to cationic antimicrobial peptide discovery, and it reveals the presence of a plethora of novel histone-derived antimicrobial peptides in the plasma of the Komodo dragon. These findings may have broader implications regarding the role that intact histones and histone-derived peptides play in defending the host from infection. Data are available via ProteomeXchange with identifier PXD005043.

  6. Millennial-scale climate variations in western Mediterranean during late Pleistocene-early Holocene: multi-proxy analyses from Padul peatbog (southern Iberian Peninsula)

    Science.gov (United States)

    Camuera, Jon; Jiménez-Moreno, Gonzalo; José Ramos-Román, María; García-Alix, Antonio; Jiménez-Espejo, Francisco; Toney, Jaime L.; Anderson, R. Scott; Kaufman, Darrell; Bright, Jordon; Sachse, Dirk

    2017-04-01

    The Padul peatbog, located in the southern Iberian Peninsula (western Mediterranean region), is a unique area for palaeoenvironmental studies due to its location between arid and temperate climates. Previous studies showed that the Padul peatbog contains a continuous record of the last ca. 0.8-1 Ma, so it is an extraordinary site for identifying glacial-interglacial phases as well as Heinrich and D-O events, linked to orbital- and suborbital-scale variations. In 2015, a new 42 m long core was taken from this area, providing an excellent sediment record, probably for the last ca. 300,000 years. This study is focused on the palaeoenvironmental and climatic reconstruction of the late Pleistocene and the early Holocene (from ca. 50,000 to 9,500 cal. yr BP), using AMS 14C and AAR dating, high-resolution pollen analysis, lithology, continuous XRF-scanning, X-ray diffraction, magnetic susceptibility and organic geochemistry. These different proxies provide information not only about regional environmental change but also about local changes in the conditions of the Padul lake/peatbog due to variations in water temperature, pH or nutrients.

  7. Large-scale brain network associated with creative insight: combined voxel-based morphometry and resting-state functional connectivity analyses.

    Science.gov (United States)

    Ogawa, Takeshi; Aihara, Takatsugu; Shimokawa, Takeaki; Yamashita, Okito

    2018-04-24

    Creative insight occurs with an "Aha!" experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21-69 years. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.

  8. Factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2016-08-01

    The factor structure of the 16 Primary and Secondary subtests of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample was examined with exploratory factor analytic methods (EFA) not included in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b). Factor extraction criteria suggested 1 to 4 factors and results favored 4 first-order factors. When this structure was transformed with the Schmid and Leiman (1957) orthogonalization procedure, the hierarchical g-factor accounted for large portions of total and common variance while the 4 first-order factors accounted for small portions of total and common variance, rendering interpretation at the factor index level less appropriate. Although the publisher favored a 5-factor model where the Perceptual Reasoning factor was split into separate Visual Spatial and Fluid Reasoning dimensions, no evidence for 5 factors was found. It was concluded that the WISC-V provides strong measurement of general intelligence and clinical interpretation should be primarily, if not exclusively, at that level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
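
    For readers unfamiliar with the Schmid-Leiman step referenced above, the sketch below shows its core arithmetic: first-order loadings are re-expressed as loadings on a hierarchical g-factor plus residualized group factors. All loading values are hypothetical illustrations, not WISC-V estimates.

    ```python
    # Hedged sketch of the Schmid-Leiman orthogonalization; the loadings
    # are made up for illustration only.
    import numpy as np

    # Hypothetical first-order pattern matrix (8 subtests x 2 oblique factors)
    F = np.array([
        [0.70, 0.05], [0.65, 0.10], [0.60, 0.15], [0.55, 0.20],
        [0.10, 0.70], [0.15, 0.65], [0.20, 0.60], [0.05, 0.55],
    ])
    # Hypothetical loadings of the 2 first-order factors on the second-order
    # g-factor (obtained by factoring the factor correlation matrix).
    g2 = np.array([0.80, 0.75])

    g_loadings = F @ g2                    # subtest loadings on hierarchical g
    group = F * np.sqrt(1.0 - g2 ** 2)     # residualized group-factor loadings

    var_g, var_group = (g_loadings ** 2).sum(), (group ** 2).sum()
    print(f"share of common variance from g: {var_g / (var_g + var_group):.1%}")
    ```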

  9. Extending Structural Analyses of the Rosenberg Self-Esteem Scale to Consider Criterion-Related Validity: Can Composite Self-Esteem Scores Be Good Enough?

    Science.gov (United States)

    Donnellan, M Brent; Ackerman, Robert A; Brecheen, Courtney

    2016-01-01

    Although the Rosenberg Self-Esteem Scale (RSES) is the most widely used measure of global self-esteem in the literature, there are ongoing disagreements about its factor structure. This methodological debate informs how the measure should be used in substantive research. Using a sample of 1,127 college students, we test the overall fit of previously specified models for the RSES, including a newly proposed bifactor solution (McKay, Boduszek, & Harvey, 2014). We extend previous work by evaluating how various latent factors from these structural models are related to a set of criterion variables frequently studied in the self-esteem literature. A strict unidimensional model poorly fit the data, whereas models that accounted for correlations between negatively and positively keyed items tended to fit better. However, global factors from viable structural models had similar levels of association with criterion variables and with the pattern of results obtained with a composite global self-esteem variable calculated from observed scores. Thus, we did not find compelling evidence that different structural models had substantive implications, thereby reducing (but not eliminating) concerns about the integrity of the self-esteem literature based on overall composite scores for the RSES.

  10. Graph Theory-Based Technique for Isolating Corrupted Boundary Conditions in Continental-Scale River Network Hydrodynamic Simulation

    Science.gov (United States)

    Yu, C. W.; Hodges, B. R.; Liu, F.

    2017-12-01

    Development of continental-scale river network models creates challenges where the massive amount of boundary condition data encounters the sensitivity of a dynamic numerical model. The topographic data sets used to define the river channel characteristics may include either corrupt data or complex configurations that cause instabilities in a numerical solution of the Saint-Venant equations. For local-scale river models (e.g. HEC-RAS), modelers typically rely on past experience to make ad hoc boundary condition adjustments that ensure a stable solution - the proof of the adjustment is merely the stability of the solution. To date, there do not exist any formal methodologies or automated procedures for a priori detecting/fixing boundary conditions that cause instabilities in a dynamic model. Formal methodologies for data screening and adjustment are a critical need for simulations with a large number of river reaches that draw their boundary condition data from a wide variety of sources. At the continental scale, we simply cannot assume that we will have access to river-channel cross-section data that has been adequately analyzed and processed. Herein, we argue that problematic boundary condition data for unsteady dynamic modeling can be identified through numerical modeling with the steady-state Saint-Venant equations. The fragility of numerical stability increases with the complexity of branching in the river network system and instabilities (even in an unsteady solution) are typically triggered by the nonlinear advection term in the Saint-Venant equations. It follows that the behavior of the simpler steady-state equations (which retain the nonlinear term) can be used to screen the boundary condition data for problematic regions. In this research, we propose a graph-theory based method to isolate the location of corrupted boundary condition data in a continental-scale river network and demonstrate its utility with a network of O(10^4) elements.
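
    The abstract does not give the algorithm's details, but the screening idea can be caricatured as follows: run a steady-state solve reach by reach over the network graph and flag the reaches where the solve fails. The sketch below is speculative, not the authors' implementation; `steady_solve` is a hypothetical stand-in for a steady Saint-Venant solver.

    ```python
    # Speculative sketch of graph-based screening of river reaches.
    import networkx as nx

    def steady_solve(reach_data):
        """Hypothetical steady Saint-Venant solve; returns False on failure."""
        return reach_data.get("cross_section_ok", True)

    G = nx.DiGraph()                            # edges point downstream
    G.add_edges_from([("r1", "r3"), ("r2", "r3"), ("r3", "r4")])
    G.nodes["r2"]["cross_section_ok"] = False   # simulated corrupt reach data

    # Walk the network from headwaters downstream, flagging failed reaches.
    suspect = [n for n in nx.topological_sort(G) if not steady_solve(G.nodes[n])]
    print("reaches with suspect boundary data:", suspect)
    ```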

  11. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    Science.gov (United States)

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs were recorded during the full year of 2012. We evaluated risk factors for MAE alerts including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard times [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency orders (OR 1.527, 95% CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95% CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.
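
    For reference, odds ratios and confidence intervals like those above are the standard exponentiated Wald statistics of a fitted logistic model. In the sketch below, the coefficient and standard error are back-calculated for illustration so that they reproduce the non-standard-time figures; they are not taken from the paper.

    ```python
    # Hedged sketch: odds ratio and 95% CI from a logistic coefficient.
    # beta and se are back-calculated to match OR 1.559 (1.515-1.604).
    import numpy as np

    beta, se = 0.444, 0.0146          # illustrative coefficient and std. error
    odds_ratio = np.exp(beta)
    ci_low, ci_high = np.exp([beta - 1.96 * se, beta + 1.96 * se])
    print(f"OR {odds_ratio:.3f}, 95% CI {ci_low:.3f}-{ci_high:.3f}")
    ```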

  12. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    Science.gov (United States)

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease of computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
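
    The embarrassingly parallel pattern described above can be illustrated with Python's standard library alone: independent Monte Carlo replicates farmed out to worker processes. `run_simulation` below is a toy stand-in for one SyncroSim iteration, not the study's model.

    ```python
    # Hedged sketch of embarrassingly parallel Monte Carlo replicates.
    import random
    from multiprocessing import Pool

    def run_simulation(seed):
        """Toy state-and-transition walk: 0 = sagebrush, 1 = juniper."""
        rng = random.Random(seed)
        state = 0
        for _ in range(100):                          # 100 simulated years
            if state == 0 and rng.random() < 0.02:    # encroachment
                state = 1
            elif state == 1 and rng.random() < 0.01:  # fire resets the state
                state = 0
        return state

    if __name__ == "__main__":
        with Pool() as pool:                          # one worker per core
            final = pool.map(run_simulation, range(1000))
        print("fraction juniper-encroached:", sum(final) / len(final))
    ```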

  13. Stomatal conductance at Duke FACE: Leveraging the lessons from 11 years of scaled sap flux measurements for region-wide analyses

    Science.gov (United States)

    Ward, E. J.; Bell, D.; Clark, J. S.; McCarthy, H. R.; Kim, H.; Domec, J.; Noormets, A.; McNulty, D.; Sun, G.; Oren, R.

    2013-12-01

    A network of thermal dissipation probes (TDPs) monitoring sap flux density was used to estimate leaf-specific transpiration (EL) and canopy-averaged stomatal conductance (GS) in Pinus taeda (L.) exposed to +200 ppm atmospheric CO2 levels (eCO2) and nitrogen fertilization as part of the Duke FACE study. Data from scaling half-hourly measurements from hundreds of sensors over 11 years indicated that P. taeda in eCO2 intermittently (49% of monthly values) decreased stomatal conductance relative to the control, with a mean reduction of 13% in both total EL and mean daytime GS. This intermittent response was related to changes in a hydraulic allometry index (AH), defined as sapwood area per unit leaf area per unit canopy height, which was linearly related to GS at reference conditions (GSR) during the growing season across years (R2=0.67). Overall, AH decreased a mean of 15% with eCO2 over the course of the study, due mostly to a mean 19% increase in leaf area. Throughout the southeastern U.S., other P. taeda stands have been monitored with TDPs, such as the US-NC2 Ameriflux site and four fertilizer × throughfall displacement studies recently begun as part of the PINEMAP research network in VA, GA, FL and OK. We will also discuss the challenges and benefits of using a common modeling platform to combine FACE TDP data with that from a diversity of sites and treatments to draw inferences about EL and GS responses to environmental drivers and climate change, as well as their relation to AH, across the range of P. taeda.
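
    The reported AH-GSR relationship is an ordinary linear regression; the sketch below shows the computation of the index and the fit on synthetic values. Units and numbers are illustrative assumptions, not Duke FACE data.

    ```python
    # Hedged sketch: hydraulic allometry index vs. reference conductance.
    import numpy as np

    rng = np.random.default_rng(1)
    sapwood_area = rng.uniform(0.01, 0.03, 20)   # m2 per tree, illustrative
    leaf_area = rng.uniform(20.0, 60.0, 20)      # m2 per tree
    height = rng.uniform(15.0, 25.0, 20)         # canopy height, m
    AH = sapwood_area / leaf_area / height       # hydraulic allometry index

    # Synthetic conductance at reference conditions, linear in AH plus noise
    GSR = 1.0e4 * AH + 0.1 + rng.normal(0.0, 0.02, 20)

    slope, intercept = np.polyfit(AH, GSR, 1)
    r2 = np.corrcoef(AH, GSR)[0, 1] ** 2
    print(f"GSR = {slope:.1f} * AH + {intercept:.3f}, R^2 = {r2:.2f}")
    ```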

  14. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200°C. Information which may be derived from flow patterns is discussed including the determination of mixing parameters, gas hold-up in gas/liquid reactions and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefine hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving bed catalyst system for the isomerization of xylenes and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)

  15. A radiochemical technique for the establishment of a solvent-independent scale of ion activities in amphiprotic solvents

    International Nuclear Information System (INIS)

    Kim, J.I.; Duschner, H.; Born, H.J.

    1975-01-01

    The radiochemical determination of the solubilities of sparingly soluble silver compounds (Ph4BAg, AgCl), by means of Ag-110m in amphiprotic solutions, is used for setting up a solvent-independent scale of ion activities based on the concept of the medium effect. The medium effects of the salts are calculated from the solubility data of the Ag compounds in question. The splitting into the medium effects of single ions is done with the extrathermodynamic assumption of equal medium effects for large ions, such as Ph4B- = Ph4As-. A standardized ion activity scale, in connection with the activity coefficients for the solvent in question, can be established with water as the basic state of the chemical potential. As the sum of the medium effects of the single ions gives the medium effect of the salt concerned, which is easily obtained from experimentally accessible data (solubility, vapour pressure, ion exchange etc.), this method leads to single ion activities of a large number of ions in a multitude of solvents. (orig./LH)

  16. Pyrolysis as a technique for separating heavy metals from hyperaccumulators. Part III: pilot-scale pyrolysis of synthetic hyperaccumulator biomass

    International Nuclear Information System (INIS)

    Koppolu, Lakshmi; Prasad, Ramakrishna; Davis Clements, L.

    2004-01-01

    Synthetic hyperaccumulator biomass (SHB) feed impregnated with Ni, Zn or Cu was used to conduct six experiments in a pilot-scale, spouted bed gasifier. Two runs using corn stover with no metal added (blank runs) were also conducted. The reactor was operated in an entrained mode in an oxygen-free (N2) environment at 873 K and 1 atm. The apparent gas residence time in the heated zone of the pilot-scale reactor was 1.4 s at 873 K. The material balance closure for the eight experiments on an N2-free basis varied between 79% and 92%. Nearly 99% of the metal recovered in the product stream was concentrated in the char formed by pyrolyzing the SHB in the reactor. The metal concentration in the char varied between 6.6% and 16.6%, depending on the type of metal and whether the char was collected in the cyclone or ashbox. The metal component was concentrated by 3.2-6 times in the char, compared to the feed.

  17. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    Science.gov (United States)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large-scale structures in turbulent flow with distributions based on time-dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, in which coherent structures were induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or streamwise velocity produces large distortions.

  18. Using value stream mapping technique through the lean production transformation process: An implementation in a large-scaled tractor company

    Directory of Open Access Journals (Sweden)

    Mehmet Rıza Adalı

    2017-04-01

    In today's world, manufacturing industries have to sustain their development and continuity in an increasingly competitive environment by decreasing their costs. The first step in the lean production transformation is to analyze value-adding and non-value-adding activities. This study aims at applying the concepts of Value Stream Mapping (VSM) in a large-scaled tractor company in Sakarya. Waste and process time are identified by mapping the current state of the platform production line. The future state was suggested with improvements for elimination of waste and reduction of lead time, which went from 13.08 to 4.35 days. Analyses were made using the current and future states to support the suggested improvements, and the cycle time of the platform production line was improved by 8%. Results showed that VSM is a good alternative in decision-making for change in production processes.

  19. Measurement of residence time distribution of liquid phase in an industrial-scale continuous pulp digester using radiotracer technique.

    Science.gov (United States)

    Sheoran, Meenakshi; Goswami, Sunil; Pant, Harish J; Biswal, Jayashree; Sharma, Vijay K; Chandra, Avinash; Bhunia, Haripada; Bajpai, Pramod K; Rao, S Madhukar; Dash, A

    2016-05-01

    A series of radiotracer experiments was carried out to measure the residence time distribution (RTD) of the liquid phase (alkali) in an industrial-scale continuous pulp digester in a paper industry in India. Bromine-82 as ammonium bromide was used as the radiotracer. Experiments were carried out at different biomass and white liquor flow rates. The measured RTD data were treated, and mean residence times in the individual digester tubes as well as in the whole digester were determined. The RTD was also analyzed to identify flow abnormalities and investigate the flow dynamics of the liquid phase in the pulp digester. Flow channeling was observed in the first section (tube 1) of the digester. Both axial dispersion and tanks-in-series with backmixing models, preceded with a plug flow component, were used to simulate the measured RTD and quantify the degree of axial mixing. Based on the study, optimum conditions for operating the digester were proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
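
    The moment analysis behind RTD studies of this kind is compact enough to sketch: the mean residence time is the first moment of the normalized tracer curve, and the variance yields a tanks-in-series estimate. The curve below is synthetic, not digester data.

    ```python
    # Hedged sketch of RTD moment analysis on a synthetic tracer curve.
    import numpy as np

    t = np.linspace(0.0, 120.0, 601)            # time, min
    dt = t[1] - t[0]
    tau, n_true = 30.0, 4                       # synthetic: 4 tanks in series
    C = t ** (n_true - 1) * np.exp(-n_true * t / tau)   # tracer concentration

    E = C / (C.sum() * dt)                      # normalized RTD, E(t)
    mrt = (t * E).sum() * dt                    # mean residence time
    var = ((t - mrt) ** 2 * E).sum() * dt       # variance of the RTD
    n_tanks = mrt ** 2 / var                    # tanks-in-series estimate
    print(f"MRT = {mrt:.1f} min, N = {n_tanks:.1f}")   # expect ~30 min, ~4
    ```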

  20. Spherical nanoindentation of proton irradiated 304 stainless steel: A comparison of small scale mechanical test techniques for measuring irradiation hardening

    Science.gov (United States)

    Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley; Vo, Hi T.; Maloy, Stuart A.; Hosemann, Peter; Mara, Nathan A.

    2017-09-01

    Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current work focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa-30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. The disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.
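
    For orientation, the sketch below shows the standard definitions of indentation stress and strain for a spherical tip in the elastic (Hertzian) regime, plus the simple constant scaling often used to compare indentation with uniaxial values. Parameter values are illustrative assumptions, not the authors' analysis code.

    ```python
    # Hedged sketch: elastic spherical-indentation stress-strain measures.
    import numpy as np

    R = 10e-6                        # indenter radius, m (illustrative)
    E_eff = 150e9                    # effective indentation modulus, Pa
    P = np.linspace(1e-4, 50e-3, 200)                  # load ramp, N

    h = (3.0 * P / (4.0 * E_eff * np.sqrt(R))) ** (2.0 / 3.0)  # Hertz depth
    a = np.sqrt(R * h)                                 # elastic contact radius

    ind_stress = P / (np.pi * a ** 2)                  # indentation stress
    ind_strain = 4.0 * h / (3.0 * np.pi * a)           # indentation strain
    uniaxial_est = ind_stress / 2.2    # common constraint-factor scaling
    ```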

  1. Comparison of three different scales techniques for the dynamic mechanical characterization of two polymers (PDMS and SU8)

    Science.gov (United States)

    Le Rouzic, J.; Delobelle, P.; Vairac, P.; Cretin, B.

    2009-10-01

    In this article the dynamic mechanical characterization of PDMS and SU8 resin using dynamic mechanical analysis, nanoindentation and the scanning microdeformation microscope is presented. The methods are explained, extended to viscoelastic behaviours, and their compatibility underlined. The storage and loss moduli of these polymers were measured over a wide range of frequencies (from 0.01 Hz to some kHz). These techniques are shown to match fairly well, and the two different viscoelastic behaviours of the two polymers are exhibited. Indeed, PDMS shows moduli that still increase at 5 kHz, whereas those of SU8 decrease much sooner. From a material point of view, the Havriliak and Negami model has been identified to estimate the instantaneous and relaxed moduli and the time constant of these materials.
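
    The Havriliak-Negami form mentioned above can be written down directly; the sketch below evaluates storage and loss moduli across the measured frequency band. The parameter values are made up for illustration; the fitted values from the article are not reproduced here.

    ```python
    # Hedged sketch of a Havriliak-Negami complex modulus with made-up
    # parameters (relaxed/instantaneous moduli in Pa, time constant in s).
    import numpy as np

    def hn_modulus(omega, e_relaxed, e_instant, tau, alpha, beta):
        """Complex modulus E*(omega); storage = real part, loss = imag part."""
        e_star = e_instant + (e_relaxed - e_instant) / (
            (1.0 + (1j * omega * tau) ** alpha) ** beta)
        return e_star.real, e_star.imag

    freq = np.logspace(-2, 4, 200)              # 0.01 Hz to 10 kHz
    storage, loss = hn_modulus(2 * np.pi * freq, 2e6, 8e6, 1e-3, 0.6, 0.8)
    print(f"storage rises from {storage[0]:.2e} to {storage[-1]:.2e} Pa")
    ```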

  2. Tracking techniques for the characteristics method applied to the resolution of the neutron transport equation in multi-scale domains

    International Nuclear Information System (INIS)

    Fevotte, F.

    2008-01-01

    At the various stages of a nuclear reactor's life, numerous studies are needed to guarantee the safety and efficiency of the design, analyse the fuel cycle, prepare the dismantlement, and so on. Due to the extreme difficulty of taking extensive and accurate measurements in the reactor core, most of these studies are numerical simulations. The complete numerical simulation of a nuclear reactor involves many types of physics: neutronics, thermal hydraulics, materials, control engineering, and so on. Among these, the neutron transport simulation is one of the fundamental steps, since it allows computation - among other things - of various fundamental values such as the power density (used in thermal hydraulics computations) or fuel burn-up. The neutron transport simulation is based on the Boltzmann equation, which models the neutron population inside the reactor core. Among the various methods allowing its numerical solution, much interest has been devoted in the past few years to the Method of Characteristics in unstructured meshes (MOC), since it offers good accuracy and operates in complicated geometries. The aim of this work is to propose improvements of the calculation scheme based on the two-dimensional MOC, in order to reduce the computing resources required. (A.L.B.)

  3. Investigation of flow dynamics of liquid phase in a pilot-scale trickle bed reactor using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K

    2016-10-01

    A radiotracer investigation was carried out to measure the residence time distribution (RTD) of the liquid phase in a trickle bed reactor (TBR). The main objectives of the investigation were to characterize the radial and axial mixing of the liquid phase, and to evaluate the performance of the liquid distributor/redistributor at different operating conditions. Mean residence times (MRTs), holdups (H) and the fractions of flow along different quadrants were estimated. The analysis of the measured RTD curves indicated a radially non-uniform distribution of the liquid phase across the beds. The overall RTD of the liquid phase, measured at the exit of the reactor, was simulated using a multi-parameter axial dispersion with exchange model (ADEM), and the model parameters were obtained. The results of the model simulations indicated that the TBR behaved as a plug flow reactor at most of the operating conditions used in the investigation. The results of the investigation helped to improve the existing design as well as to design a full-scale industrial TBR for petroleum refining applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Investigation of flow dynamics of liquid phase in a pilot-scale trickle bed reactor using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Sharma, V.K.

    2016-01-01

    A radiotracer investigation was carried out to measure the residence time distribution (RTD) of the liquid phase in a trickle bed reactor (TBR). The main objectives of the investigation were to characterize the radial and axial mixing of the liquid phase, and to evaluate the performance of the liquid distributor/redistributor at different operating conditions. Mean residence times (MRTs), holdups (H) and the fractions of flow along different quadrants were estimated. The analysis of the measured RTD curves indicated a radially non-uniform distribution of the liquid phase across the beds. The overall RTD of the liquid phase, measured at the exit of the reactor, was simulated using a multi-parameter axial dispersion with exchange model (ADEM), and the model parameters were obtained. The results of the model simulations indicated that the TBR behaved as a plug flow reactor at most of the operating conditions used in the investigation. The results of the investigation helped to improve the existing design as well as to design a full-scale industrial TBR for petroleum refining applications. - Highlights: • Residence time distributions of liquid phase were measured in a trickle bed reactor. • Bromine-82 as ammonium bromide was used as a radiotracer. • Mean residence times, holdups and radial distribution of liquid phase were quantified. • Axial dispersion with exchange model was used to simulate the measured data. • The trickle bed reactor behaved as a plug flow reactor.

  5. Measurements of liquid phase residence time distributions in a pilot-scale continuous leaching reactor using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K; Shenoy, K T; Sreenivas, T

    2015-03-01

    An alkaline-based continuous leaching process is commonly used for the extraction of uranium from uranium ore. The reactor in which the leaching process is carried out is called a continuous leaching reactor (CLR) and is expected to behave as a continuously stirred tank reactor (CSTR) for the liquid phase. A pilot-scale CLR used in a Technology Demonstration Pilot Plant (TDPP) was designed, installed and operated, and thus needed to be tested for its hydrodynamic behavior. A radiotracer investigation was carried out in the CLR to measure the residence time distribution (RTD) of the liquid phase, with the specific objectives of characterizing the flow behavior of the reactor and validating its design. Bromine-82 as ammonium bromide was used as the radiotracer and about 40-60 MBq of activity was used in each run. The measured RTD curves were treated, mean residence times were determined, and the curves were simulated using a tanks-in-series model. The result of the simulation indicated no flow abnormality, and the reactor behaved as an ideal CSTR over the range of operating conditions used in the investigation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Detection of different-time-scale signals in the length of day variation based on EEMD analysis technique

    Directory of Open Access Journals (Sweden)

    Wenbin Shen

    2016-05-01

    Scientists pay great attention to different-time-scale signals in length of day (LOD) variations (ΔLOD), which provide signatures of the Earth's interior structure, couplings among different layers, and potential excitations of the ocean and atmosphere. In this study, based on the ensemble empirical mode decomposition (EEMD), we analyzed the latest time series of ΔLOD data spanning from January 1962 to March 2015. We observed signals with periods and amplitudes of about 0.5 month and 0.19 ms, 1.0 month and 0.19 ms, 0.5 yr and 0.22 ms, 1.0 yr and 0.18 ms, 2.28 yr and 0.03 ms, and 5.48 yr and 0.05 ms, respectively, in agreement with earlier results. In addition, some signals not definitely observed in earlier work were detected in this study, with periods and amplitudes of 9.13 d and 0.12 ms, and 13.69 yr and 0.10 ms, respectively. The mechanisms behind these two LOD fluctuation signals remain open questions.
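
    A decomposition of this kind can be sketched with the community PyEMD package (an assumption; the authors' implementation is not specified). The series below is a synthetic stand-in for ΔLOD containing two of the periods noted above.

    ```python
    # Hedged sketch of EEMD on a synthetic stand-in for a daily LOD series.
    import numpy as np
    from PyEMD import EEMD   # assumed third-party package: pip install EMD-signal

    t = np.arange(2000.0)                            # days
    lod = (0.19 * np.sin(2 * np.pi * t / 13.66)      # ~0.5-month term
           + 0.18 * np.sin(2 * np.pi * t / 365.25)   # annual term
           + 0.05 * np.random.default_rng(0).standard_normal(t.size))

    eemd = EEMD(trials=100, noise_width=0.2)   # ensemble of noise-assisted EMDs
    imfs = eemd.eemd(lod, t)                   # one IMF per time scale
    print("number of IMFs:", imfs.shape[0])
    ```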

  7. Advanced chip designs and novel cooling techniques for brightness scaling of industrial, high power diode laser bars

    Science.gov (United States)

    Heinemann, S.; McDougall, S. D.; Ryu, G.; Zhao, L.; Liu, X.; Holy, C.; Jiang, C.-L.; Modak, P.; Xiong, Y.; Vethake, T.; Strohmaier, S. G.; Schmidt, B.; Zimer, H.

    2018-02-01

    The advance of high power semiconductor diode laser technology is driven by the rapidly growing industrial laser market, with such high power solid state laser systems requiring ever more reliable diode sources with higher brightness and efficiency at lower cost. In this paper we report simulation and experimental data demonstrating the most recent progress in high brightness semiconductor laser bars for industrial applications. The advancements are in three principal areas: vertical laser chip epitaxy design, lateral laser chip current injection control, and chip cooling technology. With such improvements, we demonstrate disk laser pump bars with output power over 250 W at 60% efficiency at the operating current. Ion implantation was investigated for improved current confinement. Initial lifetime tests show excellent reliability. For direct diode applications, 96% polarization is an additional requirement. Double-sided cooling deploying hard solder and an optimized laser design enable single-emitter-level performance also for high fill factor bars, and allow further power scaling to more than 350 W with 65% peak efficiency, less than 8 degrees slow-axis divergence and high polarization.

  8. Measurement of residence time distribution of liquid phase in an industrial-scale continuous pulp digester using radiotracer technique

    International Nuclear Information System (INIS)

    Sheoran, Meenakshi; Goswami, Sunil; Pant, Harish J.; Biswal, Jayashree; Sharma, Vijay K.; Chandra, Avinash; Bhunia, Haripada; Bajpai, Pramod K.; Rao, S. Madhukar; Dash, A.

    2016-01-01

    A series of radiotracer experiments was carried out to measure the residence time distribution (RTD) of the liquid phase (alkali) in an industrial-scale continuous pulp digester in a paper industry in India. Bromine-82 as ammonium bromide was used as the radiotracer. Experiments were carried out at different biomass and white liquor flow rates. The measured RTD data were treated, and mean residence times in the individual digester tubes as well as in the whole digester were determined. The RTD was also analyzed to identify flow abnormalities and investigate the flow dynamics of the liquid phase in the pulp digester. Flow channeling was observed in the first section (tube 1) of the digester. Both axial dispersion and tanks-in-series with backmixing models, preceded with a plug flow component, were used to simulate the measured RTD and quantify the degree of axial mixing. Based on the study, optimum conditions for operating the digester were proposed. - Highlights: • Radiotracer experiments were conducted to measure RTD of liquid phase in a pulp digester • Mean residence times of white liquor were measured • Axial dispersion and tanks-in-series models were used to investigate flow patterns • Parallel flow paths were observed in first section of the digester • Optimized flow rates of biomass and liquor were obtained

  9. Study on development and actual application of scientific crime detection technique using small scale neutron radiation source

    International Nuclear Information System (INIS)

    Suzuki, Yasuhiro; Kishi, Toru; Tachikawa, Noboru; Ishikawa, Isamu.

    1997-01-01

    PGA (Prompt γ-ray Analysis) is an analytical method based on the γ-rays generated from the atomic nuclei of elements in a specimen immediately after neutron irradiation (within 10^-14 s). Because it uses neutrons, which have excellent transmission, as the excitation source, this method can non-destructively inspect the contents of closed containers, and it can also detect light elements such as boron and nitrogen that are difficult to detect by other non-destructive analyses. In particular, this method can detect the high concentrations of nitrogen, chlorine and other elements that are characteristic of explosives. However, as there are a number of limitations at a nuclear reactor site, development of an analytical apparatus around a small-scale neutron radiation source was begun first. In this fiscal year, analysis of light elements such as nitrogen and chlorine using PGA was attempted with 252-Cf, the simplest neutron source to operate. As the 252-Cf neutron flux is considerably lower than that of a nuclear reactor, the analytical sensitivity was also investigated. (G.K.)

  10. Validation of a scale for network therapy: a technique for systematic use of peer and family support in addiction treatment.

    Science.gov (United States)

    Keller, D S; Galanter, M; Weinberg, S

    1997-02-01

    Substance abuse treatments are increasingly employing standardized formats. This is especially the case for approaches that utilize an individual psychotherapy format but less so for family-based approaches. Network therapy, an approach that involves family members and peers in the patient's relapse prevention efforts, is theoretically and clinically differentiated in this paper from family systems therapy for addiction. Based on these conceptual differences, a Network Therapy Rating Scale (NTRS) was developed to measure the integrity and differentiability of network therapy from other family-based approaches to addiction treatment. Seven addictions faculty and 10 third- and fourth-year psychiatry residents recently trained in the network approach used the NTRS to rate excerpts of network and family systems therapy sessions. Data revealed the NTRS had high internal consistency reliability when utilized by both groups of raters. In addition, network and nonnetwork subscales within the NTRS rated congruent therapy excerpts significantly higher than noncongruent therapy excerpts, indicating that the NTRS subscales measure what they are designed to measure. Implications for research and training are discussed.
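
    Internal consistency figures like those reported for the NTRS are typically Cronbach's alpha; the sketch below computes it from a raters-by-items score matrix. The scores are random placeholders, so the resulting alpha is meaningless except as a demonstration of the formula.

    ```python
    # Hedged sketch: Cronbach's alpha from a raters-by-items matrix.
    import numpy as np

    scores = np.random.default_rng(1).integers(
        1, 6, size=(17, 10)).astype(float)   # 17 raters x 10 items, 1-5 scale

    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    alpha = k / (k - 1.0) * (1.0 - item_var / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
    ```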

  11. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo 669-1337 (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km/s and +3.0 km/s at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.
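
    As an illustration of this class of image restoration, the sketch below runs a Richardson-Lucy deconvolution with a Gaussian point-spread function via scikit-image. The PSF and the input map are stand-ins; the study's instrument-specific PSF is not reproduced here.

    ```python
    # Hedged sketch: Richardson-Lucy deconvolution with a Gaussian PSF.
    import numpy as np
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=15, sigma=2.0):
        """Normalized 2-D Gaussian point-spread function."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    observed = np.random.default_rng(0).random((64, 64))  # stand-in for an SP map
    restored = richardson_lucy(observed, gaussian_psf(), num_iter=30)
    ```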

  12. Mechanical microencapsulation: The best technique in taste masking for the manufacturing scale - Effect of polymer encapsulation on drug targeting.

    Science.gov (United States)

    Al-Kasmi, Basheer; Alsirawan, Mhd Bashir; Bashimam, Mais; El-Zein, Hind

    2017-08-28

    Drug taste masking is a crucial process for the preparation of pediatric and geriatric formulations as well as fast dissolving tablets. Taste masking techniques aim to prevent drug release in saliva and at the same time to obtain the desired release profile in the gastrointestinal tract. Several taste masking methods are reported; however, this review has focused on a group of promising methods: complexation, encapsulation, and hot melting. The effects of each method on the physicochemical properties of the drug are described in detail. Furthermore, a scoring system was established to evaluate each process using recently published data on selected factors. These include input, process, and output factors that are related to each taste masking method. Input factors include the attributes of the materials used for taste masking. Process factors include equipment type and process parameters. Finally, output factors include taste masking quality and yield. As a result, mechanical microencapsulation, along with complexation with cyclodextrin, obtained the highest score (5/8), suggesting that these methods are the most preferable for drug taste masking. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Flood evolution assessment and monitoring using hydrological modelling techniques: analysis of the inundation areas at a regional scale

    Science.gov (United States)

    Podhoranyi, M.; Kuchar, S.; Portero, A.

    2016-08-01

    The primary objective of this study is to present techniques that use a hydrodynamic model as the main tool for monitoring and assessing flood events, focusing on the modelling of inundation areas. We analyzed the 2010 flood event (14th May - 20th May) that occurred in the Moravian-Silesian region (Czech Republic). Under investigation were four main catchments: Opava, Odra, Olše and Ostravice. Four hydrodynamic models were created and implemented into the Floreon+ platform in order to map inundation areas that arose during the flood event. In order to study the dynamics of the water, we applied an unsteady flow simulation for the entire area (HEC-RAS 4.1). The inundation areas were monitored, evaluated and recorded semi-automatically by means of the Floreon+ platform. We focused on information about the extent and presence of the flood areas. The modeled flooded areas were verified by comparing them with real data from different sources (official reports, aerial photos and hydrological networks). The study confirmed that hydrodynamic modeling is a very useful tool for the mapping and monitoring of inundation areas. Overall, our models detected 48 inundation areas during the 2010 flood event.
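
    One simple way to score a modeled inundation extent against an observed extent on a shared raster is the intersection-over-union fit index sketched below. The masks are synthetic; the study's verification against reports and aerial photos was partly qualitative.

    ```python
    # Hedged sketch: flood-extent fit as intersection over union.
    import numpy as np

    observed = np.zeros((100, 100), dtype=bool)
    modeled = np.zeros((100, 100), dtype=bool)
    observed[20:60, 30:70] = True      # synthetic observed extent
    modeled[25:65, 30:70] = True       # synthetic modeled extent

    iou = (observed & modeled).sum() / (observed | modeled).sum()
    print(f"flood-extent fit index: {iou:.2f}")
    ```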

  14. Development of Somatic Embryo Maturation and Growing Techniques of Norway Spruce Emblings towards Large-Scale Field Testing

    Directory of Open Access Journals (Sweden)

    Mikko Tikkinen

    2018-06-01

    The possibility to utilize non-additive genetic gain in planting stock has increased interest in vegetative propagation. In Finland, the increased planting of Norway spruce combined with fluctuating seed yields has resulted in shortages of improved regeneration material. Somatic embryogenesis is an attractive method to rapidly put breeding results into practice, not least because juvenile propagation material can be cryostored for decades. Further development of technology for the somatic embryogenesis of Norway spruce is essential, as the high cost of somatic embryo plants (emblings) limits deployment. We examined the effects of maturation media varying in abscisic acid (20, 30 or 60 µM) and polyethylene glycol 4000 (PEG) concentrations, as well as the effect of cryopreservation cycles, on embryo production, and the effects of two growing techniques on embling survival and growth. Embryo production and nursery performance of 712 genotypes from 12 full-sib families were evaluated. The most embryos per gram of fresh embryogenic mass (296 ± 31) were obtained by using 30 µM abscisic acid without PEG in the maturation media. Transplanting the emblings into the nursery after one week of in vitro germination resulted in 77% survival and the tallest emblings after the first growing season. Genotypes with good production properties were found in all families.

  15. Multi-scale full-field measurements and near-wall modeling of turbulent subcooled boiling flow using innovative experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, Yassin A., E-mail: y-hassan@tamu.edu

    2016-04-01

    Highlights: • Near-wall full-field velocity components under subcooled boiling were measured. • Simultaneous shadowgraphy, infrared thermometry wall temperature and particle-tracking velocimetry techniques were combined. • Near-wall velocity modifications under subcooled boiling were observed. - Abstract: Multi-phase flows are one of the challenges on which the CFD simulation community has been working extensively with relatively little success. The phenomena behind the momentum and heat transfer mechanisms associated with multi-phase flows are highly complex, requiring multiple scales in time and space to be resolved simultaneously. Part of the reason behind the low predictive capability of CFD for multi-phase flows is the scarcity of CFD-grade experimental data for validation. The complexity of the phenomena and their sensitivity to small sources of perturbation make measurement a difficult task. Non-intrusive and innovative measuring techniques are required to accurately measure multi-phase flow parameters while at the same time satisfying the high resolution required to validate CFD simulations. In this context, this work explores the feasible implementation of innovative measuring techniques that can provide whole-field and multi-scale measurements of two-phase flow turbulence, heat transfer, and boiling parameters. To this end, three visualization techniques are simultaneously implemented to study subcooled boiling flow through a vertical rectangular channel with a single heated wall. These techniques are listed next and are used as follows: (1) High-speed infrared thermometry (IR-T) is used to study the impact of the boiling level on the heat transfer coefficients at the heated wall, (2) Particle Tracking Velocimetry (PTV) is used to analyze the influence that boiling parameters have on the liquid phase turbulence statistics, (3) High-speed shadowgraphy with LED illumination is used to obtain the gas phase dynamics. To account

  16. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    Science.gov (United States)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from the USFS i-Tree model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1 after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. (Figure: meter-scale urban land cover in Portland, OR, overlaid on a NAIP aerial photo; streets, buildings and individual trees are identifiable.)
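
    The pixel-based, supervised part of such a workflow can be caricatured with scikit-learn: a random forest trained on labeled 4-band samples and applied to a tile. Band values, labels and sizes below are synthetic stand-ins, not the NAIP processing chain actually used.

    ```python
    # Hedged sketch: random-forest land cover classification of 4-band pixels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((5000, 4))           # R, G, B, NIR training pixels
    y_train = rng.integers(0, 8, 5000)        # 8 land cover classes

    clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

    tile = rng.random((512, 512, 4))          # a stand-in image tile
    labels = clf.predict(tile.reshape(-1, 4)).reshape(512, 512)
    print("classified pixels:", labels.shape)
    ```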

  17. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were implemented in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  18. Advanced Fabrication Techniques for Precisely Controlled Micro and Nano Scale Environments for Complex Tissue Regeneration and Biomedical Applications

    Science.gov (United States)

    Holmes, Benjamin

    As modern medicine advances, it is still very challenging to cure joint defects due to their poor inherent regenerative capacity, complex stratified architecture, and disparate biomechanical properties. The current clinical standard for catastrophic or late-stage joint degradation is a total joint implant, where the damaged joint is completely excised and replaced with a metallic or artificial joint. However, these procedures still only last for 10-15 years, and a host of recovery complications can occur. Thus, these studies have sought to employ advanced biomaterials and scaffold fabrication techniques to effectively regrow joint tissue, instead of merely replacing it with artificial materials. We hypothesize here that the inclusion of biomimetic and bioactive nanomaterials in highly functional electrospun and 3D printed scaffolds can improve physical characteristics (mechanical strength, surface interactions and nanotexture), enhance cellular growth and direct stem cell differentiation for bone, cartilage and vascular growth as well as cancer metastasis modeling. Nanomaterial inclusion and controlled 3D printed features effectively increased nanoscale surface roughness and Young's modulus, and provided effective flow paths for simulated arterial blood. All of the approaches explored proved highly effective for increasing cell growth, as a result of increasing micro-complexity and nanomaterial incorporation. Additionally, chondrogenic and osteogenic differentiation, cell migration, cell-to-cell interaction and vascular formation were enhanced. Finally, growth-factor (gf)-loaded polymer nanospheres greatly improved vascular cell behavior, and provided a highly bioactive scaffold for mesenchymal stem cell (MSC) and human umbilical vein endothelial cell (HUVEC) co-culture and bone formation. In conclusion, electrospinning and 3D printing when combined effectively with biomimetic and bioactive nanomaterials (i.e. carbon nanomaterials, collagen, nHA, polymer

  19. Screening for depressed mood in an adolescent psychiatric context by brief self-assessment scales -- testing psychometric validity of WHO-5 and BDI-6 indices by latent trait analyses

    DEFF Research Database (Denmark)

    Blom, Eva Henje; Bech, Per; Högberg, Göran

    2012-01-01

    This study tested the psychometric validity of two such scales, which may be used in a two-step screening procedure: the WHO-Five Well-being Index (WHO-5) and the six-item version of Beck's Depression Inventory (BDI-6). METHOD: 66 adolescent psychiatric patients with a clinical diagnosis of major depressive disorder (MDD), 60 girls and 6 boys, aged 14-18 years (mean age 16.8 years), completed the WHO-5 scale as well as the BDI-6. Statistical validity was tested by Mokken and Rasch analyses. RESULTS: The correlation between WHO-5 and BDI-6 was -0.49 (p = 0.0001). Mokken analyses showed a coefficient of homogeneity of 0.52 for the WHO-5 and 0.46 for the BDI-6. Rasch analysis also accepted unidimensionality when testing males versus females (p > 0.05). CONCLUSIONS: The WHO-5 is psychometrically valid in an adolescent psychiatric context including both genders to assess the wellness dimension, and is applicable as a first step in screening for MDD…

  20. Measurements of the effectiveness of conservation agriculture at the field scale using radioisotopic techniques and runoff plots

    Science.gov (United States)

    Mabit, L.; Klik, A.; Toloza, A.; Benmansour, M.; Geisler, A.; Gerstmann, U. C.

    2009-04-01

    Growing evidence of the cost of soil erosion on agricultural land, and of the off-site impact of the associated processes, has emphasized the need for quantitative assessments of erosion rates in order to develop and evaluate erosion control technology, allocate conservation resources, and develop conservation regulations, policies and programmes. Our main study goal was to assess the magnitude of deposition rates using Fallout Radionuclides (FRNs: 137Cs and 210Pb) and the mid-term (13-year) erosion rates using conventional runoff plot measurements in a small agricultural watershed under conventional and conservation tillage practices. The tillage treatments were: a conventional tillage system (CT), mechanical ploughing to 30 cm depth (the most common tillage system within the watershed); conservation tillage (CS) with cover crops during winter; and direct seeding (DS), i.e. no tillage, with cover crops during winter. The experimental design - located in the Mistelbach watershed 60 km north of Vienna, Austria - consists of one 3-metre-wide and 15-metre-long runoff plot (silt loam; slope of 14%) for each tillage system (CT, CS and DS), with the plots placed in the upper part of an agricultural field. 76 soil samples were collected along a systematic multi-grid design to evaluate the initial fallout of 137Cs and 210Pb in a small forested area close to the experimental field. In the sedimentation area of the watershed, downslope of the agricultural field, 2 additional soil profiles were collected to 1 m depth. All soil samples were air dried, sieved to 2 mm and analysed for their 137Cs and 210Pb contents using a gamma detector. The main results and conclusions can be summarised as follows: i) the initial 137Cs fallout as measured in the 76 forested soil samples ranged from 1123 to 3354 Bq/m2, with an average of 1954 Bq/m2 and a coefficient of variation of 20.4%; ii) long-term erosion measurements (1994-2006) from runoff plots located in the upper part of the agricultural field just up…
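
    For context on how FRN inventories are converted into soil redistribution rates, a sketch of one widely used conversion, the proportional model (after Walling and He); the plough depth, bulk density and sampling-point inventory below are illustrative assumptions, not values reported by this study:

      # Proportional model: soil loss proportional to the 137Cs inventory deficit.
      def proportional_model(A_ref, A_point, d=0.30, B=1350.0, t0=1963, t=2009):
          """Return mean soil loss in t ha-1 yr-1.
          A_ref   : local reference 137Cs inventory (Bq/m2), e.g. the forest mean
          A_point : measured inventory at the sampling point (Bq/m2)
          d, B    : plough depth (m) and bulk density (kg/m3) -- assumed values
          """
          X = 100.0 * (A_ref - A_point) / A_ref      # percent inventory reduction
          return 10.0 * d * B * X / (100.0 * (t - t0))

      print(proportional_model(A_ref=1954.0, A_point=1500.0))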

  1. Analysing motivation to do medicine cross-culturally: The International Motivation to do Medicine Scale - Análisis transcultural de la motivación para estudiar medicina: La Escala Internacional de Motivación para Estudiar Medicina

    Directory of Open Access Journals (Sweden)

    Salvador Sánchez Sánchez

    2009-05-01

    Full Text Available Vaglum, Wiers-Jensen, & Ekeberg (1999) developed an instrument to assess motivation to study medicine. This instrument has been applied in different countries but has not been studied cross-culturally. Our aims were to develop a Motivation to do Medicine Scale for use in international studies and to compare the motivations of UK and Spanish medical students (UK: n = 375; Spain: n = 149). A cross-sectional and cross-cultural study was conducted, using the Vaglum et al. (1999) Motivation to do Medicine Scale (MMS). The original MMS factor structure was not supported by the Confirmatory Factor Analysis. Exploratory Factor Analyses within each country identified four factors: “People”, “Status”, “Natural Science” and “Research”. Students scored higher on “People” and “Natural Science” than on the other factors. The UK sample scored higher than the Spanish sample on the “Research” factor, and there were greater differences between genders in Spain for both the “People” and “Research” factors. The scale is suitable for use in cross-cultural studies of medical students’ motivation. It can be used to investigate differences between countries and may be used to examine changes in motivation over time or across medical disciplines.
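
    A sketch of the exploratory-factor-analysis step, assuming the Python factor_analyzer package; the item count and the random response matrix are placeholders, and only the four-factor setup mirrors the abstract:

      import numpy as np
      from factor_analyzer import FactorAnalyzer

      # Placeholder Likert responses: (n_students, n_items); real input would be
      # the MMS item scores for one country at a time.
      items = np.random.randint(1, 6, size=(149, 20)).astype(float)

      fa = FactorAnalyzer(n_factors=4, rotation="varimax")
      fa.fit(items)
      print(fa.loadings_)               # item-by-factor loading matrix
      print(fa.get_factor_variance())   # variance explained per factor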

  2. Developing Techniques for Small Scale Indigenous Molybdenum-99 Production Using LEU Fission at Tajoura Research Center-Libya [Country report: Libya

    International Nuclear Information System (INIS)

    Alwaer, Sami M.

    2015-01-01

    The objective of this work was to assist the IAEA by providing the Libyan country report for the Coordinated Research Project (CRP) on the subject of “Developing techniques for small scale indigenous Mo-99 production using LEU-foil”, which took place over five years and four RCMs. A CRP on this subject was approved in early 2005. The objectives of this CRP were to: transfer know-how in the area of 99Mo production using LEU targets, based on reference technologies, from leading laboratories in the field to the participating laboratories in the CRP; develop national work plans based on various stages of technical development and objectives in this field; establish the procedures and protocols to be employed, including quality control and assurance procedures; establish the coordinated activities and programme for preparation, irradiation, and processing of LEU targets; and compare results obtained in the implementation of the technique in order to provide follow-up advice and assistance. Technetium-99m (99mTc), the daughter product of molybdenum-99 (99Mo), is the most commonly utilized medical radioisotope in the world, used for approximately 20-25 million medical diagnostic procedures annually, comprising some 80% of all diagnostic nuclear medicine procedures. National and international efforts are underway to shift the production of medical isotopes from highly enriched uranium (HEU) to low enriched uranium (LEU) targets. A small but growing share of current global 99Mo production is derived from the irradiation of LEU targets. The IAEA became aware of the interest of a number of developing Member States that are seeking to become small scale, indigenous producers of 99Mo to meet local nuclear medicine requirements. The IAEA initiated Coordinated Research Project (CRP) T.1.20.18, “Developing techniques for small-scale indigenous production of Mo-99 using LEU or neutron activation”, in order to assist countries in this field. The more…

  3. Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Alexandre Guyot

    2018-02-01

    Full Text Available Airborne LiDAR technology is widely used in archaeology and over the past decade has emerged as an accurate tool to describe anthropogenic landforms. Archaeological features are traditionally emphasised on a LiDAR-derived Digital Terrain Model (DTM) using multiple Visualisation Techniques (VTs), occasionally aided by automated feature detection or classification techniques. Such an approach offers limited results when applied to heterogeneous structures (different sizes, morphologies), which is often the case for archaeological remains that have been altered throughout the ages. This study proposes to overcome these limitations by developing a multi-scale analysis of topographic position combined with supervised machine learning algorithms (Random Forest). Rather than highlighting individual topographic anomalies, the multi-scalar approach allows archaeological features to be examined not only as individual objects, but within their broader spatial context. This innovative and straightforward method provides two levels of results: a composite image of topographic surface structure and a probability map of the presence of archaeological structures. The method was developed to detect and characterise megalithic funeral structures in the region of Carnac, the Bay of Quiberon, and the Gulf of Morbihan (France), which is currently considered for inclusion on the UNESCO World Heritage List. As a result, known archaeological sites have been geo-referenced with a greater accuracy than before (even when located under dense vegetation), and a ground-check confirmed the identification of a previously unknown Neolithic burial mound in the commune of Carnac.
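
    A minimal sketch of the multi-scale topographic-position idea, assuming scipy and scikit-learn: the TPI (elevation minus the neighbourhood mean) is computed at several window sizes and stacked as per-pixel features for a random forest; the window sizes and training labels are illustrative:

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.ensemble import RandomForestClassifier

      def multiscale_tpi(dtm, windows=(5, 15, 45)):
          """Stack TPI = dtm - neighbourhood mean at several scales."""
          return np.dstack([dtm - uniform_filter(dtm, size=w) for w in windows])

      def mound_probability(dtm, labels):
          """labels: 1 where a known mound is mapped, 0 elsewhere (same shape as dtm)."""
          feats = multiscale_tpi(dtm).reshape(-1, 3)
          rf = RandomForestClassifier(n_estimators=300).fit(feats, labels.ravel())
          return rf.predict_proba(feats)[:, 1].reshape(dtm.shape)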

  4. Development of Techniques for Small Scale Indigenous 99Mo Production Using LEU Targets at ICN Pitesti-Romania [Country report: Romania

    International Nuclear Information System (INIS)

    2015-01-01

    Initiation of the IAEA Coordinated Research Project (CRP) “Development Techniques for Small Scale Indigenous 99Mo Production Using LEU Fission or Neutron Activation” during 2005 allowed Member States to participate, through their research organizations under contractor arrangements, in accomplishing the CRP objectives. Among these, the participating research organization, the Institute for Nuclear Research Pitesti, Romania (ICN), was the beneficiary of financial support and Argonne National Laboratory assistance, provided by the US Department of Energy to the CRP, for the development of techniques for fission 99Mo production based on the LEU-modified CINTICHEM process. The Agency's role in this field was to assist in the transfer and adaptation of existing technology in order to disseminate a technique which advances international non-proliferation objectives and promotes sustainable development needs, while also contributing to extending production capacity to address the supply shortages of recent years. The Institute for Nuclear Research, considering the existing good infrastructure - a research reactor with suitable irradiation conditions for radioisotopes, and a post-irradiation laboratory with direct transfer of irradiated targets from the reactor and handling of highly radioactive sources - and, at the same time, an expanding internal market, decided to undertake the necessary steps to produce fission molybdenum. The Institute intends to develop this capability in response to domestic needs in cooperation with IFIN-HH in Bucharest, which is able to perform the last step, consisting of loading the fission molybdenum onto chromatography generators and dispensing it to the final client. The primary scope of the project is the development of the necessary technological and chemical processing steps in order to cover the entire process for fission molybdenum production at the required standard of purity.

  5. Far-Field Acoustic Power Level and Performance Analyses of F31/A31 Open Rotor Model at Simulated Scaled Takeoff, Nominal Takeoff, and Approach Conditions: Technical Report I

    Science.gov (United States)

    Sree, Dave

    2015-01-01

    Far-field acoustic power level and performance analyses of the open rotor model F31/A31 have been performed to determine its noise characteristics at simulated scaled takeoff, nominal takeoff, and approach flight conditions. The nonproprietary parts of the data obtained from experiments in the 9- by 15-Foot Low-Speed Wind Tunnel (9×15 LSWT) tests were provided by NASA Glenn Research Center to perform the analyses. The tone and broadband noise components have been separated from the raw test data by using a new data analysis tool. Results in terms of sound pressure levels, acoustic power levels, and their variations with rotor speed, angle of attack, thrust, and input shaft power are presented and discussed. The effect of an upstream pylon on the noise levels of the model is addressed. Empirical equations relating the model's acoustic power level, thrust, and input shaft power have been developed. The far-field acoustic efficiency of the model is also determined for various simulated flight conditions. It is intended that the results presented in this work will serve as a database for the comparison and improvement of other open rotor blade designs and also for validating open rotor noise prediction codes.
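
    The "empirical equations relating the model's acoustic power level, thrust, and input shaft power" suggest regressions of the general form PWL = a + b·log10(T); a sketch of such a fit with numpy, where both the data and the resulting coefficients are invented placeholders, not values from the report:

      import numpy as np

      thrust = np.array([400.0, 600.0, 800.0, 1000.0, 1200.0])   # N (placeholder)
      pwl = np.array([128.0, 131.5, 134.0, 136.0, 137.6])        # dB (placeholder)

      # Fit PWL = a + b*log10(thrust); np.polyfit returns [b, a]
      b, a = np.polyfit(np.log10(thrust), pwl, deg=1)
      print(f"PWL = {a:.1f} + {b:.1f}*log10(thrust)")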

  6. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Directory of Open Access Journals (Sweden)

    S. Ars

    2017-12-01

    Full Text Available This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping

  7. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Science.gov (United States)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances
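
    A toy version of the inversion step under the stated assumptions (Gaussian plume transport, a single point source, receptors along a crosswind transect); the dispersion lengths, geometry and noise level are invented for illustration:

      import numpy as np

      def gaussian_plume(q, x, y, z, u=3.0, sy=25.0, sz=12.0, hs=2.0):
          """Concentration of a point source of rate q (g/s) at downwind x,
          crosswind y, height z (m); u = wind speed, sy/sz = dispersion
          lengths, hs = source height -- all illustrative values."""
          return (q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - hs)**2 / (2 * sz**2))
                     + np.exp(-(z + hs)**2 / (2 * sz**2))))

      # Receptors across the plume, 200 m downwind; unit-rate transport kernel
      y_obs = np.linspace(-80, 80, 41)
      kernel = gaussian_plume(1.0, 200.0, y_obs, 1.5)
      c_meas = 0.7 * kernel + np.random.normal(0.0, 1e-7, y_obs.size)

      # Least-squares estimate of the emission rate (true value 0.7 g/s)
      q_hat = (kernel @ c_meas) / (kernel @ kernel)
      print(f"estimated source rate: {q_hat:.3f} g/s")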

  8. The micro-scale synthesis of (117)Sn-enriched tributyltin chloride and its characterization by GC-ICP-MS and NMR techniques.

    Science.gov (United States)

    Peeters, Kelly; Iskra, Jernej; Zuliani, Tea; Ščančar, Janez; Milačič, Radmila

    2014-07-01

    Organotin compounds (OTCs) are among the most toxic substances ever introduced to the environment by man. They are common pollutants in marine ecosystems, but are also present in the terrestrial environment, accumulated mainly in sewage sludge and landfill leachates. In investigations of the degradation and methylation processes of OTC in environmental samples, the use of enriched isotopic tracers represents a powerful analytical tool. Sn-enriched OTC are also necessary in the application of the isotope dilution mass spectrometry technique for their accurate quantification. Since Sn-enriched monobutyltin (MBT), dibutyltin (DBT) and tributyltin (TBT) are not commercially available as single species, "in house" synthesis of individual butyltin-enriched species is necessary. In the present work, the preparation of the most toxic butyltin, namely TBT, was performed via a simple synthetic path, starting with bromination of metallic Sn, followed by butylation with butyl lithium. The tetrabutyltin (TeBT) formed was transformed to tributyltin chloride (TBTCl) using concentrated hydrochloric acid (HCl). The purity of the synthesized TBT was verified by speciation analysis using the techniques of gas chromatography coupled to inductively coupled plasma mass spectrometry (GC-ICP-MS) and nuclear magnetic resonance (NMR). The results showed that TBT had a purity of more than 97%. The remaining 3% corresponded to DBT. TBT was quantified by reverse isotope dilution GC-ICP-MS. The synthesis yield was around 60%. The advantage of this procedure over those previously reported lies in the possibility of applying it on a micro-scale (starting with 10 mg of metallic Sn). This feature is of crucial importance, since enriched metallic Sn is extremely expensive. The procedure is simple and repeatable, and was successfully applied for the preparation of (117)Sn-enriched TBTCl from (117)Sn-enriched metal. Copyright © 2014 Elsevier Ltd. All rights reserved.
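
    As background to the reverse-isotope-dilution quantification, one common simplified single-spike IDMS expression for the analyte content of the sample (a generic textbook form, ignoring the isotopic-abundance normalisation term, and not necessarily the exact equation used in this work) is

      c_x = c_z \cdot \frac{m_z}{m_x} \cdot \frac{R_z - R_m}{R_m - R_x}

    where c_x and c_z are the analyte contents of sample and spike, m_x and m_z the masses of the blended sample and spike aliquots, R_x and R_z their respective reference isotope-amount ratios, and R_m the ratio measured in the blend.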

  9. Channeling contrast microscopy: a new technique for microanalysis of semiconductors

    International Nuclear Information System (INIS)

    McCallum, J.C.

    1985-01-01

    The technique of channeling contrast microscopy has been developed over the past few years for use with the Melbourne microprobe, and it has been applied in several productive analyses of small-scale structures in semiconductor materials. This paper outlines the basic features of the technique, and examples of its applications are given.

  10. Predicting the effectiveness of different mulching techniques in reducing post-fire runoff and erosion at plot scale with the RUSLE, MMF and PESERA models.

    Science.gov (United States)

    Vieira, D C S; Serpa, D; Nunes, J P C; Prats, S A; Neves, R; Keizer, J J

    2018-08-01

    Wildfires have become a recurrent threat for many Mediterranean forest ecosystems. The characteristics of the Mediterranean climate, with its warm and dry summers and mild and wet winters, make this a region prone to wildfire occurrence as well as to post-fire soil erosion. This threat is expected to be aggravated in the future by climate change and by land management practices and planning. The wide recognition of wildfires as a driver of runoff and erosion in burnt forest areas has created a strong demand for model-based tools for predicting the post-fire hydrological and erosion response and, in particular, for predicting the effectiveness of post-fire management operations in mitigating these responses. In this study, the effectiveness of two post-fire treatments (hydromulch and natural pine needle mulch) in reducing post-fire runoff and soil erosion was evaluated against control (i.e. untreated) conditions at different spatial scales. The main objective was to use field data to evaluate the ability of three erosion models of different types, (i) empirical (RUSLE), (ii) semi-empirical (MMF), and (iii) physically based (PESERA), to predict the hydrological and erosive response as well as the effectiveness of different mulching techniques in fire-affected areas. The results of this study showed that all three models were reasonably able to reproduce the hydrological and erosive processes occurring in burned forest areas. In addition, it was demonstrated that the models can be calibrated at a small spatial scale (0.5 m²) and still provide accurate results at greater spatial scales (10 m²). From this work, the RUSLE model seems ideal for fast and simple applications (e.g. prioritization of areas at risk), mainly due to its simplicity and reduced data requirements. On the other hand, the more complex MMF and PESERA models would be valuable as the basis of a possible tool for assessing the risk of water contamination in fire-affected water bodies and…
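
    For reference, the empirical RUSLE estimate named above is a simple product of factors, A = R·K·LS·C·P; a one-function sketch in which all factor values are placeholders (real ones come from local rainfall, soil and cover data), with mulching entering mainly through a lower C factor:

      def rusle(R, K, LS, C, P):
          """Annual soil loss A (t ha-1 yr-1) = R*K*LS*C*P:
          R = rainfall erosivity, K = soil erodibility, LS = slope length and
          steepness, C = cover management, P = support practices."""
          return R * K * LS * C * P

      print(rusle(R=900, K=0.035, LS=2.1, C=0.20, P=1.0))  # untreated burned plot
      print(rusle(R=900, K=0.035, LS=2.1, C=0.05, P=1.0))  # mulched plot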

  11. Framing scales and scaling frames

    NARCIS (Netherlands)

    van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment…

  12. An ultra-clean technique for accurately analysing Pb isotopes and heavy metals at high spatial resolution in ice cores with sub-pg g⁻¹ Pb concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Burn, Laurie J. [Department of Imaging and Applied Physics, Curtin University of Technology, GPO Box U1987, Perth 6845, Western Australia (Australia); Rosman, Kevin J.R. [Department of Imaging and Applied Physics, Curtin University of Technology, GPO Box U1987, Perth 6845, Western Australia (Australia)], E-mail: K.Rosman@curtin.edu.au; Candelone, Jean-Pierre [Department of Imaging and Applied Physics, Curtin University of Technology, GPO Box U1987, Perth 6845, Western Australia (Australia); Vallelonga, Paul [Department of Imaging and Applied Physics, Curtin University of Technology, GPO Box U1987, Perth 6845, Western Australia (Australia); Istituto per la Dinamica dei Processi Ambientali (IDPA-CNR), Dorsoduro 2137, 30123 Venice (Italy); Burton, Graeme R. [Department of Imaging and Applied Physics, Curtin University of Technology, GPO Box U1987, Perth 6845, Western Australia (Australia); Smith, Andrew M. [Australian Nuclear Science and Technology Organisation (ANSTO), PMB 1, Menai, NSW 2234 (Australia); Morgan, Vin I. [Australian Antarctic Division and Antarctic Climate and Ecosystems CRC, Private Bag 80, Hobart, Tasmania 7001 (Australia); Barbante, Carlo [Istituto per la Dinamica dei Processi Ambientali (IDPA-CNR), Dorsoduro 2137, 30123 Venice (Italy); Hong, Sungmin [Korea Polar Research Institute, Songdo Techno Park, 7-50, Songdo-dong, Yeonsu-gu, Incheon 406-840 (Korea, Republic of); Boutron, Claude F. [Laboratoire de Glaciologie et Géophysique de l'Environnement du CNRS, 54, rue Molière, B.P. 96, 38402 St Martin d'Hères Cedex (France)

    2009-02-23

    Measurements of Pb isotope ratios in ice containing sub-pg g⁻¹ concentrations are easily compromised by contamination, particularly where limited sample is available. Improved techniques are essential if Antarctic ice cores are to be analysed with sufficient spatial resolution to reveal seasonal variations due to climate. This was achieved here by using stainless steel chisels and saws and strict protocols in an ultra-clean cold room to decontaminate and section ice cores. Artificial ice cores, prepared from high-purity water, were used to develop and refine the procedures and quantify blanks. Ba and In, two other important elements present at pg g⁻¹ and fg g⁻¹ concentrations in polar ice, were also measured. The final blank amounted to 0.2 ± 0.2 pg of Pb with ²⁰⁶Pb/²⁰⁷Pb and ²⁰⁸Pb/²⁰⁷Pb ratios of 1.16 ± 0.12 and 2.35 ± 0.16, respectively, 1.5 ± 0.4 pg of Ba and 0.6 ± 2.0 fg of In, most of which probably originates from abrasion of the steel saws by the ice. The procedure was demonstrated on a Holocene Antarctic ice core section and was shown to contribute blanks of only ≈5%, ≈14% and ≈0.8% to monthly resolved samples with respective Pb, Ba and In concentrations of 0.12 pg g⁻¹, 0.3 pg g⁻¹ and 2.3 fg g⁻¹. Uncertainties in the Pb isotopic ratio measurements were degraded by only ≈0.2%.
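
    To illustrate how a procedural blank of this size is removed from a measurement, a first-order mass-balance sketch (two-component mixing weighted by total Pb rather than by ²⁰⁷Pb, which is adequate at these blank levels; the sample values are invented, the blank values echo the abstract):

      def blank_correct(pb_meas, r_meas, pb_blank=0.2, r_blank=1.16):
          """Return blank-corrected Pb mass (pg) and 206Pb/207Pb ratio.
          pb_meas, r_meas   : total Pb and measured ratio of the decontaminated cut
          pb_blank, r_blank : procedural blank, quantified with artificial cores
          """
          pb_true = pb_meas - pb_blank
          r_true = (r_meas * pb_meas - r_blank * pb_blank) / pb_true
          return pb_true, r_true

      print(blank_correct(pb_meas=4.0, r_meas=1.20))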

  13. Genetic sexing strains in Mediterranean fruit fly, an example for other species amenable to large-scale rearing for the sterile insect technique

    International Nuclear Information System (INIS)

    Franz, G.

    2005-01-01

    Through genetic and molecular manipulations, strains can be developed that are more suitable for the sterile insect technique (SIT). In this chapter the development of genetic sexing strains (GSSs) is given as an example. GSSs increase the effectiveness of area-wide integrated pest management (AW-IPM) programmes that use the SIT by enabling the large-scale release of only sterile males. For species that transmit disease, the removal of females is mandatory. For the Mediterranean fruit fly Ceratitis capitata (Wiedemann), genetic sexing systems have been developed; they are stable enough to be used in operational programmes for extended periods of time. Until recently, the only way to generate such strains was through Mendelian genetics. In this chapter, the basic principle of translocation-based sexing strains is described, and Mediterranean fruit fly strains are used as examples to indicate the problems encountered in such strains. Furthermore, the strategies used to solve these problems are described. The advantages of following molecular strategies in the future development of sexing strains are outlined, especially for species where little basic knowledge of genetics exists. (author)

  14. Effects of processing parameters on the caffeine extraction yield during decaffeination of black tea using pilot-scale supercritical carbon dioxide extraction technique.

    Science.gov (United States)

    Ilgaz, Saziye; Sat, Ihsan Gungor; Polat, Atilla

    2018-04-01

    In this pilot-scale study, a supercritical carbon dioxide (SCCO2) extraction technique was used for the decaffeination of black tea. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO2 flow rate (1, 2, 3 L/min) and modifier quantity (0, 2.5, 5 mol%) were selected as extraction parameters. A three-level, five-factor response surface methodology experimental design of the Box-Behnken type was employed to generate 46 different processing conditions. 100% of the caffeine was removed from black tea under two different extraction conditions: one consisted of 375 bar pressure, 62.5 °C temperature, 300 min extraction time, 2 L/min CO2 flow rate and 5 mol% modifier concentration, and the other of the same temperature, pressure and extraction time conditions with a 3 L/min CO2 flow rate and 2.5 mol% modifier concentration. Results showed that extraction time, pressure, CO2 flow rate and modifier quantity had a great impact on the decaffeination yield.
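
    A sketch of generating the 46-run design, assuming the pyDOE2 package: a five-factor Box-Behnken design has 40 edge runs, so six centre points give the 46 processing conditions; the mapping from coded to real levels uses the factor ranges quoted above:

      import numpy as np
      from pyDOE2 import bbdesign

      design = bbdesign(5, center=6)             # coded levels in {-1, 0, +1}

      # Factor order: pressure (bar), time (min), temperature (°C),
      # CO2 flow (L/min), modifier (mol%)
      lows = np.array([250.0, 60.0, 55.0, 1.0, 0.0])
      highs = np.array([500.0, 300.0, 70.0, 3.0, 5.0])
      runs = (lows + highs) / 2.0 + design * (highs - lows) / 2.0
      print(runs.shape)                          # (46, 5) processing conditions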

  15. Two large-scale analyses of Ty1 LTR-retrotransposon de novo insertion events indicate that Ty1 targets nucleosomal DNA near the H2A/H2B interface

    Directory of Open Access Journals (Sweden)

    Bridier-Nahmias Antoine

    2012-12-01

    Full Text Available Abstract Background Over the years, a number of reports have revealed that Ty1 integration occurs in a 1-kb window upstream of Pol III-transcribed genes, with an approximate 80-bp periodicity between each integration hotspot, and that this targeting requires active Pol III transcription at the site of integration. However, the molecular bases of Ty1 targeting are still not understood. Findings The publications by Baller et al. and Mularoni et al. in the April issue of Genome Res. report the first high-throughput sequencing analyses of Ty1 de novo insertion events. Their observations converge to the same conclusion: that Ty1 targets a specific surface of the nucleosome at the H2A/H2B interface. Conclusion This discovery is important, and should help identify the factor(s) involved in Ty1 targeting. Recent data on the integration site choice of transposable elements and retroviruses obtained by large-scale analyses indicate that transcription and chromatin structure play an important role in this process. The studies reported in this commentary add new evidence of the importance of chromatin in integration selectivity that should be of interest to everyone interested in transposable element integration.

  16. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    Science.gov (United States)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

    Near real-time, high-resolution and high-precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e. based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)', the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to account for the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure, the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project combines the expertise of the three project partners: Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University of Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We have partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations, and we outline the challenges of the multi-dimensional mathematical-physical modelling approach. We show first results and discuss problems…
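
    The vertical building block mentioned above, the Chapman electron density profile, has a simple closed form; a sketch with illustrative peak parameters (not values from the project):

      import numpy as np

      def chapman(h, NmF2=1.0e12, hmF2=350.0, H=60.0):
          """Chapman profile Ne(h) in electrons/m3.
          h = height (km); NmF2 = peak density; hmF2 = peak height (km);
          H = scale height (km) -- all parameter values are illustrative."""
          z = (h - hmF2) / H
          return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

      print(chapman(np.arange(100.0, 1000.0, 100.0)))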

  17. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    Science.gov (United States)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    …and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis makes it possible to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibility of using different Stochastic Weather Generators to approximate the basic and drought statistics of the historical series. These analyses have been performed for our case study in both a lumped and a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles, respectively, and in the precipitation they are -25.48%, -28.49%, -26.42% and -27.35%, respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 and CORDEX projects for the data provided for this study, and the R package qmap.
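
    A sketch of extracting drought duration, magnitude and intensity from an SPI series with the theory of runs; the threshold and the short series are illustrative (the SPI itself would come from fitting a distribution to aggregated precipitation):

      import numpy as np

      def drought_runs(spi, threshold=-1.0):
          """A drought event is an uninterrupted run of SPI < threshold; returns
          one (duration, magnitude, intensity) tuple per event, with magnitude
          taken as the accumulated deficit below the threshold."""
          events, dur, mag = [], 0, 0.0
          for v in spi:
              if v < threshold:
                  dur, mag = dur + 1, mag + (threshold - v)
              elif dur > 0:
                  events.append((dur, mag, mag / dur))
                  dur, mag = 0, 0.0
          if dur > 0:
              events.append((dur, mag, mag / dur))
          return events

      print(drought_runs(np.array([0.3, -1.2, -1.8, -0.4, -1.1, -1.5, -1.3, 0.8])))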

  18. Fracture analyses of WWER reactor pressure vessels

    International Nuclear Information System (INIS)

    Sievers, J.; Liu, X.

    1997-01-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab

  19. Fracture analyses of WWER reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, J; Liu, X [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    1997-09-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab.

  20. Lack of association between PKLR rs3020781 and NOS1AP rs7538490 and type 2 diabetes, overweight, obesity and related metabolic phenotypes in a Danish large-scale study: case-control studies and analyses of quantitative traits

    Directory of Open Access Journals (Sweden)

    Almind Katrine

    2008-12-01

    Full Text Available Abstract Background Several studies in multiple ethnicities have reported linkage to type 2 diabetes on chromosome 1q21-25. Both PKLR, encoding the liver pyruvate kinase, and NOS1AP, encoding the nitric oxide synthase 1 (neuronal) adaptor protein (CAPON), are positioned within this chromosomal region and are thus positional candidates for the observed linkage peak. The C-allele of PKLR rs3020781 and the T-allele of NOS1AP rs7538490 are reported to strongly associate with type 2 diabetes in various European-descent populations comprising a total of 2,198 individuals, with combined odds ratios (ORs) of 1.33 [1.16-1.54] and 1.53 [1.28-1.81], respectively. Our aim was to validate these findings by investigating the impact of the two variants on type 2 diabetes and related quantitative metabolic phenotypes in a large study sample of Danes. Further, we intended to expand the analyses by examining the effect of the variants in relation to overweight and obesity. Methods PKLR rs3020781 and NOS1AP rs7538490 were genotyped, using TaqMan allelic discrimination, in a combined study sample comprising a total of 16,801 and 16,913 individuals, respectively. The participants were ascertained from four different study groups: the population-based Inter99 cohort (nPKLR = 5,962, nNOS1AP = 6,008), a type 2 diabetic patient group (nPKLR = 1,873, nNOS1AP = 1,874) from Steno Diabetes Center, a population-based study sample (nPKLR = 599, nNOS1AP = 596) from Steno Diabetes Center and the ADDITION Denmark screening study cohort (nPKLR = 8,367, nNOS1AP = 8,435). Results In case-control studies we evaluated the potential association between rs3020781 and rs7538490 and type 2 diabetes and obesity. No significant associations were observed for type 2 diabetes (rs3020781: pAF = 0.49, OR = 1.02 [0.96-1.10]; rs7538490: pAF = 0.84, OR = 0.99 [0.93-1.06]). Neither did we show association with overweight or obesity. Additionally, the PKLR and the NOS1AP genotypes were demonstrated not…
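
    For reference, the case-control odds ratio and its 95% confidence interval reported above derive from a 2x2 count table; a generic sketch with invented counts (not the study's data):

      import math

      def odds_ratio(a, b, c, d):
          """OR and 95% CI from a 2x2 table:
          a = exposed cases, b = unexposed cases,
          c = exposed controls, d = unexposed controls."""
          or_ = (a * d) / (b * c)
          se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
          lo = math.exp(math.log(or_) - 1.96 * se)
          hi = math.exp(math.log(or_) + 1.96 * se)
          return or_, lo, hi

      print(odds_ratio(a=820, b=1050, c=2600, d=3400))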

  1. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10³⁰ for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or a suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
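
    A toy construction of the ensemble matrix T under the stated idea, assuming each member network induces a deterministic transition map on the Boolean state space; the three maps below are invented for illustration:

      import numpy as np

      def class_superposition(transition_maps, n_states):
          """T = transition-by-transition average over the network class;
          transition_maps[k][i] is the successor of state i in network k."""
          T = np.zeros((n_states, n_states))
          for nxt in transition_maps:
              T[np.arange(n_states), nxt] += 1.0
          return T / len(transition_maps)

      def mean_row_entropy(T):
          """Average Shannon entropy (bits) of the outgoing distributions."""
          p = T[T > 0]
          return -(p * np.log2(p)).sum() / T.shape[0]

      maps = [np.array([1, 2, 3, 0]), np.array([1, 2, 0, 0]), np.array([2, 2, 3, 0])]
      T = class_superposition(maps, 4)
      print(T, mean_row_entropy(T))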

  2. Charge-storage techniques for pulse-height analysis; Techniques de stockage electrostatique pour l'analyse en amplitude d'impulsion; Metod nakopleniya zaryada dlya amplitudnogo analiza impul'sov; Tecnicas de almacenamiento de cargas para analisis de amplitud de impulsos

    Energy Technology Data Exchange (ETDEWEB)

    Costrell, L; Brueckmann, R E [National Bureau of Standards, Washington, DC (United States)

    1962-04-15

    The low duty cycles of many pulsed accelerators make high pulse rates necessary within the bursts in order to accumulate adequate data in a reasonable time. Not only do economic factors, as influenced by expensive machine time, dictate the use of high pulse rates, but purely technical considerations often make experiments unfeasible unless the pulses per burst are so numerous as to exclude the use of conventional pulse-height analysers. For these reasons much effort has been devoted to the development of high-speed pulse-height analysers for use with pulsed accelerators. Much of this work has been directed toward producing what we term a 'charge-storage analyser', based on work conducted at the National Bureau of Standards. We have developed a charge-storage analyser that operates with the NBS 180-MeV synchrotron on a nuclear-absorption experiment for which the obstacles would otherwise be formidable. The analyser uses temporary electrostatic storage for the accumulation of pulse-height data during the machine bursts. During the dead intervals between bursts the contents of the temporary storage are analysed and transferred into a conventional magnetic core memory. The use of this technique for nanosecond pulses is discussed and data are presented to show its feasibility. (author) [French] As the effective part of the duty cycle…